2305.04172
Crosstalk-Based Parameterized Quantum Circuit Approximation
In this paper, we propose an ansatz approximation approach for variational quantum algorithms (VQAs) that uses one of the hardware's main attributes, its crosstalk behavior, as its main approximation driver. By utilizing crosstalk-adaptive scheduling, we are able to apply a circuit-level approximation/optimization to our ansatz. Our design procedure involves first characterizing the hardware's crosstalk and then approximating the circuit by a desired level of crosstalk mitigation, all while effectively reducing its duration and gate counts. We demonstrate the effect of crosstalk mitigation on expressibility, trainability, and entanglement: key components that drive the utility of parameterized circuits. We tested our approach on real quantum hardware against a base configuration, and our results showed superior performance for the circuit-level optimized ansatz over a base ansatz for two quantum chemistry benchmarks. We take into consideration that applications vary in their response to crosstalk, and we believe that this approximation strategy can be used to create ansatze that are expressive, trainable, and with crosstalk mitigation levels tailored for specific workloads.
Mohannad Ibrahim, Nicholas T. Bronn, Gregory T. Byrd
2023-05-07T03:05:19Z
http://arxiv.org/abs/2305.04172v2
# Crosstalk-Based Parameterized Quantum Circuit Approximation

###### Abstract

In this paper, we propose an ansatz approximation approach for variational quantum algorithms (VQAs) that uses one of the hardware's main attributes, its crosstalk behavior, as its main approximation driver. By utilizing crosstalk-adaptive scheduling, we are able to apply a circuit-level approximation/optimization to our ansatz. Our design procedure involves first characterizing the hardware's crosstalk and then approximating the circuit by a desired level of crosstalk mitigation, all while effectively reducing its duration and gate counts. We demonstrate the effect of crosstalk mitigation on expressibility, trainability, and entanglement: key components that drive the utility of parameterized circuits. We tested our approach on real quantum hardware against a base configuration, and our results showed superior performance for the circuit-level optimized ansatz over a base ansatz for two quantum chemistry benchmarks. We take into consideration that applications vary in their response to crosstalk, and we believe that this approximation strategy can be used to create ansatze that are expressive, trainable, and with crosstalk mitigation levels tailored for specific workloads.

Quantum computing, variational quantum algorithms (VQAs), parameterized quantum circuits (PQCs), Crosstalk

## I Introduction

Near-term quantum computers are characterized by a limited number of qubits, in the range of 10s to 100s with current technology. Because there are not enough qubits to implement full-scale quantum error correction, system and environmental noise is exposed to the algorithm, which limits the useful depth of quantum circuits. Variational quantum algorithms are a promising approach for current hardware, utilizing reasonably short-depth circuits and tunable parameters that can help mitigate systemic noise.

A _variational quantum algorithm (VQA)_ is a hybrid scheme of computation that allocates tasks to both quantum and classical computing resources and coordinates the execution between the two through a tight feedback loop. The quantum computer's task is to prepare and measure relevant quantum states generated by the so-called _ansatz_ or _Parameterized Quantum Circuit (PQC)_. The classical computer's task is to update/optimize the circuit parameters, which are then fed back into the quantum computer to prepare a new state. This cycle is repeated until some convergence criteria are satisfied. VQAs have been applied to a wide variety of applications [1] such as quantum chemistry [2, 3, 4, 5], combinatorial optimization [6, 7, 8], and machine learning [9, 10, 11, 12, 13]. A widely-used VQA is the _Variational Quantum Eigensolver (VQE)_ [14], which seeks to find the minimum eigenvalue of a matrix. When used in quantum simulation, the matrix is typically the Hamiltonian of a system. However, the algorithm is not just limited to finding low energy eigenstates; it can be extended to minimize any objective function that is expressible as a quantum observable [15, 16, 17].

In this paper, we propose a different approach for VQA optimization by integrating one of the quantum hardware's characteristics, particularly its crosstalk noise, in the design process of PQCs. We propose an approximation strategy that uses the hardware's crosstalk behavior to create approximate versions of a PQC with different levels of crosstalk mitigation. We refer to PQCs created using this technique as Crosstalk-optimized (Xtalk) PQCs.
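As a toy illustration of this feedback loop, the sketch below stands in a numpy statevector for the quantum computer and plain finite-difference gradient descent for the classical optimizer; the one-parameter ansatz, observable, and constants are our own illustrative choices, not the paper's setup.

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # the observable to minimize

def prepare_and_measure(theta):
    """'Quantum' task: prepare |psi(theta)> = RY(theta)|0> and measure <Z>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ Z @ psi  # expectation value; equals cos(theta)

theta, lr, eps = 0.1, 0.4, 1e-6
for step in range(100):
    # Classical task: estimate the gradient and update the parameter,
    # then feed the new parameter back to the "quantum" side.
    grad = (prepare_and_measure(theta + eps) - prepare_and_measure(theta - eps)) / (2 * eps)
    theta -= lr * grad
    if abs(grad) < 1e-8:  # convergence criterion terminates the cycle
        break

print(theta, prepare_and_measure(theta))  # theta approaches pi, energy -> -1
```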
Multiple hardware features, such as the native gate set, topology, and noise model, are unique to each machine. We chose crosstalk as our main hardware characteristic as it is recognized as a major challenge in quantum computing, mitigable by both hardware and software techniques [18, 19, 20, 21]. Additionally, crosstalk-adaptive scheduling techniques [20, 21] make it possible for this attribute to be easily integrated into PQC design, as we will further discuss in later sections.

Crosstalk can be defined as a mixture of unwanted interactions between coupled qubits in a quantum device [20, 21]. It can appear in many architectures such as trapped ions [22] and superconducting systems [18, 19, 23]. Crosstalk errors can manifest in many ways: as an exchange of excitation, leakage to non-computational states [24], or gate fidelities that are an order of magnitude worse [20], all of which are detrimental to quantum programs.

In superconducting systems, crosstalk can occur for multiple reasons. IBM superconducting devices, for example, use fixed-frequency resonators to couple their fixed-frequency transmon qubits. This coupling produces an always-on \(ZZ\) interaction proportional to the coupling strength or transmon-transmon exchange \(J\) [18, 25]. This always-on \(ZZ\) interaction is a major source of error in fixed-frequency devices. Besides reducing the two-qubit gate fidelities below the levels set by coherence, interactions with spectator qubits (qubits that are not part of the two-qubit interaction) cause unwanted entanglement to accumulate across the system [19].

Crosstalk mitigation is, therefore, a major goal in quantum hardware design and fabrication. For example, IBM machines employ a set of crosstalk mitigation techniques, such as limiting the device's connectivity (which effectively simplifies frequency allocation and limits the number of spectators to a two-qubit interaction while still admitting a quantum error correction code) and laser annealing of Josephson junctions [26, 27]. Fixed-frequency transmon architectures such as the one proposed in [28] utilize tunable couplers to achieve faster two-qubit gates and to address errors due to the always-on \(ZZ\) term. In other tunable superconducting devices, such as Google's Sycamore processor [29], where both qubits and couplers are tunable, scalable optimization techniques are utilized to extract error-reducing frequencies [30]. On a pulse level, experimental results in [19] showed that adding rotary echoes to the two-qubit cross-resonance interaction suppresses errors arising from the static \(ZZ\) term. Despite such efforts, crosstalk still exists in today's quantum machines, as we will see in Section III, and remains one of the main scalability-related challenges.

Leveraging crosstalk-aware scheduling, we develop a novel ansatz approximation mechanism that is capable of creating different versions of a base configuration with different levels of crosstalk mitigation, described in Section IV. In Section V, we evaluate the effectiveness of the approach. Our crosstalk-based ansatze outperform the base configuration for two quantum chemistry benchmarks, speeding up execution by up to \(2.9\times\) and \(1.83\times\) on average. Moreover, we explore the connection between crosstalk and trainability by demonstrating that circuits experiencing more crosstalk have lower trainability.
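To make the always-on \(ZZ\) term concrete, the following toy calculation (our own illustration; the \(50\,\mathrm{kHz}\) rate is an assumed value, not a measurement) evolves two idle qubits prepared in \(|+\rangle|+\rangle\) under \(H=(\zeta/2)ZZ\) and tracks how unwanted entanglement accumulates on the spectator:

```python
import numpy as np

zeta = 2 * np.pi * 50e3               # assumed ZZ rate of 50 kHz (illustrative)
Z = np.diag([1.0, -1.0])
H = (zeta / 2) * np.kron(Z, Z)        # always-on ZZ coupling, diagonal
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi0 = np.kron(plus, plus)            # both qubits idle in |+>

for t in [0.0, 1e-6, 2.5e-6, 5e-6]:   # seconds of idle time
    psi = np.exp(-1j * np.diag(H) * t) * psi0          # exact evolution
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_q0 = np.trace(rho, axis1=1, axis2=3)           # trace out qubit 1
    p = np.linalg.eigvalsh(rho_q0)
    S = -sum(x * np.log2(x) for x in p if x > 1e-12)   # entanglement entropy
    print(f"t = {t * 1e6:3.1f} us  S = {S:.3f}")       # grows from 0 toward 1
```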
## II Background

### _Randomized Benchmarking_

Characterizing the noise affecting a quantum system is useful in many ways. It allows for many optimizations to a workload's execution on the system and for good error-correction schemes. Randomized Benchmarking (RB) is a widely-used, scalable technique for partially characterizing a quantum system's noise. It is used by quantum hardware developers to benchmark known gate sets such as Clifford and CNOT-Dihedral [31, 32, 33, 34]. The RB protocol can be summarized in four steps [32]:

**Step 1:** \(K\) sequences of different lengths (\(m\)) are generated. Each sequence consists of random gates from a specific gate set (e.g., Clifford) and a computed inverse to return the qubits to their initial state.

**Step 2:** The sequences are executed on the hardware under investigation. Each sequence is modeled for later processing with a variable \(S_{\mathrm{i_{m}}}\) that accounts for the error rate of each operation in the sequence.

**Step 3:** The survival probability \(\mathrm{Tr}[E_{\psi}S_{\mathrm{i_{m}}}(\rho_{\psi})]\) of each of the \(K\) sequences is measured, where \(\rho_{\psi}\) is the initial state (taking into account initial state preparation errors), and \(E_{\psi}\) is the positive operator-valued measurement (POVM) that takes into account measurement errors. The average fidelity for the sequences \(K_{m}\) is then calculated

\[F_{\mathrm{seq}}(m,|\psi\rangle)=\mathrm{Tr}[E_{\psi}S_{\mathrm{K_{m}}}(\rho_{\psi})] \tag{1}\]

where the average sequence operation \(S_{\mathrm{K_{m}}}=\frac{1}{K_{m}}\sum_{\mathrm{i_{m}}}S_{\mathrm{i_{m}}}\).

**Step 4:** The experiment is repeated for different sequence lengths (\(m\)). The average sequence fidelities obtained in Step 3 are fitted to

\[F_{\mathrm{seq}}^{(0)}(m,|\psi\rangle)=A_{0}\alpha^{m}+B_{0} \tag{2}\]

where \(F^{(0)}\) is the gate-independent and time-independent "simpler" fitting model. The coefficients \(A_{0}\) and \(B_{0}\) absorb the state preparation and measurement errors. The average error rate or Error per Clifford (EPC) is determined by the parameter \(\alpha\) through

\[\mathrm{EPC}=1-\alpha-\frac{1-\alpha}{2^{n}} \tag{3}\]

where \(n\) is the number of qubits.

_Simultaneous_ RB (SRB) [35], which consists of RB experiments run simultaneously on sets of qubits, allows for further investigations of a system's noise properties by comparing to RB experiments run individually. It enables the measurement of crosstalk and "conditional" error rates: gate errors on a qubit while nearby qubits are active.

Fig. 1: **(a)** The coupling map of _ibmq_guadalupe_. **(b)** Simultaneous Randomized Benchmarking (SRB) for CNOT\({}_{0,1}\) and CNOT\({}_{2,3}\) with \(\{C_{0},...,C_{m}\}\) being the random Cliffords and \(C_{m+1}\) the inverting Clifford. The mapping of the gates on the backend is indicated by the green and red highlighting in (a). Operations indicated by the colored regions can be run in parallel as they do not share resources. Note also that the coupling between qubits \(1\) and \(2\) cannot be used when both qubits are busy.

Utilizing SRB to control and handle errors arising from crosstalk and unwanted interactions in multi-qubit systems has been proposed in [20]. We further explain their proposed methodology in the next section.
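A minimal sketch of Step 4 on synthetic data (the "measured" fidelities below are simulated, not hardware results): fit the zeroth-order model (2) with scipy and convert the decay parameter \(\alpha\) to an EPC via (3).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
n = 2                                              # number of qubits
m = np.array([1, 5, 10, 25, 50, 100, 150, 200])    # sequence lengths
alpha_true, A0_true, B0_true = 0.985, 0.72, 0.25
F = A0_true * alpha_true**m + B0_true + rng.normal(0, 0.005, m.size)

def model(m, A0, alpha, B0):
    return A0 * alpha**m + B0                      # Eq. (2)

(A0, alpha, B0), _ = curve_fit(model, m, F, p0=[0.7, 0.98, 0.25])
epc = (1 - alpha) - (1 - alpha) / 2**n             # Eq. (3)
print(f"alpha = {alpha:.4f}, EPC = {epc:.5f}")
```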
### _Expressibility, Trainability & Entanglement_

To evaluate whether a PQC can prepare the target quantum state, different metrics have been proposed [36, 37, 38, 9, 13]. In this section, we describe three qualitative metrics used in this paper to estimate a PQC's _expressibility_, _trainability_, and _entanglement_.

**Expressibility** is a metric first proposed by Sim _et al._ [36] to evaluate a PQC's ability to produce quantum states that closely represent the Hilbert space. This is done by comparing the distribution of states obtained from a PQC's parameterized unitary to the maximally expressive uniform _(Haar)_ random states. It is estimated using the Kullback-Leibler (KL) [39] divergence as follows

\[\mathrm{Expr}=D_{\mathrm{KL}}(\hat{P}_{\mathrm{PQC}}(F;\vec{\theta})\,\|\,P_{\mathrm{Haar}}(F)) \tag{4}\]

where \(P_{\mathrm{Haar}}(F)\) is the probability distribution of fidelities \(F\) for the Haar random state and \(\hat{P}_{\mathrm{PQC}}(F;\vec{\theta})\) is the probability distribution of the fidelities of quantum states prepared by the PQC. As a divergence measure, a smaller expressibility value indicates a more expressive circuit.

**Trainability**: the trainability of PQCs is crucial for achieving good performance in VQAs. However, simply increasing the expressiveness of a PQC does not always lead to better performance. It is crucial to characterize the optimization landscape of a VQA and use efficient training routines to ensure good performance. Interestingly, perfectly expressive ansatze often have flatter optimization landscapes and are less trainable [38]. The trainability of PQCs, particularly hardware-efficient ones, has been studied extensively, with earlier work by McClean _et al._ [40] demonstrating the phenomenon of barren plateaus, where gradients vanish exponentially as the number of qubits increases. This observation has been further investigated by Cerezo _et al._ [15] to show that the occurrence of barren plateaus is cost-function-dependent for shallow ansatze. Other factors, such as noise and entanglement, can also impact barren plateaus [41, 42, 43]. In Section V-A2, we evaluate our PQCs' trainability using cost-function-dependent barren plateau analysis. Specifically, we calculate \(Var[\partial_{i}C]\), which represents the variance of the partial derivative of a cost function \(C\) with respect to parameter \(\theta_{i}\) for \(n\) sampled circuits. The magnitude of the variance reflects the concentration of the partial derivative around zero, with smaller values indicating less trainability for the PQC.

**Entanglement** measurement quantifies the amount of entanglement contained in a quantum state. Highly-entangled PQCs can capture non-trivial correlations in quantum data and efficiently represent solution spaces for tasks like ground state preparation or data classification [44, 45, 2, 36, 10]. However, excessive entanglement can lead to concentration of measure, making PQCs too random and less trainable. In recent works, entanglement has been investigated as a primary source of barren plateaus, and its tradeoffs with trainability vary across optimization problems [42, 43]. Thus, a comprehensive understanding of the role of entanglement in VQAs is important [46, 47, 48, 49]. In this paper, we use the bipartite entanglement entropy, which is the Von Neumann entropy of the reduced density matrix of either of the subsystems, to estimate the spread \(S\) of circuit entanglement

\[S=-\mathrm{Tr}[\rho_{\alpha}\log_{2}\rho_{\alpha}] \tag{5}\]

where \(\rho_{\alpha}\) is the reduced density matrix of \((n-1)/2\) connected qubits containing as many cost function qubits as possible [43].
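The expressibility estimate (4) can be sketched as follows, using a deliberately limited product ansatz so the divergence from Haar is visibly non-zero; the ansatz, bin count, and sample size are illustrative choices of ours, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
n_qubits, n_pairs, n_bins = 3, 20000, 50
N = 2**n_qubits

def sample_state():
    """Product ansatz RY(theta_i)|0> on each qubit -- clearly not Haar-random."""
    state = np.array([1.0])
    for theta in rng.uniform(0, 2 * np.pi, n_qubits):
        state = np.kron(state, [np.cos(theta / 2), np.sin(theta / 2)])
    return state

# Fidelities between independently sampled state pairs.
fid = np.array([np.abs(sample_state() @ sample_state())**2 for _ in range(n_pairs)])
p_pqc, edges = np.histogram(fid, bins=n_bins, range=(0, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
p_haar = (N - 1) * (1 - centers)**(N - 2)   # Haar fidelity distribution

mask = (p_pqc > 0) & (p_haar > 0)
expr = np.sum((1.0 / n_bins) * p_pqc[mask] * np.log(p_pqc[mask] / p_haar[mask]))
print(f"Expr ~ {expr:.3f}   (closer to 0 means more expressive)")
```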
## III Crosstalk Characterization using SRB

Following the method proposed by Murali _et al._ [20], we characterize quantum devices' crosstalk using SRB. We focus on the effect of crosstalk on simultaneous two-qubit gates only in this paper, as we are interested in approximating the entangling layers of a PQC, which are normally comprised of two-qubit gates only. Note that this is different from crosstalk characterization at the two-qubit Hamiltonian level, achieved through tomography-based techniques [50, 19]. Here, we are examining the quantum device's crosstalk behavior at the gate level.

Fig. 1(a) shows the coupling map of IBM's \(16\)-qubit device _ibmq_guadalupe_, which we use to demonstrate our technique. We utilize Qiskit Experiments' [51] RB infrastructure for our experiments. First, we perform _Interleaved_ RB (IRB) to measure the independent error rates of CNOT gates applied on each backend pair, with no operations executed in parallel. IRB interleaves the gate under investigation (CNOT) with multiple random sequences generated from the gate set [31, 32, 33]. As we discussed in Section II-A, the measured results are fitted to a theoretical model that accounts for the measured qubits' ground state population, from which a CNOT's error is estimated. The results from this experiment are shown in Fig. 2(a).

Fig. 2: **(a)** Individual Error-per-Clifford (EPC) rates for CNOTs executed on _ibmq_guadalupe_. **(b)** Conditional Error-per-Clifford (EPC) rates when two CNOTs are executed simultaneously. Cells are shaded according to the relative conditional-to-independent error rates.

Next, we perform SRB by performing IRB on two CNOTs simultaneously, as shown in Fig. 1(b). Two operations can be parallelized if they do not share a quantum resource (both qubits and couplers in this case), as shown by the colored links in Fig. 1(a). SRB allows us to measure the conditional error rate of \(\text{CNOT}_{0,1}\) in the presence of \(\text{CNOT}_{2,3}\) and vice versa. When the conditional error rate is higher than the independent rate, we generally attribute that difference to crosstalk interference. Fig. 2 illustrates the _independent_ and _conditional_ error rates for CNOT gates executed on _ibmq_guadalupe_. The experiment reveals that multiple parallel CNOTs incur crosstalk at different levels of severity that can degrade their error rate by up to \(3.14\times\).

The number of experiments is reduced by performing SRB only on CNOT pairs that are one hop away from each other on the coupling map. Experimental results from [20] show that crosstalk noise is significant only at this distance for IBM machines, which comes as a natural result of the device's limited connectivity. Additionally, results from [52] demonstrated that SRB with simultaneous single- and two-qubit gates shows minor changes in error rates due to crosstalk. However, the method is still applicable if such a level of characterization is desired.
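As a small post-processing sketch (the numbers are illustrative placeholders, not _ibmq_guadalupe_ data), the conditional-to-independent ratios that shade Fig. 2(b) can be tabulated directly from the two sets of EPCs:

```python
independent_epc = {(0, 1): 0.0082, (2, 3): 0.0104}   # from IRB, illustrative
conditional_epc = {                                   # from SRB: EPC of `gate` while `ctx` runs
    ((0, 1), (2, 3)): 0.0258,
    ((2, 3), (0, 1)): 0.0121,
}

for (gate, ctx), epc in conditional_epc.items():
    ratio = epc / independent_epc[gate]
    flag = "  <-- significant crosstalk" if ratio > 1.5 else ""
    print(f"CNOT{gate} | CNOT{ctx}: {ratio:.2f}x independent EPC{flag}")
```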
## IV Crosstalk-Based PQC Approximation

In this section, we describe our PQC approximation approach in detail. Our goal is to integrate crosstalk in the design process of PQCs. Such integration allows for further understanding and evaluation of the effects of noise in general, and crosstalk in particular, on the performance of VQAs and on characteristics of PQCs such as _expressibility_ [36] and _trainability_ [15, 40]. In Section V, we demonstrate how crosstalk closely affects all of these aspects. Our approach is aimed at Hardware Efficient Ansatze (HEA), as they do not typically encode any problem-specific data in their structure, allowing for more approximation and rearrangement flexibility.

### _Approximation to Alternating Layered Ansatz_

The first step in this approach is to break any ordering or dependency constraints in the PQC's entangling layer. This is achieved in Qiskit's transpiler [53] by parsing the PQC's Directed Acyclic Graph (DAG) representation and identifying all operations in its first entangling layer (or sub-layer). Next, the operations are scheduled on the backend's qubits (mapped) with the maximum allowed parallelism, disregarding their ordering or commutativity constraints, resulting in a transformed DAG. In other words, this step approximates the PQC to a structure similar to an Alternating Layered Ansatz (ALA) [15, 40, 54]. Fig. 3(a) and (b) show an example of a PQC configuration before and after applying this approximation step.

### _Overview of Crosstalk-Adaptive Scheduling_

In this section, we give an overview of _XtalkSched_ [20], a crosstalk-adaptive scheduling algorithm that aims to mitigate the impacts of crosstalk and decoherence on a quantum program simultaneously. We employ this algorithm to extract crosstalk-mitigated sub-layers from the approximated PQC we obtained in the previous step. XtalkSched models scheduling of the quantum circuit as a constrained optimization problem and solves it using Satisfiability Modulo Theory (SMT) [55]. Its cost function incorporates crosstalk data, program dependencies, and machine calibration (independent error rates, coherence time, and gate duration).

To model crosstalk error, the optimizer connects the independent gate error rates with the different overlap scenarios that can happen when multiple gates are executed simultaneously. It does so by creating an overlap set for each gate, _Olap_(\(g_{i}\)), that tracks all gates that can possibly overlap with it. Each gate pair is then assigned an overlap indicator \(\sigma_{ij}\) that is set to \(1\) when gates \(g_{i}\) and \(g_{j}\) overlap and \(0\) otherwise. These overlap indicators are used to formulate the gate error constraints. For example, consider a scenario where _Olap_(\(g_{1}\))\(=\{g_{2},g_{3}\}\), which creates four possible scheduling scenarios (see [20, eq. (3)-(6)]). As such, the overlapping scenario will determine whether the optimizer picks an independent error rate for \(g_{1}\) (i.e., \(E(g_{1})\)) or a conditional error rate (e.g., \(E(g_{1}|g_{2})\)), which represents the error rate of \(g_{1}\) in the presence of \(g_{2}\). For any overlap scenario, the scheduler is configured to pick the maximum error rate possible for a gate.

Qubit decoherence errors are accounted for by computing the _lifetime_ of each qubit in the schedule, \(q_{i}.t\), which is the difference between the start time of the first operation and the finish time of the last one executed on \(q_{i}\). If a program performs a computation for time \(t\) on a qubit, the probabilities of error from \(T_{1}\) and \(T_{2}\) losses are \((1-e^{-t/T_{1}})\) and \((1-e^{-t/T_{2}})\), respectively. With that, the decoherence error is calculated as

\[q_{i}.\epsilon=1-e^{-\frac{q_{i}.t}{q_{i}.T}} \tag{6}\]

where \(T\) is the minimum of \(T_{1}\) and \(T_{2}\), corresponding to the maximum available compute time on \(q_{i}\).

Fig. 3: **(a)** The base PQC. **(b)** The PQC after applying the approximation step described in Section IV-A. The PQCs **(c)**, **(d)**, and **(e)** show the approximated PQC after applying Xtalk scheduling with **high**, **medium**, and **low** crosstalk tolerance respectively.
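A minimal sketch of the lifetime bookkeeping behind (6), assuming per-operation start/finish times are already known from the schedule; the times and \(T_{1}/T_{2}\) values are made up for illustration.

```python
import numpy as np

T1, T2 = 102.7e-6, 108.1e-6                  # seconds, illustrative
T = min(T1, T2)                              # q.T in Eq. (6)

# (start, finish) times of every operation touching qubit q, in seconds.
ops_on_q = [(0.0, 0.35e-6), (0.5e-6, 0.85e-6), (2.0e-6, 2.35e-6)]
lifetime = max(f for _, f in ops_on_q) - min(s for s, _ in ops_on_q)  # q.t
eps_dec = 1 - np.exp(-lifetime / T)          # Eq. (6)
print(f"q.t = {lifetime * 1e6:.2f} us, q.eps = {eps_dec:.5f}")
```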
The optimizer also adds constraints to satisfy data dependencies by ensuring that if two gates \(g_{i}\) and \(g_{j}\) operate on the same set of qubits, the program order is satisfied. Finally, the overall cost function can be represented as follows

\[\min\ \Bigg(\omega\underbrace{\sum_{\forall g\in G}g.\epsilon}_{\text{Gate errors (Crosstalk)}}\ +\ (1-\omega)\underbrace{\sum_{\forall q\in Q}(q.t/q.T)}_{\text{Decoherence errors}}\Bigg) \tag{7}\]

where the first term aims to minimize the gate error \(g.\epsilon\) of each gate from the program gates \(G\). The second term minimizes the decoherence error of each of the program qubits \(Q\) according to (6). Finally, \(\omega\in[0,1]\) is a user-set parameter controlling the weight (importance) of each term, which can be tuned per application to balance between gate errors and decoherence and achieve better results.

### _Alternating Crosstalk-Mitigated Layers_

In this step, we apply XtalkSched to the approximated circuit obtained in the first step (Section IV-A) to extract "crosstalk-mitigated" sub-layers that we can use in our Xtalk PQCs. To enable the scheduler to accurately provide us with different levels of crosstalk mitigation, we modify two parameters:

* First, we increase the threshold used to calculate each gate's _Olap_ set. This ensures that all conditional-to-independent error ratios larger than \(1\) are accounted for, essentially making the scheduler more sensitive to crosstalk.
* With all possible overlaps now considered, varying the value of \(\omega\) in (7) gives us the desired outcome of different levels of crosstalk mitigation. Dropping \(\omega\) to \(0\) forces the scheduler to create sub-layers with maximum parallelism, as the cost function will only optimize for decoherence. Increasing \(\omega\) balances between the two error terms up until \(1\), at which point the scheduler optimizes for gate errors only (including crosstalk) and completely serializes the execution (a toy illustration of this trade-off follows below).

Once the schedules are obtained, the scheduler adds controls in the form of _barriers_ as a post-processing step. This is important to our approach as it facilitates extracting the sub-layers from the scheduled IR. Figures 3(c), (d), and (e) show the result of scheduling the approximated circuit with three levels of crosstalk tolerance: _high_ (\(\omega=0\)), _medium_ (\(\omega=0.5\)), and _low_ (\(\omega=1\)) respectively.
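The promised toy illustration of the \(\omega\) trade-off in (7): given pre-computed gate errors (conditional where gates overlap) and qubit lifetime ratios for two candidate schedules, sweeping \(\omega\) flips the scheduler's preference from maximum parallelism to full serialization. All numbers are invented for illustration.

```python
def cost(gate_errors, lifetime_ratios, omega):
    """Eq. (7): omega-weighted sum of gate-error and decoherence terms."""
    return omega * sum(gate_errors) + (1 - omega) * sum(lifetime_ratios)

# Fully parallel: crosstalk-inflated (conditional) gate errors, short lifetimes.
# Serialized: independent gate errors, but longer qubit lifetimes.
parallel   = dict(gate_errors=[0.026, 0.012], lifetime_ratios=[0.004, 0.004])
serialized = dict(gate_errors=[0.008, 0.010], lifetime_ratios=[0.009, 0.009])

for omega in (0.0, 0.5, 1.0):
    name, _ = min(("parallel", parallel), ("serialized", serialized),
                  key=lambda kv: cost(omega=omega, **kv[1]))
    print(f"omega = {omega}: scheduler picks the {name} schedule")
```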
Fig. 5 shows two base configurations we use in our evaluations (Section V). Fig. 5(b) shows the single-layer configuration for \(\textit{base}_{1}\), which contains the base entanglement layer we used in the previous step (before approximation), while Fig. 5(c) shows the single-layer configuration for \(\textit{base}_{2}\), which has the approximated entanglement layer (before applying XtalkSched).

With the sub-layers obtained from the scheduler, we can construct our _Xtalk_pqcs_ as shown in Fig. 4. The first \(m\) layers of the PQC are from the base configuration. As _Xtalk_pqcs_ can possibly have dispersed connectivity across a large number of layers, we add this option to help the optimized PQCs achieve better expressibility and entanglement. Our experimental analysis revealed that parameters such as expressibility saturate within \(3\)-\(5\) layers. Therefore, we add up to \(5\) base layers to our Xtalk circuit to make it more expressive. The rest of the circuit is constructed by alternating between single-qubit rotation layers and the \(R\) sub-layers obtained from XtalkSched, repeated \(L-m\) times, where \(L\) is the total number of layers.

Fig. 4: The _Xtalk_pqc_ configuration. \(m\) specifies the number of base layers. \(R\) is the number of sub-layers obtained by the scheduler and is determined by the configured crosstalk mitigation level. High crosstalk mitigation will lead to larger \(R\) and vice-versa.

Fig. 5: **(a)** Base PQC configuration. **(b)** and **(c)** show the layer configuration for \(\textit{base}_{1}\) and \(\textit{base}_{2}\), respectively.

## V Results and Evaluation

In this section, we first analyze different PQC configurations for expressibility, trainability, and entanglement. Next, we evaluate other circuit parameters such as duration, depth, and gate counts. Finally, we assess the PQCs' VQE performance on real hardware for two quantum chemistry benchmarks. We evaluate five PQC configurations: _base\({}_{1}\)_, _base\({}_{2}\)_, _high_Xtalk_, _medium_Xtalk_, and _low_Xtalk_. Fig. 5 shows the configuration for _base_ PQCs. The _Xtalk_ circuits {_high_Xtalk_, _medium_Xtalk_, _low_Xtalk_} follow the configuration shown in Fig. 4. The sub-layers are obtained by applying different levels of crosstalk mitigation, with _high_Xtalk_, for example, corresponding to the lowest level of mitigation (Fig. 3(c)) and so on.

### _Expressibility, Trainability, and Entanglement_

#### V-A1 Expressibility

We first evaluate the effect of adding base layers to our _Xtalk_pqcs_ on their expressibility. As mentioned in Section IV-C, the sparsely connected sub-layers used in _Xtalk_pqcs_ can possibly produce less expressive ansatze. An easy solution to this would be to add a few layers from the more expressive base configuration. Fig. 6 shows the % increase in expressibility due to adding two base\({}_{1}\) layers (\(m=2\)) for different \(5\)- and \(7\)-qubit _Xtalk_pqcs_. We pick a value of \(2\) as our empirical analysis reveals that expressibility for various ansatze nears saturation at a number of layers in the range of \(3\) to \(5\). We see the highest increase in expressibility for shallower PQCs utilizing the sparsely connected _Medium_ and _Low_Xtalk_ sub-layers (\(79.7\%\) and \(93.91\%\), respectively). We also see that the \(7\)-qubit _Medium_Xtalk_ PQC with (\(m=0\)) is less susceptible to the addition of base layers than its \(5\)-qubit counterpart. This is an outcome of XtalkSched's performance with different circuit sizes. The \(5\)-qubit _Medium_ and _Low_Xtalk_ have very similar sub-layer structures. On the other hand, the \(7\)-qubit PQCs give the scheduler more freedom to create different levels of crosstalk sensitivity. This led to the \(7\)-qubit _Medium_Xtalk_ with (\(m=0\)) having more comparable expressibility to its (\(m=2\)) version and a lower increase. The relative difference drops as we increase the number of layers, which is expected as the expressibility of PQCs with (\(m=0\)) gradually increases. On average, the addition of \(2\) base layers increases expressibility by \(8.26\%\), \(23.98\%\), and \(32.84\%\) for _High_, _Medium_, and _Low_Xtalk_ configurations.

Fig. 7 shows the expressibility of the five PQC configurations used in our evaluations. We see that _Xtalk_pqcs_ with (\(m=0\)) achieve similar (or better) expressibility to the base configurations. However, as demonstrated in prior research [38, 56], more expressibility does not directly lead to better performance.
In fact, it can sometimes worsen a PQC's trainability [38] for deep configurations. Thus, we leave identifying the number of \(m\) layers leading to optimal expressibility as future work.

#### V-A2 Trainability

We conduct a cost-function-based trainability analysis [15] for the ground state preparation problem

\[C_{G}=1-p_{|0\rangle^{\otimes N}} \tag{8}\]

where \(N\) is the total number of qubits, and \(p_{|0\rangle^{\otimes N}}\) is the probability of measuring the \(\left|00...0\right\rangle_{N}\) state. For the local cost function, we only consider the probability of a subset of qubits

\[C_{L}=1-p_{|0\rangle^{\otimes N_{C}}} \tag{9}\]

where \(N_{C}\) is the number of cost-function qubits.

Fig. 6: The effect of adding _base\({}_{1}\)_ layers on expressibility. The bars show the % increase in expressibility for \(5\)- and \(7\)-qubit _Xtalk_pqcs_ with (\(m=2\)) over the same configurations with (\(m=0\)) across different numbers of layers.

Fig. 7: The expressibility of the different PQC configurations. The triangle and circular markers show the expressibility for _Xtalk_pqcs_ with (\(m=0\)) and (\(m=2\)), respectively.

Qiskit's QASM simulator, which we use for this analysis, is currently limited to simulating independent gate errors. This poses a challenge for analyzing the impact of crosstalk on trainability. To address this issue, we modify the simulation to enable accounting for conditional error rates. This can be cheaply done by keeping a map of "EPC multipliers" using the values indicated by the Fig. 2(b) color map. Next, for each DAG layer containing a parallel set of CNOTs, the multipliers map can be used to adjust their error rates. For example, consider a case where \(\text{CNOT}_{0,1}\) is executing in parallel with \(\text{CNOT}_{2,3}\) and \(\text{CNOT}_{4,7}\), for which it has the multipliers 1.217 and 1.006, respectively. Then, its error rate for this particular instance will be its independent \(EPC\times 1.217\times 1.006\).

We analyze the trainability for _High_, _Medium_, and _Low_Xtalk_ at different local cost function sizes (\(n_{C}\)), as shown in Fig. 8. We see that crosstalk does indeed affect trainability, with the variance of the partial gradients decreasing with increased crosstalk noise in the circuit. This suggests that barren plateaus can also be "crosstalk-induced", in addition to previous findings in [41] that only considered local Pauli noise. This observation is only available through the Xtalk-enabled simulation (blue lines in Fig. 8), as the standard QASM simulation shows little to no variation between the different PQCs. Additionally, we observe that the effect of crosstalk decreases as we increase the number of cost-function qubits (\(n_{C}>2\)), as indicated by the shrinking size of the shaded regions.

#### V-A3 Entanglement

Fig. 9(a) shows the trend of entanglement entropy for the five PQC configurations with \(9\) qubits and a \(4\)-\(5\) partition. As expected, the base configurations create more entanglement compared to the Xtalk approach. Similar to expressibility, the entanglement trend is inversely proportional to the sparsity of the _Xtalk_pqc_'s sub-layers, with _High_Xtalk_'s trend line approaching base as depth increases. As previous work [46, 47, 48, 49] states, different applications might require different levels of entanglement. A simple method to account for this entanglement loss in our _Xtalk_pqc_s, if an application necessitates it, is to increase \(m\).
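A short sketch of the bipartite entropy measurement in (5) via the Schmidt decomposition; the GHZ-like test state is our own example, chosen because its entropy across any bipartition is exactly one ebit.

```python
import numpy as np

def bipartite_entropy(psi, n_a, n_b):
    """S = -Tr[rho_A log2 rho_A] for a pure state on n_a + n_b qubits."""
    m = psi.reshape(2**n_a, 2**n_b)            # amplitudes split as A|B
    s = np.linalg.svd(m, compute_uv=False)     # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

n_a, n_b = 4, 5                                 # the 4-5 partition of 9 qubits
ghz = np.zeros(2**(n_a + n_b))
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(bipartite_entropy(ghz, n_a, n_b))         # -> 1.0
```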
Fig. 9(b) shows the trend in entanglement entropy for _Xtalk_pqc_s with (\(m=\frac{1}{3}L\)); entanglement quickly approaches the levels achieved by the base configurations.

### _Experimental Setup_

We conducted our experiments on _ibmq_guadalupe_, a \(16\)-qubit backend available through IBM Quantum Service with average \(T_{1}\) and \(T_{2}\) times of \(102.67\)\(\mu\)s and \(108.06\)\(\mu\)s respectively, and an average CNOT error rate of \(1.013\)e-\(2\) at the time of writing this paper. Note that these values fluctuate and are monitored through daily calibrations available through Qiskit. We utilize Qiskit Runtime [57], a programming model that allows for faster execution of quantum workloads on the cloud, to run our algorithm benchmarks.

Fig. 8: The change in variance of the partial cost function derivative for _High_, _Medium_, and _Low_Xtalk_ at different local cost function sizes (\(n_{C}\)), for shallow configurations (\(L=\log_{2}(N)\)), where \(N\) is the number of PQC qubits. The figure also demonstrates the effect of crosstalk on trainability through experimenting with two types of simulation: _Standard_ QASM simulation and _Xtalk-enabled_ QASM simulation as described in Section V-A2.

### _Circuit Parameters_

We record the total gate count and duration of each PQC with different configurations. We compile each configuration with three Qiskit optimization levels and report the best gate count and duration for each. Fig. 10 shows the percentage of total gate count reduction for each _Xtalk_pqc_ (with \(m=2\)) compared to _base_. It is important to note that both base configurations (_base\({}_{1}\)_ and _base\({}_{2}\)_) have very similar gate counts, as they both share the same pre-compiled number of gates and the same suitability for 1D mapping (mapping to a line of qubits). Additionally, our Xtalk-based approach does not change the number of single-qubit gates; thus, the gate reductions observed are ultimately two-qubit gate reductions. We see an expected outcome that all _Xtalk_pqcs_ reduce the number of two-qubit gates, as each layer (after the first two layers) has a number of operations less than _base_. Therefore, the percentage of reduction grows with increasing the number of layers for each configuration, as indicated by the figure. Overall, _Xtalk_pqcs_ have an average gate count reduction of \(5.7\%\), \(7.97\%\), and \(8.57\%\) for _High_, _Medium_, and _Low_Xtalk_ configurations, respectively, with up to \(21.46\%\) for the \(15\)-qubit _Low_Xtalk_ PQC. This specific higher-than-average value of \(21.46\%\) is not directly attributed to our Xtalk approach. We argue that it is due to the compiler's utilization of the reduced number of gates in its optimization approach.

Fig. 11 shows the average speedups achieved by _base\({}_{2}\)_, _High_, _Medium_, and _Low_Xtalk_ compared to _base\({}_{1}\)_. We first note the difference between _base\({}_{1}\)_ and _base\({}_{2}\)_. Unlike total gate count, _base\({}_{2}\)_ PQCs have lower depths as a result of the approximation to an alternating structure. Therefore, we see that _base\({}_{2}\)_ speeds up the execution time by an average of \(1.23\times\) at different PQC sizes and up to \(1.48\times\). _Xtalk_pqcs_, on the other hand, achieve higher average speedups due to their lower depths compared to both base PQCs.
The average speedups are \(1.66\times\), \(1.94\times\), and \(1.88\times\) for _High_, _Medium_, and _Low_Xtalk_, respectively, with up to \(2.93\times\) and \(2.86\times\) reported for the latter two configurations at (\(L=3\), \(n=15\)). We also observe that the rate of speedup drops for the \(15\)-qubit _base\({}_{2}\)_ and _High_Xtalk_ (\(1.13\times\) and \(1.15\times\) respectively). This is due to the mapping of the circuits on the \(16\)-qubit _ibmq_guadalupe_. As the circuit size nears the backend's total number of qubits, the compiler will be unable to perform 1D mapping. Therefore, it is more likely to add SWAP gates to satisfy all operations in _base\({}_{2}\)_ and _High_ compared to _Medium_ and _Low_Xtalk_, which are easier to map to the limited connectivity due to their lower number of operations.

### _Algorithm Performance_

We evaluate the performance of the PQCs for finding the ground state energy of the H\({}_{2}\) and LiH molecules through VQE, which corresponds to finding the minimum eigenvalue of Hermitian matrices characterizing these molecules. We configured our experiments and PQCs as follows. We ran all our benchmarks on _ibmq_guadalupe_ accessed through IBM Cloud. We picked the Simultaneous Perturbation Stochastic Approximation (SPSA) [58] as the classical optimizer, with a maximum of \(100\) iterations. The number of PQC layers \(L\) was set to \(5\) for both benchmarks, with (\(m=2\)) for _Xtalk_pqcs_. We obtained both Hamiltonians using the Bravyi-Kitaev (BK) fermionic mapping technique [59] with Active-Space reduction [60], resulting in \(4\)- and \(6\)-qubit Hamiltonians for H\({}_{2}\) and LiH, respectively. We chose BK-based Hamiltonians over other mapping techniques (e.g., Jordan-Wigner or Parity [61]) that result in lower-qubit Hamiltonians and, in return, might perform better on hardware [50]. The reason for this choice is that our three Xtalk variants (_High_, _Medium_, and _Low_) are more distinguishable at larger PQC sizes.1

Fig. 9: The trend lines of entanglement entropy \(S\) for the five PQC configurations vs. number of layers. **(a)** shows the entropy's trend with a fixed number of _base_ layers for the _Xtalk_pqcs_ while **(b)** shows the trend with a number of _base_ layers that equals \(1/3\) of total layers.

Figs. 12(a) and 12(b) show VQE results for the H\({}_{2}\) and LiH molecules respectively. Although none of the PQC configurations reaches the target ground state energies, _Xtalk_pqcs_ clearly outperform base configurations for both benchmarks. We make two observations from the figures. First, both figures confirm the advantage of our Xtalk approach and the effect of crosstalk mitigation on algorithm performance. Second, Fig. 12(b) shows that the best performing PQC is _Medium_Xtalk_. This suggests that the level of crosstalk-based approximation should be tailored to each application to achieve the best results, which we leave as future work.

Footnote 1: The experiment's primary goal is to explore the differences between Xtalk variants, not to get the most chemically accurate result.
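The SPSA update used as the classical optimizer can be sketched in a few lines; the quadratic toy cost stands in for the measured VQE energy, and the gain schedules below are the standard textbook choices, not necessarily the ones used in our runs.

```python
import numpy as np

rng = np.random.default_rng(3)
cost = lambda theta: float(np.sum((theta - 1.0)**2))   # toy energy landscape

theta = np.zeros(8)                    # e.g., one angle per parameterized gate
for k in range(1, 101):                # "maximum of 100 iterations"
    a_k = 0.2 / k**0.602               # standard SPSA gain schedules
    c_k = 0.1 / k**0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)   # random perturbation
    # Two cost evaluations per iteration, regardless of dimension.
    g_hat = (cost(theta + c_k * delta) - cost(theta - c_k * delta)) / (2 * c_k) * delta
    theta -= a_k * g_hat
print(cost(theta))                     # approaches 0
```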
## VI Related Work

Crosstalk-based approaches for compilation and execution have been investigated in [20, 21, 62]. Niu _et al._ [62] investigated parallel execution techniques in noisy quantum hardware by comparing state-of-the-art methods and discussing their shortcomings. Consequently, they proposed a Quantum Crosstalk-aware Parallel workload execution method (QuCP) with no crosstalk characterization overhead, and additionally utilized their method to parallelize Zero Noise Extrapolation (ZNE) workloads and reduce their cost. Ding _et al._ [21] introduced a systematic methodology for software mitigation of crosstalk due to the frequency crowding phenomenon. Their strategy allows fixed-coupler architectures to reach levels of reliability matching those of tunable-coupler architectures, thus simplifying quantum machines' fabrication. While this work trades parallelism for higher gate fidelity when needed, it dramatically improves the resilience of tunable qubits in fixed-coupler hardware.

Hardware-oriented compilation and approximation for VQAs have gained much attention recently [63, 64, 65]. Wang _et al._ [63] proposed a noise-adaptive co-search framework for variational circuits and qubit mapping, which utilizes iterative pruning to remove redundant gates in the searched circuits. Their investigation suggested several routes for more theoretical and experimental exploration in variational quantum algorithms, one of which is optimizing the variational ansatz to alleviate barren plateaus. Patel _et al._ [64] proposed a method for reducing CNOT gate count by generating approximations for quantum circuits through partitioning for scalability. Their method reduces the circuit length with approximate synthesis while improving fidelity by running circuits representing key samples in the approximation space. The work proposed by Li _et al._ [65] leverages Pauli strings to identify program components and introduce optimizations at the algorithm, compiler, and hardware levels for a family of chemistry problems.

## VII Conclusions

In this paper, we examined a new approach to embedding a machine's characteristics in the design of VQA ansatze. We developed a strategy to approximate PQCs to a more hardware-efficient version by utilizing the hardware's crosstalk characteristics. Our approach aims at creating a version of the ansatz that inherently mitigates crosstalk by utilizing crosstalk-based scheduling. The methodology can be used to create approximated PQCs with various levels of crosstalk mitigation. Our analysis shows that crosstalk mitigation enhances the performance of VQE. We utilized a combination of hardware and algorithmic PQC analysis parameters to evaluate our Xtalk approach. Our results demonstrate that the Xtalk approach maintains similar expressibility to a pre-approximated _base_ and is more trainable for a local cost function, all while speeding up the execution by an average of \(1.83\times\) (up to \(2.93\times\)) and reducing the total gate count by an average of \(7.9\%\) (up to \(21.46\%\)). Moreover, our algorithm performance results show that _Xtalk_pqcs_ clearly outperform _base_ for estimating the ground state of two chemical molecules using VQE. Furthermore, the results hint that, although _Xtalk_pqcs_ generally perform better than _base_, the level of crosstalk mitigation used to construct a _Xtalk_pqc_ is not directly proportional to its algorithmic performance. Therefore, a method that closely ties the approximation degree to the application's performance will be explored as future work.

Fig. 10: Decrease (%) of total gate count over _base_ for different _Xtalk_pqcs_ with (\(m=2\)).

Fig. 11: Average speed-up compared to _base\({}_{1}\)_ for _base\({}_{2}\)_ and _Xtalk_pqcs_ with (\(m=2\)) at different circuit sizes.
## VIII Acknowledgement

M.I. would like to thank the NSF QISE-NET Fellowship for funding through the grant DMR 17-47426, and the IBM Quantum Hub at NC State for access to _ibmq_guadalupe_.
2303.09900
Stability of Rankin-Selberg local $γ$-factors for split classical groups: the symplectic case
Given a split classical group of symplectic type and a split general linear group over a local field $F$, we use Langlands-Shahidi method to construct their Rankin-Selberg local $\gamma$-factors and prove the corresponding analytic stability for generic representations. The idea generalizes the work of J. Cogdell, F. Shahidi, T.-L. Tsai in 2017 and D. She in 2023 in the study of asymptotic behaviors of partial Bessel functions. Different from the known cases, suppose $P=MN$ is the maximal parabolic subgroup with Levi component $M\simeq \mathrm{GL}_r\times\mathrm{Sp}_{2m}$ that defines the local factors; then the action of the maximal unipotent subgroup of $M$ on $N$ has non-trivial stabilizers, and the space of integration for the corresponding local coefficient is no longer isomorphic to a torus. We will separate its toric part out in our cases and show that it plays the same role as the torus over which the integral representing the local coefficient is taken in the known cases. This is a new phenomenon with sufficient generality and we believe that it may provide us with a possible direction towards a uniform proof of stability of Langlands-Shahidi $\gamma$-factors in our future work.
Taiwang Deng, Dongming She
2023-03-17T11:30:19Z
http://arxiv.org/abs/2303.09900v1
Stability of Rankin-Selberg local \(\gamma\)-factors for split classical groups: the symplectic case

###### Abstract.

Given a split classical group of symplectic type and a split general linear group over a local field \(F\), we use Langlands-Shahidi method to construct their Rankin-Selberg local \(\gamma\)-factors and prove the corresponding analytic stability for generic representations. The idea generalizes the work of J. Cogdell, F. Shahidi, T.-L. Tsai in 2017 and D. She in 2023 in the study of asymptotic behaviors of partial Bessel functions. Different from the known cases, suppose \(P=MN\) is the maximal parabolic subgroup with Levi component \(M\simeq\mathrm{GL}_{r}\times\mathrm{Sp}_{2m}\) that defines the local factors; then the action of the maximal unipotent subgroup of \(M\) on \(N\) has non-trivial stabilizers, and the space of integration for the corresponding local coefficient is no longer isomorphic to a torus. We will separate its toric part out in our cases and show that it plays the same role as the torus over which the integral representing the local coefficient is taken in the known cases. This is a new phenomenon with sufficient generality and we believe that it may provide us with a possible direction towards a uniform proof of stability of Langlands-Shahidi \(\gamma\)-factors in our future work.

###### Contents

* 1 Introduction
* 2 Construction of the Rankin-Selberg local \(\gamma\)-factors
* 3 A Bruhat decomposition
* 4 The orbit space \(Z^{0}_{M}U_{M}\backslash N\) and its invariant measure
* 5 Local coefficient as Mellin transforms of partial Bessel functions
* 6 Asymptotics of partial Bessel integrals
  * 6.1 Some general properties of partial Bessel integrals
  * 6.2 Asymptotic expansion and uniform smoothness
  * 6.3 The final local coefficient formula and the separation of the toric part
* 7 Proof of Stability

## 1. Introduction

Let \(M\) be a connected reductive group defined over a local field \(F\), and \(\pi\) an irreducible admissible representation of \(M(F)\). Fix a non-trivial additive character \(\psi\) of \(F\). In the study of Langlands functoriality, or more precisely the L-function theory, one needs to consider a local \(\gamma\)-factor \(\gamma(s,\pi,r,\psi)\) for any finite dimensional complex representation \(r\) of the Langlands L-group \({}^{L}M=\hat{M}\rtimes\Gamma_{F}\), where \(\Gamma_{F}=\mathrm{Gal}(\overline{F}/F)\), or \(W_{F}\), the local Weil group of \(F\). The general definition of \(\gamma(s,\pi,r,\psi)\) is unknown so far, but in special cases there are many methods of constructing them, a very important one among which is the Langlands-Shahidi method [8], [10]. It defines the local \(\gamma\)-factors \(\gamma^{sh}(s,\pi,r,\psi)\) when \(\pi\) is \(\psi\)-generic, namely, \(\pi\) admits a non-zero Whittaker model, \(M\) appears as a Levi subgroup of a larger reductive group \(G\), and \(r\) is an irreducible constituent of the adjoint action of \({}^{L}M\) on the Lie algebra of \({}^{L}N\), the L-group of \(N\), where \(N\) is the unipotent radical of the parabolic \(P=MN\). The local \(\gamma\)-factors \(\gamma(s,\pi,r,\psi)\), once defined, are related to the local \(\epsilon\)- and L-factors in the following way:

\[\gamma(s,\pi,r,\psi)=\epsilon(s,\pi,r,\psi)\frac{L(1-s,\tilde{\pi},r)}{L(s,\pi,r)}\]

The local arithmetic and analytic \(\gamma\)-factors are expected to satisfy some stable equality under highly ramified twists, called the arithmetic and analytic stability respectively. The arithmetic stability is fully proved by P. Deligne [5], but the analytic stability is only known for certain cases.
To be precise, if \(\pi_{1}\) and \(\pi_{2}\) are irreducible admissible representations of \(M(F)\) sharing the same central character, then for a highly ramified character \(\chi\) of \(F^{\times}\), regarded as a character of \(M(F)\) via \(m\mapsto\chi(\det\mathrm{Ad}_{\mathfrak{n}}(m))\), where \(\mathrm{Ad}:M\to\mathrm{GL}(\mathfrak{n})\) is the adjoint representation and \(\mathfrak{n}:=\mathrm{Lie}(N)\), we expect to have

\[\gamma(s,\pi_{1}\otimes\chi,r,\psi)=\gamma(s,\pi_{2}\otimes\chi,r,\psi).\]

We care about the analytic stability since it serves as an important intermediate step to prove many crucial results in the Langlands program, for example local converse theorems, the functoriality conjecture, and the local Langlands correspondence. Many results are known in these areas. We apologize for not being able to list all of them due to the large number of authors and their work in these directions.

The analytic stability of local \(\gamma\)-factors is known in many cases: for \(M=\mathrm{GL}_{n}\times\mathrm{GL}_{m}\) by Jacquet & Shalika [6]; for \(M\) an \(F\)-split classical group, either of symplectic or orthogonal type, by Rallis & Soudry [7] via the doubling method; for \(M=\mathrm{GL}_{n}\), \(r=\mathrm{Sym}^{2}\) or \(\wedge^{2}\), by Cogdell, Shahidi & Tsai [4]; for the twisted symmetric and exterior square local factors of \(\mathrm{GL}_{n}\), by She [13]; for Asai local factors, by Shankman [11]; for \(M=\mathrm{GL}_{2}\), \(r=\mathrm{Sym}^{3}\), by Shankman & She [12]; for \(M\) a general quasi-split reductive group, by Cogdell, Piatetski-Shapiro & Shahidi [3], but under the assumptions that \(\dim(U_{M}\backslash N)=2\) and \(\mathrm{rank}\{Z_{G}\backslash T_{w}\}=2\), where \(T_{w}\) is the subtorus defined in (3.6) of [3].

In this paper, we establish the stability for the Rankin-Selberg product local \(\gamma\)-factor attached to a classical group of symplectic type and a general linear group. Namely, we prove the following result:

**Theorem 1.1**.: _Let \(M=\mathrm{GL}_{r}\times\mathrm{Sp}_{2m}\) be split over a p-adic field \(F\), and let \(\sigma_{i}\) and \(\tau_{i}\ (i=1,2)\) be irreducible \(\psi\)-generic representations of \(\mathrm{GL}_{r}(F)\) and \(\mathrm{Sp}_{2m}(F)\) respectively. Let \(\pi_{i}=\sigma_{i}\boxtimes\tau_{i}\ (i=1,2)\), and assume \(\omega_{1}=\omega_{2}\), where \(\omega_{i}\) is the central character of \(\pi_{i}\). Take a continuous character \(\chi:F^{\times}\to\mathbb{C}^{\times}\), regarded as a character of \(M(F)\) via \((m_{1},m_{2})\mapsto\chi(\det(m_{1}))\) for \(m_{1}\in\mathrm{GL}_{r}(F)\) and \(m_{2}\in\mathrm{Sp}_{2m}(F)\). Assume \(\chi\) is sufficiently ramified; then we have_

\[\gamma(s,(\sigma_{1}\times\tau_{1})\otimes\chi,\psi)=\gamma(s,(\sigma_{2}\times\tau_{2})\otimes\chi,\psi).\]

The strategy is to use the Langlands-Shahidi method to define the local \(\gamma\)-factors, and to reduce the stability to the stability of the corresponding local coefficient. Similar to [4], the local coefficient admits an integral representation as the Mellin transform of certain partial Bessel functions, whose asymptotic expansion via relevant Bruhat cells breaks into a sum of two parts: the first part depends only on the central character, while the second part is a uniformly smooth function on a certain subtorus. Hence when twisted by a highly ramified character, the second part becomes zero. This gives the stability of the local coefficient.
We point out two main differences in our cases compared to the known cases using this method. Firstly, the action of \(U_{M}\) on \(N\) has non-trivial stabilizers, and consequently the geometry of the orbit space \(U_{M}\backslash N\) becomes more subtle. Secondly, the orbit space \(U_{M}\backslash N\), over which the integral of the partial Bessel function is taken, is no longer isomorphic to a torus in our cases. Therefore a similar argument to [4] cannot be directly applied. Instead, we separate the toric part out of the orbit space \(U_{M}\backslash N\), and show that only the toric part accounts for the proof of stability. We also observe that the map \(n\mapsto m\) via the Bruhat decomposition \(\dot{w}_{0}^{-1}n=mn^{\prime}\tilde{n}\) is finite étale onto its image, with covering group isomorphic to a product of finitely many copies of \(\mathbb{Z}/2\mathbb{Z}\). This is a new phenomenon with sufficient generality and we believe it may show us one possible direction towards a uniform proof of stability of Langlands-Shahidi \(\gamma\)-factors in our future work. We also generalize the asymptotic analysis for partial Bessel integrals in [4] and [13]. Lastly, we remark that the cases where we replace the symplectic groups by odd orthogonal groups are very similar. A slight modification of our arguments would also give a proof of the same result for the odd orthogonal cases. Finally, many of our computations rely on the computational software packages Sagemath, Mathematica, and GP/PARI.

**Acknowledgement.** The work was started when the first author was a postdoc at the Yau Mathematical Sciences Center (YMSC) and was finished at Yanqi Lake Beijing Institute of Mathematical Sciences and Applications (BIMSA), where he became an assistant fellow. He wants to thank both institutes for their hospitality. At the same time, the second author was a postdoc at the Morningside Center of Mathematics (MCM); he would also like to thank the institute for its hospitality and for providing him with a first-rate research environment. Many ideas of the work are based on the second author's Ph.D. thesis under the supervision of Prof. Freydoon Shahidi. The second author sincerely expresses his deep sense of gratitude to Prof. Freydoon Shahidi for his invaluable guidance.

## 2. Construction of the Rankin-Selberg local \(\gamma\)-factors

Consider the split reductive group \(G=G(n)=\operatorname{Sp}_{2n}\) defined over a p-adic field \(F\) which is a finite extension of \(\mathbb{Q}_{p}\). We realize \(G\) as

\[\operatorname{Sp}_{2n}=\{h\in\operatorname{GL}_{2n}:{}^{t}hJ^{\prime}h=J^{\prime}\}\]

where

\[J^{\prime}=J^{\prime}_{2n}:=\begin{bmatrix}&J_{n}\\ -{}^{t}J_{n}&\end{bmatrix}\quad\text{and}\quad J_{n}=\begin{bmatrix}&&&1\\ &&-1&\\ &\iddots&&\\ (-1)^{n-1}&&&\end{bmatrix}.\]

It is a semi-simple group of type \(C_{n}\). Fix a Borel subgroup \(B=TU\) consisting of the upper triangular matrices in \(\operatorname{Sp}_{2n}\); then

\[T=\{t=\operatorname{diag}(t_{1},\cdots,t_{n},t_{n}^{-1},\cdots,t_{1}^{-1}):t_{i}\in\mathbb{G}_{m}\}.\]

Following Bourbaki's labeling [1] of the root systems, the set of positive roots is given by

\[\Phi^{+}=\Big\{\alpha_{1},\cdots,\alpha_{n},\ \sum_{i\leq k<j}\alpha_{k}\ (1\leq i<j\leq n),\ \sum_{i\leq k<j}\alpha_{k}+2\sum_{j\leq k<n}\alpha_{k}+\alpha_{n}\ (1\leq i<j\leq n),\ 2\sum_{i\leq k<n}\alpha_{k}+\alpha_{n}\ (1\leq i\leq n)\Big\}.\]

Note that all the standard maximal parabolic subgroups of \(\operatorname{Sp}_{2n}\) are self-associate.
Indeed, let \(\Delta_{r}=\Delta-\{\alpha_{r}\}\), \(P_{\Delta_{r}}=M_{\Delta_{r}}N_{\Delta_{r}}\), and \(w_{0}=w_{G}w_{M_{\Delta_{r}}}^{-1}\), where \(w_{G}\) and \(w_{M_{\Delta_{r}}}\) are the long Weyl group elements of \(G\) and \(M_{\Delta_{r}}\) respectively; then \(w_{0}(\Delta_{r})=\Delta_{r}\) and \(w_{0}(\alpha_{r})<0\). For simplicity, we write \(P=MN\) from now on. The Levi components of the maximal parabolic subgroups of \(G(n)\) are of the form \(\operatorname{GL}_{r}\times\operatorname{Sp}_{2m}\), with \(r+m=n\), obtained by removing the simple root \(\alpha_{r}\ (1\leq r\leq n)\) from the set of simple roots. (When \(r=n\), the corresponding maximal Levi is isomorphic to \(\operatorname{GL}_{n}\); by \(\operatorname{Sp}_{0}\) we mean the trivial group \(1\).) We realize \(M\simeq\operatorname{GL}_{r}\times\operatorname{Sp}_{2m}\) via

\[m=\begin{bmatrix}m_{1}&&\\ &m_{2}&\\ &&J_{r}{}^{t}m_{1}^{-1}J_{r}^{-1}\end{bmatrix}\mapsto(m_{1},m_{2})\]

where \(m_{1}\in\operatorname{GL}_{r}\), \(m_{2}\in\operatorname{Sp}_{2m}\). We also have

\[N=\{n=\begin{bmatrix}I_{r}&X&Y\\ &I_{2m}&Z\\ &&I_{r}\end{bmatrix}\in\operatorname{Sp}_{2n}:X\in\operatorname{Mat}_{r\times 2m},Y\in\operatorname{Mat}_{r\times r}\}\]

\[=\{n(X,Y):=\begin{bmatrix}I_{r}&X&Y\\ &I_{2m}&(-1)^{r}J_{2m}^{\prime}{}^{t}XJ_{r}\\ &&I_{r}\end{bmatrix}:J_{r}{}^{t}Y-Y{}^{t}J_{r}+(-1)^{r}XJ_{2m}^{\prime}{}^{t}X=0\}.\]
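The realization above can be spot-checked numerically. The sketch below (our own verification aid, not part of the paper; the sizes, seed, and the particular solution of the constraint are arbitrary choices) builds \(J^{\prime}\), solves the constraint on \(Y\) for a random \(X\), and confirms that \(n(X,Y)\) is symplectic.

```python
import numpy as np

def Jmat(k):
    """Anti-diagonal alternating matrix: entry (i, k-1-i) equals (-1)**i."""
    M = np.zeros((k, k))
    for i in range(k):
        M[i, k - 1 - i] = (-1.0)**i
    return M

def Jprime(k2):
    """J'_{2n} = [[0, J_n], [-J_n^T, 0]] with 2n = k2, as in the definition."""
    n = k2 // 2
    Jn, Zn = Jmat(n), np.zeros((n, n))
    return np.block([[Zn, Jn], [-Jn.T, Zn]])

r, mm = 3, 2                              # GL_r x Sp_{2m} inside Sp_{2n}, n = r + mm
rng = np.random.default_rng(0)
X = rng.standard_normal((r, 2 * mm))
Jr, J2m = Jmat(r), Jprime(2 * mm)
eps = (-1.0)**r

# Constraint: Jr Y^T - Y Jr^T + eps X J'_{2m} X^T = 0. With W := Jr Y^T it
# reads W - W^T = C, where C := -eps X J'_{2m} X^T is antisymmetric, so
# W = C/2 + S works for any symmetric S.
C = -eps * X @ J2m @ X.T
S = rng.standard_normal((r, r)); S = S + S.T
Y = (np.linalg.inv(Jr) @ (C / 2 + S)).T
Z = eps * J2m @ X.T @ Jr                  # the (2,3) block of n(X, Y)

n_elt = np.block([
    [np.eye(r),              X,                      Y],
    [np.zeros((2 * mm, r)),  np.eye(2 * mm),         Z],
    [np.zeros((r, r)),       np.zeros((r, 2 * mm)),  np.eye(r)],
])
Jform = Jprime(2 * (r + mm))
assert np.allclose(n_elt.T @ Jform @ n_elt, Jform)    # n(X, Y) lies in Sp_{2n}
```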
Consequently, \(\lambda_{\psi}(\dot{w}(\nu),\dot{w}(\pi))\circ A(\nu,\pi,\dot{w})\) is another non-zero Whittaker functional on \(I(\nu,\pi)\). By uniqueness of local Whittaker functionals, there exists a non-zero constant \(C_{\psi}(\nu,\sigma,\dot{w})\), called the Shahidi local coefficient, such that \[\lambda_{\psi}(\nu,\pi)=C_{\psi}(\nu,\pi,\dot{w})\lambda_{\psi}(\dot{w}(\nu), \dot{w}(\pi))\circ A(\nu,\pi,\dot{w}).\] In our cases, \(P=MN=P_{\Delta_{r}}=M_{\Delta_{r}}N_{\Delta_{r}}\) is self-associate for all \(1\leq r\leq n\). Let \(\nu=s\tilde{\alpha}\), and we fix \[\dot{w}_{G}=\begin{bmatrix}&J_{r}\\ &(-1)^{r}J_{2m}^{\prime}&\\ -^{t}J_{r}&\end{bmatrix},\dot{w}_{M}=\begin{bmatrix}J_{r}\\ &(-1)^{r}J_{2m}^{\prime}&\\ &&J_{r}{}^{t}J_{r}^{-1}J_{r}^{-1}\end{bmatrix}=\begin{bmatrix}J_{r}\\ &(-1)^{r}J_{2m}^{\prime}&\\ &&J_{r}\end{bmatrix}\] since \({}^{t}J_{r}^{-1}=J_{r}\). Then we pick the representative \(\dot{w}_{0}\) to be \[\dot{w}_{0}=\dot{w}_{G}\dot{w}_{M}^{-1}=\begin{bmatrix}&&I_{r}\\ &&I_{2m}\\ (-1)^{r}I_{r}&\end{bmatrix}\] A straightforward computation shows that \(\psi(\dot{w}_{0}u\dot{w}_{0}^{-1})=\psi(u)\) for all \(u\in U_{M}\), i.e., our choice of \(\dot{w}_{0}\) is compatible with the generic character \(\psi\). Denote by \(A(s,\pi):=A(s\tilde{\alpha},\pi,\dot{w}_{0})\) and \(I(s,\pi):=I(s\tilde{\alpha},\pi)\) then \[A(s,\pi):I(s,\pi)\longrightarrow I(-s,\dot{w}_{0}(\pi))\] and the local coefficient \(C_{\psi}(s,\pi):=C_{\psi}(s\tilde{\alpha},\pi,\dot{w}_{0})\) is a product of two \(\gamma\)-factors, namely, \[C_{\psi}(s,\pi)=\gamma(s,\sigma\times\tau,\psi)\cdot\gamma(s,\sigma,\wedge^{2},\psi)\] where \(\gamma(s,\sigma,\wedge^{2},\psi)\) is the exterior square local factor of \(\mathrm{GL}_{r}\) attached to \(\sigma\), which is defined in [4] also by Langlands-Shahidi method. Therefore we have defined the Rankin-Selberg local factor \(\gamma(s,\sigma\times\tau,\psi)\) for the reductive groups \(\mathrm{GL}_{r}\) and \(\mathrm{Sp}_{2m}\). Moreover, since the analytic stability of \(\gamma(s,\sigma,\wedge^{2},\psi)\) is established in [4], it reduces the stability of \(\gamma(s,\sigma\times\tau,\psi)\) to the stability of the corresponding local coefficient \(C_{\psi}(s,\pi)\). ## 3. A Bruhat decomposition From the construction of the local Whittaker functionals on the space of induced representations, we would like to write \(\dot{w}_{0}^{-1}n\in P\overline{N}\) where \(\overline{N}=\dot{w}_{0}N\dot{w}_{0}^{-1}\) is the opposite of \(N\). This decomposition does not hold for all \(n\in N\), but holds for a Zariski open dense subset \(N^{\prime}\) of \(N\). By our choice of the representative \(\dot{w}_{0}\) of \(w_{0}\), for \(t=\mathrm{diag}(t_{1},t_{2},\cdots,t_{n},t_{n}^{-1},\cdots,t_{2}^{-1},t_{1}^{ -1})\in T\), we have \[\dot{w}_{0}t\dot{w}_{0}^{-1}=\mathrm{diag}(t_{r}^{-1},\cdots,t_{1}^{-1},t_{r+ 1},\cdots,t_{n},t_{n}^{-1},\cdots,t_{r+1}^{-1},t_{1},\cdots,t_{r}).\] From this we observe that \[w_{0}(\alpha_{i})=\alpha_{r-i},\text{ for }1\leq i\leq r-1,\] \[w_{0}(\alpha_{r})=-((\alpha_{1}+\alpha_{2}+\cdots+\alpha_{r})+2(\alpha_{r+1}+ \cdots+\alpha_{n-1})+\alpha_{n}).\] Therefore \(w_{0}(\Delta_{r})=\Delta_{r}\) and \(w_{0}(\alpha_{r})<0\). In particular, this shows that the parabolic subgroup \(P=P_{\Delta_{r}}\) is self-associate. 
We will show that there exists a Zariski open dense subset \(N^{\prime}\subset N\) such that the following Bruhat decomposition holds \[\dot{w}_{0}^{-1}n=mn^{\prime}\overline{n}.\] A typical element in \(\overline{N}=\dot{w}_{0}N\dot{w}_{0}^{-1}\) is of the form \(\overline{n}=\overline{n}(X_{1},Y_{1})=\begin{bmatrix}I_{r}\\ (-1)J_{2m}^{\prime}{}^{t}X_{1}J_{r}&I_{2m}\\ (-1)^{r}Y_{1}&(-1)^{r}X_{1}&I_{r}\end{bmatrix}\) for some \(X_{1}\in\mathrm{Mat}_{r\times 2m},Y_{1}\in\mathrm{Mat}_{r\times r}\). Let \(m=(m_{1},m_{2})=\begin{bmatrix}m_{1}&m_{2}\\ &J_{r}{}^{t}m_{1}^{-1}J_{r}\end{bmatrix}\in M\) and \(n^{\prime}=n(X^{\prime},Y^{\prime})=\begin{bmatrix}I_{r}&X^{\prime}&Y^{\prime }\\ I_{2m}&(-1)^{r}J_{2m}^{\prime}{}^{t}X^{\prime}J_{r}\\ I_{r}\end{bmatrix}\in N\). One computes that \[mn^{\prime}\overline{n}=\begin{bmatrix}m_{1}&\\ m_{2}&\\ &J_{r}{}^{t}m_{1}^{-1}J_{r}^{-1}\end{bmatrix}\begin{bmatrix}I_{r}&X^{\prime}&Y ^{\prime}\\ I_{2m}&(-1)^{r}J_{2m}^{\prime}{}^{t}X^{\prime}J_{r}\\ I_{r}\end{bmatrix}\begin{bmatrix}I_{r}&\\ (-1)^{r}J_{2m}^{\prime}{}^{t}X_{1}J_{r}&I_{2m}\\ (-1)^{r}Y_{1}&(-1)^{r}X_{1}&I_{r}\end{bmatrix}\] \[=\begin{bmatrix}m_{1}+(-1)^{r}m_{1}X^{\prime}J_{2m}^{\prime}{}^{t}X_{1}J_{r}+ (-1)^{r}m_{1}Y^{\prime}Y_{1}&m_{1}X^{\prime}+(-1)^{r}m_{1}Y^{\prime}X_{1}&m_{1 }Y^{\prime}\\ (-1)^{r}m_{2}J_{2m}^{\prime}{}^{t}X_{1}J_{r}+m_{2}J_{2m}^{\prime}{}^{t}X^{ \prime}J_{r}Y_{1}&m_{2}+m_{2}J_{2m}^{\prime}{}^{t}X^{\prime}J_{r}X_{1}&(-1)^{r }m_{2}J_{2m}^{\prime}{}^{t}X^{\prime}J_{r}\\ (-1)^{r}J_{r}{}^{t}m_{1}^{-1}J_{r}^{-1}Y_{1}&(-1)^{r}J_{r}{}^{t}m_{1}^{-1}J_{r} ^{-1}X_{1}&J_{r}{}^{t}m_{1}^{-1}J_{r}^{-1}\end{bmatrix}.\] On the other hand, let \(n=n(X,Y)\in N\), we have \[\dot{w}_{0}^{-1}n=\begin{bmatrix}&(-1)^{r}I_{r}\\ I_{2m}&(-1)^{r}J_{2m}^{\prime}{}^{t}XJ_{r}\\ I_{r}&X&Y\end{bmatrix}.\] Compare both sides we obtain the following equalities: \[m_{1}+(-1)^{r}m_{1}X^{\prime}J_{2m}^{\prime}{}^{t}X_{1}J_{r}+(-1)^{r}m_{1}Y^{ \prime}Y_{1}=m_{1}X^{\prime}+(-1)^{r}m_{1}Y^{\prime}X_{1}=(-1)^{r}m_{2}J_{2m}^ {\prime}{}^{t}X_{1}J_{r}+m_{2}J_{2m}^{\prime}{}^{t}X^{\prime}J_{r}Y_{1}=0, \tag{1}\] \[m_{1}Y^{\prime}=(-1)^{r}I_{r}=J_{r}^{\ t}m_{1}^{-1}J_{r}^{-1}Y_{1},\] \[m_{2}+m_{2}J_{2m}^{\prime}\,{}^{t}X^{\prime}J_{r}X_{1}=I_{2m}, \tag{3}\] \[m_{2}J_{2m}^{\prime}\,{}^{t}X^{\prime}J_{r}=J_{2m}^{\prime}\,{}^{t}XJ_{r}, \tag{4}\] \[(-1)^{r}J_{r}\,{}^{t}m_{1}^{-1}J_{r}^{-1}X_{1}=X, \tag{5}\] \[J_{r}\,{}^{t}m_{1}^{-1}J_{r}^{-1}=Y. \tag{6}\] The outer automorphism \(\theta_{r}(g):=J_{r}\,{}^{t}g^{-1}J_{r}^{-1},g\in\mathrm{GL}_{r}\) defines a involution of \(\mathrm{GL}_{r}\). 
Assume \(\det(Y)\neq 0\), then \[(6)\Leftrightarrow m_{1}=J_{r}\,{}^{t}Y^{-1}J_{r}^{-1}=\theta_{r}(Y),\] \[(5)\Leftrightarrow X_{1}=(-1)^{r}\theta_{r}(m_{1})^{-1}X=(-1)^{r}Y^{-1}X,\] \[(2)\Leftrightarrow Y_{1}=(-1)^{r}Y^{-1},Y^{\prime}=(-1)^{r}\theta_{r}(Y)^{-1}.\] The second equality of (1) is equivalent to \[X^{\prime}=(-1)^{r-1}\theta_{r}(Y^{-1})Y^{-1}X.\] Plug this in (3), we have \[m_{2}(I_{2m}+J_{2m}^{\prime}\,{}^{t}X^{\prime}J_{r}X_{1})=m_{2}(I_{2m}-J_{2m}^ {\prime}\,{}^{t}X^{t}Y^{-1}\theta_{r}({}^{t}Y^{-1})J_{r}Y^{-1}X),\] \[=m_{2}(I_{2m}-J_{2m}^{\prime}\,{}^{t}X^{t}Y^{-1}J_{r}X)=m_{2}(I_{2m}+(-1)^{r-1 }\theta_{r,m}(Y^{-1}X)X)=I_{2m},\] where \[\theta_{r,m}:\mathrm{Mat}_{r\times 2m}\longrightarrow\mathrm{Mat}_{2m\times r},\] \[X\mapsto{}^{t}(J_{r}XJ_{2m}^{\prime})={}^{t}J_{2m}^{\prime}\,{}^{t}X^{t}J_{r} =(-1)^{r}J_{2m}^{\prime}\,{}^{t}XJ_{r}.\] If we further assume that \(I_{2m}-J_{2m}^{\prime}\,{}^{t}X^{t}Y^{-1}J_{r}X\in\mathrm{GL}_{2m}\), which is a Zariski open dense condition on \(N\), then \[m_{2}=(I_{2m}-J_{2m}^{\prime}\,{}^{t}X^{t}Y^{-1}J_{r}X)^{-1}=(I_{2m}+(-1)^{r-1 }\theta_{r,m}(Y^{-1}X)X)^{-1}.\] One easily checks that other equalities in (1) are automatically satisfied for our solutions of \(m_{1},m_{2},X^{\prime},Y^{\prime},X_{1},Y_{1}\). To check that (4) holds, since \(m_{2}\in\mathrm{Sp}_{2m}\), i.e., \({}^{t}m_{2}J_{2m}^{\prime}m_{2}=J_{2m}^{\prime}\), (4) is equivalent to \(X^{\prime}m_{2}^{-1}=X\). So we need to check that \[(-1)^{r-1}\theta_{r}(Y^{-1})Y^{-1}X(I_{2m}-J_{2m}^{\prime}\,{}^{t}X^{t}Y^{-1} J_{r}X)=X,\] hence it suffices to check that \[(-1)^{r-1}J_{r}\,{}^{t}YJ_{r}^{-1}Y^{-1}+(-1)^{r}J_{r}\,{}^{t}YJ_{r}^{-1}Y^{-1 }XJ_{2m}^{\prime}\,{}^{t}X^{t}Y^{-1}J_{r}=I_{r}.\] Multiplying \(YJ_{r}\,{}^{t}Y^{-1}J_{r}^{-1}\) on the left and \(J_{r}^{-1}{}^{t}Y\) on the right, one simplifies to get that \(J_{r}\,{}^{t}Y-Y^{t}J_{r}+(-1)^{r}XJ_{2m}^{\prime}\,{}^{t}X=0\), which holds automatically by the structure of \(\mathrm{Sp}_{2n}\). ## 4. The orbit space \(Z^{0}_{M}U_{M}\backslash N\) and its invariant measure. Recall that the Levi component \(M\) of the standard parabolic subgroup of \(G=\operatorname{Sp}_{2n}\) obtained by removing \(\alpha_{r}\) in its Dynkin diagram is isomorphic to \(\operatorname{GL}_{r}\times\operatorname{Sp}_{2m}\) with \(r+m=n\). Let \(U_{M}=U\cap M\), then \(U_{M}\) acts on \(N\) by conjugation. The local coefficient in these cases can be represented as the Mellin transform of certain partial Bessel functions over the orbit space \(U_{M}\backslash N\). In order to proceed with the tools in harmonic analysis so that we could obtain a nice asymptotic expansion formula for partial Bessel functions, and finally prove the analytic stability, we need to find a Zariski open dense subset \(N^{\prime}\subset N\) such that the geometric quotient \(U_{M}\backslash N^{\prime}\) exists, and construct an invariant measure on the corresponding p-adic manifold over a non-Archimedean local field \(F\) of characteristic \(0\). This is the main goal of this section. We will use the same notations as in section 3. 
A simple calculation shows that \(U_{M}\) acts on \(N\) by \[U_{M}\times N\longrightarrow N\] \[(u,n(X,Y))\mapsto n(u_{1}Xu_{2}^{-1},u_{1}YJ_{r}{}^{t}u_{1}J_{r}^{-1})=n(u_{1} Xu_{2}^{-1},u_{1}Y\theta_{r}(u_{1}^{-1}))\] where we identify \(u=\operatorname{diag}(u_{1},u_{2},J_{r}{}^{t}u_{1}^{-1}J_{r}^{-1})\in U_{M}\) with \((u_{1},u_{2})\in U_{\operatorname{GL}_{r}}\times U_{\operatorname{Sp}_{2m}}\), where \(U_{\operatorname{GL}_{r}}\) and \(U_{\operatorname{Sp}_{2m}}\) are the maximal unipotent subgroups of \(\operatorname{GL}_{r}\) and \(\operatorname{Sp}_{2m}\) respectively. Since \(n(X,Y)\in N\), \(X\) and \(Y\) are related by \(J_{r}{}^{t}Y-Y^{t}J_{r}+(-1)^{r}XJ_{2m}^{\prime}{}^{t}X=0\). Set \(Z=J_{r}{}^{t}Y+\frac{(-1)^{r}XJ_{2m}^{\prime}{}^{t}X}{2}\), thus \[J_{r}{}^{t}Y-Y^{t}J_{r}+(-1)^{r}XJ_{2m}^{\prime}{}^{t}X=0\Leftrightarrow Z={}^ {t}Z.\] Then the action \((X,Y)\mapsto(u_{1}Xu_{2}^{-1},u_{1}Y\theta_{r}(u_{1}^{-1}))\) is equivalent to \(X\mapsto u_{1}Xu_{2}^{-1},Z\mapsto u_{1}Z^{t}u_{1}\). The advantage of this change of variable is that now \(X\) and \(Z\) are independent. Denote the space of \(k\times k\) symmetric matrices as \(\operatorname{Sym}^{k}\). We have the following description on the orbit space representatives and measures for \(U_{M}\backslash N\): **Proposition 4.1**.: _There exists a Zariski open dense subset \(N^{\prime}\subset N\), such that the Bruhat decomposition \(\dot{w}_{0}^{-1}n=mn^{\prime}\overline{n}\) holds, and the orbit space \(U_{M}\backslash N^{\prime}\) admits a set of representatives of the form_ \[R=(R_{X},R_{Z})\] _where_ \[R_{X}=\begin{cases}\{x_{r-k,1+k}(0\leq k\leq r-1),x_{i,j}(2m-r+1\leq j-i,j\leq 2 m)\},&r\leq m,\\ \{x_{i,j}:i+j=r+1(j\leq m+1),\text{ or }r-m-l\leq i\leq r-m+l,j=m+l+1(1\leq l \leq r-m-1),\\ \text{ or }1\leq i\leq 2(r-m)-1+k,j=r+k(0\leq k\leq 2m-r)\},&m<r<2m,\\ \{x_{i,j}:i+j=r+1(j\leq m+1),\text{ or }r-m-l\leq i\leq r-m+l,j=m+l+1(1\leq l \leq m-1)\},&r\geq 2m,\end{cases}\] \(R_{Z}=\operatorname{Sym}^{r}\) _if \(r<2m\), and \(R_{Z}=\{(x_{ij})\in\operatorname{Sym}^{r}|z_{ij}=0\text{ if }i,j\leq r-2m\text{ and }i\neq j\}\) if \(r\geq 2m\)._ _Set \(d_{X}=\dim R_{X}\), and \(d_{Z}=\dim R_{Z}\). Then_ \[d_{X}=\begin{cases}\frac{r(r+1)}{2},&\text{ if }r\leq m,\\ 2rm-\frac{r(r-1)}{2}-m^{2},&\text{ if }m<r<2m,\,,\text{ and }d_{Z}=\begin{cases} \frac{r(r+1)}{2}&\text{ if }r<2m,\\ (2m+1)(r-m)&\text{ if }r\geq 2m.\end{cases}\] _Note that when \(r<m\), the action has a non-trivial but base point free stabilizer isomorphic to \(U_{\operatorname{Sp}_{2(m-r)}}\) and when \(r\geq m\) the stabilizer is always trivial. Moreover, the corresponding invariant measure \(d\mu\) on \(R\) is given by \(d\mu=d\mu_{X}\wedge d\mu_{Z}\), where_ \[d\mu_{X}=\begin{cases}|x_{r,1}^{r+2m-2}x_{r-1,2}^{r+2m-5}x_{r-2,3}^{r+2m-8} \cdots x_{1,r}^{2m-2r+1}|\prod dx_{ij},&r\leq m,\\ |x_{r,1}^{r+2m-2}x_{r-1,2}^{r+2m-2}x_{r-2,3}^{r+2m-8}\cdots x_{r-m+1,m}^{r-m+1} x_{r-m,m+1}^{r-m-1}x_{r-m-1,m+2}^{r-m-2}\cdots x_{2,r-1}|\prod dx_{ij},&m<r<2m,\\ |x_{r,1}^{r+2m-2}x_{r-1,2}^{r+2m-5}x_{r-2,3}^{r+2m-8}\cdots x_{r-m+1,m}^{r-m+1} x_{r-m,m+1}^{r-m-2}x_{r-m-1,m+2}^{r-2m}\cdots x_{r-2m+1,2m}^{r-2m}|\prod dx_{ij},&r \geq 2m,\end{cases}\] _and_ \[d\mu_{Z}=\begin{cases}\prod_{i,j}dz_{i,j}&\text{ if }\,r\leq 2m,\\ |z_{r-2m,r-2m}^{r-2m-1}z_{r-2m-1,r-2m-1}^{r-2m-2}\cdots z_{2,2}|\prod dz_{ij}& \text{ if }\,r>2m,\end{cases}\] _where the product runs over all \((i,j)\)'s such that \(x_{i,j}\) and \(z_{i,j}\) are non-zero in each case. 
In addition, if we use \((X,Y)\) instead of \((X,Z)\) to parameterize the orbit space, the corresponding invariant measures are related by_ \[d\mu_{X}\wedge d\mu_{Y}=d\mu_{X}\wedge(d\mu_{Z}\cdot J_{r}).\] Proof.: Based on the arguments before the statement of the proposition, it suffices to study the action \(X\mapsto u_{1}Xu_{2}^{-1},Z\mapsto u_{1}Z^{t}u_{1}\), by induction of the size of \(X\) and \(Z\). If \(r=1\), then \(u_{1}=1\), the action degenerates as \((X,Z)\mapsto(Xu_{2}^{-1},Z)\). Assume \(x_{1,1}\neq 0\), then it is easy to see that the orbit space representative can be given as \(((x_{1,1},0,\cdots,0),Z)\in\mathbb{A}^{2m}\times\text{Sym}^{r}\). Suppose \(r>1\). We study the action on \(X\) first. Write \[u_{1}=\begin{bmatrix}v_{1}&{}^{t}\delta_{1}\\ &1\end{bmatrix},u_{2}=\begin{bmatrix}1&\delta_{2}\\ &v_{2}^{\prime}\end{bmatrix},X=\begin{bmatrix}{}^{t}\alpha&X_{1}^{\prime}\\ x_{r,1}&\beta\end{bmatrix}\] with \(\alpha,\delta_{1}\in\mathbb{A}^{r-1}\), \(\beta,\delta_{2}\in\mathbb{A}^{2m-1}\), \(v_{1}\in U_{\text{GL}_{r-1}},v_{2}^{\prime}\in U_{\text{GL}_{2m-1}}\), and \(X_{1}^{\prime}\in\text{Mat}_{(r-1)\times(2m-1)}\). Then \[u_{1}Xu_{2}^{-1} =\begin{bmatrix}v_{1}&{}^{t}\delta_{1}\\ &1\end{bmatrix}\begin{bmatrix}{}^{t}\alpha&X_{1}^{\prime}\\ x_{r,1}&\beta\end{bmatrix}\begin{bmatrix}1&-\delta_{2}(v_{2}^{\prime})^{-1} \\ &(v_{2}^{\prime})^{-1}\end{bmatrix}=\begin{bmatrix}v_{1}{}^{t}\alpha+{}^{t} \delta_{1}x_{r,r+1}&v_{1}X_{1}^{\prime}+{}^{t}\delta_{1}\beta\\ &x_{r,1}&\beta\end{bmatrix}\begin{bmatrix}1&-\delta_{2}(v_{2}^{\prime})^{-1} \\ &(v_{2}^{\prime})^{-1}\end{bmatrix}\] \[=\begin{bmatrix}v_{1}{}^{t}\alpha+{}^{t}\delta_{1}x_{r,1}&-v_{1} {}^{t}\alpha\delta_{2}(v_{2}^{\prime})^{-1}-x_{r,1}{}^{t}\delta_{1}\delta_{2}( v_{2}^{\prime})^{-1}+v_{1}X_{1}^{\prime}(v_{2}^{\prime})^{-1}+{}^{t}\delta_{1} \beta(v_{2}^{\prime})^{-1}\\ &x_{r,1}&-x_{r,1}\delta_{2}(v_{2}^{\prime})^{-1}+\beta(v_{2}^{\prime})^{-1} \end{bmatrix}.\] Assume \(x_{r,1}\neq 0\), choose \(\delta_{1}\) and \(\delta_{2}\) such that \(v_{1}{}^{t}\alpha+{}^{t}\delta_{1}x_{r,1}=-x_{r,1}\delta_{2}(v_{2}^{\prime})^{ -1}+\beta(v_{2}^{\prime})^{-1}=0\), i.e. \(\delta_{1}=-\frac{\alpha^{t}v_{1}}{x_{r,1}}\), \(\delta_{2}=\frac{\beta}{x_{r,1}}\), then \(-v_{1}{}^{t}\alpha\delta_{2}(v_{2}^{\prime})^{-1}-x_{r,1}{}^{t}\delta_{1} \delta_{2}(v_{2}^{\prime})^{-1}+v_{1}X_{1}^{\prime}(v_{2}^{\prime})^{-1}+v_{1} X_{1}^{\prime}(v_{2}^{\prime})^{-1}=v_{1}X_{1}^{\prime}(v_{2}^{\prime})^{-1}-\frac{v_{1}{}^{t} \alpha\beta(v_{2}^{\prime})^{-1}}{x_{r,1}}\). Let \(X_{1}^{\prime\prime}=X_{1}^{\prime}-\frac{{}^{t}\alpha\beta}{x_{r,1}}\), then \[v_{1}X_{1}^{\prime}(v_{2}^{\prime})^{-1}-\frac{v_{1}{}^{t}\alpha\beta(v_{2}^{ \prime})^{-1}}{x_{r,1}}=v_{1}X_{1}^{\prime\prime}(v_{2}^{\prime})^{-1}.\] We have \(X_{1}^{\prime\prime}\in\text{Mat}_{(r-1)\times(2m-1)}\), \(v_{1}\in U_{\text{GL}_{r}}\) and \(v_{2}^{\prime}\in U_{\text{GL}_{2m-1}}\). Now if \(m=1\) then \(v_{2}^{\prime}=1\), and the action degenerates as \(X_{1}^{\prime\prime}\mapsto v_{1}X_{1}^{\prime\prime}\) with \(X_{1}^{\prime\prime}\in\text{Mat}_{(r-1)\times 1}\). So it is clear that if we assume that the \((r-1,2)\)-th entry of \(X_{1}^{\prime\prime}\) is non-zero, we obtain an orbit space representative of the action on \(X\) given by \(R_{X}=\begin{bmatrix}0&0\\ \vdots&\vdots\\ 0&x_{r-1,2}\\ x_{r,1}&0\end{bmatrix}\). Therefore, we assume that \(m>1\) from now on. 
Write \(u_{2}=\begin{bmatrix}1&\delta_{2}\\ &v_{2}^{\prime}\end{bmatrix}=\begin{bmatrix}1&\gamma_{2}&x\\ &v_{2}&{}^{t}\gamma_{2}^{\prime}\\ &1\end{bmatrix},\) where \(\delta_{2}=(\gamma_{2},x)\) and \(v_{2}^{\prime}=\begin{bmatrix}v_{2}&{}^{t}\gamma_{2}^{\prime}\\ &1\end{bmatrix}\), with \(\gamma_{2},\gamma_{2}^{\prime}\in\mathbb{A}^{2m-2}\). Moreover, by the structure of \(\text{Sp}_{2m}\), a simple calculation shows that \(\gamma_{2}^{\prime}=-\gamma_{2}v_{2}^{-1}J_{2m-2}^{\prime}\), \(\gamma_{2}{}^{t}J_{2m-2}^{\prime}{}^{t}\gamma_{2}^{\prime}=0\), \(x\) is free, and \({}^{t}v_{2}J_{2m-2}^{\prime}v_{2}=J_{2m-2}^{\prime}\), hence \(v_{2}\in U_{\text{Sp}_{2m-2}}\). Note that since \(\delta_{2}\) is determined by the above process, so is \(\gamma_{2}\). We also write \(X_{1}^{\prime\prime}=\begin{bmatrix}X_{1}&{}^{t}\gamma\end{bmatrix}\) with \(X_{1}\in\text{Mat}_{(r-1)\times(2m-2)}\) and \(\gamma\in\mathbb{A}^{r-1}\). Therefore we can write \[v_{1}X_{1}^{\prime}(v_{2}^{\prime})^{-1}=v_{1}\begin{bmatrix}X_{1}&{}^{t} \gamma\end{bmatrix}\begin{bmatrix}v_{2}^{-1}&-v_{2}^{-1}{}^{t}\gamma_{2}^{ \prime}\\ &1\end{bmatrix}=\begin{bmatrix}v_{1}X_{1}v_{2}^{-1}&v_{1}X_{1}v_{2}^{-1}{}^{t} \gamma_{2}^{\prime}+v_{1}{}^{t}\gamma\end{bmatrix}\] \[=\begin{bmatrix}v_{1}X_{1}v_{2}^{-1}&v_{1}X_{1}v_{2}^{-1}(-{}^{t}J_ {2m-2}^{\prime}{}^{t}v_{2}^{-1}\gamma_{2})+v_{1}{}^{t}\gamma\end{bmatrix}= \begin{bmatrix}v_{1}X_{1}v_{2}^{-1}&-v_{1}X_{1}({}^{t}v_{2}J_{2m-2}^{\prime}v_{2} )^{-1}{}^{t}\gamma_{2}+v_{1}{}^{t}\gamma\end{bmatrix}\] \[=\begin{bmatrix}v_{1}X_{1}v_{2}^{-1}&v_{1}X_{1}J_{2m-2}^{\prime}{}^{t} \gamma_{2}+v_{1}{}^{t}\gamma\end{bmatrix}=\begin{bmatrix}v_{1}X_{1}v_{2}^{-1}&(v_{1}X_{ 1}v_{2}^{-1})v_{2}J_{2m-2}^{\prime}{}^{t}\gamma_{2}+v_{1}{}^{t}\gamma\end{bmatrix}.\] From this observation we see that it suffices to study the action \[(U_{\mathrm{GL}_{r-1}}\times U_{\mathrm{Sp}_{2m-2}})\times(\mathrm{Mat}_{(r-1) \times(2m-2)}\times\mathbb{A}^{r-1})\longrightarrow\mathrm{Mat}_{(r-1)\times(2m- 2)}\times\mathbb{A}^{r-1}\] \[((v_{1},v_{2}),(X_{1},\gamma))\mapsto(v_{1}X_{1}v_{2}^{-1},{v_{1}}^{t}\gamma).\] Set \(X_{0}:=X\), \(r^{\prime}=r-1\) and \(m^{\prime}=m-1\), replace \(X\) by \(X_{1}\), \(u_{1}\) by \(v_{1}\), \(u_{2}\) by \(v_{2}\), \(r\) by \(r^{\prime}\), and \(m\) by \(m^{\prime}\). Continue with the above process, we can construct \(X_{i}\)'s (\(i\geq 0\)) inductively. Since the relative size of \(r\) and \(2m\) will affect the inductive process, we will have to discuss three cases separately: **Case (1):**\(r\leq m\). Then \(r^{\prime}\leq m^{\prime}\) for all \(r^{\prime}\) and \(m^{\prime}\) in the inductive process. Note that in this case all the entries of \(v_{1}\), hence of \(u_{1}\), are chosen in the inductive process. Consequently we do not have free variables in \(v_{1}\) when considering the action \(\gamma\mapsto{v_{1}}^{t}\gamma\). We further assume that all the entries of \({v_{1}}^{t}\gamma\) are non-zero. We eventually obtain that \(X_{r-1}=(\tilde{x}_{1,r},*,\cdots,*)\) is a vector in \(\mathbb{A}^{2(m-r)+2}\) and we are left to consider the right-action of \(U_{\mathrm{Sp}_{2(m-r)+2}}\). Write the action as \(X_{r-1}\mapsto X_{r-1}u^{-1}\) with \(u=\begin{bmatrix}1&\delta\\ &v^{\prime}\end{bmatrix}=\begin{bmatrix}1&\gamma&a\\ &v&{}^{t}\gamma^{\prime}\\ &&1\end{bmatrix}\in U_{Sp_{2(m-r)+2}}\), where \(\delta=(\gamma,a)\in\mathbb{A}^{2(m-r)+1}\), \(\gamma,\gamma^{\prime}\in\mathbb{A}^{2(m-r)}\), and \(v\) is a unipotent matrix of size \(2(m-r)\). 
Then a similar calculation as above shows that \(\gamma^{\prime}=-\gamma v^{-1}J^{\prime}_{2(r-m)}\), \(a\) is free, and \(v\in U_{\mathrm{Sp}_{2(m-r)}}\). Write \(X_{r-1}=(\tilde{x}_{r,1},\alpha^{\prime})\) with \(\alpha^{\prime}\in\mathbb{A}^{2(m-r)+1}\). By assuming \(\tilde{x}_{1,r}\neq 0\), we choose \(\delta=(\gamma,a)\) such that \(\tilde{x}_{r,1}\delta+\alpha^{\prime}v=0\), then \(X_{r-1}u^{-1}=(\tilde{x}_{r,1},0,\cdots,0)\). So we obtain our orbit space representative in this case as \[R_{X}=\begin{bmatrix}0&0&\cdots&0&x_{1,r}&0&\cdots&0&x_{1,r+2(m-r)+2}&x_{1,r+ 2(m-r)+3}&\cdots&x_{1,2m}\\ 0&0&\cdots&x_{2,r-1}&0&0&\cdots&0&x_{2,r+2(m-r)+3}&\cdots&x_{2,2m}\\ \vdots&\vdots&\cdots&\vdots&\vdots&0&\cdots&0&\vdots&\vdots&\ddots&\vdots\\ 0&x_{r-1,2}&\cdots&0&0&0&\cdots&0&0&\cdots&x_{r-1,2m}\\ x_{r,1}&0&\cdots&0&0&0&\cdots&0&0&\cdots&0\end{bmatrix},\] where there are \(2(m-r)+1\) zero columns in the middle. From the last step above, we also see that in this case the action \(X\mapsto u_{1}Xu_{2}^{-1}\) has stablizer \(U_{\mathrm{Sp}_{2(m-r)}}\) given by \(v\), provided that \(m>r\). Moreover, one computes easily that \(\dim R_{X}=r+\frac{r(r-1)}{2}=\frac{r(r+1)}{r^{\prime}}\). **Case (2):**\(r\geq 2m\). Then \(r^{\prime}\geq 2m^{\prime}\) for all \(r^{\prime}\) and \(m^{\prime}\) in the inductive process. Perform the inductive process to the \(m\)-th step, we obtain that \(X_{m}=\begin{bmatrix}*&*\\ \vdots&\vdots\\ *&x_{r-m,m+1}\\ x_{r-m+1.m}&*\end{bmatrix}\), now \(v_{1}=\begin{bmatrix}v_{1}^{\prime}&{}^{t}\delta_{1}\\ &1\end{bmatrix}\in U_{\mathrm{GL}_{r-m}}\), and \(v_{2}=\begin{bmatrix}1&x\\ &1\end{bmatrix}\). Choose \(\delta_{1}\) and \(x\) accordingly we can make \(X_{m}\) to be of the form \(\begin{bmatrix}0&*\\ \vdots&\vdots\\ 0&x_{r-m,m+1}\\ x_{r-m+1,m}&0\end{bmatrix}\). From the \((m+1)\)-th step on the action degenerates as the left action of \(U_{\mathrm{GL}_{r-m-1}}\) on \((X,{}^{t}\gamma)\). By determining the last column of \(v_{1}\) each time to make the first column but the last entry of \(X\) zero, we eventually obtain a vector of the form \(t^{(}*,\cdots,*,\tilde{x}_{r-2m+1,2m})\in\mathbb{A}^{r-2m+1}\). During this process, all the entries of \(u_{2}\) together with all the last \(2m-1\) columns of \(u_{1}\) are determined, and we are left to consider the left-action of \(U_{\mathrm{GL}_{r-2m+1}}\) on \(\mathbb{A}^{2m-r+1}\). Assume \(\tilde{x}_{r-2m+1,2m}\neq 0\), we pick the orbit representative in the last step as \({}^{t}(0,\cdots,0,\tilde{x}_{r-2m+1,2m})\). As a result, we obtain our orbit space representative in this case as \[R_{X}=\left[\begin{array}{ccccccccc}0&0&\cdots&0&0&0&0&0&0&0\\ &&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&0&0&0&0&0&0\\ 0&0&\cdots&0&0&0&0&0&x_{r-2m+1,2m}\\ 0&0&\cdots&0&0&0&0&x_{r-2m+2,2m-1}&*\\ &&\vdots&0&0&0&\cdots&\vdots&\vdots\\ 0&0&\cdots&0&0&x_{r-m-1,m+2}&\cdots&*&*\\ 0&0&\cdots&0&x_{r-m,m+1}&x_{r-m,m+2}&\cdots&*&*\\ 0&0&\cdots&x_{r-m+1,m}&0&x_{r-m+1,m+2}&\cdots&*&*\\ &&\therefore&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&0&0&0&0&x_{r-2,2m-1}&*\\ 0&x_{r-1,2}&\cdots&0&0&0&0&0&x_{r-1,2m}\\ x_{r,1}&0&\cdots&0&0&0&0&0&0\end{array}\right]\] where there are \(r-2m\) zero rows on the top. Observe that in this case the action \(X\mapsto u_{1}Xu_{2}^{-1}\) has a stablizer \(U_{\mathrm{GL}_{r-2m}}\) given by the upper-left unipotent submatrix of \(u_{1}\) of size \(r-2m\), if \(r>2m\). We also have \(\dim R_{X}=m+(m+\frac{m(m-1)}{2}\cdot 2)=m(m+1)\). **Case (3)**: \(m<r<2m\). 
In this case it is possible that there exists some \(k\geq 0\) such that \(r^{\prime}=r-k\geq 2(m-k)=2m^{\prime}\), i.e., \(2m-r\leq k\leq m\). If this happens, continue with our algorithm in case (2). Specifically, note that when \(k=2m-r>0\), \(r^{\prime}=2m^{\prime}=2(r-m)\), we can conclude as in the last step of case (2) that the action \(X\mapsto u_{1}Xu_{2}^{-1}\) has trivial stablizer. If for all \(r^{\prime}\) and \(m^{\prime}\) in the inductive process, we have \(r^{\prime}<2m^{\prime}\), we perform a similar inductive argument as in case (1). Since \(r^{\prime}<2m^{\prime}\) for all \(r^{\prime},m^{\prime}\) in the inductive process, by a similar argument as in case (1), we have exhausted all the possible choices of entries in \(u_{1}\). On the other hand, since \(r^{\prime}>m^{\prime}\), when \(r^{\prime}\) decreases to \(1\), so does \(m^{\prime}\), hence we also exhaust all the possible choices of entries in \(u_{2}\). Consequently, the action \(X\mapsto u_{1}Xu_{2}^{-1}\) has trivial stabilizer. Eventually one obtains the orbit space representative in this case as: \[R_{X}=\left[\begin{array}{ccccccccc}0&\cdots&0&0&0&\cdots&0&x_{1,r}&x_{1,r+1}& \cdots&x_{1,2m}\\ 0&\cdots&0&0&0&\cdots&x_{2,r-1}&x_{2,r}&x_{2,r+1}&\cdots&x_{2,2m}\\ \vdots&&\vdots&\vdots&\vdots&\iddots&\vdots&\vdots&\vdots&\iddots&\vdots\\ 0&\cdots&0&0&x_{r-m-1,m+2}&\cdots&*&*&*&\cdots&*\\ 0&\cdots&0&x_{r-m,m+1}&x_{r-m,m+2}&\cdots&*&*&*&\cdots&*\\ 0&\cdots&x_{r-m+1,m}&0&x_{r-m+1,m+2}&\cdots&*&*&*&\cdots&*\\ \vdots&&\vdots&\vdots&\vdots&\cdots&\vdots&\iddots&\vdots&\vdots&\iddots& \vdots\\ 0&\cdots&0&0&0&\cdots&0&x_{2(r-m)-1,r}&*&\cdots&*\\ 0&\cdots&0&0&0&\cdots&0&0&x_{2(r-m),r+1}&\cdots&*\\ 0&\cdots&0&0&0&\cdots&0&0&\cdots&*\\ \vdots&\iddots&\vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\vdots&\ddots& \vdots\\ 0&\cdots&0&0&0&\cdots&0&0&0&\cdots&x_{r-1,2m}\\ x_{r,1}&\cdots&0&0&0&\cdots&0&0&0&\cdots&0\end{array}\right].\] We have \(\dim R_{X}=m+(r-m)^{2}+\frac{(3r-2m-1)(2m-r)}{2}=2rm-\frac{r(r-1)}{2}-m^{2}\). Note that only in case (2) we also have to consider the action of \(U_{\mathrm{GL}_{r-2m}}\) on \(Z\in\mathrm{Sym}^{r}\). Write \(Z=\begin{bmatrix}Z_{1}&h\\ t_{h}&Z_{2}\end{bmatrix}\), with \(Z_{1}\in\mathrm{Sym}^{r-2m},Z_{2}\in\mathrm{Sym}^{2m}\), \(h\in\mathrm{Mat}_{(r-2m)\times 2m}\), and \(u_{1}=\begin{bmatrix}u_{1}^{\prime}&w\\ u_{1}^{\prime\prime}\end{bmatrix}\) with \(u_{1}^{\prime}\in U_{\mathrm{GL}_{r-2m}}\), \(u_{1}^{\prime\prime}\in U_{\mathrm{GL}_{2m}}\) and \(w\in\mathrm{Mat}_{(r-2m)\times 2m}\). A simple calculation gives \[u_{1}Z^{t}u_{1}=\begin{bmatrix}u_{1}^{\prime}Z_{1}{}^{t}u_{1}^{\prime}+w^{t}h^ {t}u_{1}^{\prime}+(u_{1}^{\prime}h+wZ_{2})^{t}w&u_{1}^{\prime}h^{t}u_{1}^{ \prime\prime}+wZ_{2}{}^{t}u_{1}^{\prime\prime}\\ u_{1}^{\prime\prime t}h^{t}u_{1}^{\prime}+u_{1}^{\prime\prime}Z_{2}{}^{t}w&u_{1 }^{\prime}Z_{2}{}^{t}u_{1}^{\prime\prime}\end{bmatrix}\] Note that both \(w\) and \(u_{1}^{\prime\prime}\) are fixed during the inductive process of the action \(X\mapsto u_{1}Xu_{2}^{-1}\), so it suffices to consider the action \((u_{1}^{\prime},(Z_{1},h))\mapsto(u_{1}^{\prime}Z_{1}{}^{t}u_{1}^{\prime},u_{1 }^{\prime}h)\). We further write \(u_{1}^{\prime}=\begin{bmatrix}\tilde{u}_{1}&{}^{t}\alpha_{1}\\ &1\end{bmatrix}\), and \(Z_{1}=\begin{bmatrix}Z_{1}^{\prime}&{}^{t}\delta_{1}^{t}\\ \delta_{1}^{\prime}&z\end{bmatrix}\). 
Then \[u_{1}^{\prime}Z_{1}{}^{t}u_{1}^{\prime}=\begin{bmatrix}\tilde{u}_{1}Z_{1}^{ \prime}{}^{t}\tilde{u}_{1}+{}^{t}\alpha_{1}\delta_{1}^{\prime}{}^{t}\tilde{u}_ {1}+(\tilde{u}_{1}{}^{t}\delta_{1}^{\prime}{}^{t}+{}^{t}\alpha_{1}z)\alpha_{1} &\tilde{u}_{1}{}^{t}\delta_{1}^{\prime}{}^{t}+{}^{t}\alpha_{1}z\\ \delta_{1}^{\prime}{}^{t}\tilde{u}_{1}+z\alpha_{1}&z\end{bmatrix}\] Assume \(z\neq 0\) we can choose \(\alpha_{1}\) such that \(\tilde{u}_{1}{}^{t}\delta_{1}^{\prime}{}^{t}+{}^{t}\alpha_{1}z=0\), then the matrix becomes \(\begin{bmatrix}\tilde{u}_{1}(Z_{1}^{\prime}-\frac{{}^{t}\delta_{1}^{\prime}{} ^{t}_{1}}{2})^{t}\tilde{u}_{1}&0\\ 0&z\end{bmatrix}\). Replace \(Z_{1}\) by \(Z_{1}^{\prime}-\frac{{}^{t}\delta_{1}^{\prime}{}^{t}_{1}}{2}\), \(u_{1}^{\prime}\) by \(\tilde{u}_{1}\) and continue with the above process, we exhaust all possible choices of entries in \(u_{1}\). Hence there is no need to consider \(h\mapsto u_{1}^{\prime}h\). Assume that after our choices of all entries in \(u_{1}^{\prime}\), we have \(u_{1}^{\prime}h\neq 0\). We obtain a orbit space representative given by \[R_{Z}=\begin{bmatrix}z_{11}&z_{1,r-2m+1}&*&\cdots&*&z_{1,r}\\ &\ddots&\vdots&\vdots&\cdots&\vdots&\vdots\\ &&z_{r-2m,r-2m}&z_{r-2m,r-2m+1}&*&\cdots&*&*\\ z_{1,r-2m+1}&\cdots&z_{r-2m,r-2m+1}&z_{r-2m+1,r-2m+1}&*&\cdots&*&*\\ *&\cdots&*&*&*&\cdots&*&*\\ \vdots&\cdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ *&\cdots&*&*&*&\cdots&z_{r-1,r-1}&*\\ z_{1,r}&\cdots&*&*&*&\cdots&*&z_{r,r}\end{bmatrix}.\] It follows that \(\dim R_{Z}=r-2m+(r-2m)2m+\frac{(2m+1)2m}{2}=(2m+1)(r-m)\). Next, we study the invariant measure on our chosen orbit space representatives. We will first work on \(R_{X}\). Denote by \(U_{0}\) the stabilizer in \(U_{M}\simeq U_{\mathrm{GL}_{r}}\times U_{\mathrm{Sp}_{2m}}\), which is independent of the base point as can be observed from our previous discussion. The action is given by \[(U_{\mathrm{GL}_{r}}\times U_{\mathrm{Sp}_{2m}})/U_{0}\times R_{X}\longrightarrow \mathrm{Mat}_{r\times 2m}\] \[((u_{1},u_{2}),X_{0})\mapsto u_{1}X_{0}u_{2}^{-1}\] Without loss of generality we assume \(m>2\). Again we write \(u_{1}=\begin{bmatrix}v_{1}&{}^{t}\delta_{1}\\ &1\end{bmatrix}\), \(u_{2}=\begin{bmatrix}1&\delta_{2}\\ v_{2}^{\prime}\end{bmatrix}=\begin{bmatrix}1&\gamma_{2}&x\\ v_{2}&{}^{t}\gamma_{2}^{\prime}\\ &1\end{bmatrix}\), and \[X_{0}=\begin{bmatrix}0&X_{1}^{\prime}\\ x_{r,1}&0\end{bmatrix}=\begin{bmatrix}0&X_{1}&{}^{t}\gamma\\ x_{r,1}&0&0\end{bmatrix}\text{. By the previous argument we have }\gamma_{2}^{\prime}=-\gamma_{2}v_{2}^{-1}J_{2m-2}^{\prime}\text{. 
Then}\] \[u_{1}X_{0}u_{2}^{-1}=\begin{bmatrix}{}^{t}\delta_{1}x_{r,1}&-x_{r,1}{}^{t} \delta_{1}\delta_{2}(v_{2}^{\prime})^{-1}+v_{1}X_{1}^{\prime}(v_{2}^{\prime})^{ -1}\end{bmatrix}\] \[=\begin{bmatrix}{}^{t}\delta_{1}x_{r,1}&-x_{r,1}{}^{t}\delta_{1}\gamma_{2}v_{2}^ {-1}+v_{1}X_{1}v_{2}^{-1}&x_{r,1}{}^{t}\delta_{1}\gamma_{2}J_{2m-2}^{\prime}{}^{t }\gamma_{2}-x_{r,1}{}^{t}\delta_{1}x-v_{1}X_{1}J_{2m-2}^{\prime}{}^{t}\gamma_{2 }+v_{1}{}^{t}\gamma\\ x_{r,1}&-x_{r,1}\gamma_{2}v_{2}^{-1}&x_{r,1}\gamma_{2}J_{2m-2}^{\prime}{}^{t} \gamma_{2}-x_{r,1}x\end{bmatrix}.\] We write this map as \[(\delta_{1},\gamma_{2},x,x_{r,1},\gamma,v_{1},v_{2},X_{1})\] \[\mapsto(x_{r,1}\delta_{1},-x_{r,1}\gamma_{2}v_{2}^{-1},x_{r,1}\gamma_{2}J^{ \prime}_{2m-2}{}^{t}\gamma_{2}-x_{r,1}x,x_{r,1},\] \[x_{r,1}{}^{t}\delta_{1}\gamma_{2}J^{\prime}_{2m-2}{}^{t}\gamma_{2}-x_{r,1}{}^{ t}\delta_{1}x-v_{1}X_{1}J^{\prime}_{2m-2}{}^{t}\gamma_{2}+v_{1}{}^{t}\gamma,-x_{r,1}{}^{t}\delta_{1}\gamma_{2}v_{2}^{-1}+v_{1}X_{1}v_{2}^{-1}).\] The Jacobian matrix of this map is given by \[\begin{bmatrix}x_{r,1}I_{r-1}&0&0&\delta_{1}&0&0&0\\ 0&-x_{r,1}v_{2}^{-1}&0&\gamma_{2}v_{2}^{-1}&0&0&\frac{\partial(-x_{r,1}\gamma_ {2}v_{2}^{-1})}{\partial v_{2}}&0\\ 0&*&-x_{r,1}&*&0&0&0\\ 0&0&0&1&0&0&0\\ *&*&*&*&\frac{\partial(v_{1}{}^{t}\gamma)}{\partial\gamma}&*&0&*\\ *&*&0&*&0&\frac{\partial(v_{1}X_{1}v_{2}^{-1})}{\partial v_{1}}&\frac{ \partial(-x_{r,1}{}^{t}\delta_{1}\gamma_{2}v_{2}^{-1})}{v_{2}}+\frac{\partial( v_{1}X_{1}v_{2}^{-1})}{v_{2}}&\frac{\partial(v_{1}X_{1}v_{2}^{-1})}{X_{1}}\end{bmatrix}.\] Multiplying the second row by \(-\delta_{1}\) and adding it to the last row, one computes that the determinant of the Jacobian matrix is equal to \[|x_{r_{1}}|^{r-2m-2}\cdot|\frac{\partial(v_{1}{}^{t}\gamma)}{\partial\gamma}| \cdot|\det\left[\frac{\partial(v_{1}X_{1}v_{2}^{-1})}{\partial v_{1}}\right. \begin{array}{cc}\partial(v_{1}X_{1}v_{2}^{-1})&\frac{\partial(v_{1}X_{1}v_{ 2}^{-1})}{\partial v_{2}}&\frac{\partial(v_{1}X_{1}v_{2}^{-1})}{\partial X_{1} }\end{array}\right]|\] From the form of \(R_{X}\) we observe that \[|\frac{\partial(v_{1}{}^{t}\gamma)}{\partial\gamma}|=\begin{cases}1,&\text{ if }r\leq m,\\ |x_{r-2m+1,2m}|^{r-2m},&\text{ if }r\geq 2m.\end{cases}\] And the last term \(|\det\left[\frac{\partial(v_{1}X_{1}v_{2}^{-1})}{\partial v_{1}}\right. \begin{array}{cc}\partial(v_{1}X_{1}v_{2}^{-1})&\frac{\partial(v_{1}X_{1}v_{ 2}^{-1})}{\partial v_{2}}\end{array}\right]|\) gives the Jacobian of the same type of action but with rank drop by \(1\). Proceed by induction we obtain the invariant measure on \(R_{X}\). When \(r<m<2m\), \(\frac{\partial(v_{1}{}^{t}\gamma)}{\partial\gamma}=1\) until our induction procedure goes to the \((2m-r+1)\)-th step, where \(\gamma=(x_{2,r-1},*,\cdots,*)\), in which case we have \(|\frac{\partial(v_{1}{}^{t}\gamma)}{\partial\gamma}|=|x_{2,r-1}|\). Then the induction continues as case (2). Consequently, we obtain the invariant measure \(d\mu_{X}\) on \(R_{X}\) as \[d\mu_{X}=\begin{cases}|x_{r,1}^{r+2m-2}x_{r-1,2}^{r+2m-5}x_{r-2,3}^{r+2m-8} \cdots x_{1,r}^{2m-2r+1}|\prod dx_{ij},&r\leq m,\\ x_{r,1}^{r+2m-2}x_{r-2,3}^{r+2m-5}x_{r-2,3}^{r+2m-8}\cdots x_{r-m+1,m}^{r+m+1}x_ {r-m,m+1}^{r-m-1}x_{r-m,m+1}^{r-m-2}\cdots x_{r-1,m+2}^{r-2m}\prod dx_{ij},&m<r <2m,\\ x_{r,1}^{r+2m-2}x_{r-1,2}^{r+2m-5}x_{r-2,3}^{r+2m-8}\cdots x_{r-m+1,m}^{r-m+1}x_ {r-m,m+1}^{r-m-1}x_{r-m-1,m+2}^{r-m-2}\cdots x_{r-2m+1,2m}^{r-2m}|\prod dx_{ij}. \quad r\geq 2m,\end{cases}\] where the product runs over all \((i,j)\) such that \(x_{i,j}\neq 0\). 
Recall that when \(r\geq 2m\), we will also need to consider the action of \(U_{\mathrm{GL}_{r-2m}}\) on \(\mathrm{Sym}^{r}\). The corresponding invariant measure on \(R_{Z}\) can also be proved inductively. Write \(u_{1}=\begin{bmatrix}u_{1}^{\prime}&w\\ &u_{1}^{\prime\prime}\end{bmatrix}\), where \(u_{1}^{\prime}\in U_{\mathrm{GL}_{r-2m}}\), \(u_{1}^{\prime\prime}\in U_{\mathrm{GL}_{2m}}\), \(w\in\mathrm{Mat}_{(r-2m)\times 2m}\), and \(Z_{0}=\begin{bmatrix}Z_{1}&h\\ t&Z_{2}\end{bmatrix}\) with \(Z_{1}=\mathrm{diag}\{z_{1,1},\cdots,z_{r-2m,r-2m}\}\), \(Z_{2}\in\mathrm{Sym}^{2m}\), \(h\in\mathrm{Mat}_{(r-2m)\times 2m}\). As we see from the proof of \(R_{Z}\) above, it suffices to only consider the action \((U_{1}^{\prime},Z_{1})\mapsto u_{1}^{\prime}Z_{1}{}^{t}u_{1}^{\prime}\). Without loss of generality, we assume \(r-2m>0\). Then we can write \(u_{1}^{\prime}=\begin{bmatrix}\tilde{u}_{1}&\alpha_{1}\\ &1\end{bmatrix}\), with \(\tilde{u}_{1}\in U_{\mathrm{GL}_{r-2m-1}}\), \(\alpha_{1}\in\mathbb{A}^{r-2m-1}\), and \(Z_{1}=\begin{bmatrix}Z_{1}^{\prime}&\\ z_{r-2m,r-2m}\end{bmatrix}\) where \(Z_{1}^{\prime}=\mathrm{diag}\{z_{1,1},\cdots,z_{r-2m-1,r-2m-1}\}\). Then the map \((u_{1}^{\prime},Z_{1})\mapsto u_{1}^{\prime}Z_{1}{}^{t}u_{1}^{\prime}\) can be written as \[(z_{r-2m,r-2m},\alpha_{1},\tilde{u}_{1},Z_{1}^{\prime})\mapsto(z_{r-2m,r-2m},z_{ r-2m,r-2m}\alpha_{1},\tilde{u}_{1}Z_{1}^{\prime}{}^{t}\tilde{u}_{1}+{}^{t}\alpha_{1}z_{r-2m,r-2m} \alpha_{1}),\] whose Jacobian matrix if of the form \[\begin{bmatrix}1&0&0&0\\ 0&z_{r-2m,r-2m}I_{r-2m-1}&0&0\\ *&*&\frac{\partial(\bar{u}_{1}Z_{r}^{t}\bar{u}_{1})}{\partial\bar{u}_{1}}&\frac{ \partial(\bar{u}_{1}Z_{r}^{t}\bar{u}_{1})}{\partial Z_{1}^{t}}\end{bmatrix}.\] So the absolute value of its determinant is equal to \(|x_{r-2m,r-2m}^{r-2m-1}|\cdot|\det\left[\frac{\partial(\bar{u}_{1}Z_{r}^{t} \bar{u}_{1})}{\partial\bar{u}_{1}}\quad\frac{\partial(\bar{u}_{1}Z_{r}^{t}\bar {u}_{1})}{\partial Z_{1}^{t}}\right]|\), where the second term is the absolute value of the Jacobian matrix of the same type of action with rank drop by 1. Hence we can proceed by induction. As a result, we obtain the invariant measure \(d\mu_{Z}\) on \(R_{Z}\) in this case given by \[d\mu_{Z}=|z_{r-2m,r-2m}^{r-2m-1}z_{r-2m-1,r-2m-1}^{r-2m-2}\cdots z_{2,2}|\prod dz _{ij}\] where the product runs over all \((i,j)\) such that \(z_{i,j}\neq 0\). For the other two cases, \(d\mu_{Z}=\prod_{i,j}dz_{i,j}\) where the product runs over all \((i,j)\). Finally, since \(Z=J_{r}{}^{t}Y+\frac{(-1)^{r}XJ_{2m}^{r}{}^{t}X}{2}\), and \({}^{t}Z=Z\), we see that \[Y={}^{t}(J_{r}^{-1}(Z-\frac{(-1)^{r}XJ_{2m}^{r}{}^{t}X}{2}))={}^{t}ZJ_{r}+ \frac{(-1)^{r}XJ_{2m}^{r}{}^{t}XJ_{r}}{2}=ZJ_{r}+\frac{X\theta_{r,m}(X)}{2}.\] Let \(R_{Y}=R_{Z}J_{r}+\frac{R_{X}\theta_{r,m}(R_{X})}{2}\). Since the map \((X,Y)\mapsto(X,Z)\) is bijective and \(U_{M}\)-equivariant, \((R_{X},R_{Y})\) gives a set of orbit space representatives for the action \((X,Y)\mapsto(u_{1}Xu_{2}^{-1},u_{1}Y\theta_{r}(u_{1}^{-1}))\). If we denote the corresponding measure on \(R_{Y}\) as \(d\mu_{Y}\), since \(\theta_{r,m}\) is linear, we have \[d\mu_{Y}=d\mu_{Z}\cdot J_{r}+\frac{d\mu_{X}\cdot\theta_{r,m}(X)+X\cdot\theta_{ r,m}(d\mu_{X})}{2}\] Therefore \[d\mu_{X}\wedge d\mu_{Y}=d\mu_{X}\wedge(d\mu_{Z}\cdot J_{r}).\] This shows that we replace integrals over \(R_{X}\times R_{Y}\) by the ones over \(R_{X}\times R_{Z}\), without changing the corresponding invariant Haar measures. 
In order to apply Theorem 6.2 of [9], we need the following lemma: **Lemma 4.2**.: _There exists an injection \(\alpha^{\vee}:F^{\times}\longrightarrow Z_{G}\backslash Z_{M}\), s.t., \(\alpha(\alpha^{\vee}(t))=t\), for \(\forall t\in F^{\times}\), where \(\alpha=\alpha_{r}\)._ Proof.: Since \(Z_{M}=\cap_{i\neq r}\ker(\alpha_{i})=\{\mathrm{diag}(\underbrace{t,\cdots,t}_{ r},\pm 1,\cdots,\pm 1,\underbrace{t^{-1},\cdots,t^{-1}}_{r})\}\), and \(Z_{G}=\cap_{i=1}^{n}\ker(\alpha_{i})=\{\pm 1\}\), the quotient \(Z_{G}\backslash Z_{M}\) can be identified as \(\{\mathrm{diag}(\underbrace{t,\cdots,t}_{r},1,\cdots,1,\underbrace{t^{-1}, \cdots,t^{-1}}_{r}):t\in\mathbb{G}_{m}\}\). Define \[\alpha^{\vee}(t)=\mathrm{diag}(\underbrace{t,\cdots,t}_{r},1,\cdots,1, \underbrace{t^{-1},\cdots,t^{-1}}_{r}),\] then clearly we have \(\alpha(\alpha^{\vee}(t))=t\) for \(\forall t\in F^{\times}\). Let \(Z_{M}^{0}\) denote the image of \(\alpha^{\vee}\). Notice that \(Z_{M}^{0}\) is the connected component of the center \(Z_{M}\) of \(M\). As we will see later in section 5, the test functions in the local coefficient formula are compactly supported modulo \(Z_{M}\), so it is necessary to consider the action of \(Z_{M}^{0}\) on \(U_{M}\backslash N\). As a result, the local coefficient is an integral of certain partial Bessel function over the space \(Z_{M}^{0}U_{M}\backslash N\). The corresponding orbit space representatives and invariant measures can be given by the following proposition: **Proposition 4.3**.: _The action of \(Z^{0}_{M}\) on \(R=R_{X}\times R_{Z}=U_{M}\backslash N\) admits a set of orbit space representatives of the form \(R^{\prime}:=R_{X^{\prime}}\times R_{Z^{\prime}}\) by setting \(x^{\prime}_{i,j}=\frac{x_{i,j}}{x_{r,1}}\), and \(z^{\prime}_{i,j}=\frac{z_{i,j}}{x^{\prime}_{r,1}}\) for elements in \(R_{X}\) and \(R_{Z}\) respectively. The corresponding invariant measure is given by \(d\mu^{\prime}=d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}\), where_ \[d\mu_{X^{\prime}}=\begin{cases}|x^{\prime r+2m-5}_{r-1,2}x^{\prime r+2m-8}_{r- 2,3}\cdots x^{\prime}_{1,r}^{2m-2r+1}|\prod dx^{\prime}_{ij},&r\leq m,\\ |x^{\prime r+2m-5}_{r-1,2}x^{\prime r-2m-8}_{r-2,3}\cdots x^{\prime r-m+1}_{r- m+1,m}x^{\prime r-m-1}_{r-m+1}x^{\prime r-m-2}_{r-m+1,m+2}\cdots x^{\prime}_{2,r- 1}|\prod dx^{\prime}_{ij},&m<r<2m,\\ |x^{\prime r+2m-5}_{r-1,2}x^{\prime r+2m-8}_{r-2,3}\cdots x^{\prime r-m+1}_{r- m+1,m}x^{\prime r-m-1}_{r-m,m+1}x^{\prime r-m-2}_{r-m-1,m+2}\cdots x^{\prime r-2m +1,2m}|\prod dx^{\prime}_{ij}.&r\geq 2m,\end{cases}\] _and_ \[d\mu_{Z^{\prime}}=\begin{cases}\prod_{i,j}dz^{\prime}_{i,j}&\text{if }r\leq 2m, \\ |z^{\prime}_{r-2m,r-2m}z^{\prime r-2m-2}_{r-2m-1,r-2m-1}\cdots z^{\prime}_{2,2}| \prod dz^{\prime}_{ij}&\text{if }r>2m.\end{cases}\] Proof.: For \(z=\alpha^{\vee}(t)\), we have \(zn(R_{X},R_{Y})z^{-1}=n(tR_{X},t^{2}R_{Y})\). This is equivalent to say that the action of \(Z^{0}_{M}\) on \(U_{M}\backslash N^{\prime}\simeq R_{X}\times R_{Z}\) is given by \[R_{X}\mapsto tR_{X},R_{Z}\mapsto t^{2}R_{Z}\] From this we identify \(Z^{0}_{M}U_{M}\backslash N\) with \(R_{X^{\prime}}\times R_{Z^{\prime}}\) where \(X^{\prime}\) is given by setting \(x_{r,1}=1\) in \(R_{X}\), and \(Z^{\prime}\) is of the same form with \(Z\). 
We will construct the invariant measure \(d\mu^{\prime}\) on the orbit space \(R^{\prime}:=Z^{0}_{M}U_{M}\backslash N^{\prime}\simeq F^{\times}\backslash(R _{X}\times R_{Z})=R_{X^{\prime}}\times R_{Z^{\prime}}\) such that it is compatible with our invariant measure \(d\mu=d\mu_{X}\wedge d\mu_{Z}\), in the sense that \[\int_{R^{\prime}(F)}\int_{F^{\times}}f(tX^{\prime},t^{2}Z^{\prime})q^{(2\rho,H _{M}(\alpha^{\vee}(t)))}\frac{dt}{|t|}d\mu^{\prime}=\int_{R(F)}f(X,Z)d\mu_{X} \wedge d\mu_{Z}\] for any integrable function \(f\) on \(R=R_{X}\times R_{Z}\), where \(\rho\) is half sum of the positive roots in \(N\). Observe that \(dx_{i,j}=|t|dx^{\prime}_{i,j}\), \(dz_{i,j}=|t|^{2}dz_{i,j}\), and \[2\rho=\sum_{1\leq i\leq r,r+1\leq j\leq n}(e_{i}\pm e_{j})+\sum_{1\leq i\leq j \leq r}(e_{i}+e_{j})=(2m+2)\sum_{i=1}^{r}e_{i}+\sum_{1\leq i<j\leq r}(e_{i}+e_ {j})=(2m+r+1)\sum_{i}^{r}e_{i}\] Hence \(q^{(2\rho,H_{M}(\alpha^{\vee}(t)))}=|t|^{r(2m+r+1)}\). The above observation implies that the measures on \(R_{X^{\prime}}\) and \(R_{Z^{\prime}}\) must be of the form \(d\mu_{X^{\prime}}=\prod|x^{\prime}_{ij}|^{k_{ij}}dx^{\prime}_{ij}\), and \(d\mu_{Z^{\prime}}=\prod|z^{\prime}_{st}|^{l_{st}}dz^{\prime}_{st}\). Therefore \[\int_{R^{\prime}(F)}\int_{F^{\times}}f(tX^{\prime},t^{2}Z^{\prime })q^{(2\rho,H_{M}(\alpha^{\vee}(t)))}\frac{dt}{|t|}d\mu^{\prime}=\int_{F^{ \times}\times R^{\prime}(F)}f(tX^{\prime},t^{2}Z^{\prime})|t|^{r(2m+r+1)} \frac{dt}{|t|}(d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}})\] \[=\int_{R(F)}f(X,Z)(|x_{r,1}|^{r(2m+r+1)-1}\prod_{(i,j)\neq(r,1)}|x _{r,1}|^{-k_{ij}-1}|x_{ij}|^{k_{ij}}dx_{ij})\wedge(\prod_{(s,t)}|x_{r,1}|^{-l_{ st}-2}|z_{st}|^{l_{st}}dz_{st})\] Compare this with the formulas for \(R_{X}\) and \(R_{Z}\) we obtained, if forces that all the powers of \(x^{\prime}_{i,j}\) and \(z^{\prime}_{i,j}\) remain the same as those of \(x_{i,j}\) and \(z_{i,j}\) respectively, except \(x_{r,1}\), i.e. the invariant measure on \(Z^{0}_{M}U_{M}\backslash N\) can be given as \(d\mu^{\prime}=d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}\), where \[d\mu_{X^{\prime}}=\begin{cases}|x^{\prime r+2m-5}_{r-1,2}x^{\prime r+2m-8}_{r-2,3 }\cdots x^{\prime 2m-2r+1}_{r,r}|\prod dx^{\prime}_{ij},&r\leq m,\\ |x^{\prime r+2m-5}_{r-1,2}x^{\prime r+2m-8}_{r-2,3}\cdots x^{\prime r-m+1}_{r-m+1,m }x^{\prime r-m-1}_{r-m,m+1}x^{\prime r-m-2}_{r-m,1+2}\cdots x^{\prime}_{2,r-1}| \prod dx^{\prime}_{ij},&m<r<2m,\\ |x^{\prime r+2m-5}_{r-1,2}x^{\prime r+2m-8}_{r-2,3}\cdots x^{\prime r-m+1}_{r-m+1,m }x^{\prime r-m-1}_{r-m,m+1}x^{\prime r-m-2}_{r-m-1,m+2}\cdots x^{\prime r-2m}_{r-2m +1,2m}|\prod dx^{\prime}_{ij}.&r\geq 2m,\end{cases}\] and \[d\mu_{Z^{\prime}}=\begin{cases}\prod_{i,j}dz^{\prime}_{i,j}&\text{if }r\leq 2m,\\ |z^{\prime}_{r-2m,r-2m}z^{\prime r-2m-2}_{r-2m-1,r-2m-1}\cdots z^{\prime}_{2,2}| \prod dz^{\prime}_{ij}&\text{if }r>2m.\end{cases}\] Moreover, to match the power of \(x_{r,1}\) in the expressions of \(d\mu_{X}\), one also needs to verify that the following identities hold in each cases: If \(r\leq m\), then \(r(r+2m+1)-1-\sum_{k=1}^{r-1}((r+2m-2-3k)+1)-\sum_{k=1}^{r-1}k-2\sum_{k=1}^{r}k=r +2m-2\) ; If \(m<r<2m\), then \(r(r+2m+1)-1-\sum_{k=1}^{m-1}((r+2m-2-3k)+1)-\sum_{k=1}^{r-m-1}((r-m-k)+1)-((r-m )^{2}-(r-m-1))-\frac{(3r-2m-1)(2m-r)}{2}-2\sum_{k=1}^{r}k=r+2m-2\); If \(r\geq 2m\), then \(r(r+2m+1)-1-\sum_{k=1}^{m-1}((r+2m-2-3k)+1)-\sum_{k=1}^{m}((r-m-k)+1)-m(m-1)-2 \sum_{k=1}^{r-2m}((k-1)+2)-2(\frac{r(r+1)}{2}-\frac{(r-2m)(r-2m-1)}{2})=r+2m-2\). These identities follow by straightforward calculations. ## 5. 
Local coefficient as Mellin transforms of partial Bessel functions In this section we apply Theorem 6.2 of [9] to represent the local coefficient in our case as the Mellin transform of certain partial Bessel integrals. Recall that given irreducible \(\psi\)-generic representations \(\sigma\) and \(\tau\) of \(\mathrm{GL}_{n}(F)\) and \(\mathrm{Sp}_{2m}(F)\) respectively, then \(\pi=\sigma\otimes\tau\) is a \(\psi\)-generic representation of \(M(F)\simeq\mathrm{GL}_{n}(F)\times\mathrm{Sp}_{2m}(F)\). The corresponding local coefficient is a product of two \(\gamma\)-factors: \[C_{\psi}(s,\pi)=\gamma(s,\sigma\times\tau,\psi)\gamma(s,\sigma,\wedge^{2},\psi)\] As the stability of \(\gamma(s,\sigma,\wedge^{2},\psi)\) is proved in [4], the stability of \(\gamma(s,\sigma\times\tau,\psi)\) is equivalent to the stability of the local coefficient \(C_{\psi}(s,\pi)\). Let \(n\in N(F)\) such that the Bruhat decomposition \(\dot{w_{0}^{-1}}n=mn^{\prime}\bar{n}\) holds. Denote by \[U_{M,n}:=\{u\in U_{M}:unu^{-1}=n\},\text{ and }U^{\prime}_{M,m}:=\{u\in U_{M} :num^{-1}\in U_{M}\ \ \&\ \ \psi(num^{-1})=\psi(u)\}.\] By [14], except for a set of measure zero on \(N(F)\), we have \(U_{M,n}=U^{\prime}_{M,m}\). Together with Lemma 4.2, this implies that the assumptions for Theorem 6.2 in [9] are satisfied. Bessel functions are defined on Bruhat cells. Suppose in the decomposition \(\dot{w_{0}^{-1}}n=mn^{\prime}\bar{n}\), \(m\) lies in some Bruhat cell \(C_{M}(w)=B_{M}wB_{M}\), where \(B_{M}=B\cap M\). We define the Bessel function associated to \(\pi\) and \(w\) as \[j_{\pi,w}(m)=\int_{U_{M}}W(mu)\psi^{-1}(u)du=\int_{U_{M}}W(mu^{-1})\psi(u)du\] where \(W\in W(\pi,\psi)\) is a Whittaker model defined by a Whittaker functional \(\lambda\) on \(\pi\) as \(W(m)=\lambda(\pi(m)v)\) for certain \(v\in\pi\), normalized so that \(W(e)=1\), where \(e\) is the identity element of \(M(F)\). Then It is immediate that \[j_{\pi,w}(u_{1}mu_{2})=\psi(u_{1}u_{2})j_{\pi,w}(m)\] for any \(u_{1},u_{2}\in U_{M}(F)\). It follows by the Bruhat decomposition that the Bessel functions are essentially functions on \(wT\), where \(T\) is the maximal (split) torus of \(G\). We refer the readers to [2] for partial Bessel funcitons for quasi-split groups. However, the local coefficient is represented by an integral of certain partial Bessel function, not the Bessel function itself. To define that, we need an exhaustive sequence of open compact subgroups \(\overline{N}_{0,\kappa}\subset\overline{N}(F)\) such that \(\alpha^{\vee}(t)\overline{N}_{0,\kappa}\alpha^{\vee}(t)^{-1}\) depends only on \(|t|\). For a matrix \(X\) of size \(l\times k\), let \[\varphi_{\kappa}(X)=\begin{cases}1&|X_{i,j}|\leq q^{((k-i)+(l-j)+1)\kappa},\\ 0&\text{otherwise}\.\end{cases}\] Set \[\overline{N}_{0,\kappa}=\{\bar{n}(X,Y):\varphi_{\kappa}(\varpi^{-(d+g)}X)\cdot \varphi_{\kappa}(\varpi^{-2(d+g)}Y)=1\}\] where \(d=\mathrm{Cond}(\psi)\) and \(g=\mathrm{Cond}(\omega_{\pi}^{-1}(w_{0}(\omega_{\pi})))\). As we have seen before that in our case \(\alpha^{\vee}(t)\bar{n}(X,Y)\alpha^{\vee}(t)^{-1}=\bar{n}(tX,t^{2}Y)\), it follows that our definition of \(\overline{N}_{0,\kappa}\) makes \(\alpha^{\vee}(t)\overline{N}_{0,\kappa}\alpha^{\vee}(t)^{-1}\) depends only on \(|t|\). Denote by \(\varphi_{\overline{N}_{0,\kappa}}\) the characteristic function of \(\overline{N}_{0,\kappa}\). 
Then the partial Bessel function is a function on \(M(F)\times Z^{0}_{M}(F)\) defined as \[j_{\overline{N}_{0,\kappa},\pi,w}(m,z):=\int_{U_{M,n}\setminus U_{M}}W(mu) \varphi_{\overline{N}_{0,\kappa}}(zu^{-1}\overline{n}uz^{-1})\psi^{-1}(u)du.\] Given that \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\), let \[\dot{j}_{\overline{N}_{0,\kappa},\pi,w}(n):=\dot{j}_{\overline{N}_{0,\kappa}, \pi,w}(m,\alpha^{\vee}(\varpi^{d+g}u_{\alpha_{r}}(\dot{w}_{0}\bar{n}\dot{w}_{0} ^{-1}))).\] Suppose \(\sigma\) and \(\tau\) are irreducible \(\psi\)-generic representations of \(\operatorname{GL}_{r}(F)\) and \(\operatorname{Sp}_{2m}(F)\) respectively, and \(L=\prod_{k=1}^{d}\operatorname{GL}_{n_{k}}\times\prod_{j=1}^{l}\operatorname{ GL}_{t_{j}}\times\operatorname{Sp}_{2m^{\prime}}\) with \(\sum_{k=1}^{d}n_{k}=r\) and \(\sum_{j=1}^{l}n_{j}+m^{\prime}=m\). Then we can find \(\psi\)-generic supercuspidal representations \(\sigma_{k}^{\prime},(1\leq k\leq d)\), \(\sigma_{j}^{\prime\prime},(1\leq j\leq l)\), and \(\tau^{\prime}\) of \(\operatorname{GL}_{n_{k}}(F)\), \(\operatorname{GL}_{t_{j}}(F)\), and \(\operatorname{Sp}_{2m^{\prime}}(F)\) respectively, so that \(\pi=\sigma\boxtimes\tau\hookrightarrow\operatorname{Ind}_{L(F)N(F)}^{M(F)} \sigma^{\prime}\otimes\mathds{1}_{N(F)}\), where \(\sigma^{\prime}=\otimes_{k=1}^{d}\sigma_{k}^{\prime}\otimes\otimes_{j=1}^{ \sigma_{j}^{\prime\prime}}\otimes\tau^{\prime}\). By the multiplicativity of Langlands-Shahidi local \(\gamma\)-factors in our cases, see Theorem 3.1 of [8] or Theorem 8.3.2 of [10], we obtain that \[\gamma(s,\sigma\times\tau,\psi)=\prod_{k=1}^{d}\prod_{j=1}^{l}\gamma(s,\sigma _{k}^{\prime}\times\sigma_{j}^{\prime\prime},\psi)\gamma(s,\sigma_{k}\times \tilde{\sigma}_{j}^{\prime\prime},\psi)\prod_{k=1}^{d}\gamma(s,\sigma_{k}^{ \prime}\times\tau^{\prime},\psi)\] where \(\tilde{\sigma}_{j}^{\prime\prime}\) is the contragredient of \(\sigma_{j}^{\prime\prime}\). Consequently, it suffices to prove stability for \(\psi\)-generic supercuspidal representations. Suppose \(\sigma\) and \(\tau\) are \(\psi\)-generic supercuspidal representations of \(\operatorname{GL}_{n}(F)\) and \(\operatorname{Sp}_{2m}(F)\) respectively. Set \(\pi=\sigma\boxtimes\tau\), then its central character \(\omega_{\pi}=\omega_{\sigma}\boxtimes\omega_{\tau}\). In this case, one can find a function \(f\in C_{c}^{\infty}(M(F);\omega_{\pi})\), the space of smooth functions of compact support modulus the center on \(M(F)\), such that \(f(zm)=\omega_{\pi}(m)f(z)\) for all \(z\in Z_{M}(F)\), and \(m\in M(F)\). Then \(W(m)=W_{f}(m):=\int_{U_{M}(F)}f(xm)\psi^{-1}(x)dx\) defines a non-zero Whittaker model. We normalize it so that \(W_{f}(1)=1\). For simplicity, we omit \(w\) and denote \(j_{\pi,\kappa,f}(n):=j_{\overline{N}_{0,\kappa},\pi,w}(n)\) where we replace \(W\) by \(W_{f}\). Let \(\alpha=\alpha_{r}\) and \(\tilde{\alpha}=\langle\rho,\alpha\rangle^{-1}\rho\), where \(\rho\) is half sum of the positive roots in \(N\). 
Then by Theorem 6.2 of [9], we have **Proposition 5.1**.: _Let \(\sigma\) and \(\tau\) be \(\psi\)-generic supercuspidal representations of \(\operatorname{GL}_{r}(F)\) and \(\operatorname{Sp}_{2m}(F)\) respectively, set \(\pi=\sigma\boxtimes\tau\), then_ \[C_{\psi}(s,\pi)^{-1}=\gamma(2(\tilde{\alpha},\alpha^{\vee})s,\omega_{\pi}( \dot{w}_{0}\omega_{\pi}^{-1}),\psi^{-1})\int_{Z_{M}^{0}(F)U_{M}(F)\backslash N( F)}j_{\pi,\kappa,f}(\hat{n})\omega_{\pi_{\pi}}^{-1}(\dot{w}_{0}\omega_{\pi_{ \pi}})(x_{\alpha})q_{F}^{(s\tilde{\alpha}+\rho,H_{M}(m))}\text{d}\hat{n}\] _for sufficiently large \(\kappa\), where \(m\) is the image of \(\hat{n}\mapsto m\) with \(\hat{n}\) a representative of the orbit of \(n\in Z_{M}^{0}(F)U_{M}(F)\backslash N(F)\) in \(N(F)\) via \(\dot{w}_{0}^{-1}\hat{n}=mn^{\prime}\hat{n}\), which holds off a subset of measure zero on \(N(F)\). Here \(\hat{d}\hat{n}\) is the invariant measure on the orbit space \(Z_{M}^{0}(F)U_{M}(F)\backslash N(F)\), \(\pi_{s}=\pi\otimes q^{(s\tilde{\alpha},H_{M}(\cdot))}\), \(x_{\alpha}=u_{\alpha_{r}}(\dot{w}_{0}\bar{n}\dot{w}_{0}^{-1})\), and \(\gamma(2(\tilde{\alpha},\alpha^{\vee})s,\omega_{\pi}(\dot{w}_{0}\omega_{\pi}^{ -1}),\psi^{-1})\) is the Abelian \(\gamma\)-factor depending only on \(\omega_{\pi}\)._ Let us simplify this formula in our cases. First, recall that we have \(2\rho=(r+2m+1)\sum_{i=1}^{r}e_{i}\), \(\alpha=\alpha_{r}=e_{r}-e_{r+1}\). So \[\langle\rho,\alpha\rangle=\frac{2(\rho,\alpha)}{(\alpha,\alpha)}=\frac{((r+2m+ 1)\sum_{i=1}^{r}e_{i},e_{r}-e_{r+1})}{(e_{r}-e_{r+1},e_{r}-e_{r+1})}=\frac{r+2m +1}{2}\] and therefore \(\tilde{\alpha}=\langle\rho,\alpha\rangle^{-1}\rho=\sum_{i=1}^{r}e_{i}\). It follows that \(\langle\tilde{\alpha},\alpha^{\vee}\rangle=\langle\sum_{i=1}^{r}e_{i},\sum_{i=1 }^{r}e_{i}^{*}\rangle=r\), and if we write \(m=\operatorname{diag}\{m_{1},m_{2},\theta_{r}(m_{1})\}\), then \[q^{(s\tilde{\alpha},H_{M}(m))}=|\det(m_{1})|^{s},q^{(s\tilde{\alpha}+\rho,H_{M}(m ))}=|\det(m_{1})|^{s+\frac{r+2m+1}{2}}.\] Moreover, \[\omega_{\pi}(\dot{w}_{0}\omega_{\pi}^{-1})(\alpha^{\vee}(t))=\omega_{\pi}( \alpha^{\vee}(t))\omega_{\pi}^{-1}(\dot{w}_{0}^{-1}\alpha^{\vee}(t)\dot{w}_{0})\] \[=\omega_{\pi}(\operatorname{diag}(\underbrace{t,\cdots,t}_{r},1,\cdots,1, \underbrace{t^{-1},\cdots,t^{-1}}_{r}))\omega_{\pi}^{-1}(\operatorname{diag}( \underbrace{t^{-1},\cdots,t^{-1}}_{r},1,\cdots,1,\underbrace{t,\cdots,t}_{r}))= \omega_{\pi}^{2}(\alpha^{\vee}(t)).\] Hence \(\omega_{\pi}(\dot{w}_{0}\omega_{\pi}^{-1})\circ\alpha^{\vee}=\omega_{\pi}^{2} \circ\alpha^{\vee}\). Similarly \(\omega_{\pi_{*}}^{-1}(\dot{w}_{0}\omega_{\pi_{*}})(\alpha^{\vee}(t))=\omega_{\pi _{*}}^{-2}(\alpha^{\vee}(t))=\omega_{\pi}^{-2}(\alpha^{\vee}(t))|t|^{-rs}\). From our previous calculation of the Bruhat decomposition \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\) with \(n=n(X,Y)\), we see that \[\dot{w}_{0}\bar{n}\dot{w}_{0}^{-1}=\begin{bmatrix}&I_{r}\\ &I_{2m}\\ (-1)^{r}I_{r}\end{bmatrix}\begin{bmatrix}I_{r}\\ {J_{2m}^{\prime}}^{t}X^{t}Y^{-1}J_{r}&I_{2m}\\ Y^{-1}&Y^{-1}X&I_{r}\end{bmatrix}\begin{bmatrix}&(-1)^{r}I_{r}\\ I_{2m}&\\ I_{r}\end{bmatrix}\] \[=\begin{bmatrix}I_{r}&Y^{-1}X&Y^{-1}\\ &I_{2m}&(-1)^{r}J_{2m}^{\prime}t^{X}tYY^{-1}J_{r}\\ &&I_{r}\end{bmatrix}=n(Y^{-1}X,Y^{-1}).\] Hence \(u_{\alpha_{r}}(\dot{w}_{0}\bar{n}\dot{w}_{0}^{-1})\) is the lower-left entry of \(Y^{-1}X\). 
When we take our orbit space representative \(R=R_{X}\times R_{Y}\simeq R_{X}\times R_{Z}\simeq U_{M}\backslash N\), and \(n=n(X,Y)\) with \((X,Y)\in R_{X}\times R_{Y}\), this is just \(\frac{y_{rr}^{*}x_{r+1}}{\det Y}\), where \(y_{rr}^{*}\) is the \((r,r)\)-th entry of the adjoint matrix of \(Y=ZJ_{r}+\frac{X\theta_{r,m}(X)}{2}\), which is a polynomial function of \((X,Z)\), we denote it as \(P(X,Z)\). When passing to the orbit of \(Z_{M}^{0}(F)\), we obtain that \(u_{\alpha_{r}}(\dot{w}_{0}\bar{n}(X^{\prime},Y^{\prime})\dot{w}_{0}^{-1})= \frac{P(X^{\prime},Z^{\prime})}{\det Y}\). From now on, we will also use \(n(X,Z)\)(resp. \(m(X,Z)\)) to denote \(n(X,Y)\)(resp. \(m(X,Y)\)) if we emphasis that it is parameterized by \((X,Z)\) instead of \((X,Y)\). With the discussion of the orbit space structure of \(Z_{M}^{0}U_{M}\backslash N\) and its invariant measure in Proposition 4.3, we obtain the following result: **Proposition 5.2**.: _Let \(\sigma\) and \(\tau\) be \(\psi\)-generic supercuspidal representations of \(\mathrm{GL}_{r}(F)\) and \(\mathrm{Sp}_{2m}(F)\) respectively, set \(\pi=\sigma\boxtimes\tau\), then_ \[C_{\psi}(s,\pi)^{-1}=\gamma(2rs,\omega_{\pi}^{2},\psi^{-1})\int_{R_{X^{\prime} }\times R_{Z^{\prime}}}j_{\pi,\kappa,f}(n(X^{\prime},Z^{\prime}))\omega_{\pi} ^{-2}(\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime} \theta_{r,m}(X^{\prime})}{2})})\] \[\cdot|\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime} \theta_{r,m}(X^{\prime})}{2})}|^{-rs}|\det(m_{1}(X^{\prime},Z^{\prime}))|^{s +\frac{r+2m+1}{2}}d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}\] _for sufficiently large \(\kappa\), where \(m(X^{\prime},Z^{\prime})\) is the image of \(n(X^{\prime},Z^{\prime})\mapsto m(X^{\prime},Z^{\prime})\) via the Bruhat decomposition \(\dot{w}_{0}^{-1}\dot{n}=mn^{\prime}\bar{n}\), which holds off a subset of measure zero on \(N(F)\), \(m=\mathrm{diag}\{m_{1},m_{2},\theta_{r}(m_{1})\}\) with \(m_{1}\in\mathrm{GL}_{r}(F)\), \(m_{2}\in\mathrm{Sp}_{2m}(F)\), and \(\gamma(2rs,\omega_{\pi}^{2},\psi^{-1})\) is the Abelian \(\gamma\)-factor depending only on \(\omega_{\pi}\)._ To study the stability of local coefficient, we need to consider \(C_{\psi}(s,\pi\otimes\chi)\) with \(\chi\) a highly ramified character of \(F^{\times}\), regarded as a character of \(M(F)\simeq\mathrm{GL}_{r}(F)\times\mathrm{Sp}_{2m}(F)\) via \(\chi(m_{1},m_{2}):=\chi(\det(m_{1}))\) in our case. Therefore it is necessary to choose the open compact subgroups \(\{\overline{N}_{0,\kappa}\}_{\kappa}\) of \(\overline{N}(F)\) to be independent of \(\chi\). As in the proof of Theorem 6.2 of [9], given an irreducible \(\psi\)-generic representation \(\pi\) of \(M(F)\) with ramified central character \(\omega_{\pi}\), we choose a section \(h\in I(s,\pi)=\mathrm{Ind}_{P(F)}^{G(F)}(\pi\otimes q^{s\hat{\alpha}+\rho,H_{M }(\cdot)})\otimes\mathbf{1}_{N(F)}\), such that \(h\) is compactly supported modulus \(P(F)\), and use it to obtain the integral representation of the local coefficient formula as above. So we choose a sufficiently large open compact subgroup \(\overline{N}_{0}\) of \(\overline{N}(F)\) such that \(\mathrm{Supp}(h)\subset P(F)\overline{N}_{0}\). In our situation, we fix a character \(\chi_{0}\) of \(F^{\times}\) such that \(\omega_{\pi\otimes\chi_{0}}=\omega_{\pi}\chi_{0}^{\gamma}\) is ramified, and choose a \(\kappa_{0}\) and \(h_{0}\in I(s,\pi)\) such that \(\mathrm{Supp}(h_{0})\subset P(F)\overline{N}_{0,\kappa_{0}}\). 
Since \(\overline{N}_{0,\kappa}\subset\overline{N}_{0,\kappa}\) for all \(\kappa\geq\kappa_{0}\), \(\overline{N}_{0,\kappa}\) is independent of \(\chi_{0}\) if \(\kappa\geq\kappa_{0}\). Suppose \(\chi\) is any character of \(F^{\times}\) such that \(\omega_{\pi\otimes\chi}\) is ramified, choose \(h_{\chi}\in I(s,\pi\otimes\chi)\) such that \(\mathrm{Supp}(h_{\chi})\subset P(F)\overline{N}_{0,\chi}\) for some open compact subgroup \(\overline{N}_{0,\chi}\) of \(\overline{N}(F)\). If \(\overline{N}_{0,\chi}\subset\overline{N}_{0,\kappa_{0}}\), we are fine as we just discussed. If not, we replace \(h_{\chi}\) by a right shift \(R(\alpha^{\vee}(t))h_{\chi}:g\mapsto h_{\chi}(g\alpha^{\vee}(t))\), then \(R(\alpha^{\vee}(t)h_{\chi})\) is supported in \(\alpha^{\vee}(t)\overline{N}_{0,\chi}\alpha^{\vee}(t)^{-1}\) mod \(P(F)\), as \(\alpha^{\vee}(t)\in M(F)\). Since \(\alpha^{\vee}(t)\bar{n}(X,Y)\alpha^{\vee}(t)^{-1}=\bar{n}(tX,t^{2}Y)\), we choose \(|t|\) to be small enough so that \(\alpha^{\vee}(t)\overline{N}_{0,\chi}\alpha^{\vee}(t)^{-1}\subset\overline{N}_{0,\kappa_{0}}\), then we are done. So we obtain the following stronger version of the above proposition: **Proposition 5.3**.: _Let \(\sigma\) and \(\tau\) be \(\psi\)-generic supercuspidal representations of \(\mathrm{GL}_{r}(F)\) and \(\mathrm{Sp}_{2m}(F)\) respectively, such that \(\omega_{\pi}\) is ramified where \(\pi=\sigma\boxtimes\tau\), then there exists a \(\kappa_{0}\), such that for all \(\kappa\geq\kappa_{0}\) and all characters \(\chi\) of \(F^{\times}\) such that \(\omega_{\pi\otimes\chi}=\omega_{\pi}\chi^{r}\) is ramified, we have_ \[C_{\psi}(s,\pi\otimes\chi)^{-1}=\gamma(2rs,\omega_{\pi}^{2}\chi^{2r},\psi^{-1}) \int_{R_{X^{\prime}}\times R_{Z^{\prime}}}j_{\pi\otimes\chi,\kappa,f}(n(X^{ \prime},Z^{\prime}))(\omega_{\pi}^{-2}\chi^{-2r})(\frac{P(X^{\prime},Z^{\prime })}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})})\] \[\cdot|\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime} \theta_{r,m}(X^{\prime})}{2})}|^{-rs}|\det(m_{1}(X^{\prime},Z^{\prime}))|^{s+ \frac{r+2m+1}{2}}d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}\] _where \(m(X^{\prime},Z^{\prime})\) is the image of \(n(X^{\prime},Z^{\prime})\mapsto m(X^{\prime},Z^{\prime})\) via the Bruhat decomposition \(\dot{w}_{0}^{-1}\dot{n}=mn^{\prime}\bar{n}\), which holds off a subset of measure zero on \(N(F)\), \(m=\operatorname{diag}\{m_{1},m_{2},\theta_{r}(m_{1})\}\) with \(m_{1}\in\operatorname{GL}_{r}(F)\), \(m_{2}\in\operatorname{Sp}_{2m}(F)\), and \(\gamma(2rs,\omega_{\pi}^{2}\chi^{2r},\psi^{-1})\) is the Abelian \(\gamma\)-factor depending only on \(\omega_{\pi}\) and \(\chi\)._ ## 6. Asymptotics of partial Bessel integrals So far we obtained our local coefficient as the Mellin transform of certain partial Bessel functions. The partial Bessel functions appeared in our discussions can be reformulated using partial Bessel integrals, which admit nice asymptotic expansion formulas, as well as some uniform smooth properties. Theses properties are crucial for the proof of stability using our method. In this section, we discuss some general properties of partial Bessel integrals and obtain the asymptotic formula and uniform smoothness in our case. 
In particular, different from the known results using this method, we observe that the orbit space \(Z_{M}^{0}U_{M}\backslash N\) is no longer isomorphic to a torus, and we will separate its 'toric'-part out, which plays the same role as the torus over which the integral representing the local coefficient is taken in the known cases. This is a new phenomenon and we believe it can be generalized in our future work. ### Some general properties of partial Bessel integrals We begin by introducing partial Bessel integrals and some of its important properties. The structure and results in this subsection is the same as section 6 of [13] but are under a more general setting, so we will reprove some of the important results. Let \(M\) be a connected reductive group defined over a local field \(F\). Fix a Borel subgroup \(B_{M}=AU_{M}\), where \(A\) is the maximal (split) torus and \(U_{M}\) is the unipotent radical of \(B_{M}\). Suppose \(\pi\) is a \(\psi\)-generic supercuspidal representation of \(M(F)\) with central character \(\omega_{\pi}\). Take a matrix coefficient \(f\) of \(\pi\), then \(f\in C_{c}^{\infty}(M(F);\omega_{\pi})\), the space of smooth functions of compact support on \(M(F)\) modulo the center, such that \(f(zm)=\omega_{\pi}(z)f(m)\) for all \(z\in Z_{M}(F)\), \(m\in M(F)\). Then the integral \(W_{f}(m)=\int_{U_{M}(F)}f(zm)\psi^{-1}(x)dx\) converges since \(Z_{M}(F)U_{M}(F)m\) is closed in \(M(F)\), and hence defines a non-zero Whittaker model attached to \(\pi\). We normalize it so that \(W_{f}(e)=1\), where \(e\) is the identity element of \(M(F)\). From now on, we will not distinguish algebraic group and the group of its \(F\)-points unless it is necessary in order to simplify the notations, we hope this will not cause any troubles for the readers. Given an \(F\)-involution \(\Theta_{M}:M\to M\), we define the partial integral as \[B_{\varphi}^{M}(m,f):=\int_{U_{M,m}^{\Theta_{M}}\backslash U_{M}}W_{f}(mu) \varphi(\Theta_{M}(u^{-1})m^{\prime}u)\psi^{-1}(xu)du=\int_{U_{M,m}^{\Theta_{M }}\backslash U_{M}}\int_{U_{M}}f(xmu)\varphi(\Theta_{M}(u^{-1})m^{\prime}u) \psi^{-1}(xu)dxdu\] where \(U_{M,m}^{\Theta_{M}}=\{m\in U_{M}:\Theta_{M}(u^{-1})mu=m\}\) is the twisted centralizer of \(m\) in \(U_{M}\), and \(\varphi\) is certain characteristic function of a subset of \(M(F)\). And \(m^{\prime}\) is obtained from \(m\) by stripping off the center, i.e. \(m=zm^{\prime}\) with \(z\in Z_{M}\). Similarly one can define partial Bessel integrals on any Levi subgroup \(L\) of \(M\) by an involution \(\Theta_{L}\) of \(L\). Let \(\Delta_{M}\) denote the set of simple roots in \(M\), \(W(M,A)\) the Weyl group, and \(w_{M}\in W(M,A)\) the long Weyl group element. The following objects needed for our study of the asymptotic expansion of partial Bessel integrals. * **B(M).** Following [4], the subset of Weyl group elements that supports Bessel functions is given by \(B(M)=\{w\in W(M,A):\alpha\in\Delta_{M}\ \ s.t.\ \ w\alpha>0\Rightarrow w\alpha\in\Delta_{M}\}\), there is a bijection \[B(M)\leftrightarrow\{L:\ \text{Levi of standard parabolic subgroups}\ \ \text{of}\ M\}\] by \(w\mapsto L=Z_{M}(\cap_{\alpha\in\theta^{+}_{M,w}}\ker\alpha)\), where \(\theta^{+}_{M,w}=\{\alpha\in\Delta_{M}:w\alpha>0\}\), and conversely \(L\mapsto w=w_{M}w_{L}^{-1}\). * \(\mathbf{U}^{+}_{M,w},\mathbf{U}^{-}_{M,w}\). For each \(w\in W(M,A)\), define \(U^{+}_{M,w}=\{u\in U_{M}:wuw^{-1}\in U_{M}\}\) and \(U^{-}_{M,w}=\{u\in U_{M}:wuw^{-1}\in U^{-}_{M}\}\). 
Then \(U_{M}=U^{+}_{M,w}U^{-}_{M,w}\) and \(U^{+}_{M,w}\) normalizes \(U^{-}_{M,w}\). In particular, if \(w\in B(M)\) with \(\dot{w}=\dot{w}_{M}\dot{w}_{L}^{-1}\), then \(U^{+}_{M,w}=U_{L}:=U_{M}\cap L,U^{-}_{M,w}=N_{L}\), the unipotent radical of the standard parabolic subgroup of \(M\) with Levi component \(L\). In the extreme cases, if \(w=w_{L}\), then \(U^{+}_{M,w_{L}}=N_{L},U^{-}_{M,w_{L}}=U_{L}\). If \(w=w_{M}\), then \(U^{+}_{M,w_{M}}=\{e\},U^{-}_{M,w_{M}}=U_{M}\). * **Bessel distance** For \(w,w^{\prime}\in B(M)\) with \(w>w^{\prime}\) define \[d_{B}(w,w^{\prime})=max\{m:\exists w_{i}\in B(M)\ \ s.t\ \ w=w_{m}>w_{m-1}>\cdots>w_{0}=w^{\prime}\}.\] * **Bruhat order** For \(w,w^{\prime}\in W(M,A)\), \(w\leq w^{\prime}\Longleftrightarrow C(w)\subset\overline{C(w^{\prime})}\). * **The relevant torus \(\mathbf{A}_{w}\)**. For \(w\in B(M)\), define \(A_{w}=\{a\in A:a\in\cap_{\alpha\in\theta^{+}_{M,w}}\ker\alpha\}^{\circ}\subset A\). Note that it is also the connected center of \(L_{w}=Z_{M}(\cap_{\alpha\in\theta^{+}_{M,w}}\ker\alpha)\). * **The relevant Bruhat cell \(\mathbf{C}_{r}(\dot{w})\)**. We call \(C_{r}(\dot{w})=U_{M}\dot{w}A_{w}U^{-}_{M,w}\) the relevant part of the Bruhat cell \(C(w)\). Note that \(C_{r}(\dot{w})\) depends on the choice of the representative \(\dot{w}\) of \(w\). We choose the representative \(\dot{w}\) of \(w\) so that it is compatible with \(\psi\) in the sense that \(\psi(\dot{w}u\dot{w}^{-1})=\psi(u)\) for all \(u\in U^{+}_{M,w}\), as in Proposition 5.1 of [4]. * **Transverse tori** For \(w,w^{\prime}\in B(M)\), let \(L=L_{w}\) and \(L^{\prime}=L_{w^{\prime}}\) be their associated Levi subgroups respectively. Suppose \(w^{\prime}\leq w\). Then \(L\subset L^{\prime}\) and \(A_{w^{\prime}}\supset A_{w}\). Let \(A^{w^{\prime}}_{w}=A_{w}\cap L^{\prime}_{der}=Z_{L}\cap L^{\prime}_{der}\). In particular \(A^{w^{\prime}}_{w}\cap A_{w^{\prime}}=A^{w^{\prime}}_{w^{\prime}}=Z_{L^{ \prime}}\cap L^{\prime}_{der}\) is finite and the subgroup \(A^{w^{\prime}}_{w^{\prime}}A_{w^{\prime}}\subset A_{w}\) is open and of finite index. So this decomposition is essentially a "transfer principal" for relevant tori. Let \(\{x_{\alpha}\}_{\alpha\in\Delta_{M}}\) be a set of 1-parameter subgroups, i.e. \(x_{\alpha}:\mathbb{G}_{a}\simeq U_{\alpha}\), Then \((B_{M},A,\{x_{\alpha}\}_{\alpha\in\Delta_{M}})\) defines an \(F\)-splitting of \(M\). We make the following assumptions from now on: 1. \(\Theta_{M}\) preserves the \(F\)-splitting and is compatible with Levi subgroups and \(\psi\). To be precise, if \(L\subset M\) is a Levi subgroup, we require that \(\tau^{M}_{L}\circ\Theta_{M}|_{L}=\Theta_{L}\), where \(\tau^{M}_{L}=\mathrm{Int}((\dot{w}^{M}_{L})^{-1})\). And \(\Theta_{L}\) also preserves the \(F\)-splitting \(\{B_{L}:=B_{M}\cap L,A,\{x_{\alpha}\}_{\alpha\in\Delta_{L}}\}\). Here \(\mathrm{Int}(w)(m)=wmw^{-1}\), and we denote \(\dot{w}^{M}_{L}:=\dot{w}_{M}\dot{w}_{L}^{-1}\). Consequently \(\Theta_{M}(U_{M})=U_{M}\), \(\Theta_{L}(U_{L})=U_{L}\). The representatives \(\dot{w}^{M}_{L}\) are chosen to be compatible with the generic character \(\psi\) as in Proposition 5.1 of [4], i.e., \(\psi(\tau^{M}_{L}(u))=\psi(\dot{w}^{M}_{L}(u\dot{w}^{M}_{L})^{-1})=\psi(u)\) for all \(u\in U_{L}(F)\). By \(\Theta_{M}\) is compatible with \(\psi\), we mean that \(\psi(\Theta_{M}(u))=\psi(u)\) for all \(u\in U_{M}(F)\). As a result, if \(u\in U_{L}(F)\), then \(\psi(\Theta_{L}(u))=\psi(\tau^{M}_{L}\circ\Theta_{M}(u))=\psi(\Theta_{M}(u))= \psi(u)\). 2. 
\(\varphi\) is the characteristic function of some subset of \(M(F)\) so that it is invariant under the \(\Theta_{M}\)-twisted conjugate action by some open compact subgroup \(U_{0}\) of \(U_{M}(F)\).i.e. \(\varphi(\Theta_{M}(u^{-1})mu)=\varphi(m)\) if \(u\in U_{0}\). These assumptions guarantees some nice behavior of partial Bessel functions, and are satisfied in all the known cases. We will see later that they also hold in our cases. We would like to first understand the structure of twisted centralizer in the definition of partial Bessel integrals, in order to get a descent formula for partial Bessel integrals to those defined over Levi subgroups of \(M\). This serves as an important step to prove the general asymptotic expansion formula for partial Bessel integrals. **Lemma 6.1**.: _For \(m=u_{1}\dot{w}au_{2}\in C(w^{M}_{L})\), then \(U^{\Theta_{M}}_{M,m}\subset u_{2}^{-1}U_{L}u_{2}=u_{2}^{-1}U^{+}_{M,w^{M}_{L}}u_ {2}\)._ Proof.: We have \[u\in U^{\Theta_{M}}_{M,m}\Leftrightarrow\Theta_{M}(u^{-1})u_{1}w_{M}w_{L}^{-1} au_{2}u=u_{1}w_{M}w_{L}^{-1}au_{2}\Leftrightarrow w_{M}^{-1}u_{1}^{-1}\Theta_{M}(u^{-1})u_{1}w_{M} =w_{L}^{-1}au_{2}u^{-1}u_{2}^{-1}w_{L}.\] Since \(\Theta_{M}(U_{M})=U_{M}\), the left hand side of the last equality lies in \(U^{-}_{M}\). Hence \(au_{2}u^{-1}u_{2}^{-1}a^{-1}\in U^{-}_{w_{L}}=U_{L}\), therefore \(u_{2}u^{-1}u_{2}^{-1}\in a^{-1}U_{L}a=U_{L}\), so \(u\in u_{2}^{-1}U_{L}u_{2}\) **Lemma 6.2**.: _Let \(H\subset L\subset M\) be Levi subgroups of \(M\), then for \(m=u_{1}\dot{w}_{H}^{M}au_{2}\in C_{r}(\dot{w}_{H}^{M})\), we have_ \[U_{M,m}^{\Theta_{M}}=(\tilde{u}_{1}^{-})^{-1}U_{L,l}^{\Theta_{L}}\tilde{u}_{1}^{ -}\cap(u_{2}^{-})^{-1}U_{L,l}^{\Theta_{L}}u_{2}^{-}\] _where \(u_{1}=u_{1}^{-}u_{1}^{+}\) with \(u_{1}^{-}\in U_{M,w^{-1}}^{-}\) and \(u_{1}^{+}\in U_{M,w^{-1}}^{+}\), \(u_{2}=u_{2}^{+}u_{2}^{-}\) with \(u_{2}^{+}\in U_{M,w}^{+}\) and \(u_{2}^{-}\in U_{M,w}^{-}\), \(\tilde{u}_{1}^{-}=\tau_{L}^{M}\circ\Theta_{M}\circ\tau_{L}^{M}((u_{1}^{-})^{-1})\), \(l=w^{-1}u_{1}^{+}ww_{H}^{L}au_{2}^{+}\in L\), where \(w=w_{L}^{M}\in B(M)\)._ Proof.: By Lemma 6.1 of [13], \(m=u_{1}\dot{w}_{H}^{M}au_{2}\in C_{r}(\dot{w}_{H}^{M})\subset\Omega_{w}=U_{M, w^{-1}}^{-}\times wL\times U_{M,w}^{-}\), where \(\Omega_{w}:=\bigsqcup_{w\leq w^{\prime}}C(w^{\prime})\), and the decomposition is unique. Decompose \(u_{1}=u_{1}^{-}u_{1}^{+}\), \(u_{2}=u_{2}^{+}u_{2}^{-}\) as stated in the Lemma. Then we can write \(m=u_{1}^{-}\dot{w}(\dot{w}^{-1}u_{1}^{+}\dot{w}\dot{w}_{H}^{L}au_{2}^{+})u_{2 }^{-}\) and it is easy to see that \(l=\dot{w}^{-1}u_{1}^{+}\dot{w}\dot{w}_{H}^{L}au_{2}^{+}\in L\). Then \[u\in U_{M,m}^{\Theta_{M}}\Leftrightarrow\Theta_{M}(u^{-1})mu=m\Leftrightarrow( \Theta_{M}(u^{-1})u_{1}^{-}\dot{w}\Theta_{L}(u^{+})\dot{w}^{-1})\dot{w}\Theta_ {L}((u^{+})^{-1})lu^{+}((u^{+})^{-1}u_{2}^{-}u^{+}u^{-})=u_{1}^{-}\dot{w}lu_{2} ^{-}\] where we decompose \(u=u^{+}u^{-}\) with \(u^{+}\in U_{M,w}^{+}=U_{L}\) and \(u^{-}\in U_{M,w}^{-}=N_{L}\). We claim that \[\Theta_{M}(u^{-1})u_{1}^{-}\dot{w}\Theta_{L}(u^{+})\dot{w}^{-1}\in U_{L,w^{-1}} ^{-}.\] To see this, note that it is equivalent to \(\dot{w}^{-1}\Theta_{M}(u^{-1})u_{1}^{-}\dot{w}\Theta_{L}(u^{+})\in U_{M}^{-}\). But \(\dot{w}^{-1}\Theta_{M}(u^{-1})u_{1}^{-}\dot{w}\Theta_{L}(u^{+})=\dot{w}^{-1} \Theta_{M}((u^{-})^{-1})\dot{w}(\dot{w}^{-1}\Theta_{M}((u^{+})^{-1})\dot{w})( \dot{w}^{-1}u_{1}^{-}\dot{w})\Theta_{L}(u^{+})\). 
Since \((u^{-})^{-1}\in N_{L}\), and by our assumption, \(\tau_{L}^{M}\circ\Theta_{M}\) preserves the \(F\)-splitting when restricted to \(L\), so \(\Theta_{M}(N_{L})=N_{L}\). In addition, since \(\dot{w}=\dot{w}_{L}^{M}\), we have \(\dot{w}^{-1}N_{L}\dot{w}\cap U_{L}=\{1\}\). As a result, \(\dot{w}^{-1}\Theta_{M}((u^{-})^{-1})\dot{w}\in U_{M}^{-}\). This also implies that \(\tau_{L}^{M}\circ\Theta_{M}|_{N_{L}}\subset U_{M}^{-}\). Moreover, \(\dot{w}^{-1}\Theta_{M}((u^{+})^{-1})\dot{w}=\tau_{L}^{M}\circ\Theta_{L}((u^{+} )^{-1})=\Theta_{L}((u^{+})^{-1})\in U_{L}\) by our assumption on \(\Theta_{L}\). We also have \(\dot{w}^{-1}U_{M,w^{-1}}^{-}\dot{w}=(U_{M,w^{-}}^{-})=N_{L}^{-}\), so \(\dot{w}^{-1}u_{1}^{-}\dot{w}\in N_{L}^{-}\). Since \(U_{L}\) normalizes \(N_{L}^{-}\), \((\dot{w}^{-1}\Theta_{M}((u^{+})^{-1})\dot{w})(\dot{w}^{-1}u_{1}^{-}\dot{w}) \Theta_{L}(u^{+})\in N_{L}^{-}\subset U_{M}^{-}\). This proves the claim. It follows from the claim and the uniqueness of the decomposition \(\Omega_{w}=U_{L,w^{-1}}^{-}\times\dot{w}L\times U_{L,w}^{-}\) that (1). \(\Theta_{M}(u^{-1})u_{1}^{-}\dot{w}\Theta_{L}(u^{+})\dot{w}^{-1}=u_{1}^{-},\ ( 2).\ \Theta_{L}((u^{+})^{-1})lu^{+}=l,\ (3).\ ((u^{+})^{-1}u_{2}^{-}u^{+}u^{-}=u_{2}^{-}\). Note that (1)\(\Leftrightarrow\dot{w}^{-1}(u_{1}^{-})\Theta_{M}(u^{-1})u_{1}^{-}\dot{w}= \Theta_{L}(u^{+})^{-1}\Leftrightarrow\tau_{L}^{M}(u_{1}^{-})\tau_{L}^{M}\circ \Theta_{M}(u^{-1})\tau_{L}^{M}(u_{1}^{-})=\Theta_{L}(u^{+})^{-1}\). Apply \(\tau_{L}^{M}\circ\Theta_{M}\) on both sides, and use \(\tau_{L}^{M}\circ\Theta_{M}|_{L}=\Theta_{L}\), then take inverse on both sides, we obtain that \(\tilde{u}_{1}^{-}u(\tilde{u}_{1}^{-})^{-1}=u^{+}\), where \(\tilde{u}_{1}^{-}=\tau_{L}^{M}\circ\Theta_{M}\circ\tau_{L}^{M}((u_{1}^{-})^{-1})\). (2) is saying that \(u^{+}\in U_{L,l}^{\Theta_{L}}\). Therefore (1) and (2) imply that \(U_{M,m}^{\Theta_{M}}\subset(\tilde{u}_{1}^{-})^{-1}U_{L,l}^{\Theta_{L}}\tilde{u}_{1} ^{-}\). On the other hand, (3)\(\Leftrightarrow u_{2}^{-}u(u_{2}^{-})^{-1}=u^{+}\), then (2) and (3) imply that \(U_{M,m}^{\Theta_{M}}\subset(u_{2}^{-})^{-1}U_{L,l}^{\Theta_{L}}u_{2}^{-}\). Hence \[U_{M,m}^{\Theta_{M}}\subset(\tilde{u}_{1}^{-})^{-1}U_{L,l}^{\Theta_{L}}\tilde{u}_{1 }^{-}\cap(u_{2}^{-})^{-1}U_{L,l}^{\Theta_{L}}u_{2}^{-}.\] Conversely, if \(u=(\tilde{u}_{1}^{-})^{-1}u^{\prime}\tilde{u}_{1}^{-}=(u_{2}^{-})^{-1}u^{ \prime\prime}u_{2}^{-}\) with \(u^{\prime},u^{\prime\prime}\in U_{L,l}^{\Theta_{L}}\), then \(u=u^{+}u^{-}=u^{\prime}(u^{\prime-1}(\tilde{u}_{1}^{-})^{-1}u^{\prime}\tilde{u}_{1 }^{-})=u^{\prime\prime}((u^{\prime\prime})^{-1}(u_{2}^{-})^{-1}u^{\prime\prime}u_{ 2}^{\prime})\). One can see from the above argument that \(\tilde{u}_{1}^{-},u_{2}^{-}\in N_{L}\). Since \(U_{L}\cap N_{L}=\{1\}\), and \(U_{L}\) normalizes \(N_{L}\), this implies that \(u^{+}=u^{\prime}=u^{\prime\prime}\), this is equivalent to (2). 
Replace \(u^{\prime}\) and \(u^{\prime\prime}\) by \(u^{+}\) we obtain that \(\tilde{u}_{1}^{-}u(\tilde{u}_{1}^{-})^{-1}=u^{+ _where_ \[B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},m,h_{f})=\int_{U_{L,l}\cap n_{0}U_{L,l},n_{0}^ {-1}\setminus U_{L}}\int_{U_{L}}h_{f}(x^{\prime}lu^{\prime})\varphi(\Theta_{M}(( n_{0}u^{\prime})^{-1}\dot{w}l^{\prime}u^{\prime})\psi^{-1}(x^{\prime})\psi^{-1}(u^{ \prime})dx^{\prime}du^{\prime}\] _in which \(n_{0}=\tilde{u}_{1}^{-}(u_{2}^{-})^{-1}\in N_{L}\), \(\tilde{u}_{1}^{-}=\tau_{L}^{M}\circ\Theta_{M}\circ\tau_{L}^{M}((u_{1}^{-})^{-1})\), and \(l=zl^{\prime}\) with \(z\in Z_{M}\)._ Proof.: By Lemma 6.2, \(U_{M,m}^{\Theta_{M}}=(\tilde{u}_{1}^{-})^{-1}U_{L,l}^{\Theta_{L}}\tilde{u}_{1 }^{-}\cap(u_{2}^{-})^{-1}U_{L,l}^{\Theta_{L}}u_{2}^{-}\). Denote \(n_{1}=(\tilde{u}_{1}^{-})^{-1},n_{2}=(u_{2}^{-})^{-1}\), and \(n_{0}=\tilde{u}_{1}^{-}(u_{2}^{-})^{-1}\), then \(n_{1},n_{2}\in U_{M,w}^{-}=N_{L}\subset U_{M}\), so \(U_{M}=n_{1}^{-1}U_{M}n_{1}\). Hence if we decompose \(x=x^{-}x^{+}\) with \(x^{-}\in U_{M,w-1}^{-}\), \(x^{+}\in U_{M,w-1}^{+}\), make a change of variable \(u\mapsto n_{1}un_{1}^{-}\), and then decompose \(u=u^{+}u^{-}\) with \(u^{+}\in U_{M,w}^{+}=U_{L}\), \(u^{-}\in U_{M,w}^{-}=N_{L}\) in the integral \[B_{\varphi}^{M}(m,f)=\int_{U_{M,m}^{\Theta_{M}}\setminus U_{M}}\int_{U_{M}}f( xmu)\varphi(\Theta_{M}(u^{-1})m^{\prime}u)\psi^{-1}(xu)dxdu,\] we obtain \[B_{\varphi}^{M}(m,f)=\int_{n_{1}(U_{L,l}^{\Theta_{L}}\cap n_{0}U_{L,l}^{ \Theta_{L}}n_{0}^{-1})n_{1}^{-1}\setminus n_{1}U_{L}n_{1}^{-1}}\int_{U_{M,w-1 }^{+}}\int_{U_{M,w}^{-}}\int_{U_{M,w-1}^{-}}f(x^{-}x^{+}u_{1}^{-}\dot{w}lu_{2}^ {-}n_{1}u^{+}u^{-}n_{1}^{-})\] \[\varphi(\Theta_{M}(n_{1}(u^{-})^{-1}(u^{+})^{-1}n_{1}^{-1})u_{1}^{-}\dot{w}l^ {\prime}u_{2}^{-}n_{1}u^{+}u^{-}n_{1}^{-})\psi^{-1}(x^{+}x^{-})\psi^{-1}(n_{1 }u^{+}u^{-}n_{1}^{-})dx^{-}du^{-}dx^{+}du^{+}.\] Since \(n_{1}^{-1}=\tilde{u}_{1}^{-}=\tau_{L}^{M}\circ\Theta_{M}\circ\tau_{L}^{M}((u_{ 1}^{-})^{-1})\), we see that \(\Theta_{M}(n_{1}^{-1})=(\Theta_{M}\circ\tau_{L}^{M})^{2}((u_{1}^{-})^{-1})=(u_ {1}^{-})^{-1}\). Let \(y^{-}=x^{-}x^{+}u_{1}^{-}(x^{+})^{-1}\), \(x^{\prime}=\dot{w}^{-1}x^{+}\dot{w}\), \(v^{-}=(u^{+})^{-1}u_{2}^{-}n_{1}u^{+}u^{-}n_{1}^{-1}\), and \(u^{\prime}=u^{+}\), then \(y^{-}\in U_{M,w-1}^{-}\), \(x^{\prime}\in U_{M,w}^{+}=U_{L}\), and \(v^{-}\in U_{M,w}^{-}=N_{L}\). By the compatibility of \(\psi\) with \(\dot{w}\) by [4, Proposition 5.1], we have \(\psi(x^{\prime})=\psi(x^{+})\). Therefore we can rewrite the above integral as \[B_{\varphi}^{M}(m,f)=\psi(u_{1}^{-}u_{2}^{-})\int_{U_{L,l}^{\Theta_{L}}\cap n _{0}U_{L,l}^{\Theta_{L}}n_{0}^{-1}\setminus U_{L}}\int_{U_{L}}\int_{U_{M,w-1}^ {-}}\int_{U_{M,w}^{-}}f(y^{-}\dot{w}x^{\prime}lu^{\prime}v^{-})\] \[\varphi(\Theta_{M}((n_{0}u^{\prime}v^{-})^{-1})\dot{w}l^{\prime}u^{\prime}v^{-} )\psi^{-1}(v^{-}y^{-}x^{\prime}u^{\prime})dv^{-}dy^{-}dx^{\prime}du^{\prime}.\] Since \(f\in C_{c}^{\infty}(\Omega_{w};\omega_{\pi})\), \(\Omega_{w}=U_{M,w-1}^{-}\times\dot{w}L\times U_{M,w}^{-}\), \(U_{M,w-1}^{-}\) and \(U_{M,w}^{-}\) are closed in \(\Omega_{w}\), so there exists compact subgroups \(U_{1}\) of \(U_{M,w}^{-}\) and \(U_{2}\) of \(U_{M,w-1}^{-}\) such that \(f(y^{-}\dot{w}x^{\prime}lu^{\prime}v^{-})\neq 0\) implies that \(y^{-}\in U_{1}\) and \(v^{-}\in U_{2}\). On the other hand, by our assumption on \(\varphi\), \(\varphi\) is invariant under the \(\Theta_{M}\)-twisted conjugate action by some open compact subgroup \(U_{0}\) of \(U_{M}(F)\). 
We shrink \(U_{2}\) if necessary so that \(v^{-}\) lies in \(U_{0}\). It follows that \[\varphi(\Theta_{M}(n_{0}u^{\prime}v^{-})^{-1}\dot{w}l^{\prime}u^{\prime}v^{-})= \varphi(\Theta_{M}(n_{0}u^{\prime})^{-1}\dot{w}l^{\prime}u^{\prime}).\] So we obtain that \[B_{\varphi}^{M}(m,f)=\psi(u_{1}^{-}u_{2}^{-})\int_{U_{L,l}^{\Theta_{L}}\cap n_{0}U_{L,l}^{ \Theta_{L}}n_{0}^{-1}\setminus U_{L}}\int_{U_{L}}\int_{U_{M,w-1}^{-}}\int_{U_{M,w }^{-}}f(y^{-}\dot{w}x^{\prime}lu^{\prime}v^{-})\psi^{-1}(v^{-}y^{-})dv^{-}dy^{-}\] \[\varphi(\Theta_{M}((n_{0}u^{\prime})^{-1})\dot{w}l^{\prime}u^{\prime})\psi^{-1}(x^ {\prime}u^{\prime})dx^{\prime}du^{\prime}\] \[=\psi(u_{1}^{-}u_{2}^{-})\int_{U_{L,l}^{\Theta_{L}}\cap n_{0}U_{L,l}^{\Theta_{L}}n_{0} ^{-1}\setminus U_{L}}\int_{U_{L}}h_{f}(x^{\prime}lu^{\prime})\varphi(\Theta_{M}((n_ {0}u^{\prime})^{-1})\dot{w}l^{\prime}u^{\prime})\psi^{-1}(x^{\prime}u^{\prime})dx ^{\prime}du^{\prime}=\psi(u_{1}^{-}u_{2}^{-})B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},l,h_{f}).\] To show the last integral is well-defined, note that if \(v^{\prime\prime}=n_{0}u^{\prime\prime}n_{0}^{-1}\), with \(u^{\prime\prime},v^{\prime\prime}\in U_{L,l}^{\Theta_{L}}\), then \[l^{\prime}=\Theta_{L}((v^{\prime\prime})^{-1})l^{\prime}v^{\prime\prime}=\Theta_{ L}((u^{\prime\prime-1}n_{0}u^{\prime\prime}n_{0}^{-1})^{-1})\Theta_{L}((u^{\prime\prime})^{-1}l^{ \prime}u^{\prime\prime}(u^{\prime\prime-1}n_{0}u^{\prime\prime}n_{0}^{-1})\] \[=\tau_{L}^{M}\circ\Theta_{M}((u^{\prime\prime-1}n_{0}u^{\prime\prime}n_{0}^{-1})^{-1})l ^{\prime}(u^{\prime\prime-1}n_{0}u Since \(\Theta_{M}(U_{M})=U_{M}\), \(U_{M}=U_{L}N_{L}\), and \(\tau_{L}^{M}(N_{L})=\overline{N_{L}}\), the opposite of \(N_{L}\). The left hand side of the above equality lies in \(\overline{N_{L}}\), while the right hand side lies in \(N_{L}\), as \(N_{L}\cap\overline{N_{L}}=\{1\}\) it forces that \({u^{\prime\prime}}^{-1}n_{0}u^{\prime\prime}n_{0}^{-1}=1\), i.e. \(u^{\prime\prime}n_{0}=n_{0}u^{\prime\prime}\), and \(v^{\prime\prime}=u^{\prime\prime}\). It follows that \[\Theta_{M}((n_{0}v^{\prime\prime}u^{\prime})^{-1})\dot{w}l^{\prime}v^{\prime \prime}u^{\prime}=\Theta_{M}((u^{\prime\prime}n_{0}u^{\prime})^{-1})\dot{w} \Theta_{L}(u^{\prime\prime})(\Theta_{L}(u^{\prime\prime})l^{\prime}u^{\prime \prime})u^{\prime}\] \[=\Theta_{M}((n_{0}u^{\prime})^{-1})\Theta_{M}((u^{\prime\prime})^{-1})\dot{w} \Theta_{L}(u^{\prime\prime})l^{\prime}u^{\prime}=\Theta_{M}((n_{0}u^{\prime}) ^{-1})\dot{w}l^{\prime}u^{\prime}\] where we used \(\dot{w}^{-1}\Theta_{M}(u^{\prime\prime})\dot{w}=\tau_{L}^{M}\circ\Theta_{M}(u^ {\prime\prime})=\Theta_{L}(u^{\prime\prime})\). _Remark 6.4_.: If we impose a stronger condition on the cut-off function \(\varphi\), i.e, we require that \(\varphi(\Theta_{M}(x_{1}^{-1})mx_{2})=\varphi(m)\) for \(x_{1},x_{2}\in U_{0}\), then we can show that \[B_{\varphi}^{M}(m,f)=\psi(u_{1}^{-}u_{2}^{-})\mathrm{Vol}(U_{L,l}^{\Theta_{L} }\cap n_{0}U_{L,l}^{\Theta_{L}}n_{0}^{-1}\backslash U_{L,l}^{\Theta_{L}})B_{ \varphi_{\dot{w}}}^{L}(l,h_{f})\] for all \(m\in C_{r}(\dot{w})\) where \(\varphi_{\dot{w}}(\cdot)=\varphi(\dot{w}\cdot)\) is the left translation of \(\varphi\) by \(\dot{w}\). But notice that in general the subset consists of the image \(n\mapsto m\) via \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\) is not invariant under the action \(m\mapsto\Theta_{M}(x_{1}^{-1})mx_{2}\) for \(x_{1},x_{2}\in U_{M}\). 
In particular, one checks easily from our calculation of the Bruhat decomposition in section 3 that in our case the set of elements of the form \(m(X,Y)\) is not invariant under this action. So we do not use this formula, although it is of a nicer form. _Remark 6.5_.: We observe easily that in particular, if \(n_{0}=1\), i.e. \(\tilde{u}_{1}^{-}=u_{2}^{-}\), we have \[B_{\varphi}^{M}(m,f)=B_{\varphi_{\dot{w}}}^{L}(l,h_{f}).\] ### Asymptotic expansion and uniform smoothness The main idea to prove stability is that once we relate our local coefficient formula with the Mellin transform of the partial Bessel integrals, there are two parts in the asymptotic expansion formula of the partial Bessel integrals, one depends only on the central character of \(\pi\), the other is certain uniform smooth function. Therefore under highly ramified twists, the second part vanishes. To obtain the asymptotic expansion of partial Bessel integrals, we study its boundary behavior on \(\Omega_{w}\) with \(w\in B(M)\), i.e. on Bruhat cells of smallest possible dimension, and do induction on Bessel distance. This is a standard procedure as in [4] or [13]. We begin with the small cell \(C(e)=B_{M}\) of \(M\). First note that \(L_{e}=M,A_{e}=Z_{M}\), \(U_{M,e}^{+}=U_{M}\), and \(\Omega_{e}=M\). Take \(\dot{e}=I\), the identity matrix in \(M\). Recall that \(\pi\) is \(\psi\)-generic supercuspidal and we pick an matrix coefficient \(f\in C_{c}^{\infty}(M;\omega_{\pi})\), normalized with \(W_{f}(e)=1.\) Then we have **Proposition 6.6**.: _Fix an auxiliary function \(f_{0}\in C_{c}^{\infty}(M;\omega_{\pi})\) with \(W_{f_{0}}(e)=1\). Then for each \(f\in C_{c}^{\infty}(M;\omega_{\pi})\) with \(W_{f}(e)=1\) and \(w^{\prime}\in B(M)\) with \(d_{B}(e,w^{\prime})=1\), there exists a function \(f_{w^{\prime}}\in C_{c}^{\infty}(\Omega_{w^{\prime}};\omega_{\pi})\) such that for any \(w\in B(M)\) and \(m\in M\), we have_ \[B_{\varphi}^{M}(m,f)=B_{\varphi}^{M}(m,f_{1})+\sum_{w^{\prime}\in B(M),dn(w^{ \prime},e)=1}B_{\varphi}^{M}(m,f_{w^{\prime}})\] _where \(f_{1}(m):=\sum_{m=m_{1}c}f_{0}(m_{1})B^{M}(\dot{e}c,f)=\sum_{m=m_{1}c}f_{0}(m _{1})\omega_{\pi}(c)\), the sum runs over all possible decompositions \(m=m_{1}c\) with \(m_{1}\in M_{der}\), and \(c\in A_{e}=Z_{M}\)._ Proof.: Decompose \(M=M_{der}A_{e}=M_{der}Z_{M}\), where \(M_{der}\) is the derived group of \(M\). Write \(m=m_{1}c\) with \(m_{1}\in M_{der}\) and \(c\in Z_{M}\), then there are only finitely many such decompositions which are indexed by elements in the transverse torus \(A_{e}^{e}=M_{der}\cap Z_{M}\). Define \[f_{1}(m):=\sum_{m=m_{1}c}f_{0}(m_{1})B^{M}(\dot{e}c,f)=\sum_{m=m_{1}c}f_{0}(m_{1 })\omega_{\pi}(c)\] Then \(f_{1}(m)\in C_{c}^{\infty}(M;\omega_{\pi})\). For \(a\in A_{e}=Z_{M}\), \(a^{\prime}=e\), therefore \[B_{\varphi}^{M}(\dot{e}a,f_{1})=\omega_{\pi}(a)\int_{U_{M,e}^{\omega_{M}} \backslash U_{M}}\int_{U_{M}}f_{1}(xu)\varphi(\Theta_{M}(u^{-1})u)\psi^{-1}(xu) dxdu=\omega_{\pi}(a)W_{f_{1}}(e)\int_{U_{M,e}^{\Theta_{M}}\backslash U_{M}} \varphi(\Theta_{M}(u^{-1})u)du.\] On the other hand, \(W_{f_{1}}(e)=\int_{U}f_{1}(x)\psi^{-1}(x)dx\), while \(x\in U_{M}\subset M_{der}\) so \(f_{1}(x)=f_{0}(x)\), thus \(W_{f_{1}}(e)=W_{f_{0}}(e)=1\), hence we obtain that \(B^{M}_{\varphi}(\dot{e}a,f_{1})=B^{M}_{\varphi}(\dot{e}a,f)\) for all \(a\in A_{e}\). Therefore \(B^{M}_{\varphi}(\dot{e}a,f-f_{1})=0\) for all \(a\in A_{e}\). 
Apply [4, Lemma 5.13], there exists \(f_{2}^{\prime}\in C_{c}^{\infty}(\Omega_{e^{\prime}}^{\alpha};\omega_{\pi})\), such that \(B^{M}_{\varphi}(m,f-f_{1})=B^{M}_{\varphi}(m,f_{2}^{\prime})\) for all \(m\in M\), where \(\Omega_{e}^{\circ}=\Omega_{e}-C(e)=M-B_{M}\). Let \(\Omega_{1}=\cup_{w\in B(M),w\neq e}\Omega_{w}=\cup_{w\in B(M),d_{B}(w,e)=1} \Omega_{w}\), and \(\Omega_{0}=M-C(e)=\Omega_{e}^{\circ}\). By [4, Lemma 5.14], there exists \(f_{2}\in C_{c}^{\infty}(\Omega_{1};\omega_{\pi})\), so that \(B^{M}_{\varphi}(m,f_{2})=B^{M}_{\varphi}(m,f_{2}^{\prime})=B^{N}_{\varphi}( m,f-f_{1})\), for a sufficiently large \(\varphi\) depending only on \(f_{1}\). Then by a partition of unity argument, for each \(w^{\prime}\in B(M)\), \(d_{B}(w^{\prime},e)=1\), we can find \(f_{w^{\prime}}\in C_{c}^{\infty}(\Omega_{w^{\prime}};\omega_{\pi})\), such that \(f_{2}=\sum_{w^{\prime}\in B(M),d_{B}(w^{\prime},e)=1}f_{w^{\prime}}\). We would like to do the same type of analysis on each \(B^{M}_{\varphi}(m,f_{w^{\prime}})\) and obtain an asymptotic expansion formula for \(B^{M}_{\varphi}(m,f)\) indexed by Weyl group elements that supports Bessel functions, and for all \(m\) lying in the relevant part of Bruhat cells. To be precise we will show that **Proposition 6.7**.: _Fix an auxiliary function \(f_{0}\in C_{c}^{\infty}(M;\omega_{\pi})\) with \(W_{f_{0}}(e)=1\). Then for each \(f\in C_{c}^{\infty}(M;\omega_{\pi})\) with \(W_{f}(e)=1\) and \(w^{\prime}\in B(M)\) with \(d_{B}(e,w^{\prime})\geq 1\), there exists a function \(f_{w^{\prime}}\in C_{c}^{\infty}(\Omega_{w^{\prime}};\omega_{\pi})\) such that for any \(m=u_{1}\dot{w}au_{2}\in C_{r}(\dot{w})\) with \(\dot{w}=\dot{w}_{L}^{M}\in B(M)\), we have_ \[B^{M}_{\varphi}(m,f)=\sum_{a=bc}\omega_{\pi}(c)B^{M}_{\varphi}(u_{1}\dot{v}bu_ {2},f_{0})+\sum_{w^{\prime}\in B(M),d_{B}(w^{\prime},e)\geq 1}B^{M}_{ \varphi}(m,f_{w^{\prime}})\] _where \(a=bc\) runs over the possible decompositions of \(a\in A_{w}\) with \(b\in A_{w}^{e}\) and \(c\in A_{e}=Z_{M}\)._ Proof.: Note that in particular if \(m=u_{1}\dot{w}au_{2}\in C_{r}(\dot{w})\), by Lemma 6.1, \(U^{\Theta_{M}}_{M,m}\subset u_{2}^{-1}U^{+}_{M,w}u_{2}\), we can write \(U_{M}=u_{2}^{-1}U^{+}_{M,w}U^{-}_{M,w}u_{2}^{-1}=u_{2}^{-1}U^{+}_{M,w}u_{2}(u_ {2}^{-1}U^{-}_{M,w}u_{2})\). Let \(u=u^{\prime}(u_{2}^{-1}u^{-}u_{2})\) where \(u^{\prime}=u_{2}^{-1}u^{+}u_{2}\) with \(u^{+}\in U^{+}_{M,w}\), \(u^{-}\in U^{-}_{M,w}\), we have \[B^{M}_{\varphi}(m,f_{1})=\int_{U^{\Theta_{M}}_{M,m}\backslash u_{2}^{-1}U^{+ }_{M,w}u_{2}}\int_{U^{-}_{M,w}}\int_{U_{M}}f_{1}(xu_{1}\dot{w}au_{2}u^{\prime} u_{2}^{-1}u^{-}u_{2})\] \[\cdot\varphi(\Theta_{M}(u_{2}^{-1}(u^{-})^{-1}u_{2}u^{\prime-1})u_{1}\dot{w}a ^{\prime}u_{2}u^{\prime}u_{2}^{-1}u^{-}u_{2})\psi^{-1}(xu^{\prime}u_{2}^{-1}u ^{-}u_{2})dxdu^{-}du^{\prime}.\] As \(u^{+}=u_{2}u^{\prime}u_{2}^{-1}\in U^{+}_{M,w}\) and \(a\in A_{w}\), we have \(xu_{1}\dot{w}au_{2}^{-1}u^{\prime}u_{2}^{-1}u^{-}u_{2}=xu_{1}(\dot{w}u_{2}u^{ \prime}u_{2}^{-1}\dot{w}^{-1})\dot{w}u^{-}u_{2}\). 
Let \(x^{\prime}=xu_{1}(\dot{w}u_{2}u^{\prime}u_{2}^{-1}\dot{w}^{-1})\in U^{+}_{M,w}\), \(v^{-}=u^{-}u_{2}\in U^{-}_{M,w}=U^{-}_{M,w}\), and by compatibility of \(\psi\) with \(\dot{w}\), we obtain that \[B^{M}_{\varphi}(m,f_{1})=\psi(u_{1}u_{2})\int_{U^{\Theta_{M}}_{M,m}\backslash u _{2}^{-1}U^{+}_{M,w}u_{2}}\int_{U^{-}_{M,w}}\int_{U_{M}}f_{1}(x^{\prime}\dot{w}av ^{-})\] \[\cdot\varphi(\Theta_{M}((v^{-})^{-1}u_{2}u^{\prime-1})u_{1}\dot{w}a^{\prime}u_{ 2}u^{\prime}u_{2}^{-1}v^{-})\psi^{-1}(x^{\prime}v^{-})dx^{\prime}dv^{-}du^{\prime}\] So by the construction of \(f_{1}\), we need to decompose \(x^{\prime}\dot{w}av^{-}=m_{1}c\) with \(m_{1}\in M_{der}\) and \(c\in A_{e}=Z_{M}\), which is equivalent to \(ac^{-1}=\dot{w}^{-1}(x^{\prime})^{-1}m_{1}(v^{-})^{-1}\). Since we pick \(\dot{w}\in M_{der}\), and \(x^{\prime},v^{-}\in U_{M}\subset M_{der}\), this is saying that \(b:=ac^{-1}\in M_{der}\cap A_{w}=A_{w}^{e}\). It follows that \(f_{1}(x^{\prime}\dot{w}av^{-})=\sum_{a=bc}f_{0}(x^{\prime}\dot{v}bv^{-})\omega_{ \pi}(c)\), and consequently \(B^{M}_{\varphi}(m,f_{1})=\sum_{a=bc}\omega_{\pi}(c)B^{M}_{\varphi}(u_{1}\dot{v} bu_{2},f_{0})\) where \(a=bc\) runs over the possible decompositions of \(a\in A_{w}\) with \(b\in A_{e}^{e}\) and \(c\in A_{e}=Z_{M}\). We continue to obtain expansions for \(B^{M}_{\varphi}(m,f_{w^{\prime}})\). Suppose \(w^{\prime}=w^{M}_{L}\in B(M)\). Let \(h=h_{f_{w^{\prime}}}\in C_{c}^{\infty}(L;\omega_{\pi})\) be the image of \(f_{w^{\prime}}\) under \(C_{c}^{\infty}(\Omega_{w^{\prime}};\omega_{\pi})\twoheadrightarrow C_{c}^{\infty}(L; \omega_{\pi})\). Pick \(h_{0}\in C_{c}^{\infty}(L;\omega_{\pi})\) normalized so that \(B^{L}_{\varphi_{,\omega^{\prime}}}(\dot{e},h_{0})=\frac{1}{|Z_{L}\cap A_{w^{ \prime}}^{\omega^{\prime}}|}\), and \(B^{M}(b,h_{0})=0\) for \(b\in A_{w^{\prime}}^{w^{\prime}}\) but \(b\notin Z_{M}\cap A_{w^{\prime}}^{w^{\prime}}\). Similar to the construction of \(f_{1}\) from \(f\), let \(h_{1}(l)=\sum_{l=l_{1}c}h_{0}(l_ Apply [4, Lemma 5.13, 5.14] again, together with a partition of unity argument, there exists \(f_{w^{\prime},w^{\prime\prime}}\in C_{c}^{\infty}(\Omega_{w^{\prime\prime}}; \omega_{\pi})\) for \(w^{\prime}<w^{\prime\prime}\in B(M)\) such that \(d_{B}(w^{\prime},w^{\prime\prime})=1\), and \[B_{\varphi}^{M}(m,f_{w^{\prime}})=B_{\varphi}^{M}(m,f_{1})+\sum_{w^{\prime \prime}\in B(M),w^{\prime\prime}>w^{\prime},d_{B}(w^{\prime\prime},w^{\prime}) =1}B_{\varphi}^{M}(m,f_{w^{\prime\prime}}).\] Proceed by repeating this process and do induction on the Bessel distance, and combine with the above argument on \(B_{\varphi}^{M}(m,f_{1})\), we obtain the asymptotic expansion formula as desired. Note that the first sum in Proposition 6.7 depends only on \(\omega_{\pi}\) and a fixed auxiliary function \(f_{0}\). The next important step is to show that each \(B_{\varphi}^{M}(m,f_{w^{\prime}})\) satisfy some uniform smooth property. Recall that for \(h\in C_{c}^{\infty}(\Omega_{w^{\prime}};\omega_{\pi})\) with \(w^{\prime}=w_{L}^{M}\in B(M)\), we constructed \(h_{1}(l)=\sum_{l=l_{1}c}h_{0}(l_{1})B^{L}(c,h)\). Suppose \(m=u_{1}^{-}\dot{w}^{\prime}lu_{2}^{-}\) and write \(l=zl^{\prime}\) with \(z\in Z_{M}\), \(n_{0}=\tilde{u}_{1}^{-}(u_{2}^{-})^{-1}\), where \(\tilde{u}_{1}^{-}=\tau_{L}^{M}\circ\Theta_{M}\circ\tau_{L}^{M}((u_{1}^{-})^{-1})\), we have \(B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},l,h_{1})=\omega_{\pi}(z)B_{\varphi}^{L}(u_ {1}^{-},u_{2}^{-},l^{\prime},h_{1})\). 
Since \[B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},l^{\prime},h_{1})=\int_{U_{L,l}^{\Theta_{ L}\cap n_{0}U_{L,l}^{\Theta_{L}}\tau_{n_{0}}^{-1}\cup U_{L}}}\int_{U_{L}}h_{1}(xl^{ \prime}u)\varphi(\Theta_{M}((n_{0}u)^{-1})\dot{w}^{\prime}l^{\prime}u)\psi^{- 1}(xu)dxdu,\] we need to write \(xl^{\prime}u=l_{1}c\) with \(l_{1}\in L_{der}\), \(c\in Z_{L}\). Suppose \(l=v_{1}\dot{w}av_{2}\in C^{L}(\dot{w})\) a Bruhat cell in \(L\) with \(\dot{w}\in L_{der}\), then \(l^{\prime}=v_{1}\dot{w}a^{\prime}v_{2}\) where \(a=za^{\prime}\). As \(v_{1},v_{2},\dot{w}\in L_{der}\), so to write \(xl^{\prime}u=xv_{1}\dot{w}a^{\prime}v_{2}=l_{1}c\) is equivalent to write \(a^{\prime}=bc\) with \(b\in A\cap L_{der}\) and \(c\in Z_{L}\). Let \(b=zbb^{\prime}\), \(c=z_{c}c^{\prime}\) with \(z_{b},z_{c}\in Z_{M},b^{\prime}\in A^{\prime}\), \(c^{\prime}\in Z_{L}^{\prime}\), then \(a^{\prime}=z_{b}z_{c}b^{\prime}c^{\prime}\Rightarrow z_{b}z_{c}=1\), and \(a^{\prime}=b^{\prime}c^{\prime}\). Let \(l_{b}=v_{1}\dot{w}bv_{2}\), and similarly \(l_{b^{\prime}}=v_{1}\dot{w}b^{\prime}v_{2}\), then \(h_{1}(xl^{\prime}u)=\sum_{a^{\prime}=b^{\prime}c^{\prime}}h_{0}(xl_{b^{\prime} }u)B^{L}(c^{\prime},h)\). Since \(U_{L,l^{\prime}}^{\Theta_{L}}=\{u\in U_{L}:\Theta_{L}(u^{-1})l^{\prime}u=l^{ \prime}\}=\{u\in U_{L}:\Theta_{L}(u^{-1})l_{b^{\prime}}c^{\prime}u=l_{b^{ \prime}}c^{\prime}\}=\{u\in U_{L}:\Theta_{L}(u^{-1})l_{b^{\prime}}uc^{\prime} =l_{b^{\prime}}c^{\prime}\}=\{u\in U_{L}:\Theta_{L}(u^{-1})l_{b^{\prime}}u=l_{ b^{\prime}}\}=U_{L,l_{b^{\prime}}}^{\Theta_{L}}\), we obtain that \[B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},l^{\prime},h_{1})=\sum_{a^{ \prime}=b^{\prime}c^{\prime}}B^{L}(c^{\prime},h)\int_{U_{L,l^{\prime}}^{\Theta _{L}}\cap n_{0}U_{L,l^{\prime}}^{\Theta_{L}}\tau_{0}^{-1}\cup U_{L}}\int_{U_{L} }h_{0}(xl_{b^{\prime}}u)\varphi(\Theta_{M}((n_{0}u)^{-1})l_{b^{\prime}}uc^{ \prime})\psi^{-1}(xu)dxdu\] \[=\sum_{a^{\prime}=b^{\prime}c^{\prime}}B^{L}(c^{\prime},h)B_{ \varphi^{c^{\prime}}}^{L}(u_{1}^{-},u_{2}^{-},l_{b^{\prime}},h_{0})\] where \(\varphi^{c^{\prime}}(\,):=\varphi(\cdot c^{\prime})\) is the right shift of \(\varphi\) by \(c^{\prime}\). A smooth function on a p-adic group is **uniform smooth** if it is uniformly locally constant, i.e., there exists an open compact subgroup \(K_{0}\) such that the function is constant on \(aK_{0}\) for any \(a\) in the group. Recall that \(A\) is the maximal split torus of \(M\). We say a unipotent element \(u\in U_{M}\) is **rationally parameterized** by \((a,d)\in A\times\mathbb{A}^{k}\) if all the entries in the 1-parameter subgroups corresponding to \(u\) are rational functions of \((a,d)\), here \(\mathbb{A}^{k}\) is the affine space of dimension \(k\), i.e., if one has \(u=\prod_{\alpha}u_{\alpha}(x_{\alpha})\) with \(x_{\alpha}\in\mathbb{G}_{a}\), then each \(x_{\alpha}\) is a rational function of \((a,d)\). **Proposition 6.8**.: _Let \(H\subset L\subset M\) be Levi subgroups of \(M\), suppose \(m=u_{1}^{-}\dot{w}^{\prime}lu_{2}^{-}=m(a,d)=u_{1}(a,d)\dot{w}au_{2}(a,d)\in C_ {r}(\dot{w})\) is rationally parameterized by \((a,d)\in A_{w}^{w^{\prime}}A_{w^{\prime}}\times\mathbb{A}^{k}\subset A\times \mathbb{A}^{k}\) for some \(k\geq 0\) with \(w=w_{H}^{M}\), \(w^{\prime}=w_{L}^{M}\in B(M)\). Assume that the rational functions that parameterize \(u_{1}\) and \(u_{2}\) have no singularities on \(A\times\mathbb{A}^{k}\). 
Then if one writes \(a=bc\) with \(b\in A_{w}^{w^{\prime}}\), \(c\in A_{w^{\prime}}\) and \(c=zc^{\prime}\) with \(z\in Z_{M}\), \(c\in A_{w^{\prime}}^{\prime}\), where \(A_{w^{\prime}}=Z_{M}A_{w^{\prime}}^{\prime}\), we have that_ \[B_{\varphi}^{M}(m,f_{1})=\omega_{\pi}(z)\psi(u_{1}^{-}(bc^{\prime}z,d),u_{2}^{-} (bc^{\prime}z,d))\sum_{a^{\prime}=b^{\prime}c^{\prime}}B^{L}(c^{\prime},h)B_{ \varphi^{c^{\prime}}}^{L}(u_{1}^{-}(bc^{\prime}z),u_{2}^{-}(bc^{\prime}z,d),l_{ b}(bc^{\prime}z,d),h_{0})\] _is uniformly smooth as a function of \(c^{\prime}\in A_{w^{\prime}}^{\prime}\), where \(f_{1}\mapsto h_{1}\) via \(C_{c}^{\infty}(\Omega_{w^{\prime}};\omega_{\pi})\twoheadrightarrow C_{c}^{ \infty}(L;\omega_{\pi})\)._ Proof.: We have \(B_{\varphi}^{M}(m,f_{1})=\psi(u_{1}^{-},u_{2}^{-})B_{\varphi}^{L}(u_{1}^{-},u_{2}^ {-},l,h_{1})=\psi(u_{1}^{-},u_{2}^{-})\sum_{a=bc}B^{L}(c,h)B_{\varphi^{c}}^{L}(u_ {1}^{-}(bc,d) decompositions are of the form \(a=(b\xi^{-1})(\xi c)\) with \(\xi\in A_{w}^{w^{\prime}}\cap A_{w^{\prime}}=A_{w^{\prime}}^{w^{\prime}}=Z_{L} \cap L_{der}\), a finite set, so \(|\xi|=1\). Therefore \[B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},l,h_{1})=\sum_{\xi\in A_{w}^{\varphi}}B^{L}( \xi c,h)B_{\varphi^{\xi c}}^{L}(u_{1}^{-}(bc,d),u_{2}^{-}(bc,d),l_{b\xi^{-1}},h _{0})\] where \(l_{b\xi^{-1}}=\dot{w}^{\prime-1}u_{1}^{+}\dot{w}^{\prime}\dot{w}_{H}^{L}b\xi^ {-1}u_{2}^{+}\). As \(\varphi\) depends only on the absolute value, \(\varphi^{\xi c}=\varphi^{c}\). On the other hand, \[B^{L}(\xi c,h)=\int_{U_{M}(F)}h(x\xi c)\psi^{-1}(x)dx=\omega_{\pi}(\xi_{1}z) \int_{U_{M}(F)}h(x\xi^{\prime}c^{\prime})\psi^{-1}(x)dx\] where we write \(\xi=\xi_{1}\xi^{\prime}\) with \(\xi_{1}\in Z_{M}\) and \(\xi^{\prime}=\xi\xi_{1}^{-1}\). As \(h\in C_{c}^{\infty}(L;\omega_{\pi})\) is smooth of compact support modulo \(Z_{M}\), and the small Bruhat cell \(C^{L}(e_{L})=AU_{L}=Z_{M}A^{\prime}U_{L}\) of \(L\) is closed in \(L\), and \(Z_{L}^{\prime}\subset A^{\prime}\) is closed, there exists compact subsets \(K_{1}\subset Z_{L}^{\prime}\) and \(U_{1}\subset U_{M}\) such that \(h(x\xi^{\prime}c^{\prime})\neq 0\Rightarrow\xi^{\prime}c^{\prime}\in K_{1}\) and \(x\in U_{1}\). Writing \(c=zc^{\prime}\), we see that \(B_{\varphi}^{L}(u_{1}^{-},u_{2}^{-},l,h_{1})=\omega_{\pi}(z)\sum_{\xi\in A_{w ^{\prime}}^{\varphi}}B^{L}(\xi c^{\prime},h)B_{\varphi^{\omega}}^{L}(u_{1}^{-} (bc^{\prime}z,d),u_{2}^{-}(bc^{\prime}z,d),l_{b\xi^{-1}},h_{0})\) does not vanish unless \(c^{\prime}\in K:=\cup_{\xi^{\prime}}\xi^{\prime-1}K_{1}\), a compact subset. From this we also see that the support of \(h\) in \(c^{\prime}\) is compact and independent of the decomposition \(a=bc\). Moreover, since the support of \(h\) in \(c^{\prime}\) is compact, and \(h\) is smooth, there exists an open compact subgroup \(C_{0}\subset Z_{L}^{\prime}\) such that \(h(x\xi^{\prime}c^{\prime}c_{0})=h(x\xi^{\prime}c^{\prime})\) for all \(x\in U_{1}\), \(c^{\prime}\in Z_{L}^{\prime}\), and \(c_{0}\in C_{0}\). By our assumption the entries in the 1-parameter subgroups corresponding to \(u_{i}(a,d)\), and hence \(u_{i}^{\pm}(a,d)=u_{i}^{\pm}(bc^{\prime}z,d)\) (\(i=1,2\)) are rational functions in \((b,c^{\prime},z,d)\) without singularities, it follows that \(u_{i}^{\pm}\) and hence also \(\psi(u_{1}^{-},u_{2}^{-})\) are smooth functions of \((b,c^{\prime},z,d)\) with no singularities, and the quotient space \(U_{L,l}^{\Theta_{L}}\cap n_{0}U_{L,l}^{\Theta_{L}}n_{0}^{-1}\backslash U_{L}\) is also smoothly parameterized by \((b,c^{\prime},z,d)\). 
Therefore \(B_{\varphi}^{M}(m(bc^{\prime}z,d),f_{1})\) is zero when \(c^{\prime}\notin K\). So for each fixed \(z,b,d\), we simultaneously choose \(C_{0}\), depending on \(z,b,d\), so that \(u_{i}^{\pm}(zbc^{\prime}c_{0},d)=u_{i}^{\pm}(zbc^{\prime},d)\) for all \(c_{0}\in C_{0},c^{\prime}\in K\). Finally, We shrink \(C_{0}\) so that \(C_{0}\subset Z_{L}^{\prime}(\mathcal{O}_{F})\) if necessary, then \(\varphi^{cc_{0}}=\varphi^{c}\) for all \(c_{0}\in C_{0}\) and \(c\in A_{w^{\prime}}=Z_{L}\). Consequently, there exists an open compact subgroup \(C_{0}\subset Z_{L}^{\prime}\) such that \[B_{\varphi}^{M}(m(zbc^{\prime}c_{0},d),f_{1})=B_{\varphi}^{M}(m(zbc^{\prime},d ),f_{1})\] for all \(a=bc\) and \(c_{0}\in C_{0}\subset Z_{L}^{\prime}=A_{w^{\prime}}^{\prime}\), i.e. \(B_{\varphi}^{M}(m,f_{1})\) is uniformly smooth in \(c^{\prime}\in A_{w^{\prime}}^{\prime}\). ### The final local coefficient formula and the separation of the toric part We will relate the partial Bessel functions defined in section 5 with partial Bessel integrals introduced in section 6, so that the local coefficient formula in Proposition 5.2 can be restated in terms of Mellin transforms of partial Bessel integrals. With nice asymptotic expansion formulas in Proposition 6.7 and uniform smoothness in Proposition 6.8, we will be able to show our desired analytic stability. For our applications, we set the involution \(\Theta_{M}:m\mapsto\dot{w_{0}}^{-1}m\dot{w}\), and it is easily verified that it satisfies the assumption that it preserves the \(F\)-splitting and is compatible with Levi subgroups and the generic character \(\psi\). Recall that for a matrix \(X\) of size \(k\times l\), we defined \(\varphi_{\kappa}(X)=\begin{cases}1&|X_{i,j}|\leq q^{((k-i)+(l-j)+1)\kappa}\\ 0&\text{ otherwise}\end{cases}\), and \(\overline{N}_{0,\kappa}=\{\bar{n}(X,Y):\varphi_{\kappa}(\varpi^{-(d+g)}X) \cdot\varphi_{\kappa}(\varpi^{-2(d+g)}Y)=1\}\), where \(d=\operatorname{Cond}(\psi)\) and \(g=\operatorname{Cond}(\omega_{\pi}^{-1}(w_{0}(\omega_{\pi})))\). We also denoted by \(\varphi_{\overline{N}_{0,\kappa}}\), the characteristic function of \(\overline{N}_{0,\kappa}\). Suppose \(n(X,Y)\mapsto m(X,Y)\) via the Bruhat decomposition \(\dot{w_{0}}^{-1}n=mn^{\prime}\bar{n}\). Define \[\varphi(m(X,Y)):=\varphi_{\kappa}((Y^{-1}X)_{r1}X)\varphi_{\kappa}(((Y^{-1}X)_{ r1})^{2}Y)\] where \((Y^{-1}X)_{r1}\) is the \((r,1)\)-th entry of \(Y^{-1}X\). We will show that \(\varphi\) satisfies the assumption that it is invariant under the \(\Theta_{M}\)-twisted conjugate action by some open compact subgroup \(U_{0}\) of \(U_{M}(F)\). Consequently, the assumptions on \(\Theta_{M}\) and \(\varphi\) in the beginning of section 6 are both satisfied and we have **Proposition 6.9**.: _Let \(m(X^{\prime},Z^{\prime})\) be the image of \(n(X^{\prime},Z^{\prime})\) under the map \(n\mapsto m\) via \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\). Then_ \[\dot{j}_{\overline{N}_{0,\kappa},\pi,w}(n(X^{\prime},Z^{\prime}))=B_{\varphi}^ {M}(m(X^{\prime},Z^{\prime}),f)\] _where \(\varphi_{\overline{N}_{0,\kappa}}\) and \(\varphi\) are defined as above and \(f\in C_{c}^{\infty}(M(F);\omega_{\pi})\) such that \(W=W_{f}\), normalized so that \(W_{f}(e)=1\)._ Proof.: Given the Bruhat decomposition \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\), for \(z=\alpha^{\vee}(t)\in Z_{M}^{0}\), if \(u\mapsto zu^{-1}nuz^{-1}\), then \(m\mapsto\Theta_{M}(zu^{-1})mu^{-1}=\Theta_{M}(u)^{-1}\Theta_{M}(z)z^{-1}mu= \Theta_{M}(u^{-1})\alpha^{\vee}(t^{-2})mu\), as \(\Theta_{M}(z),z\in Z_{M}\). 
In our cases, where \(n=n(X,Y)\), then \(m=m(X,Y)=\operatorname{diag}\{m_{1}(X,Y),m_{2}(X,Y),\theta_{r}(m_{1}(X,Y))\}\). So \[\Theta_{M}(m(X,Y))=\operatorname{diag}(\theta_{r}(m_{1}(X,Y)),m_{2}(X,Y),m_{1 }(X,Y))\] It shows that \(\Theta_{M}\) induces \(\theta_{r}\). Moreover, for \(z=\alpha^{\vee}(t)\in Z_{M}^{0}\), \(u=\operatorname{diag}\{u_{1},u_{2},\theta_{r}(u_{1})\}\in U_{M}\), we have \[zu^{-1}\bar{n}(X,Y)uz^{-1}=\bar{n}(u_{1}^{-1}(tX)u_{2},u_{1}^{-1}(t^{2}Y) \theta_{r}(u_{1}))=u^{-1}\bar{n}(tX,t^{2}Y)u.\] Hence the action \(\bar{n}(X,Y)\mapsto zu^{-1}\bar{n}(X,Y)uz^{-1}\) is given by \(X\mapsto u_{1}^{-1}tXu_{2},Y\mapsto u_{1}^{-1}t^{2}Y\theta_{r}(u_{1})\). On the other hand, \[\Theta_{M}(zu^{-1})mu^{-1}=\Theta_{M}(u)^{-1}\alpha^{\vee}(t^{-2})m(X,Y)= \begin{bmatrix}\theta_{r}(u_{1}^{-1})&\\ &u_{2}^{-1}&\\ &&u_{1}^{-1}\end{bmatrix}\begin{bmatrix}t^{-2}\theta_{r}(Y)\\ &(I_{2m}-J^{\prime}_{2m}{}^{t}X^{t}Y^{-1}J_{r}X)^{-1}&\\ &&t^{2}Y\end{bmatrix}\] \[\begin{bmatrix}u_{1}&&\\ &u_{2}&\\ &&\theta_{r}(u_{1})\end{bmatrix}=\begin{bmatrix}\theta_{r}(u_{1}^{-1})\theta_{ r}(t^{2}Y)u_{1}&\\ &(I_{2m}-J^{\prime}_{2m}{}^{t}(u_{1}^{-1}tXu_{2})^{t}(u_{1}^{-1}t^{2}Y\theta_{r} (u_{1}))J_{r}(u_{1}^{-1}tXu_{2}))^{-1}&\\ &&u_{1}^{-1}t^{2}Y\theta_{r}(u_{1})\end{bmatrix}.\] This verifies that for \(z=\alpha^{\vee}(t)\), \(u=\operatorname{diag}(u_{1},u_{2},\theta_{r}(u_{1}))\in U_{M}\), we have \[\Theta_{M}(zu^{-1})m(X,Y)uz^{-1}=m(u_{1}^{-1}tXu_{2},u_{1}^{-1}t^{2}Y\theta_{r }(u_{1})).\] Let \(U_{0,\kappa}=U_{1,\kappa}\times U_{2,\kappa}\) with \(U_{1,\kappa}\subset U_{\operatorname{GL}_{r}}(F)\) and \(U_{2,\kappa}\subset U_{\operatorname{Sp}_{2m}}(F)\) given by \[U_{1,\kappa}:=\{(u_{1,ij})\in U_{\operatorname{GL}_{r}(F)}:|u_{1,ij}|\leq q^{ (j-i)\kappa}\},\quad U_{2,\kappa}:=\{(u_{2,ij})\in U_{\operatorname{Sp}_{2m}( F)}:|u_{2,ij}|\leq q^{(j-i)\kappa}\}.\] Then \(U_{0,\kappa}\) are open compact subgroups of \(U_{M}(F)\). For \(u=(u_{1},u_{2})\in U_{0,\kappa}\), one checks easily that \(\Theta_{M}(u^{-1})\overline{N}_{0,\kappa}u\subset\overline{N}_{0,\kappa}\), i.e, \(\overline{N}_{0,\kappa}\) is invariant under the twisted conjugate action by \(U_{0,\kappa}\). Let \(\varphi(m(X,Y)):=\varphi_{\kappa}((Y^{-1}X)_{r1}X)\varphi_{\kappa}(((Y^{-1}X)_ {r1})^{2}Y)\), so \(\varphi\) is defined on the subset of \(M\) which lie in the image of \(n(X,Y)\) under the map \(n\mapsto m\) via \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\), where \((Y^{-1}X)_{r1}\) is the \((r,1)\)-th entry of \(Y^{-1}X\). We will show that \(\varphi\) is invariant under the \(\Theta_{M}\)-twisted conjugate action \(U_{0,\kappa}\). Suppose \(u=(u_{1},u_{2})\in U_{0,\kappa}\), then \[\varphi(\Theta_{M}(u^{-1})m(X,Y)u)=\varphi(m(u_{1}^{-1}Xu_{2},u_{1}^{-1}Y \theta_{r}(u_{1})))\] \[=\varphi_{\kappa}((\theta_{r}(u_{1}^{-1})Y^{-1}Xu_{2})_{r1})u_{1}^{-1}Xu_{2}) \varphi_{\kappa}((\theta_{r}(u_{1}^{-1})Y^{-1}Xu_{2})_{r1})^{2}u_{1}^{-1}Y\theta_ {r}(u_{1}))\] \[=\varphi_{\kappa}((Y^{-1}X)_{r1})u_{1}^{-1}Xu_{2})\varphi_{\kappa}((Y^{-1}X)_ {r1})^{2}u_{1}^{-1}Y\theta_{r}(u_{1})).\] The last equality holds since \((Y^{-1}X)_{r1}\) is clearly invariant under the left or right translation by an upper triangular unipotent matrix. 
On the other hand, since \(u_{1}(Y^{-1}X)_{r1}u_{1}^{-1}=(Y^{-1}X)_{r1}\), \[\varphi_{\kappa}((Y^{-1}X)_{r1}u_{1}^{-1}Xu_{2})=1\Leftrightarrow(Y^{-1}X)_{r1 }u_{1}^{-1}Xu_{2}\in X(\kappa)\ \ \Leftrightarrow X\in u_{1}(Y^{-1}X)_{r1}u_{1}^{-1}(u_{1}X(\kappa)u_{2}^{-1})=(Y^{-1 }X)_{r1}X(\kappa)\] \[\Leftrightarrow\varphi_{\kappa}((Y^{-1}X)_{r1}X)=1,\] where \(X(\kappa)=\{X=(x_{ij})\in\operatorname{Mat}_{r\times 2m}:|x_{ij}|\leq q^{((r-i)+(2m-j)+1) \kappa}\}\), which can be checked by direct calculation that it is invariant under the left-action by \(U_{1,\kappa}\) and right-action by \(U_{2,\kappa}\). Similarly we have \(\varphi_{\kappa}(((Y^{-1}X)_{r1})^{2}u_{1}^{-1}Y\theta_{r}(u_{1}))=\varphi_{ \kappa}(((Y^{-1}X)_{r1})^{2}Y)\). It follows that \[\varphi(\Theta_{M}(u^{-1})m(X,Y)u)=\varphi(m(X,Y))\] for all \(u\in U_{0,\kappa}\). Then for \((X^{\prime},Z^{\prime})\in R_{X^{\prime}}\times R_{Z^{\prime}}\), \(z=\alpha^{\vee}(\varpi^{d+g}u_{\alpha_{r}}(\dot{w}_{0}\bar{n}(X^{\prime},Z^{ \prime})\dot{w}_{0}^{-1}))=\alpha^{\vee}(\varpi^{d+g}\frac{y^{\prime\ast}_{\ \ r}}{\det(Y^{\prime}))})\), where \(y^{\prime\ast}_{\ \ rr}\) is the adjoint matrix of the \((r,r)\)-th entry of \(Y^{\prime}\), we have \[\varphi_{\overline{N}_{0,\kappa}}(zu^{-1}\bar{n}(X^{\prime},Z^{\prime})uz^{-1 })=\varphi_{\kappa}(u_{1}^{-1}\frac{y^{\prime\ast}_{\ \ rr}}{\det(Y^{\prime})}X^{\prime}u_{2})\varphi_{ \kappa}(u_{1}^{-1}(\frac{y^{\prime\ast}_{\ \ rr}}{\det Y^{\prime})})^{2}Y^{\prime} \theta_{r}(u_{1}))\] \[=\varphi_{\kappa}(u_{1}^{-1}({X^{\prime}}^{-1}Y^{\prime})_{r1}X^{\prime}u_{2 })\varphi_{\kappa}(u_{1}^{-1}(({Y^{\prime}}^{-1}X^{\prime})_{r1})^{2}Y^{\prime }\theta_{r}(u_{1}))=\varphi(\Theta_{M}(u^{-1})m(X^{\prime},Z^{\prime})u).\] Since the uniqueness of the Bruhat decomposition \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\) implies that when \(n\mapsto u^{-1}nu\), then \(m\mapsto\Theta_{M}(u)^{-1}mu\). By our calculation, for \(u=(u_{1},u_{2})\in U_{M}\), \(n=n(X,Y)\mapsto m=m(X,Y)\), we have \(u^{-1}n(X,Y)u=n(u_{1}^{-1}Xu_{2},u_{1}^{-1}Y\theta(u_{1}))\), \(\Theta_{M}(u)^{-1}m(X,Y)u=m(u_{1}^{-1}Xu_{2},u_{1}^{-1}Y\theta(u_{2}))\). So both the actions are given by \(X\mapsto u_{1}^{-1}Xu_{2}\), \(Y\mapsto u_{1}^{-1}Y\theta(u_{1})\) Therefore, \[U_{M,m(X,Y)}^{\Theta_{M}}=U_{M,n(X,Y)}=\{u=(u_{1},u_{2})\in U_{M}:u_{1}^{-1}Xu _{2}=X,u_{1}^{-1}Y\theta(u_{1})=Y\}\] This shows that the centralizer of \(n(X,Y)\) in \(U_{M}\) agrees with the \(\Theta_{M}\)-twisted centralizer of \(m(X,Y)\) in \(U_{M}\). Compare the definitions of \(j_{\overline{N}_{0,\kappa},\pi,w}(n(X^{\prime},Z^{\prime}))\) and \(B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f)\), we obtain the desired equality. 
Finally, by Proposition 5.2 and Proposition 6.8, we can restate Proposition 5.2 as: **Proposition 6.10**.: _Let \(\sigma\) and \(\tau\) be \(\psi\)-generic supercuspidal representations of \(\mathrm{GL}_{r}(F)\) and \(\mathrm{Sp}_{2m}(F)\) respectively, such that \(\omega_{\pi}\) is ramified where \(\pi=\sigma\boxtimes\tau\), then there exists a \(\kappa_{0}\), such that for all \(\kappa\geq\kappa_{0}\) and all characters \(\chi\) of \(F^{\times}\) such that \(\omega_{\pi\otimes\chi}=\omega_{\pi}\chi^{r}\) is ramified, we have_ \[C_{\psi}(s,\pi\otimes\chi)^{-1}=\gamma(2rs,\omega_{\pi}^{2}\chi^{2r},\psi^{-1 })\int_{R_{X^{\prime}}\times R_{Z^{\prime}}}B_{\varphi}^{M}(m(X^{\prime},Z^{ \prime}),f)(\omega_{\pi}^{-2}\chi^{-2r})(\frac{P(X^{\prime},Z^{\prime})}{ \det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})})\] \[\cdot|\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime} \theta_{r,m}(X^{\prime})}{2})}|^{-rs}|\det(m_{1}(X^{\prime},Z^{\prime}))|^{s+ \frac{r+2m+1}{2}}d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}\] _where \(m(X^{\prime},Z^{\prime})\) is the image of \(n(X^{\prime},Z^{\prime})\mapsto m(X^{\prime},Z^{\prime})\) via the Bruhat decomposition \(\dot{w}_{0}^{-1}\dot{n}=mn^{\prime}\bar{n}\), which holds off a subset of measure zero on \(N(F)\), \(m=\mathrm{diag}\{m_{1},m_{2},\theta_{r}(m_{1})\}\) with \(m_{1}\in\mathrm{GL}_{r}(F)\), \(m_{2}\in\mathrm{Sp}_{2m}(F)\), and \(\gamma(2rs,\omega_{\pi}^{2}\chi^{2r},\psi^{-1})\) is the Abelian \(\gamma\)-factor depending only on \(\omega_{\pi}\) and \(\chi\)._ One important observation in our case is that our orbit space \(R^{\prime}=R_{X^{\prime}}\times R_{Z^{\prime}}\) is no longer isomorphic to a torus, and attempts to reparameterize \(m(X,Z)\) by the maximal torus together with non-torus parts fail since the map \((X,Z)\mapsto(m_{1},m_{2})\) is not surjective in general. So in order to apply the asymptotic analysis of partial Bessel integrals, we will need to study the toric action on the orbit space and separate its toric part out from the integral. First we would like to understand the action of the maximal torus \(A\simeq A_{1}\times A_{2}\) on \(R=R_{X}\times R_{Z}=U_{M}\backslash N\), where \(A_{1}\) is the maximal torus of \(\mathrm{GL}_{r}\) and \(A_{2}\) is the maximal torus of \(\mathrm{Sp}_{2m}\). Recall that through the Bruhat decomposition \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\), for \(s\in A\), if \(n\mapsto sns^{-1}\), then \(m\mapsto\Theta_{M}(s)ms^{-1}\). In our case, for \(m=(m_{1},m_{2})\) with \(m_{1}\in\mathrm{GL}_{r}\) and \(m_{2}\in\mathrm{Sp}_{2m}\), the action is given by \(m_{1}\mapsto s^{\prime}m_{1}\theta_{r}({s^{\prime}}^{-1})\), and \(m_{2}\mapsto s^{\prime}m_{2}{s^{\prime}}^{\prime-1}\), which is equivalent to \(X\mapsto s^{\prime}X{s^{\prime}}^{-1}\), \(Y\mapsto s^{\prime}Y\theta_{r}({s^{\prime\prime}}^{-1})\), or to \(X\mapsto s^{\prime}X{s^{\prime}}^{-1}\), \(Z\mapsto s^{\prime}Z{s^{\prime}}\), where \(s=(s^{\prime},s^{\prime\prime})\) with \(s^{\prime}\in A_{1}\) and \(s^{\prime\prime}\in A_{2}\). Write \(s^{\prime}=\mathrm{diag}(s_{1},s_{2},\cdots,s_{r})\in A_{1},s^{\prime}=\mathrm{ diag}(s_{r+1},s_{r+2},\cdots,s_{r+m},s_{r+m},s_{r+m}{}^{-1},\cdots,s_{r+1}{}^{-1}) \in A_{2}\). 
Define \[T_{X,Z}=\begin{cases}\{(x_{r,1},x_{r-1,2}\cdots,x_{r-m+1,m},z_{1,1},\cdots,z_{r,r} ):(X,Z)\in R_{X}\times R_{Z}\},&\text{ if }r\geq m,\\ \{(x_{r,1},x_{r-1,2},\cdots,x_{1,r},z_{1,1},\cdots,z_{r,r}):(X,Z)\in R_{X} \times R_{Z}\},&\text{ if }r<m.\end{cases}\] **Proposition 6.11**.: _Let \(\Phi:R_{X}\times R_{Z}\to M\to A=A_{1}\times A_{2}\) be the map given by \((X,Z)\mapsto m=m(X,Z)=(m_{1},m_{2})\mapsto(t_{1},t_{2})\), where \(m(X,Z)\) is the image of \(n(X,Z)\) under the map \(n\mapsto m\) via \(\dot{w}_{0}^{-1}n=mn^{\prime}\bar{n}\), and \(m_{1}=u_{1}\dot{w}_{1}t_{1}u_{2}\), \(m_{2}=v_{1}\dot{w}_{2}t_{2}v_{2}\) are the Bruhat decomposition of \(m_{1}\) and \(m_{2}\) respectively. Then \(\Phi\) is compatible with the toric action, i.e., \(\Phi({s^{\prime}}X{s^{\prime}}^{-1},{s^{\prime}}Z{s^{\prime}})=(m_{1}({s^{ \prime}}X{s^{\prime}}^{-1},{s^{\prime}}Z{s^{\prime}}),m_{2 toric action on \(R_{X}\times R_{Z}\) is completely determined by its restriction on \(T_{X,Z}\), up to the square of each \(s_{i}(1\leq i\leq r+\min\{r,m\})\). Moreover, the restriction \(\Phi|_{T_{X,Z}}\) is a finite etale map onto its image in \(A\)._ Proof.: The compatibility of \(\Phi\) with the toric action follows directly from \(\Theta_{M}(s)m(X,Z)s^{-1}=m(s^{\prime}{Xs^{\prime}}^{-1},s^{\prime}Zs^{\prime})\). Recall that we also write \(m=m(X,Z)=m(X,Y)\) where \(Y=ZJ_{r}+\frac{X\theta_{r,m}(X)}{2}\). We claim that if we inductively write \(Y=Y_{r}=\begin{bmatrix}\alpha&y_{1,r}\\ Y_{r-1}&t\beta\end{bmatrix}\) and \(Y=Y^{(r)}=\begin{bmatrix}t^{\alpha^{\prime}}&Y^{(r-1)}\\ y_{r,1}&\beta^{\prime}\end{bmatrix}\), and assume that \(\det(Y_{i})\)'s and \(\det(Y^{(j)})\)'s are all non-zero, then the image \(m(X,Y)\) of \(n(X,Y)\) lies in the big cell of \(M\). If we decompose \(m_{1}=u_{1}\dot{w}_{1}t_{1}u_{2}\), \(m_{2}=v_{1}\dot{w}_{2}t_{2}v_{2}\) with \(\dot{w}_{1}=J_{r}\), \(\dot{w}_{2}=J_{2m}^{\prime}\), the long Weyl group elements of \(\mathrm{GL}_{r}\) and \(\mathrm{Sp}_{2m}\) respectively, we will show that \[t_{1}=\mathrm{diag}((-1)^{r-1}\frac{\det(Y_{r-1})}{\det(Y_{r})},(-1)^{r-2} \frac{\det(Y_{r-2})}{\det(Y_{r-1})},\cdots,\frac{1}{\det(Y_{1})})\] and \(t_{2}=\mathrm{diag}(\bar{t}_{2},\bar{t}_{2}^{-1})\) with \[\bar{t}_{2}=\begin{cases}(-\frac{\det(Y^{(r-1)})}{\det(Y^{(r)})}x_{r,1}^{2},( -1)^{2}\frac{\det(Y^{(r-2)})}{\det(Y^{(r-1)})}x_{r-1,2}^{2},\cdots,(-1)^{r-1} \frac{1}{\det(Y^{(1)})}x_{1,r}^{2},1,\cdots,1)&\text{ if }r<m\\ (-\frac{\det(Y^{(r-1)})}{\det(Y^{(r)})}x_{r,1}^{2},(-1)^{2}\frac{\det(Y^{(r-2)} )}{\det(Y^{(r-1)})}x_{r-1,2}^{2},\cdots,(-1)^{m}\frac{\det(Y^{(r-m)})}{\det(Y^ {(r-m+1)})}x_{r-m+1,m}^{2})&\text{ if }r\geq m\end{cases}\] Let us first prove this for \(t_{1}\) by induction on \(r\). If \(r=1\) then there is nothing to show. Assume \(r>1\) and write \(Y_{r}=\begin{bmatrix}\alpha&y_{1,r}\\ Y_{r-1}&t\beta\end{bmatrix}\). Take the representative of the long Weyl group element in \(\mathrm{GL}_{r}\) as \(J_{r}\). Then \(m_{1}=\theta_{r}(Y)=u_{1}J_{r}t_{1}u_{2}\Leftrightarrow\tilde{u}_{1}Y\tilde{u }_{2}=t_{1}^{-1}J_{r}\), where \(\tilde{u}_{i}=\theta_{r}(u_{i})^{-1}(i=1,2)\). Write \(\tilde{u}_{1}=\begin{bmatrix}1&\delta_{1}\\ &u_{1}^{\prime}\end{bmatrix}\), and \(\tilde{u}_{2}=\begin{bmatrix}u_{2}^{\prime}&t\delta_{2}\\ &1\end{bmatrix}\), with \(u_{i}^{\prime}\in U_{\mathrm{GL}_{r-1}}\), \(\delta_{i}\in\mathbb{A}^{r-1}\). 
Then \[\tilde{u}_{1}Y\tilde{u}_{2}=\begin{bmatrix}(\alpha+\delta_{1}Y_{r-1})u_{2}^{ \prime}&(\alpha+\delta_{1}Y_{r-1})^{t}\delta_{2}+(y_{1,r}+\delta_{1}{}^{t} \beta)\\ u_{1}^{\prime}Y_{r-1}u_{2}^{\prime}&u_{1}^{\prime}Y_{r-1}{}^{t}\delta_{2}+u_{1} ^{\prime}\beta\end{bmatrix}.\] Assume \(\det(Y_{r-1})\neq 0\), \(y_{1,r}\neq 0\), choose \(\delta_{1}=-\alpha Y_{r-1}^{-1}\), \({}^{t}\delta_{2}=-Y_{r-1}^{-1}\beta\), write \(t_{1}=\mathrm{diag}\{a_{1},\cdots,a_{r}\}\) then \(\tilde{u}_{1}Y\tilde{u}_{2}=t_{1}^{-1}J_{r}\Leftrightarrow\begin{bmatrix}0&y_{1, r}-\alpha Y_{r-1}^{-1}\,t\beta\\ u_{1}^{\prime}Y_{r-1}u_{2}^{\prime}&0\end{bmatrix}=\begin{bmatrix}a_{1}^{-1}\\ &\cdots\\ (-1)^{r-1}a_{r}\end{bmatrix}\). By induction hypothesis, it suffices to show that \(y_{1,r}-\alpha Y_{r-1}^{-1}\,t\beta=a_{1}^{-1}\). Note that \(\det(Y_{r})=\det\begin{bmatrix}\alpha&y_{1,r}\\ Y_{r-1}&t\beta\end{bmatrix}=(-1)^{r-1}\det\begin{bmatrix}Y_{r-1}&t\beta\\ \alpha&y_{1,r}\end{bmatrix}\). On the other hand, \(\begin{bmatrix}Y_{r-1}^{-1}&1\\ &1\end{bmatrix}\begin{bmatrix}Y_{r-1}&t\beta\\ \alpha&y_{1,r}\end{bmatrix}=\begin{bmatrix}I_{r-1}&Y_{r-1}^{-1}\,t\beta\\ \alpha&y_{1,r}\end{bmatrix}\). Taking determinant on both sides, we obtain that \[(-1)^{r-1}\det(Y_{r})\det(Y_{r-1})^{-1}=\det\begin{bmatrix}I_{r-1}&Y_{r-1}^{-1} \,t\beta\\ \alpha&y_{1,r}\end{bmatrix}=\det\begin{bmatrix}I_{r-1}&Y_{r-1}^{-1}\,t\beta\\ 0&-\alpha Y_{r-1}^{-1}\,t\beta+y_{1,r}\end{bmatrix}=y_{1,r}-\alpha Y_{r-1}^{-1 }\beta,\] i.e., \(a_{1}=(-1)^{r-1}\frac{\det(Y_{r-1})}{\det(Y_{r})}\). Next, we work on the formula for \(t_{2}\). Let us first assume that \(r\geq m\) and we prove the formula by induction on \(m\). When \(m=1\), the result follows from direct calculation. Assume \(m>1\) and suppose \(m_{2}=v_{1}J_{2m}^{\prime}t_{2}v_{2}\), then \(m_{2}^{-1}=I_{2m}-J_{2m}^{\prime}{}^{t}X^{t}Y^{-1}J_{r}X=-v_{2}^{-1}t_{2}^{-1}J_ {2m}^{\prime}v_{1}^{-1}\), which is equivalent to \(v_{2}(I_{2m}-J_{2m}^{\prime}{}^{t}X^{t}Y^{-1}J_{r}X)v_{1}=-t_{2}^{-1}J_{2m}^{\prime}\). Write \(v_{i}=\begin{bmatrix}1&\gamma_{i}&b_{i}\\ v_{i}^{\prime}&\gamma_{i}^{*}\\ &1\end{bmatrix},(i=1,2)\), \(X=\begin{bmatrix}0&X_{1}&t\gamma\\ x_{r,1}&0&0\end{bmatrix}\), \(Y=Y^{(r)}=\begin{bmatrix}t^{\alpha^{\prime}}&Y^{(r-1)}\\ y_{r,1}&\beta^{\prime}\end{bmatrix}\). Assume that each \(Y^{(j)},(1\leq j\leq m)\). Then \(m=1\) and \(m=1\), the result follows from direct calculation. Assume \(m>1\) and suppose \(m_{2}=v_{1}J_{2m}^{\prime}t_{2}v_{2}\), then \(m_{2}^{-1}=I_{2m}-J_{2m}^{\prime}{}^{t}X^{t}Y^{-1}J_{r}X=-v_{2}^{-1}t_{2}^{-1}J_ {2m}^{\prime}v_{1}^{-1}\), which is equivalent to \(v_{2}(I_{2m}-J_{2m}^{\prime}{}^{t}X^{t}Y^{-1}J_{r}X)v_{1}=-t_{2}^{-1}J_{2m}^{ \prime}\). Write \(v_{i}=\begin{bmatrix}1&\gamma_{i}&b_{i}\\ v_{i}^{\prime}&\gamma_{i}^{*}\\ &1\end{bmatrix},(i=1,2)\), \(X=\begin{bmatrix}0&X_{1}&t\gamma\\ x_{r,1}&0&0\end{bmatrix}\), \(Y=Y^{(r)}=\begin{bmatrix}t^{\alpha^{\prime}}&Y^{(r-1)}\\ y_{r,1}&\beta^{\prime}\end{bmatrix}\). Assume that each \(Y^{(j)},(1\leq j\leq m)\). Then \(m=1\) and \(m=1\), the result follows from direct calculation. Assume \(m>1\ \(j\leq r\)) is invertible, \(y_{r,1}\neq 0\), and set \({}^{t}(Y^{(r)})^{-1}=\begin{bmatrix}{}^{t}\delta^{\prime}_{1}&H\\ x&\delta^{\prime}_{2}\end{bmatrix}\). 
Then one computes that \[I_{2m}-J^{\prime}_{2m}{}^{t}X^{t}Y^{-1}J_{r}X=I_{2m}-\begin{bmatrix}&1\\ &-J^{\prime}_{2m-2}\end{bmatrix}\begin{bmatrix}0&x_{r,1}\\ {}^{t}X_{1}&0\\ \gamma&0\end{bmatrix}\begin{bmatrix}{}^{t}\delta^{\prime}_{1}&H\\ x&\delta^{\prime}_{2}\end{bmatrix}\begin{bmatrix}&1\\ -J_{r-1}\end{bmatrix}\begin{bmatrix}0&X_{1}&{}^{t}\gamma\\ x_{r,1}&0&0\end{bmatrix}\] \[=\begin{bmatrix}1-\gamma^{t}\delta^{\prime}_{1}x_{r,1}&\gamma HJ_{r-1}X_{1}& \gamma HJ_{r-1}{}^{t}\gamma\\ J^{\prime}_{2m-2}{}^{t}X_{1}{}^{t}\delta^{\prime}_{1}x_{r,1}&I_{2m-2}-J^{ \prime}_{2m-2}{}^{t}X_{1}HJ_{r-1}X_{1}&-J^{\prime}_{2m-2}{}^{t}X_{1}HJ_{r-1}{ }^{t}\gamma\\ x_{r,1}^{2}x&-x_{r,1}\delta^{\prime}_{2}J_{r-1}X_{1}&1-x_{r,1}\delta^{\prime}_{ 2}J_{r-1}{}^{t}\gamma\end{bmatrix}\] By definition of \({}^{t}(Y^{(r)})^{-1}\), we have \[\alpha^{\prime}{}^{t}\delta^{\prime}_{1}+y_{r,1}x=1,\alpha^{\prime}H+y_{r,1} \delta^{\prime}_{2}=0,{}^{t}Y^{(r-1)}{}^{t}\delta^{\prime}_{1}+{}^{t}\beta^{ \prime}x=0,{}^{t}Y^{(r-1)}H+{}^{t}\beta^{\prime}\delta^{\prime}_{2}=I_{r-1}.\] Multiply by \(v_{2}\) on the left and \(v_{1}\) on the right, and compare with \(-t_{2}^{-1}J^{\prime}_{2m}=\begin{bmatrix}&a_{r+1}^{-1}\\ &-t_{2}^{\prime-1}J^{\prime}_{2m-2}\end{bmatrix}\), where we write \(t_{2}=\text{diag}\{a_{r+1},\cdots,a_{n},a_{n}^{-1},\cdots,a_{r+1}^{-1}\}= \text{diag}\{a_{r+1},t^{\prime}_{2},a_{r+1}^{-1}\}\). The middle entry of \(v_{2}(I_{2m}-J^{\prime}_{2m}{}^{t}X^{t}Y^{-1}J_{r}X)v_{1}\) gives \[v_{2}^{\prime}\big{(}I_{2m-2}-J^{\prime}_{2m-2}{}^{t}X_{1}{}^{t}(Y^{(r-1)})^{- 1}J_{r-1}X_{1})v_{1}^{\prime}\] \[+v_{2}^{\prime}J^{\prime}_{2m-2}{}^{t}X_{1}{}^{t}(Y^{(r-1)})^{-1t}\beta^{ \prime}\delta^{\prime}_{2}J_{r-1}X_{1}v_{1}^{\prime}+\gamma_{2}^{*}(-x_{r,1} \delta^{\prime}_{2}J_{r-1}X_{1})v_{1}^{\prime}=-t_{2}^{\prime-1}J^{\prime}_{2m -2}.\] Therefore we have to show that \[v_{2}^{\prime}J^{\prime}_{2m-2}{}^{t}X_{1}{}^{t}(Y^{(r-1)})^{-1t}\beta^{\prime }\delta^{\prime}_{2}J_{r-1}X_{1}v_{1}^{\prime}+\gamma_{2}^{*}(-x_{r,1}\delta^{ \prime}_{2}J_{r-1}X_{1})v_{1}^{\prime}=0\hskip 28.452756pt(*)\] in order to continue with our induction procedure. Since \({}^{t}Y^{(r-1)}{}^{t}\delta^{\prime}_{1}+{}^{t}\beta^{\prime}x=0\), we have \({}^{t}\delta^{\prime}_{1}+{}^{t}(Y^{(r-1)})^{-1t}\beta^{\prime}x=0\), thus \({}^{t}\delta^{\prime}_{1}\delta^{\prime}_{2}+{}^{t}(Y^{(r-1)})^{-1t}\beta^{ \prime}\delta^{\prime}_{2}x=0\). We also note that \(\gamma_{2}^{*}=v_{2}^{\prime}J^{\prime}_{2m-2}{}^{t}\gamma_{2}\) by the structure of \(\text{Sp}_{2m-2}\). So the left hand side of (*) is equal to \[v_{2}^{\prime}J^{\prime}_{2m-2}(-{}^{t}X_{1}{}^{t}\delta^{\prime}_{1}\delta^{ \prime}_{2}-{}^{t}\gamma_{2}x_{r,1}\delta^{\prime}_{2})J_{r-1}X_{1}v_{1}^{ \prime}.\] On the other hand, one computes that the \((2,1)\)-th entry of \(v_{2}(I_{2m}-J^{\prime}_{2m}{}^{t}X^{t}Y^{-1}J_{r}X)v_{1}\) gives \(v_{2}^{\prime}J^{\prime}_{2m-2}({}^{t}X_{1}{}^{t}\delta^{\prime}_{1}x_{r,1}+{}^ {t}\gamma_{2}x_{r,1}^{2}x)=0\), hence \({}^{t}X_{1}{}^{t}\delta^{\prime}_{1}x_{r,1}+{}^{t}\gamma_{2}x_{r,1}^{2}x=0\). As a result, \[{}^{t}X_{1}{}^{t}\delta^{\prime}_{1}\delta^{\prime}_{2}x_{r,1}+{}^{t}\gamma_{2} x_{r,1}^{2}\delta^{\prime}_{2}x=0.\] Since \(x_{r,1}\neq 0\), this shows that the left hand side of (*) is equal to \(0\). By induction hypothesis, it suffices to show that \(a_{r+1}=-x_{r,1}^{2}\frac{\det(Y^{(r-1)})}{\det(Y^{(r)})}\). Note that the \((3,1)\)-th entry of \(v_{2}(I_{2m}-J^{\prime}_{2m}{}^{t}X^{t}Y^{-1}J_{r}X)v_{1}\) implies that \(x_{r,1}^{2}x=-a_{r+1}\). 
One also checks by direct calculation that the other identities in the comparison of entries in \(v_{2}(I_{2m}-J^{\prime}_{2m}\,{}^{t}X\,{}^{t}Y^{-1}J_{r}X)v_{1}\) and \(-t_{2}^{-1}J^{\prime}_{2m}\) are automatically satisfied. On the other hand, \(\alpha^{\prime}\,{}^{t}\delta^{\prime}_{1}+y_{r,1}x=1\) together with \({}^{t}\delta^{\prime}_{1}=-{}^{t}(Y^{(r-1)})^{-1}\,{}^{t}\beta^{\prime}x\) imply that \(x(y_{r,1}-\alpha^{\prime}\,{}^{t}(Y^{(r-1)})^{-1}\,{}^{t}\beta^{\prime})=1\). We have
\[\frac{\det(Y^{(r)})}{\det(Y^{(r-1)})}=\det\Big(Y^{(r)}\begin{bmatrix}(Y^{(r-1)})^{-1}&\\ &1\end{bmatrix}\Big)=\det\Big(\begin{bmatrix}Y^{(r-1)}&{}^{t}\alpha^{\prime}\\ \beta^{\prime}&y_{r,1}\end{bmatrix}\begin{bmatrix}(Y^{(r-1)})^{-1}&\\ &1\end{bmatrix}\Big)\]
\[=\det\begin{bmatrix}I_{r-1}&{}^{t}\alpha^{\prime}\\ \beta^{\prime}(Y^{(r-1)})^{-1}&y_{r,1}\end{bmatrix}=\det\begin{bmatrix}I_{r-1}&{}^{t}\alpha^{\prime}\\ 0&-\beta^{\prime}(Y^{(r-1)})^{-1}\,{}^{t}\alpha^{\prime}+y_{r,1}\end{bmatrix}=y_{r,1}-\alpha^{\prime}\,{}^{t}(Y^{(r-1)})^{-1}\,{}^{t}\beta^{\prime}.\]
Hence by assuming that \(y_{r,1}-\alpha^{\prime}\,{}^{t}(Y^{(r-1)})^{-1}\,{}^{t}\beta^{\prime}\neq 0\), we obtain that \(a_{r+1}=-x_{r,1}^{2}\frac{\det(Y^{(r-1)})}{\det(Y^{(r)})}\) as desired. When \(r<m\), recall that the toric action has a stabilizer isomorphic to the maximal torus of \(\text{Sp}_{2(m-r)}\), given by the entries \((s_{2r+1},\cdots,s_{r+m})\). So the above induction process terminates at the \(r\)-th step. As a result, the formula directly follows from the same procedure. Given the explicit formulas for \(t_{1}\) and \(t_{2}\) in the Bruhat decomposition, when restricted to \(T_{X,Z}\), the map \(\Phi\) is completely determined by \(Y\mapsto\det(Y_{i})\ (1\leq i\leq r)\), \(Y\mapsto\det(Y^{(i)})\ (1\leq i\leq r)\), \(X\mapsto x_{r-j+1,j}^{2}\ (1\leq j\leq\min(r,m))\), which are clearly finite étale maps since we assumed that the \(\det(Y_{i})\)'s, \(\det(Y^{(i)})\)'s, and \(x_{r-j+1,j}\)'s are all non-zero. The restriction of the toric action \(X\mapsto s^{\prime}X{s^{\prime\prime}}^{-1}\), \(Z\mapsto s^{\prime}Zs^{\prime}\), or equivalently, \(X\mapsto s^{\prime}X{s^{\prime\prime}}^{-1}\), \(Y\mapsto s^{\prime}Y\theta_{r}(s^{\prime})^{-1}\), on \(T_{X,Z}\) is given by \(x_{r-i+1,i}\mapsto s_{r-i+1}s_{r+i}^{-1}x_{r-i+1,i}\), \(z_{j,j}\mapsto s_{j}^{2}z_{j,j}\), for \(1\leq i\leq m\) if \(r\geq m\), \(1\leq i\leq r\) if \(r<m\), and \(1\leq j\leq r\). Moreover, when \(r<m\), the action has a non-trivial stabilizer isomorphic to the maximal torus of \(\operatorname{Sp}_{2(m-r)}\), given by the variables \((s_{2r+1},\cdots,s_{r+m})\). A simple calculation shows that it takes \(\det(Y_{i})\) to \(\prod_{k=1}^{i}s_{r-k+1}^{2}\det(Y_{i})\), \(\det(Y^{(i)})\) to \(\prod_{k=1}^{i}s_{k}^{2}\det(Y^{(i)})\), and \(x_{r-k+1,k}\) to \(\frac{s_{r-k+1}}{s_{r+k}}x_{r-k+1,k}\). Together with the compatibility of the toric action, this implies that
\[t_{1}\mapsto{s^{\prime}}^{-2}t_{1},\qquad t_{2}^{\prime}\mapsto\begin{cases}\operatorname{diag}\{s_{r+1}^{-2},\cdots,s_{2r}^{-2},1,\cdots,1\}\,t_{2}^{\prime},&\text{ if }r<m,\\ {s^{\prime\prime}}^{-2}t_{2}^{\prime},&\text{ if }r\geq m.\end{cases}\]
Consequently, the toric action on \(R_{X}\times R_{Z}\) is completely determined by its action on \(T_{X,Z}\) up to the square of each \(s_{i}\ (1\leq i\leq r+\min\{r,m\})\). It follows that the covering group is isomorphic to finitely many copies of \(\mathbb{Z}/2\mathbb{Z}\).
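As a quick sanity check of the first formula, here is a worked example we add for the smallest nontrivial case \(r=2\) (it is not part of the original argument). Take
\[Y=Y_{2}=\begin{bmatrix}\alpha&y_{1,2}\\ y_{2,1}&\beta\end{bmatrix},\qquad Y_{1}=(y_{2,1}).\]
Then
\[a_{1}^{-1}=y_{1,2}-\alpha Y_{1}^{-1}\,{}^{t}\beta=y_{1,2}-\frac{\alpha\beta}{y_{2,1}}=\frac{y_{1,2}y_{2,1}-\alpha\beta}{y_{2,1}}=-\frac{\det(Y_{2})}{\det(Y_{1})},\]
so \(a_{1}=(-1)^{2-1}\det(Y_{1})/\det(Y_{2})\), which matches \(a_{1}=(-1)^{r-1}\frac{\det(Y_{r-1})}{\det(Y_{r})}\) for \(r=2\).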
Finally, in order to apply our uniform smoothness results for partial Bessel integrals, we need to verify the non-singularity condition in Proposition 6.8:

**Proposition 6.12**.: _In the Bruhat decompositions \(m_{1}=u_{1}\dot{w}_{1}t_{1}u_{2}\) and \(m_{2}=v_{1}\dot{w}_{2}t_{2}v_{2}\), the entries in the 1-parameter subgroups for \(u_{1},u_{2},v_{1},v_{2}\) are rational functions of the image of \(T_{X,Z}\) under \(\Phi\) without singularities._

Proof.: Based on the calculation in Proposition 6.11, it suffices to show that the denominators of the entries in \(u_{1},u_{2},v_{1},v_{2}\) are polynomial functions of \(\det(Y^{(i)})\), \(\det(Y_{i})\) and \(x_{r-i+1,i}\) without singularities. In fact, we show that they are monomials in these factors. We start with \(m_{1}=u_{1}\dot{w}_{1}t_{1}u_{2}\). Denote \(u_{1}=(u_{ij}^{1}),u_{2}=(u_{ij}^{2})\), \(Y=(y_{ij})\). Let
\[Y_{i,j}=Y[r-j+1,r-j+2,\cdots,\widehat{r-i+1},\cdots,r;1,\cdots,j-1],\]
the matrix of size \((j-1)\times(j-1)\) with elements from the indicated rows and columns of \(Y\). We will show by induction on \(r\) that
\[u_{ij}^{1}=\frac{\det(Y_{i,j})}{\det(Y_{j-1})}.\]
When \(r=1\) there is nothing to show. Suppose \(r>1\); write \(u_{1}=\begin{bmatrix}u_{1}^{\prime}&{}^{t}\delta_{1}\\ &1\end{bmatrix}\), \(u_{2}=\begin{bmatrix}1&\delta_{2}\\ &u_{2}^{\prime}\end{bmatrix}\), \(t_{1}=\operatorname{diag}\{a_{1},\cdots,a_{r}\}=\operatorname{diag}\{a_{1},t_{1}^{\prime}\}\), and \(Y=\begin{bmatrix}\alpha&y_{1,r}\\ Y_{r-1}&{}^{t}\beta\end{bmatrix}\). Note that \(m_{1}=\theta_{r}(Y)=u_{1}J_{r}t_{1}u_{2}\Leftrightarrow Y=Y_{r}=J_{r}\,{}^{t}u_{1}^{-1}J_{r}^{-1}t_{1}^{-1}u_{2}^{-1}J_{r}\). We compute that
\[\begin{bmatrix}\alpha&y_{1,r}\\ Y_{r-1}&{}^{t}\beta\end{bmatrix}=\begin{bmatrix}&1\\ -J_{r-1}&\end{bmatrix}\begin{bmatrix}{}^{t}u_{1}^{\prime-1}&\\ -\delta_{1}\,{}^{t}u_{1}^{\prime-1}&1\end{bmatrix}\begin{bmatrix}&-J_{r-1}^{-1}\\ 1&\end{bmatrix}\begin{bmatrix}a_{1}^{-1}&\\ &t_{1}^{\prime-1}\end{bmatrix}\begin{bmatrix}1&\\ -{}^{t}u_{2}^{\prime-1}\,{}^{t}\delta_{2}&{}^{t}u_{2}^{\prime-1}\end{bmatrix}\begin{bmatrix}&1\\ -J_{r-1}&\end{bmatrix}\]
\[=\begin{bmatrix}-\delta_{1}\,{}^{t}u_{1}^{\prime-1}J_{r-1}^{-1}t_{1}^{\prime-1}\,{}^{t}u_{2}^{\prime-1}J_{r-1}&a_{1}^{-1}-\delta_{1}\,{}^{t}u_{1}^{\prime-1}J_{r-1}^{-1}t_{1}^{\prime-1}\,{}^{t}u_{2}^{\prime-1}\,{}^{t}\delta_{2}\\ -J_{r-1}\,{}^{t}u_{1}^{\prime-1}J_{r-1}^{-1}t_{1}^{\prime-1}\,{}^{t}u_{2}^{\prime-1}J_{r-1}&-J_{r-1}\,{}^{t}u_{1}^{\prime-1}J_{r-1}^{-1}t_{1}^{\prime-1}\,{}^{t}u_{2}^{\prime-1}\,{}^{t}\delta_{2}\end{bmatrix}.\]
Hence \(Y_{r-1}=-J_{r-1}\,{}^{t}u_{1}^{\prime-1}J_{r-1}^{-1}t_{1}^{\prime-1}\,{}^{t}u_{2}^{\prime-1}J_{r-1}\). Comparing this with the expression for \(Y_{r}\), we replace \(Y=Y_{r}\) by \(-Y_{r-1}\) and continue by induction. Suppose we show that \(u_{i,r}^{1}=\frac{\det(Y_{i,r})}{\det(Y_{r-1})},(1\leq i\leq r-1)\); then the induction hypothesis would imply that
\[u_{i,r-1}^{1}={u^{\prime}}_{i,r-1}^{1}=\frac{\det((-Y_{r-1})_{i,r-1})}{\det((-Y_{r-1})_{r-2})}=\frac{(-1)^{r-2}\det(Y_{i,r-1})}{(-1)^{r-2}\det(Y_{r-2})}=\frac{\det(Y_{i,r-1})}{\det(Y_{r-2})},(1\leq i\leq r-2),\]
since
\[(Y_{r-1})_{i,r-1}=Y_{r-1}[1,\cdots,\widehat{(r-1)-i+1},\cdots,r-1;1,\cdots,r-2]=Y_{r}[2,\cdots,\widehat{r-i+1},\cdots,r;1,\cdots,r-2]=Y_{i,r-1}.\]
Therefore it suffices to show that \(u_{i,r}^{1}=\frac{\det(Y_{i,r})}{\det(Y_{r-1})}\) for \(1\leq i\leq r-1\). From the above calculation we also have \(\alpha=-\delta_{1}J_{r-1}^{-1}Y_{r-1}\), \({}^{t}\beta=-Y_{r-1}J_{r-1}^{-1}\,{}^{t}\delta_{2}\).
So
\[\delta_{1}=-\alpha Y_{r-1}^{-1}J_{r-1}=(u_{1,r}^{1},u_{2,r}^{1},\cdots,u_{r-1,r}^{1}),\]
\[\delta_{2}=-\beta\,{}^{t}Y_{r-1}^{-1}J_{r-1}^{-1}=(u_{1,2}^{2},u_{1,3}^{2},\cdots,u_{1,r}^{2}).\]
We will show the formula for \(u_{1}\) first. By the construction of \(Y_{i,j}\), we expand the determinant of \(Y\) along the last column to get
\[\det(Y_{r})=(-1)^{r+1}y_{1,r}\det(Y_{r-1})+(-1)^{r+2}y_{2,r}\det(Y_{r-1,r})+\cdots+(-1)^{2r}y_{r,r}\det(Y_{1,r}).\]
Since \(\beta=(y_{2,r},\cdots,y_{r,r})\), if we set
\[\delta=\Big((-1)^{r+2}\frac{\det(Y_{r-1,r})}{\det(Y_{r-1})},(-1)^{r+3}\frac{\det(Y_{r-2,r})}{\det(Y_{r-1})},\cdots,(-1)^{r+l+1}\frac{\det(Y_{r-l,r})}{\det(Y_{r-1})},\cdots,(-1)^{2r}\frac{\det(Y_{1,r})}{\det(Y_{r-1})}\Big),\]
then the above formula is equivalent to \(\det(Y_{r})=(-1)^{r+1}y_{1,r}\det(Y_{r-1})+\det(Y_{r-1})\,\delta\,{}^{t}\beta\). On the other hand, we have \(\begin{bmatrix}Y_{r-1}^{-1}&\\ &1\end{bmatrix}\begin{bmatrix}Y_{r-1}&{}^{t}\beta\\ \alpha&y_{1,r}\end{bmatrix}=\begin{bmatrix}I_{r-1}&Y_{r-1}^{-1}\,{}^{t}\beta\\ \alpha&y_{1,r}\end{bmatrix}\). Taking determinants on both sides, we obtain that
\[(-1)^{r-1}\det(Y_{r})\det(Y_{r-1})^{-1}=\det\begin{bmatrix}I_{r-1}&Y_{r-1}^{-1}\,{}^{t}\beta\\ \alpha&y_{1,r}\end{bmatrix}=\det\begin{bmatrix}I_{r-1}&Y_{r-1}^{-1}\,{}^{t}\beta\\ 0&-\alpha Y_{r-1}^{-1}\,{}^{t}\beta+y_{1,r}\end{bmatrix}=y_{1,r}-\alpha Y_{r-1}^{-1}\,{}^{t}\beta.\]
Hence \(\det(Y_{r})=(-1)^{r-1}y_{1,r}\det(Y_{r-1})+(-1)^{r}\det(Y_{r-1})\,\alpha Y_{r-1}^{-1}\,{}^{t}\beta\). Consequently, \(((-1)^{r}\alpha Y_{r-1}^{-1}-\delta)\,{}^{t}\beta=0\). Note that our formula works for any \(\beta\); therefore \(\delta=(-1)^{r}\alpha Y_{r-1}^{-1}\). As \(\delta_{1}=-\alpha Y_{r-1}^{-1}J_{r-1}\), we have \(\delta=(-1)^{r-1}\delta_{1}J_{r-1}^{-1}=\delta_{1}J_{r-1}\). It follows that
\[(-1)^{r+l+1}\frac{\det(Y_{r-l,r})}{\det(Y_{r-1})}=(-1)^{r-l-1}u_{r-l,r}^{1}\]
for each \(1\leq l\leq r-1\), i.e., \(u_{i,r}^{1}=\frac{\det(Y_{i,r})}{\det(Y_{r-1})}\) for \(1\leq i\leq r-1\). This completes the proof of the formula for \(u_{1}\). Now we turn to the formula for \(u_{2}\). Let
\[Y_{ij}^{\prime}=Y[i+1,\cdots,r;1,\cdots,\widehat{r-j+1},\cdots,r-i+1],\]
which is of size \((r-i)\times(r-i)\). We will show that
\[u_{ij}^{2}=\frac{\det(Y_{ij}^{\prime})}{\det(Y_{r-i})}.\]
Similar to the proof for \(u_{1}\), since \(Y_{1,1}^{\prime}=Y_{r-1}\), if we expand \(\det(Y)\) along the first row, we get
\[\det(Y)=\det(Y_{r})=\sum_{j=1}^{r}(-1)^{j+1}y_{1,j}\det(Y_{1,r-j+1}^{\prime})=(-1)^{r+1}y_{1,r}\det(Y_{r-1})+\sum_{j=1}^{r-1}(-1)^{j+1}y_{1,j}\det(Y_{1,r-j+1}^{\prime}).\]
Set \(\delta^{\prime}=\big((-1)^{j+1}\frac{\det(Y_{1,r-j+1}^{\prime})}{\det(Y_{r-1})}\big)_{j=1}^{r-1}\); then \(\frac{\det(Y_{r})}{\det(Y_{r-1})}=(-1)^{r+1}y_{1,r}+\alpha\,{}^{t}\delta^{\prime}\). On the other hand, by the above argument we still have \((-1)^{r-1}\det(Y_{r})\det(Y_{r-1})^{-1}=y_{1,r}-\alpha Y_{r-1}^{-1}\,{}^{t}\beta\). So \(\alpha((-1)^{r}\,{}^{t}\delta^{\prime}-Y_{r-1}^{-1}\,{}^{t}\beta)=0\). Since this works for any \(\alpha\), we obtain that \((-1)^{r}\,{}^{t}\delta^{\prime}=Y_{r-1}^{-1}\,{}^{t}\beta\). Note that the formula \(\delta_{2}=-\beta\,{}^{t}Y_{r-1}^{-1}J_{r-1}^{-1}\) is equivalent to \(\delta^{\prime}=(-1)^{r-1}\delta_{2}J_{r-1}\). As \(\delta_{2}=(u_{1,2}^{2},\cdots,u_{1,r}^{2})\), this means that \((-1)^{j+1}\frac{\det(Y_{1,r-j+1}^{\prime})}{\det(Y_{r-1})}=(-1)^{j-1}u_{1,r-j+1}^{2},\ (1\leq j\leq r-1)\), and therefore \(u_{1,j}^{2}=\frac{\det(Y_{1,j}^{\prime})}{\det(Y_{r-1})},\ (1\leq j\leq r-1)\).
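The determinant identities used above are ordinary Laplace (cofactor) expansions. As a quick machine check, the following short sympy snippet (our own sketch, not part of the paper) verifies the first-row expansion symbolically for \(r=4\):

```python
import sympy as sp

r = 4
# Generic r x r matrix with symbolic entries y_{i,j}.
Y = sp.Matrix(r, r, lambda i, j: sp.Symbol(f"y{i+1}_{j+1}"))

# First-row Laplace expansion: det(Y) = sum_j (-1)^{1+j} y_{1,j} M_{1,j},
# where M_{1,j} is the minor obtained by deleting row 1 and column j.
expansion = sum(
    (-1) ** (1 + (j + 1)) * Y[0, j] * Y.minor_submatrix(0, j).det()
    for j in range(r)
)
assert sp.simplify(Y.det() - expansion) == 0
```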
Then a similar induction argument as in the proof of the formulas for the entries of \(u_{1}\) implies that \(u_{ij}^{2}=\frac{\det(Y_{i,j}^{\prime})}{\det(Y_{r-i})}\) for all \((i,j)\). For the entries of \(v_{1}\) and \(v_{2}\), recall that in the proof of Proposition 6.11, we wrote \(v_{i}=\begin{bmatrix}1&\gamma_{i}&b_{i}\\ &v_{i}^{\prime}&\gamma_{i}^{*}\\ &&1\end{bmatrix},(i=1,2)\), \(X=\begin{bmatrix}0&X_{1}&{}^{t}\gamma\\ x_{r,1}&0&0\end{bmatrix}\), \(Y=Y^{(r)}=\begin{bmatrix}{}^{t}\alpha^{\prime}&Y^{(r-1)}\\ y_{r,1}&\beta^{\prime}\end{bmatrix}\). We assumed that \(\det(Y^{(j)})\neq 0,(1\leq j\leq r)\), \(y_{r,1}\neq 0\), and wrote \({}^{t}(Y^{(r)})^{-1}=\begin{bmatrix}{}^{t}\delta_{1}^{\prime}&H\\ x&\delta_{2}^{\prime}\end{bmatrix}\). It follows immediately from the computation of \(v_{2}(I_{2m}-J_{2m}^{\prime}\,{}^{t}X\,{}^{t}Y^{-1}J_{r}X)v_{1}=-t_{2}^{-1}J_{2m}^{\prime}\) that
\[1-\gamma\,{}^{t}\delta_{1}^{\prime}x_{r,1}+\gamma_{2}J_{2m-2}^{\prime}\,{}^{t}X_{1}\,{}^{t}\delta_{1}^{\prime}x_{r,1}+b_{2}x_{r,1}^{2}x=0, \tag{1}\]
\[\gamma HJ_{r-1}X_{1}+\gamma_{2}(I_{2m-2}-J_{2m-2}^{\prime}\,{}^{t}X_{1}HJ_{r-1}X_{1})-b_{2}x_{r,1}\delta_{2}^{\prime}J_{r-1}X_{1}=0. \tag{2}\]
Multiplying (1) by \(\delta_{2}^{\prime}J_{r-1}X_{1}\), (2) by \(x_{r,1}x\), and taking their sum, we get
\[\gamma_{2}(J_{2m-2}^{\prime}\,{}^{t}X_{1}({}^{t}\delta_{1}^{\prime}\delta_{2}^{\prime}-Hx)J_{r-1}X_{1}x_{r,1}+x_{r,1}xI_{2m-2})+\delta_{2}^{\prime}J_{r-1}X_{1}-\gamma({}^{t}\delta_{1}^{\prime}\delta_{2}^{\prime}-Hx)J_{r-1}X_{1}x_{r,1}=0.\]
Since \({}^{t}Y^{(r-1)}H+{}^{t}\beta^{\prime}\delta_{2}^{\prime}=I_{r-1}\), it follows that \({}^{t}\delta_{1}^{\prime}\delta_{2}^{\prime}-Hx=-{}^{t}(Y^{(r-1)})^{-1}x\). By induction hypothesis, \(I_{2m-2}-J_{2m-2}^{\prime}\,{}^{t}X_{1}\,{}^{t}(Y^{(r-1)})^{-1}J_{r-1}X_{1}=-t_{2}^{\prime-1}J_{2m-2}^{\prime}\), where we write \(-t_{2}^{-1}J_{2m}^{\prime}=\begin{bmatrix}&&a_{r+1}^{-1}\\ &-t_{2}^{\prime-1}J_{2m-2}^{\prime}&\\ -a_{r+1}&&\end{bmatrix}\). Therefore
\[\gamma_{2}=-x_{r,1}^{-1}x^{-1}(\delta_{2}^{\prime}J_{r-1}X_{1}+\gamma\,{}^{t}(Y^{(r-1)})^{-1}J_{r-1}X_{1}xx_{r,1})v_{1}^{\prime}t_{2}^{\prime}J_{2m-2}^{\prime}v_{2}^{\prime}.\]
On the other hand, \(x\) is the \((r,1)\)-th entry of \({}^{t}Y^{-1}\), and recall that \({}^{t}Y=\begin{bmatrix}\alpha^{\prime}&y_{r,1}\\ {}^{t}Y^{(r-1)}&{}^{t}\beta^{\prime}\end{bmatrix}\). By computing the inverse of a matrix using its adjoint matrix, we have \(x=(-1)^{r+1}\frac{\det(Y^{(r-1)})}{\det(Y^{(r)})}\). Since \({}^{t}Y^{-1}=\begin{bmatrix}{}^{t}\delta_{1}^{\prime}&H\\ x&\delta_{2}^{\prime}\end{bmatrix}\), the common denominator of the entries in \(\delta_{2}^{\prime}\) is still \(\det(Y^{(r)})\), for the same reason. Similarly \({}^{t}(Y^{(r-1)})^{-1}=\frac{({}^{t}Y^{(r-1)})^{*}}{\det(Y^{(r-1)})}\), where \(({}^{t}Y^{(r-1)})^{*}\) is the adjoint matrix of \({}^{t}Y^{(r-1)}\). In addition, by induction hypothesis and the computation in Proposition 6.11, the denominators of the entries in \(v_{1}^{\prime}\), \(v_{2}^{\prime}\), and \(t_{2}^{\prime}\) are all monomials of \(\det(Y^{(j)}),\det(Y_{j}),(j\leq r-1)\), and \(x_{r-i+1,i},(2\leq i\leq\min\{r,m\})\). Moreover, since \(\gamma_{2}^{*}=v_{2}^{\prime}J_{2m-2}^{\prime}\,{}^{t}\gamma_{2}\), and by (1), with the fact that the common denominator of the entries in \(\delta_{1}^{\prime}\) is still \(\det(Y^{(r)})\), we conclude that the denominators of the entries in \(\gamma_{2},\gamma_{2}^{*}\), and \(b_{2}\) are monomials of \(\det(Y_{i})\), \(\det(Y^{(i)}),(1\leq i\leq r)\), and \(x_{r-i+1,i},(1\leq i\leq\min\{r,m\})\).
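The step "computing the inverse of a matrix using its adjoint matrix" is the adjugate identity \(A^{-1}=\operatorname{adj}(A)/\det(A)\), which is also what guarantees that every entry of \({}^{t}Y^{-1}\) has denominator \(\det(Y^{(r)})\). A short symbolic check of this identity (again our own sketch, not from the paper):

```python
import sympy as sp

n = 3
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"a{i+1}{j+1}"))

# Entry (i, j) of A^{-1} is the (j, i) cofactor divided by det(A);
# in particular every entry of A^{-1} has det(A) as its denominator,
# which is what the proof uses for the entry x of {}^t Y^{-1}.
assert sp.simplify(A.inv() - A.adjugate() / A.det()) == sp.zeros(n, n)
```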
The same computation of \(v_{2}(I_{2m}-J_{2m}^{\prime}\,{}^{t}X\,{}^{t}Y^{-1}J_{r}X)v_{1}=-t_{2}^{-1}J_{2m}^{\prime}\) also implies that
\[x_{r,1}^{2}x\gamma_{1}-x_{r,1}\delta_{2}^{\prime}J_{r-1}X_{1}v_{1}^{\prime}=0, \tag{3}\]
\[x_{r,1}^{2}xb_{1}-x_{r,1}\delta_{2}^{\prime}J_{r-1}X_{1}\gamma_{1}^{*}+1-x_{r,1}\delta_{2}^{\prime}J_{r-1}\,{}^{t}\gamma=0. \tag{4}\]
From the previous argument on \(x\), \(\delta_{1}^{\prime},\delta_{2}^{\prime}\), and the fact that \(\gamma_{1}^{*}=v_{1}^{\prime}J_{2m-2}^{\prime}\,{}^{t}\gamma_{1}\), formulas (3) and (4) imply the same conclusion for the denominators of the entries in \(\gamma_{1}\), \(\gamma_{1}^{*}\), and \(b_{1}\).

## 7. Proof of Stability

In this final section, we prove Theorem 1.1 by proving the stability of the corresponding local coefficients. Recall that by the structure theorems it suffices to prove stability of the local coefficients attached to \(\psi\)-generic supercuspidal representations. Let \(\sigma_{i}\) and \(\tau_{i}\), \((i=1,2)\), be \(\psi\)-generic supercuspidal representations of \(\operatorname{GL}_{r}(F)\) and \(\operatorname{Sp}_{2m}(F)\) respectively, with the same central characters \(\omega_{\sigma_{1}}=\omega_{\sigma_{2}}\), \(\omega_{\tau_{1}}=\omega_{\tau_{2}}\). Denote \(\pi_{i}=\sigma_{i}\boxtimes\tau_{i}\); then \(\omega:=\omega_{\pi_{1}}=\omega_{\pi_{2}}\). Suppose \(\chi\) is a continuous character of \(F^{\times}\). Choose \(f_{1}\) and \(f_{2}\) to be matrix coefficients of \(\pi_{1}\) and \(\pi_{2}\) respectively, normalized so that \(W_{f_{1}}(e)=W_{f_{2}}(e)=1\). We also choose \(\kappa\) sufficiently large so that Proposition 5.3 and Proposition 6.7 hold for both \(f_{1}\), \(f_{2}\) and our fixed auxiliary function \(f_{0}\) in Proposition 6.7, where \(\varphi_{\overline{N}_{0},\kappa}\) and \(\varphi\) are related via Proposition 6.9. By Proposition 6.10,
\[C_{\psi}(s,\pi_{1}\otimes\chi)^{-1}-C_{\psi}(s,\pi_{2}\otimes\chi)^{-1}=\gamma(2rs,\omega^{2}\chi^{2r},\psi^{-1})D_{\chi}(s)\]
where
\[D_{\chi}(s)=\int_{R_{X^{\prime}}\times R_{Z^{\prime}}}(B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{1})-B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{2}))\]
\[\cdot(\omega^{-2}\chi^{-2r})\Big(\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\Big)\Big|\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\Big|^{-rs}|\det(m_{1}(X^{\prime},Z^{\prime}))|^{s+\frac{r+2m+1}{2}}d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}.\]
By Proposition 6.7, we can find \(f_{1,w^{\prime}},f_{2,w^{\prime}}\in C_{c}^{\infty}(\Omega_{w^{\prime}};\omega)\), for each \(w^{\prime}\in B(M)\) with \(d_{B}(e,w^{\prime})\geq 1\), such that
\[B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{1})-B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{2})=\sum_{w^{\prime}\in B(M),d_{B}(w^{\prime},e)\geq 1}(B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{1,w^{\prime}})-B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{2,w^{\prime}})).\]
Since the toric action is determined by its restriction to \(T_{X,Z}\) up to squares of torus entries according to Proposition 6.11, we call \(T_{X,Z}\) the toric part of the orbit space \(R_{X,Z}=R_{X}\times R_{Z}\). Denote by \(N_{X,Z}\) the subset of \(R_{X,Z}\) given by the variables not in \(T_{X,Z}\); we call \(N_{X,Z}\) the non-toric part of \(R_{X,Z}\).
Then by setting \(x_{r,1}=1\) and passing to \(R_{X^{\prime}}\times R_{Z^{\prime}}\), we can separate our integral over \(R_{X^{\prime},Z^{\prime}}=R_{X^{\prime}}\times R_{Z^{\prime}}\) as a double integral over \(N_{X^{\prime},Z^{\prime}}\) and \(T_{X^{\prime},Z^{\prime}}\). Proposition 6.12 implies that the assumption in Proposition 6.8 is satisfied, since the denominators of the rational functions in the 1-parameter subgroups for \(u_{1},u_{2},v_{1},v_{2}\) in the Bruhat decompositions \(m_{1}=u_{1}\dot{w}_{1}t_{1}u_{2}\), \(m_{2}=v_{1}\dot{w}_{2}t_{2}v_{2}\), with \(m=m(X^{\prime},Z^{\prime})=(m_{1}(X^{\prime},Z^{\prime}),m_{2}(X^{\prime},Z^{\prime}))\), are monomials of the image of \(\Phi\) on \(T_{X^{\prime},Z^{\prime}}\). By Proposition 6.11, the restriction \(\Phi|_{T_{X^{\prime},Z^{\prime}}}\) is finite étale onto its image in \(A^{\prime}\), where \(A=Z_{M}A^{\prime}\), so the integral over \(T_{X^{\prime},Z^{\prime}}\) can be written as a finite sum of integrals over \(\Phi(T_{X^{\prime},Z^{\prime}})\subset A^{\prime}\). In addition, since \(A_{w_{M}}^{w^{\prime}}A_{w^{\prime}}^{\prime}\) is open of finite index in \(A^{\prime}\), we can further write the integral over \(\Phi(T_{X^{\prime},Z^{\prime}})\) as a double integral over \(\Phi(T_{X^{\prime},Z^{\prime}})\cap A_{w_{M}}^{w^{\prime}}\) and \(\Phi(T_{X^{\prime},Z^{\prime}})\cap A_{w^{\prime}}^{\prime}\), so we have
\[D_{\chi}(s)=\sum_{\text{finite}}\sum_{w^{\prime}\in B(M),\,d_{B}(e,w^{\prime})\geq 1}\int_{N_{X^{\prime},Z^{\prime}}}\int_{\Phi(T_{X^{\prime},Z^{\prime}})\cap A_{w_{M}}^{w^{\prime}}}\int_{\Phi(T_{X^{\prime},Z^{\prime}})\cap A_{w^{\prime}}^{\prime}}(B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{1,w^{\prime}})-B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{2,w^{\prime}}))\]
\[\cdot(\omega^{-2}\chi^{-2r})\Big(\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\Big)\Big|\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\Big|^{-rs}|\det(m_{1}(X^{\prime},Z^{\prime}))|^{s+\frac{r+2m+1}{2}}d\mu_{X^{\prime}}\wedge d\mu_{Z^{\prime}}.\]
By the proof of Proposition 6.8, the functions \(B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{i,w^{\prime}})\), \((i=1,2)\), are compactly supported in the subtorus \(A_{w^{\prime}}^{\prime}=Z_{L}^{\prime}\), where \(w^{\prime}=w_{L}^{M}\in B(M)\). By the compatibility of the toric action as in Proposition 6.11, there exists an open compact subgroup \(Z_{L,0}^{\prime}\) of \(Z_{L}^{\prime}(F)\) such that, given \(m=m(X^{\prime},Z^{\prime})\), for \(s=(s^{\prime},s^{\prime\prime})\in Z_{L,0}^{\prime}\) we have
\[B_{\varphi}^{M}(\Theta_{M}(s)m(X^{\prime},Z^{\prime})s^{-1},f_{i,w^{\prime}})=B_{\varphi}^{M}(m(s^{\prime}X^{\prime}{s^{\prime\prime}}^{-1},s^{\prime}Z^{\prime}s^{\prime}),f_{i,w^{\prime}})=B_{\varphi}^{M}(m(X^{\prime},Z^{\prime}),f_{i,w^{\prime}})\]
for all \(\kappa\), where \(\varphi_{\overline{N}_{0},\kappa}\) is related to \(\varphi\) via Proposition 6.9. In our cases, \(L\simeq\prod_{i=1}^{k}\operatorname{GL}_{r_{i}}\times\prod_{j=1}^{t}\operatorname{GL}_{m_{j}}\times\operatorname{Sp}_{2m^{\prime}}\) with \(\sum_{i=1}^{k}r_{i}=r\) and \(\sum_{j=1}^{t}m_{j}+m^{\prime}=m\).
So we can pick \(s=(s^{\prime},s^{\prime\prime})\in Z^{\prime}_{L,0}\) with
\[s^{\prime}=\operatorname{diag}\{\underbrace{s_{1},\cdots,s_{1}}_{r_{1}},\underbrace{s_{2},\cdots,s_{2}}_{r_{2}},\cdots,\underbrace{s_{k},\cdots,s_{k}}_{r_{k}}\},\]
\[s^{\prime\prime}=\operatorname{diag}\{\underbrace{s_{k+1},\cdots,s_{k+1}}_{m_{1}},\cdots,\underbrace{s_{k+t},\cdots,s_{k+t}}_{m_{t}},\underbrace{1,\cdots,1}_{2m^{\prime}},\underbrace{s_{k+t}^{-1},\cdots,s_{k+t}^{-1}}_{m_{t}},\cdots,\underbrace{s_{k+1}^{-1},\cdots,s_{k+1}^{-1}}_{m_{1}}\}.\]
Recall that the term \(\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\) is just \(\frac{y^{\prime*}_{rr}}{\det(Y^{\prime})}\), where \(y^{\prime*}_{rr}\) is the \((r,r)\)-th entry of the adjoint matrix of \(Y^{\prime}\). Here \(Y^{\prime}=Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2}\). One computes that the toric action takes \(\frac{y^{\prime*}_{rr}}{\det(Y^{\prime})}\) to \(\frac{1}{s_{k}^{2}}\frac{y^{\prime*}_{rr}}{\det(Y^{\prime})}\), and takes \(\det(m_{1}(X^{\prime},Z^{\prime}))\) to \(\prod_{i=1}^{k}s_{i}^{2}\det(m_{1}(X^{\prime},Z^{\prime}))\), since \(m_{1}(X^{\prime},Z^{\prime})=\theta_{r}(Y^{\prime})\). The action of \(Z^{\prime}_{L,0}\) preserves \(\Phi(T_{X^{\prime},Z^{\prime}})\cap A^{\prime}_{w^{\prime}}\); therefore, if we change variables under the toric action, the inner integral is equal to
\[(\omega\chi^{r})(s_{k}^{4})|s_{k}|^{rs}\prod_{i=1}^{k}|s_{i}|^{2s+r+2m+1}\int_{\Phi(T_{X^{\prime},Z^{\prime}})\cap A^{\prime}_{w^{\prime}}}(B^{M}_{\varphi}(m(X^{\prime},Z^{\prime}),f_{1,w^{\prime}})-B^{M}_{\varphi}(m(X^{\prime},Z^{\prime}),f_{2,w^{\prime}}))\]
\[\cdot(\omega^{-2}\chi^{-2r})\Big(\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\Big)\Big|\frac{P(X^{\prime},Z^{\prime})}{\det(Z^{\prime}J_{r}+\frac{X^{\prime}\theta_{r,m}(X^{\prime})}{2})}\Big|^{-rs}|\det(m_{1}(X^{\prime},Z^{\prime}))|^{s+\frac{r+2m+1}{2}}.\]
So
\[D_{\chi}(s)=(\omega\chi^{r})(s_{k}^{4})|s_{k}|^{rs}\prod_{i=1}^{k}|s_{i}|^{2s+r+2m+1}D_{\chi}(s).\]
Choosing \(\chi\) sufficiently ramified so that \((\omega\chi^{r})(s_{k}^{4})|s_{k}|^{rs}\prod_{i=1}^{k}|s_{i}|^{2s+r+2m+1}\neq 1\) forces \(D_{\chi}(s)=0\). We finally conclude that
\[C_{\psi}(s,\pi_{1}\otimes\chi)=C_{\psi}(s,\pi_{2}\otimes\chi).\]
Consequently,
\[\gamma(s,(\sigma_{1}\times\tau_{1})\otimes\chi,\psi)=\gamma(s,(\sigma_{2}\times\tau_{2})\otimes\chi,\psi)\]
for sufficiently ramified \(\chi\).
2304.13727
Ensemble CNNs for Breast Tumor Classification
To improve the recognition ability of computer-aided breast mass classification among mammographic images, in this work we explore state-of-the-art classification networks to develop an ensemble mechanism. First, the regions of interest (ROIs) are obtained from the original dataset, and then three models, i.e., XceptionNet, DenseNet, and EfficientNet, are trained individually. After training, we build the ensemble by summing the probabilities output by each network, which enhances the performance by up to 5%. The scheme has been validated on a public dataset, and we achieved an accuracy, precision, and recall of 88%, 85%, and 76%, respectively.
Muhammad Umar Farooq, Zahid Ullah, Jeonghwan Gwak
2023-04-11T10:59:38Z
http://arxiv.org/abs/2304.13727v1
# Ensemble CNNs for Breast Tumor Classification

###### Abstract.

To improve the recognition ability of computer-aided breast mass classification among mammographic images, in this work we explore state-of-the-art classification networks to develop an ensemble mechanism. First, the regions of interest (ROIs) are obtained from the original dataset, and then three models, i.e., XceptionNet, DenseNet, and EfficientNet, are trained individually. After training, we build the ensemble by summing the probabilities output by each network, which enhances the performance by up to 5%. The scheme has been validated on a public dataset, and we achieved an accuracy, precision, and recall of 88%, 85%, and 76%, respectively.

Deep learning, DenseNet121, EfficientNet, ensembled models, ensembled schemes, XceptionNet.
### EfficientNet

The EfficientNet model was proposed by Tan and Le, 2019 [9]; it expands the depth, width, and resolution of the network in a balanced way and thereby obtains good model performance. The EfficientNet models are based on a simple and highly effective compound scaling method. This method enables scaling up a baseline ConvNet to any target resource constraints while maintaining model efficiency, and is used for transfer-learning datasets. In general, EfficientNet models achieve both higher accuracy and better efficiency than existing CNNs such as AlexNet, ImageNet, GoogleNet, and MobileNetV2 [10]. EfficientNet could serve as a new foundation for future computer vision tasks. EfficientNet includes models from B0 to B7, each with a different number of parameters, from 5.3M to 66M.

### DenseNet

DenseNet [11] has been introduced recently in the literature.
It shortens the connections between the input and output layers, which helps in overcoming the vanishing gradient problem. Each layer in the DenseNet has a reduced feature-map size, which is important for training CNNs on a small dataset, leading to a lower probability of over-fitting and ensuring that there is no loss in the transmitted information [41]. Additionally, each layer receives supervision from the loss function and a regularizing effect through shorter connections, leading to an easier training process. The DenseNet is mainly composed of the DenseBlock, the Transition Layer, and the Growth Rate.

### XceptionNet

The Xception architecture introduced by Francois Chollet is an extension of the Inception architecture. This architecture is a linear stack of depthwise separable convolution layers with residual connections. The depthwise separable convolution aims to reduce computational cost and memory requirements. Xception has 36 convolutional layers structured into 14 modules, all of which have linear residual connections, except for the first and last modules. The separable convolution in Xception separates the learning of channel-wise and space-wise features. Also, the residual connection helps to solve the issues of vanishing gradients and representational bottlenecks by creating a shortcut in the sequential network. This shortcut connection makes the output of an earlier layer available as input to a later layer using a summation operation rather than concatenation.

Figure 1: Our proposed model diagram.

### Ensembling Scheme

To ensemble the three networks, we apply summation to the output probabilities and then choose the maximum, as defined in the following equation:
\[R=Max(Eff(x)+Dense(x)+XcepNet(x))\]
where _Eff(x)_, _Dense(x)_, and _XcepNet(x)_ are the per-class probabilities from EfficientNet, DenseNet, and XceptionNet, respectively.

## 3 Results and Discussion

The experimental results for DenseNet121, EfficientNet_B4, XceptionNet, and the ensembled scheme are shown in Table 1. The experiment is performed for 100 epochs with a learning rate of 0.001. From the table, the ensembled scheme achieves the highest accuracy compared to the three individual models. To analyze the performance of each individual model and of the ensembled model, the confusion matrices are shown in Figure 2, based on (a) DenseNet121, (b) EfficientNet B4, (c) XceptionNet, and (d) the ensembled model, respectively. The results demonstrate that the proposed ensemble scheme enhances the performance for the classification of breast tumors.

## 4 Conclusion

This work presents an ensemble scheme for the classification of breast tissues into normal, benign, and malignant tissues from mammograms. We use three state-of-the-art classification models (i.e., EfficientNet, DenseNet, and XceptionNet) with an ensemble mechanism. The core objective is to obtain the probability of each class from the three classification models and combine them to enhance the classification performance compared to that obtained by each of them separately. The experimental results show that by using the ensembling scheme we are able to achieve the highest accuracy compared to the three individual model accuracies. The ensembled model's accuracy is 88.33%, with precision, recall, and F1 scores of 85.62, 76.29, and 75.82, respectively. Future research plans include exploring more classification models and improving them further before ensembling, to further optimize the results.
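For concreteness, here is a minimal PyTorch-style sketch of the probability-summing ensemble described above (our illustration, not code from the paper: the backbones are untrained torchvision stand-ins, and since Xception is not in torchvision an Inception-v3 placeholder is used in its slot; in practice the three trained models and their weights would be substituted):

```python
import torch
import torchvision.models as models

NUM_CLASSES = 3  # normal, benign, malignant

def load_backbone(name: str) -> torch.nn.Module:
    # Hypothetical helper: any three trained NUM_CLASSES-way classifiers work.
    builders = {
        "densenet": lambda: models.densenet121(num_classes=NUM_CLASSES),
        "efficientnet": lambda: models.efficientnet_b4(num_classes=NUM_CLASSES),
        # Placeholder for XceptionNet (not available in torchvision).
        "xception-like": lambda: models.inception_v3(
            num_classes=NUM_CLASSES, aux_logits=False
        ),
    }
    return builders[name]().eval()

@torch.no_grad()
def ensemble_predict(x: torch.Tensor, nets) -> torch.Tensor:
    # R = argmax( Eff(x) + Dense(x) + XcepNet(x) ): sum the per-class
    # probabilities of the individual networks, then take the maximum.
    probs = sum(torch.softmax(net(x), dim=1) for net in nets)
    return probs.argmax(dim=1)

nets = [load_backbone(n) for n in ("densenet", "efficientnet", "xception-like")]
pred = ensemble_predict(torch.randn(2, 3, 299, 299), nets)  # toy ROI batch
```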
2306.12240
ICAR, a categorical framework to connect vulnerability, threat and asset managements
We present ICAR, a mathematical framework derived from category theory for representing cybersecurity NIST and MITRE's ontologies. Designed for cybersecurity, ICAR is a category whose objects are cybersecurity knowledge (weakness, vulnerability, impacted product, attack technique, etc.) and whose morphisms are relations between this knowledge, that make sense for cybersecurity. Within this rigorous and unified framework, we obtain a knowledge graph capable of identifying the attack and weakness structures of an IS, at the interface between description logics, database theory and cybersecurity. We then define ten cybersecurity queries to help understand the risks incurred by IS and organise their defence.
Arnaud Valence
2023-06-21T12:59:29Z
http://arxiv.org/abs/2306.12240v1
# ICAR, a categorical framework to connect vulnerability, threat and asset managements

###### Abstract

We present ICAR, a mathematical framework derived from category theory for representing cybersecurity NIST and MITRE's ontologies. Designed for cybersecurity, ICAR is a category whose objects are cybersecurity knowledge (weakness, vulnerability, impacted product, attack technique, etc.) and whose morphisms are relations between this knowledge, that make sense for cybersecurity. Within this rigorous and unified framework, we obtain a knowledge graph capable of identifying the attack and weakness structures of an IS, at the interface between description logics, database theory and cybersecurity. We then define ten cybersecurity queries to help understand the risks incurred by IS and organise their defence.

Keywords: Vulnerability management, threat management, asset management, CVE, CWE, CAPEC, CVSS, CPE, Category theory

## 1 Introduction

When it comes to cyber systems defense, security operations management has long involved separate tasks: vulnerability management, cyber threat management, and asset management. Today, these disciplines are intended to interoperate within a broader framework, supported by public knowledge bases about cyber threats, vulnerabilities, and IT assets. This interoperability draws an integral and integrated research path, at the interface between ontology language, database theory and cybersecurity, in order to understand how adversaries use vulnerabilities to achieve their goals. From a general perspective, the research efforts strive to integrate several repositories: the Common Platform Enumeration (CPE) listing IT assets, the Common Vulnerabilities and Exposures (CVE) listing discovered vulnerabilities, the Common Weakness Enumeration (CWE) listing commonly appearing weaknesses, the MITRE ATT&CK framework listing Adversary Tactics and Techniques (ATT) and the Common Attack Pattern Enumeration and Classification (CAPEC) which helps facilitate attack identification and understanding. The latter repository thus acts as a bridge connecting vulnerability management and threat management. On this basis, research work has explored several avenues. * Some works propose unified ontologies, more or less interoperable, such as Kurniawan et al. [8], preceded in this by partial ontologies such as UCO and SEPSES, which do not yet include the CTI incorporated in the ATT&CK (even if they can include other vulnerability repositories, such as the CYBOX, KillChain or STUCCO standards). * Other research explores the track of domain-specific languages (DSL), and in essence that of the Meta Attack Language (MAL) meta-language. This is the case for Xiong et al.'s EnterpriseLang meta-language [14] and Aberg and Sparf's AttackLang meta-language [1]. * A third research direction proposes to deepen the graph visualization aspects of attack paths through a relational representation of threats and vulnerabilities. This is the case of the BRON model of Hemberg et al. [4]. The approach proposed here is a new way to deepen the mathematical aspects of integrated security operations management.
This approach combines three advantages in that * like the first approaches mentioned above, it develops a unified vision of vulnerability and threat repositories; * like the second ones, it articulates vulnerabilities and threats within the framework of a cybersecurity-oriented meta-language, except that -- and this is a fundamental point -- it is a _mathematical_ meta-language rather than an ontological one1. Footnote 1: It may be noted that the DSL approach adds an ontological layer to the ontology already at work in the MITRE and NIST repositories. * like the third ones, it deepens the study of graph visualization and structural properties of the unified cybersecurity ontology, by borrowing the powerful and rigorous graph-theoretic concepts of category theory. We believe that category theory can be put to good use by cybersecurity teams. Following the example of a growing number of researchers, involved in more and more diverse fields of knowledge, we believe that the concepts of category theory offer important keys to understanding that simplify and unify the treatment of security operations. We see category theory as the very language of interoperability that enables the integrated management of assets, vulnerabilities, and cyber threats. The article is organized as follows. The second section discusses the construction of the integrated cybersecurity resource, which will lead to the knowledge graph called ICAR. Based on this, the third section shows how to exploit the knowledge graph to answer different concrete cybersecurity queries. We will see how the categorical concepts allow us to handle bottom-up (from the assets to defend, to the adversaries) as well as top-down (from adversaries to assets) queries. The fourth section concludes. ## 2 Building ICAR ### Data sources The data sources are from the knowledge bases provided by the NIST (National Institute of Standards and Technology) and the MITRE Corporation. * Common Platform Enumeration (CPE) is a way of assigning standardized identifiers to classes of IT assets. * Common Vulnerabilities and Exposures (CVE) is a knowledge base listing publicly known vulnerabilities. Each CVE entry contains an identification number, a description and at least one reference to publicly known cyber security vulnerabilities. Additional information may include patch information, severity scores and impact assessments according to the Common Vulnerability Scoring System (CVSS), as well as links to exploit information and advisories. * Common Weakness Enumeration (CWE) is a knowledge base listing software and hardware weaknesses: flaws, features, breaches, bugs, and other errors in the design, architecture or implementation of software and hardware components that, if left unfixed, can make systems and networks vulnerable to attack. CVE entries have a relational link to CWE entries, as an example of a weakness that actually affects a computer system. * Common Attack Pattern Enumeration and Classification (CAPEC) enumerates and classifies attack patterns to facilitate the identification and understanding of attacks. The attack patterns have a tree structure, i.e. they are organised into categories and sub-categories of attacks. They allow the ATTs to be linked to CWE weaknesses. * MITRE ATT&CK framework abstractly describes cyber attack techniques organised into twelve sequential tactics. The framework is presented in a matrix format where the columns represent tactics and the rows represent techniques.
These five knowledge bases (or six including CVSS) thus make up an integrated ontological resource for cybersecurity (which we will call ICAR). At this point, it is important to note that this resource only represents the abstract relationships between the data sources. In the language of databases, we would say that it shows the column headings of the primary and secondary keys, but not the column entries themselves. ### Ontologies as knowledge graphs The integrated ontological resource can be represented more formally as a graph. **Definition 1** (Graph).: _A graph \(G\) is a sequence \(G:=(V,E,src,tgt)\), where \(V\) and \(E\) are sets (respectively the set of vertices and the set of arrows of \(G\)), and \(src,tgt:E\to V\) are functions (respectively the source and target function of \(G\)). An arrow \(e\in E\) with source \(src(e)=v\) and target \(tgt(e)=w\) is represented as follows:_ \[v\xrightarrow{e}w.\] On this basis, it is possible to represent each dictionary (or ontology) by a vertex and each link between dictionaries by an arrow, without forgetting that dictionaries can have internal links. This is the case of CAPEC patterns. For example, the CAPEC-593 pattern (Session Hijacking), linked to the CWE-287 weakness (Improper Authentication) and to several techniques, sub-techniques and MITRE ATT&CK tactics, itself has children (the CAPEC-60, CAPEC-61, CAPEC-102, CAPEC-107 patterns) and is itself linked to the CAPEC-21 pattern (Exploitation of Trusted Identifiers). We must therefore add to the knowledge graph a loop on CAPEC representing the ChildOf dependency relation. It is also possible to add the dual relation ParentOf, although redundant, as provided for by the MITRE Corporation. This is also the case for weaknesses. For example, the aforementioned weakness CWE-287 has children CWE-295, CWE-306, CWE-645, CWE-1390, and is itself a child of weakness CWE-284 (Improper Access Control). Finally, it remains to take into consideration the internal structure of the ATT&CK framework, which is broken down into the dictionaries _Tactics_, _Techniques_ (including sub-techniques) and _Procedures_. In this article, we will only deal with tactics and techniques. Sub-techniques will be assimilated to techniques of which they are children. Taking into account these additional specifications, we finally obtain the graph depicted in figure 1, faithful to the structure of the asset, attack and weakness ontologies. Note the complementarity of the CAPEC and Techniques dictionaries in the overall understanding of threat, beyond their simple logical link. Techniques and (attack) patterns contextualise threat differently. Patterns are intended to focus on the compromise of applications in order to understand the path taken by adversaries to exploit end-to-end application weaknesses in the information system (IS), while techniques describe the concrete dynamics of an attack scenario executed step by step to compromise the IS (see [9] for more details). Thus, technique T1528, which describes the theft of application access tokens in order to obtain credentials for access to remote systems and resources, can be contextualised in two different ways: (i) as a step to legitimise application actions under the guise of an authenticated user or service by obtaining a trusted identifier, hence its belonging to the CAPEC-21 (Exploitation of Trusted Identifiers) pattern, (ii) as a strategic step to steal account names and passwords, hence its belonging to the TA0006 (Credential Access) tactic.
Ultimately, the techniques embody attack tactics as the "how" of the attack (where tactics characterise the "why"). Figure 1: Representation of the security knowledge graph ### Semantic facts and knowledge schema We can do better. What the knowledge graph represents are roughly the data tables (the vertices) and the data columns (the arrows). However, there is still some information missing which is not made explicit in the graph: the path equivalences in \(G\). **Definition 2** (Path).: _Let \(G=(V,E,src,tgt)\) be a graph. A path of length \(n\) in \(G\), denoted \(p\in Path_{G}^{(n)}\), is a sequence_ \[p=(v_{0}\xrightarrow{a_{1}}v_{1}\xrightarrow{a_{2}}v_{2}\xrightarrow{a_{3}}\ldots\xrightarrow{a_{n}}v_{n})\] _of arrows in \(G\). In particular, \(Path_{G}^{(0)}=V\) and \(Path_{G}^{(1)}=E\). The set of all paths in \(G\) is denoted_ \[Path_{G}:=\bigcup_{n\in\mathbb{N}}Path_{G}^{(n)}.\] Paths may themselves carry higher level information about the knowledge structure. This is the case if constraints are imposed on the paths to translate properties that make sense. These constraints can then be expressed as path equivalences. **Definition 3** (Path equivalence).: _Let \(G=(V,E,src,tgt)\) be a graph and \(p,q:b\to c\in Path_{G}^{(n)}\) two paths in \(G\). A categorical path equivalence relation in \(G\), or simply a path equivalence in \(G\), is a relation denoted \(\simeq\) such that \(p\simeq q\) only if \(src(p)=src(q)\) and \(tgt(p)=tgt(q)\). Moreover, if \(m:a\to b\) and \(n:c\to d\) are two arrows in \(G\), then \(m\) and \(n\) are respectively an epimorphism (a right-cancellable morphism) and a monomorphism (a left-cancellable morphism), i.e. \(p\simeq q\) if and only if \(mp\simeq mq\), and \(p\simeq q\) if and only if \(pn\simeq qn\)._ Following Spivak [11], we call this equivalence relation _facts_. There are facts in our study. It is indeed natural to ask for a form of reciprocity in the links between weaknesses and attack patterns. If an attack pattern \(CAPEC-X\) exploits a weakness \(CWE-Y\), it is natural that it is part of the patterns referenced by this weakness. We can therefore add a path equivalence in the knowledge structure to obtain the following fact: \[(\text{CAPEC-X}\xrightarrow{Has}\text{CWE-Y}\xrightarrow{Has}\text{CAPEC-X})\simeq\text{CAPEC-X}\] for all CAPEC-X \(\in\) CAPEC and CWE-Y \(\in\) CWE. Parent/child relations express other facts. It is natural to require that a weakness CWE-X declaring a child CWE-Y is itself declared as the parent of the child. We therefore have a constraint such as \[(\text{CWE-X}\xrightarrow{isParentOf}\text{CWE-Y}\xrightarrow{isChildOf}\text{CWE-X})\simeq\text{CWE-X}.\] It is also possible to express path equivalences in the more convenient algebraic form of _path equalities_, using the composition operator \((.)\). The two previous equivalence relations can then be rewritten, for any \(i\in\{\text{CWE},\text{CAPEC}\}\): \[i.\text{Has.Has}=i,\] \[i.\text{isChildOf.isParentOf}=i, \tag{1}\] \[i.\text{isParentOf.isChildOf}=i.\] Are there other facts? Could we not ask that the child of a weakness belong to the same attack pattern as its parent, or one of its children? The answer is no. The data structure of the CWE and CAPEC does not have this characteristic. No hybrid facts can be derived from the two previously defined facts. This negative result can be attributed to the meaning provided by the labelling of arrows. We stress that facts depend on the meaning of the arrows. _They are semantic facts_.
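To make these facts concrete, here is a small executable sketch (our illustration; the entries are toy stand-ins, not a faithful extract of the MITRE data) that instantiates a fragment of the graph as Python dictionaries and checks the path equalities in (1):

```python
# Toy instance of the schema: each table is a dict of typed records,
# and each foreign-key column realizes one arrow of the graph.
cwe = {
    "CWE-287": {"capec": {"CAPEC-593"}, "parents": {"CWE-284"}, "children": {"CWE-306"}},
    "CWE-284": {"capec": set(), "parents": set(), "children": {"CWE-287"}},
    "CWE-306": {"capec": set(), "parents": {"CWE-287"}, "children": set()},
}
capec = {
    "CAPEC-593": {"cwe": {"CWE-287"}, "parents": {"CAPEC-21"}, "children": set()},
    "CAPEC-21": {"cwe": set(), "parents": set(), "children": {"CAPEC-593"}},
}

# Fact i.Has.Has = i : following Has from CAPEC to CWE and back returns i.
for cid, rec in capec.items():
    assert all(cid in cwe[w]["capec"] for w in rec["cwe"])

# Facts i.isChildOf.isParentOf = i and i.isParentOf.isChildOf = i.
for table in (cwe, capec):
    for i, rec in table.items():
        assert all(i in table[p]["children"] for p in rec["parents"])
        assert all(i in table[c]["parents"] for c in rec["children"])

print("toy instance satisfies the declared path equalities")
```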
For example, in a bijective data structure where each parent has exactly one child and a child exactly one parent, there is an equivalence between the path \(p=(v_{0}\xrightarrow{isOnlyParentOf}v_{1}\xrightarrow{isOnlyChildOf}v_{0})\in Path_{G}^{(2)}\) and the path \(p^{\prime}=v_{0}\in Path_{G}^{(0)}\), but this equivalence no longer holds in a data structure with multiple parents and children. This is why it is not possible to apply Spivak's theory of ologs [12]. Ologs are elegant categorical frameworks for rigorously representing knowledge structures exploiting databases, but are limited to structures of functional type. They are inappropriate in this study since the vertices of our knowledge graph generally have several arrows, and may in some cases have none. On the other hand, the usual theory of oriented (multi)graphs is too broad to capture all the properties present in this study, since we added a path equivalence property. If we add the facts (1) to the knowledge graph of figure 1, we obtain a richer structure called a (knowledge) _schema_. **Definition 4** (Categorical schema).: _A categorical schema \(S\) consists of a pair \(S:=(G,\simeq)\) where \(G\) is a graph and \(\simeq\) a path equivalence on \(G\)._ In the remainder of this study, we will speak more simply of a schema, in the absence of any risk of confusion. ### More about relation \(CWE\leftrightarrows CAPEC\) It was noted that the CWE and CAPEC dictionaries are linked in both directions. This may seem strange, as a mapping can in principle be read both ways: if the weaknesses correctly refer to the attack patterns, it should be possible to recover the former from the latter. Actually, this is not always the case. Kanakogi et al. [5] report some CAPEC-IDs that are not identified by CWE-IDs that fall within their attack pattern. As a result, some CVE-IDs would not be correctly mapped to their attack pattern(s). The authors give the example of the CVE-2018-18442 vulnerability, which is linked to a weakness due to network packet flooding. However, while there is an attack pattern for this weakness (the CAPEC-125 pattern), the fact is that the vulnerability is also associated with the CWE-20 weakness (incorrect input validation) which, according to the authors, prevents the vulnerability from being linked to the CAPEC-125 pattern, as the latter is not referenced by the CWE-20 weakness. This problem then motivates the authors to link CVE-IDs directly to CAPEC-IDs. Their solution is to use similarity indicators between CVE-IDs and CAPEC-IDs, using machine learning and natural language processing. In fact, the traceability problem discussed by Kanakogi et al. does not describe an architectural flaw (since weaknesses can list several attack patterns), but reflects the incomplete mapping between dictionaries. From this point of view, the strategy of the authors seems to be good, even if it consists in directly linking dictionaries that are not graphically related. In the end, this direct approach seems to be complementary to ours in that it helps complete the collection of arrows that will be used to populate the knowledge schema. This remark is also valid for other approaches of direct mapping between dictionaries, like the projects of Grigorescu et al. [3], Kuppa et al. [7] or Ampel et al. [2], which aim to link CVE-IDs to MITRE ATT&CK _tactics_ and _techniques_. ### ICAR as schema instance The knowledge schema provides an abstract view of cybersecurity data ontologies, the "skeleton".
It represents the structure of the data in the form of a triplet (of vertices, arrows and equivalence relations) in exactly the same way as the attributes of database tables present the \(n\)-tuples of the database. It is now a question of populating the knowledge schema in such a way as to make the knowledge base explicit. This explicitation is in fact an _instantiation_ (a "concretisation") of its schema.

**Definition 5** (Instance).: _Let \(S:=(G,\simeq)\) be a categorical schema where \(G:=(V,E,src,tgt)\) is a graph. An instance \(I\) on \(S\) is given by_

1. _a set_ \(I(v)\) _for any vertex_ \(v\in V\)_;_
2. _a function_ \(I(e):I(v)\to I(v^{\prime})\) _for any arrow_ \(e:v\to v^{\prime}\)_;_
3. _the equality_ \(I(p)=I(q)\) _for any path equivalence_ \(p\simeq q\)_._

In other words, an instance on \(S\) is a path-equivalence-preserving functor \(F:S\to\mathsf{Set}\). Among the infinite number of instances that can be generated by \(C\), there is one that interests us the most: the up-to-date resource for cybersecurity ontologies. We call this instance \(\mathsf{ICAR}\), for _Integrated CAtegorical Resource_. To fix ideas, Table 1 shows an extract of ICAR with, at the time of writing, some of the most salient added or updated entries, among more than 20,000 CPEs, about 176,000 CVEs, 668 CWEs, 559 CAPECs, 193 Techniques and 14 Tactics.

Table 1: Extract of ICAR, one sub-table per dictionary (NA* marks values not yet assigned). CPE: IDs only (e.g. 01org:tpm2.0-tools). CVE: ID, CWE, CPE and two CVSS score columns (e.g. CVE-2023-28371: CWE-22, Stellarium:Stellarium, 4.3, 6.9). CWE: ID, ChildOf, ParentOf, CAPEC (e.g. CWE-89: 943; 564; 7,66,108-110,470). CAPEC: ID, ChildOf, ParentOf, CWE, Techniques (e.g. CAPEC-700: 161; -; 284; 1599). Techniques: ID, Tactics.

It is difficult not to make a connection with a database schema, as we suggested above. It is indeed possible to see an arrow \(e\in E\) of the graph \(G\) underlying \(C\) as a relation linking the table identified by \(src(e)\) with a table identified by \(tgt(e)\). For example, the arrow CWE \(\to\) CAPEC expresses that the table CWE points to the table CAPEC, i.e. entries that have a primary key in CWE are related to entries that have a primary key in CAPEC, via the foreign keys found in the CAPEC column of the table CWE. At this point we can see that the database schema is not in normal form, since the attribute values are not necessarily atomic (a weakness frequently has several parents and several CAPECs). Strictly speaking, we should decompose the database schema so as to express it in first normal form. In fact, we do not need such a normalization in this study because it would unnecessarily transform the resource ICAR by adding redundancy. We do, however, need a normal form to check the consistency of ICAR. This leads us to a concept of categorical normal form.

**Definition 6** (Categorical normal form).: _A database is said to be in categorical normal form if_

1. _any table_ \(t\) _has a single primary key column_ \(ID_{t}\) _fixed at the beginning;_
2. _any entry belonging to a column_ \(c\in t\) _refers to a primary key in a single table_ \(t^{\prime}\)_, which is denoted by_ \(p_{c}:t\to t^{\prime}\)_;_
3. _any database equivalence between two relations_ \(p_{c},q_{c}:t\to t^{\prime}\) _must be declared as a path equivalence in the corresponding categorical schema, i.e._ \(p_{c}\simeq q_{c}\)_._

We check that ICAR actually is in categorical normal form. Condition 1 is met because each dictionary has a single primary key column. Condition 2 is assumed to be met by the successive updates of the dictionaries: if a new entry appears in the foreign key columns, it is assumed that it is indexed at the same time in another table as a primary key.
_There are no unreferenced entries in primary key._ On the other hand, it is possible that no foreign key is associated with the entry of a new item as a primary key. This is typically the case when, for instance, an asset affected by a vulnerability has not yet been found, or the weakness corresponding to this vulnerability is still awaiting identification, etc. It is also possible for a primary key column to have no foreign key column. In this case (very common in databases), the table is limited to a single column, called a leaf column; this is the case here for the CPE and Tactics tables. Condition 3 is respected because it is easy to check that the facts (1) are translated into relational equivalences in the database: the attack patterns declared in the weaknesses declare in turn the declaring weaknesses, and vice versa, and the children declared by the weaknesses or the attack patterns declare in turn their declaring parents.

## 3 Using ICAR

In this section, we illustrate the applicability of ICAR through several use cases. First of all, we must introduce the assets of the IS subject to attack.

### Instantiate ICAR with an IS

The knowledge graph of Figure 1 brings together knowledge about vulnerability and threat management in a single categorical schema. But asset management is still to be considered. Assets are explicitly taken into account by Kiesling et al. [6] in the SEPSES knowledge graph: we find there a sub-graph relating CPEs to products through a \(hasProduct\) arrow. We take up this idea with two differences. Firstly, we consider only a subset of assets. This restriction allows us to refer to a concrete entity to be analysed, i.e. an IS made up of assets inventoried in a database (to be monitored or investigated). This inventory of assets is commonly materialised by a configuration management database (CMDB).
Secondly, and by pure convention, we reverse the arrow formalising the dependency between CPEs and assets. This is indeed what CMDBs suggest, since they normally provide, for each component added to the database as a primary key, a CPE foreign key, as illustrated in Table 2. A CMDB can thus be connected to ICAR via the CPE attribute. It can be noted that this correspondence is surjective (each CPE reference refers to at least one asset in the CMDB) but not necessarily injective, since a CMDB can have several assets with the same CPE.2 Finally, it is possible to complete the knowledge schema \(C\) of which ICAR is the instance, which is represented in Figure 2, by noting \(DB_{X}\) the inventory of assets from the CMDB of the IS \(X\).

Footnote 2: It can also happen that the CPE reference is not entered in the CMDB. Furthermore, there are many "exotic" assets that are not listed in the CPE dictionary.

We therefore have the following query **Q1**:

**Query 1** (**Q1**).: _Instantiate an inventory of assets \(DB_{X}\in\textsf{Product}\)._

We start by noting that the instantiation referred to here is different from the instantiation of the knowledge schema. The idea now is to instantiate an _object which already has the database structure_ (ICAR), in other words to _populate_ ICAR (whereas ICAR instantiates the knowledge schema as a "concretisation"). In category theory, this notion of instantiation can be approached in many ways. In fact, there are at least two ways of dealing with **Q1**: either by first "connecting" the table \(\textsf{Product}\) to the table CPE and then filtering on the assets \(DB_{X}\subset\textsf{Product}\), or by directly connecting the \(DB_{X}\) and CPE tables.

Table 2: Extract of columns ID and CPE from a CMDB

| ID | CPE |
| --- | --- |
| A0006 | cpe:2.3:a:microsoft:internet\_explorer:8.0.6001:beta:*:*:*:*:* |
| VM008 | cpe:2.3:a:vmware:vcenter\_server:6.0:3b:*:*:*:*:*:* |
| LB001 | cpe:2.3:h:f5:big-10250v:-:*:*:*:*:*:*:* |
| OS007 | cpe:2.3:o:linux:linux\_kernel:2.6.39:*:*:*:*:*:*:* |
| OS008 | cpe:2.3:o:paloaltonetworks:pan-os:8.1.16:*:*:*:*:*:*:* |

Figure 2: Knowledge schema with inventory of assets \(DB_{X}\)

In the first case, a filtering operation must be added to the asset connection operation. This operation is not trivial in category theory. Moreover, it implies adding ex post the quantitative aspect induced by the potential presence of several assets with the same CPE reference. This is why we will apply the second method, which is easier and more direct. The idea of filtering will nevertheless be discussed later in order to answer query **Q6**. In practical terms, if we think in terms of database management, the addition of \(DB_{X}\) to ICAR can be understood as a database migration, and more precisely as a database union. This intuition can be translated into terms of "categorical data". The idea of "migration" finds a natural translation in category theory with the concept of _functor_. Let \(S\) be the (categorical) schema associated with Figure 1 (i.e. devoid of assets) and \(T\) the schema associated with Figure 2 (i.e. enriched with an inventory of assets). Following the example of Spivak [11, 13], we can then define a schema morphism (i.e. a functor) \(F:S\to T\). The migration functors follow.
**Definition 7** (Migration functors).: _Let \(S\) and \(T\) be two schemas, \(S-\mathsf{Inst}\) and \(T-\mathsf{Inst}\) the instances on \(S\) and \(T\) respectively, \(F:S\to T\) a schema morphism and \(I\in T-\mathsf{Inst}:T\to\mathsf{Set}\). Then the composite functor \(S\xrightarrow{F}T\xrightarrow{I}\mathsf{Set}\) is an \(S\)-instance (\(I\circ F\in S-\mathsf{Inst}\)) and we define the functor \(\Delta_{F}\) such that_

\[\Delta_{F}:\ \ T-\mathsf{Inst}\to S-\mathsf{Inst} \tag{2}\]
\[I\leadsto I\circ F \tag{3}\]

_as well as the functors \(\Sigma_{F},\Pi_{F}:S-\mathsf{Inst}\to T-\mathsf{Inst}\) as adjoint functors of \(\Delta_{F}\), respectively on the left and on the right._

In the language of category theory, \(\Delta_{F}\), \(\Sigma_{F}\) and \(\Pi_{F}\) are called the pullback, the left pushforward and the right pushforward respectively.3

Footnote 3: Here, the term "pullback" is understood as "a category of instances assigning a set of row-IDs to a schema element". This definition is related to Grothendieck's construction (and fibration).

Intuitively, \(\Delta_{F}\) can be understood as a projection operator, in the sense that data (tables, columns) is duplicated. In contrast, \(\Sigma_{F}\) is interpreted in terms of unifying tables, and \(\Pi_{F}\) in terms of joining tables. This difference between left and right pushforwards (between unification and junction) is important. When the tables to be merged have no common key, the merging operation can take place in one of two ways:

* either by adding the rows of the second table to those of the first, which has the effect of creating Skolem variables in the unfilled "foreign" columns (in this case we reason on the sum of the primary key spaces);
* or by multiplying the rows of the second table with those of the first, which has the effect of duplicating the rows of the first table as many times as there are rows in the second (in this case we reason on the product of the primary key spaces).

But the situation is simplified if the tables have a common key. In this case, the left pushforward and the right pushforward are equivalent and there is no duplication of rows or creation of new variables. This is exactly what happens in our case, since asset inventories are supposed to include CPE IDs. Table reconciliation therefore occurs naturally by matching the foreign keys of the inventories with the primary keys of the CPE dictionary.

### List all vulnerable assets

For a CISO or security analyst, one of the most natural queries is to list the vulnerable assets of the IS.

**Query 2** (**Q2**).: _List all vulnerable assets of a given IS._

To process this query, one must first list the entries in the CMDB whose foreign key (i.e. the CPE attribute) also appears as a foreign key in the CVE table. In category theory terminology, we say that we use a pullback (or fiber product), which is one of the many variations of the categorical concept of limit.

**Definition 8** (Pullback).: _Let CVE and CPE be the dictionaries, \(DB_{X}\) the inventory of the IS \(X\), and consider the relations \(DB_{X}\xrightarrow{\mathit{has}}\) CPE and CVE \(\xrightarrow{\mathit{has}}\) CPE. 
The pullback of the cospan \(DB_{X}\xrightarrow{\mathit{has}}\) CPE \(\xleftarrow{\mathit{has}}\) CVE, denoted \(DB_{X}\underset{\mathit{CPE}}{\times}\) CVE, is defined by the set_

\[DB_{X}\underset{\mathit{CPE}}{\times}\mathit{CVE}:=\{(x,y)\,|\,x\in DB_{X},\,y\in\mathit{CVE},\,has(x)=has(y)\}\]

_respecting the commutative diagram of Figure 3._

Figure 3: Pullback of \(DB_{X}\) and CVE over CPE

To obtain only the vulnerable assets (dissociated from their vulnerabilities), it is sufficient to retain the left projection of the pullback. For assets affected by several vulnerabilities, an additional projection morphism is necessary. We then obtain the set of vulnerable assets, denoted \(\mathsf{AffectedAssets_{X}}\).

### List all vulnerabilities of the IS

In the same way, it is also useful to list all vulnerabilities affecting a given IS.

**Query 3** (**Q3**).: _List all vulnerabilities of a given IS._

This query, which is a dual of the previous one, consists in keeping only the vulnerabilities from the pullback of Figure 3. This list is obtained by using the right projection of \(DB_{X}\underset{\mathit{CPE}}{\times}\) CVE. The resulting set is denoted \(\mathsf{Vuln_{X}}\).

### List the vulnerabilities affecting an asset

Similarly, it is natural to ask for a list of vulnerabilities affecting a particular asset in the IS.

**Query 4** (**Q4**).: _List the vulnerabilities affecting an asset \(x\in DB_{X}\)._

To process this query, we have to isolate the pairs (asset, vulnerability) of the same asset \(x\) in the pullback of Figure 3. We therefore need to reason about the corresponding commutative diagram. It turns out that this diagram also defines a pullback, by virtue of the pullback pasting property: in a composite commutative diagram whose right-hand square is a pullback, the outer rectangle is a pullback exactly when the left-hand square is one, and consequently the entire commutative diagram is. The set \(x\underset{\mathit{CPE}}{\times}\) CVE thus satisfies **Q4** by providing all vulnerabilities impacting the asset \(x\).

### List the assets affected by a vulnerability

From the pullback \(DB_{X}\underset{\mathit{CPE}}{\times}\) CVE, we see that it is also possible to filter the resulting pairs by CVE rather than by asset. This filtering fulfils another mission of the CISO (or of any administrator or analyst, whether or not they have been mandated to do so): that of monitoring the changes needed to guarantee the logical and physical security of the IS for which he is responsible. This task includes monitoring vulnerabilities likely to affect the IS, and in practice begins by consulting the security alerts issued by the CERT (to which every CISO is in principle a subscriber). Each alert contains one or more CVE entries on a given subject. When a CISO becomes aware of a vulnerability, s/he has to ask her/himself whether the IS is affected, with the level of attention weighted by its CVSS score. Assuming that the new vulnerability is added to ICAR, we therefore have the following query **Q5**, dual to **Q4**:

**Query 5** (**Q5**).: _List the assets affected by a vulnerability \(y\in\) CVE._

This query is processed by choosing from \(DB_{X}\underset{\mathit{CPE}}{\times}\) CVE the pairs corresponding to the vulnerability \(y\) we are looking for, which we note \(DB_{X}\underset{\mathit{CPE}}{\times}y\) (i.e. as many pairs as assets impacted by \(y\)).
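Computationally, this pullback is nothing more than a relational join on the shared CPE key. The following minimal Python sketch illustrates \(DB_{X}\underset{\mathit{CPE}}{\times}\) CVE and the projections answering **Q2** to **Q5**; the data is a toy placeholder, and the \(has\) arrows are simplified to single-valued maps (they are multivalued in the real dictionaries).

```python
# Minimal sketch (toy data): the pullback DB_X x_CPE CVE computed as a join
# on the shared CPE key; the projections answer Q2/Q3, and filtering one
# leg answers Q4/Q5. IDs and CPE strings are placeholders.

db_x = {"A0006": "cpe:ie:8.0", "VM008": "cpe:vcenter:6.0"}   # asset -> CPE
cve  = {"CVE-0001": "cpe:ie:8.0", "CVE-0002": "cpe:ie:8.0"}  # CVE  -> CPE

# Pullback: pairs (asset, vulnerability) agreeing on their CPE image
pullback = [(a, v) for a, ca in db_x.items()
                   for v, cv in cve.items() if ca == cv]

affected_assets = {a for a, _ in pullback}   # Q2: left projection
vulns_x         = {v for _, v in pullback}   # Q3: right projection

def vulns_of(x):                             # Q4: fix the asset leg
    return {v for a, v in pullback if a == x}

def assets_hit_by(y):                        # Q5: fix the CVE leg
    return {a for a, v in pullback if v == y}

print(pullback)           # [('A0006', 'CVE-0001'), ('A0006', 'CVE-0002')]
print(vulns_of("A0006"))  # {'CVE-0001', 'CVE-0002'}
```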
Returning to the diagrams, the resulting commutative diagram is again a pullback, and by combining **Q4** with **Q5** we obtain the pair \((x,y)\) giving the vulnerability \(y\) of the asset \(x\), which is useful for consulting the remediation status of a vulnerability to be treated (is it fixed, in progress, scheduled...?).

### List vulnerabilities by criticality

In cybersecurity, vulnerabilities are not of equal importance. There is a tendency to focus on the most severe vulnerabilities. It is not uncommon for a CISO to plan enhanced monitoring for critical vulnerabilities. Typically, s/he may request a regular report on vulnerabilities with a score of 9 or more (in CVSS v3.0 notation), or more generally with a score within a range \(S\subset[0.0,10.0]\). Query **Q6** follows.

**Query 6** (**Q6**).: _List vulnerabilities by CVSS score \(s\in S\subset[0.0,10.0]\)._

As we saw with **Q1**, a pullback can be used to assign a set of row-IDs to a schema element, which seems to do the trick. However, we need an additional ingredient to filter on the values taken by the entries in the CVSS score column. Indeed, the migration functors defined above operate in the context of schema morphisms, not in that of _type_ morphisms. We therefore need a notion of _typing_.

**Definition 9** (Typing).: _Let \(S\) be a schema and \(A\) a discrete category (i.e. a category containing only objects and identity morphisms) composed of attribute names. A typing for \(S\) is a triplet \((A,i,\gamma)\) where \(i\) is a functor from \(A\) to \(S\) mapping each attribute to its vertex, and \(\gamma\) is a functor from \(A\) to \(\mathsf{Set}\), mapping each attribute to its type._

Then, \(i\) reflects the pairing of the knowledge graph's vertices with the attributes of \(A\), and \(\gamma\) reflects the assignment of the attributes of \(A\) to their type. Consequently, we call a _typed instance_ a pair \((I,\delta)\) where \(I:S\to\mathsf{Set}\) is an instance together with a natural transformation \(\delta:I\circ i\Rightarrow\gamma\). Intuitively, \(\delta\) reflects the assignment of a type to each ID in \(I\). Typically, this could be the assignment of a string type or a float type, but more generally it can be any type. Now, as Spivak points out [11], if we go back to the pullback, we see that it is possible to adapt migration functors into _type-change functors_.

Figure 4: Typing

**Definition 10** (Type-change functor).: _Let \(S\) be a schema and \(k:A\to B\) a morphism of typing instances. We refer to the induced functors \(\hat{\Delta}_{k}:\mathsf{S}-\mathsf{Inst}_{/B}\to\mathsf{S}-\mathsf{Inst}_{/A}\) and \(\hat{\Sigma}_{k},\hat{\Pi}_{k}:\mathsf{S}-\mathsf{Inst}_{/A}\to\mathsf{S}-\mathsf{Inst}_{/B}\) as type-change functors. \(\hat{\Delta}_{k}\), \(\hat{\Sigma}_{k}\) and \(\hat{\Pi}_{k}\) are respectively called the pullback, the left pushforward and the right pushforward type-change functor._

In the context of **Q6**, we are therefore dealing with a morphism of typing instances which associates a subtype \(B\) with the predefined type \(A=[0.0,10.0]\supset B\).

### Measuring the attack surface of an IS

The attack surface is a summary of the weak points in an IS that an attacker can exploit to gain access and carry out malicious actions. The more weak points there are, the greater the attack surface and the greater the risk of being attacked. Measuring the attack surface therefore makes it possible to assess the barriers an attacker needs to overcome to exploit these weak points.
**Query 7** (**Q7**).: _Measure the attack surface of an IS \(X\)._

There are myriad ways of defining the attack surface of an IS, and just as many ways of measuring it once it has been defined. One of the simplest definitions is based on the CVSS scores of the vulnerabilities present in the IS. From that point on, the attack surface can be measured in different ways, bearing in mind that the CVSS standard is itself a system of metrics based on three metric groups [10].4

Footnote 4: Base, Temporal, and Environmental. The base metrics produce a score ranging from 0 to 10, which can then be modified by scoring the temporal and environmental metrics.

In addition to the base score, the CVSS standard is made up of two other groups of measures: temporal scores and environmental scores. The latter are not provided by the NVD, either because they change over time due to events external to the vulnerability (temporal scores), or because they refer to impacts that are relative to the organisation (environmental scores). The simplest indicators are:

1. the list of assets affected by a vulnerability with their associated CVSS score (as many weak points exploitable by an attacker);
2. the sum of the assets affected by a vulnerability weighted by their CVSS score.

Formally, indicator (i) corresponds to the set of pairs \(\{(DB_{X}\text{-ID},CVSS\text{-ID})\}\) for any asset \(DB_{X}\text{-ID}\) and for any vulnerability score \(CVSS\text{-ID}\). It is obtained from the schema morphism \(CVE\xrightarrow{has}CVSS\) and the pullback \(DB_{X}\underset{CPE}{\times}CVE\) previously defined. The product \(DB_{X}\underset{CPE}{\times}CVSS\) summarises, as a simple list, the mapping of possible entry points for a potential attacker, with their associated criticality. Seen as the product of \(DB_{X}\) and \(CVSS\), \(DB_{X}\times_{CPE}CVSS\) can then be used to define the synthetic indicator (ii). Assuming that the assets affected are of equal importance, the synthetic attack surface indicator, \(\mathsf{AttackSurface}\), is easily obtained as the sum of the CVSS scores projected onto the right-hand component of the list:

\[\mathsf{AttackSurface}:=\sum_{(x,y)\in DB_{X}\underset{CPE}{\times}CVSS}\mathit{right}(x,y)\]

We note that, despite their equal importance, vulnerable assets do not involve equally important _threats_ (such as attack media). Not only do the assets affected differ in the severity of their vulnerabilities, but they can also differ in the number of vulnerabilities affecting them, and it is not uncommon for an asset to accumulate vulnerabilities. For example, Gitlab 15.8.0 has vulnerabilities CVE-2022-3411, CVE-2022-4138, CVE-2022-3759 and CVE-2023-0518, the last three of which are of high severity. These indicators obviously give a simplistic view of attack surfaces as IT systems actually exhibit them. In reality, the assets of an IS do not have the same sensitivity, for a variety of reasons: some assets are exposed to the Internet, others are not; some are in production, others in pre-production, development, decommissioning, etc.; some are constrained to high availability, others are not, etc. However, it is possible to take into account the importance of assets by adding a sensitivity criterion. This criterion is generally incorporated into CMDBs, which include a "CI Importance" property for this purpose, in line with ITIL architecture. If affected assets are of unequal importance, then each asset must be weighted by an importance indicator, i.e. a new \(IMPT_{X}\) data set connected to \(DB_{X}\) must be added to ICAR.
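Both indicators are equally direct to compute. A minimal Python sketch follows, where `pairs` stands for the pullback \(DB_{X}\times_{CPE}CVSS\); the scores and the importance coefficients are purely illustrative, and the weighted variant anticipates the \(IMPT_{X}\) refinement just mentioned.

```python
# Minimal sketch (toy data): attack-surface indicators (i) and (ii).
# 'pairs' stands for the pullback DB_X x_CPE CVSS, i.e. one
# (asset, CVSS score) pair per vulnerable asset/vulnerability pairing.
pairs = [("A0006", 6.8), ("A0006", 4.3), ("VM008", 9.5)]   # indicator (i)

attack_surface = sum(score for _, score in pairs)           # indicator (ii)
print(attack_surface)                                       # 20.6

# Refinement with unequal asset importance (the IMPT_X weighting): each
# asset is weighted by an importance coefficient, e.g. taken from the
# CMDB's "CI Importance" field. Coefficients below are illustrative only.
importance = {"A0006": 1.0, "VM008": 2.0}
weighted_surface = sum(importance[a] * s for a, s in pairs)
print(weighted_surface)                                     # 30.1
```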
In this case, it is sufficient to repeat the previous developments by reasoning about the pullback \(IMPT_{X}\times_{CPE}CVSS\). Note that an attack surface cannot be interpreted as a measure of risk; as the NVD points out [10], "CVSS is not a measure of risk". In risk analysis, risk is always the product of a threat, a vulnerability and a severity. ICAR lacks far too much information to be used as a basis for risk analysis, both in terms of business analysis (business values, feared events, impact of damage suffered) and threat analysis (sources of risk, attractiveness of the IT target, etc.). CVSS metrics can only measure the severity of vulnerabilities, which is only one component of risk.

### List vulnerabilities that can be exploited by a technique or tactic

We now turn to the long paths, to examine how vulnerability management is linked to threat management. This link is bidirectional: top-down and bottom-up. We start with the top-down approach. It is natural to ask what vulnerabilities can be exploited by a given technique pursuing a given tactic. This approach makes it possible to map the dangers corresponding to the different tactical stages of the _kill chain_, which is useful for organisations' defenders, who can prioritise the vulnerabilities to be remedied, and for their adversaries, who can prepare their attacks. For example, at the start of an attack, the adversaries apply one or more reconnaissance techniques. They may, for example, target a website or an Active Directory with the aim of compromising accounts, creating accounts, obtaining capabilities (resource development tactics) or even taking their attack a step further with initial access tactics (remote access to the network, installation of a passive listening system, etc.). The list of vulnerabilities that can be exploited by this tactic can then enable the defender to be more vigilant about the assets that could be targeted by the adversary (i.e. a Wordpress application, an LDAP server, etc.). This knowledge is also useful to the adversaries because it tells them what they should be looking for: an asset, or a version number if they already know an asset. This gives query **Q8**.

**Query 8** (**Q8**).: _List vulnerabilities that can be exploited by a technique or tactic._

There are several ways of dealing with this query. The simplest is probably to observe techniques and tactics as sieves.

**Definition 11** (Sieve).: _Let \(v\) be a technique or a tactic. A sieve on \(v\) is a collection \(S\) of morphisms such that:_

1. \(e\in S\Rightarrow\operatorname{cod}(e)=v\)_,_
2. \((e\in S\wedge\operatorname{cod}(f)=\operatorname{dom}(e))\Rightarrow e\circ f\in S\)_._

In other words, a sieve on an object \(v\) in ICAR is a collection of arrows of codomain \(v\) closed under precomposition with morphisms of ICAR. However, this definition does not correspond exactly to **Q8**. On the one hand, the universal aspect of the collection of arrows is missing, as we are looking for the list of _all_ vulnerabilities that can be exploited by a technique or tactic. This universality property is provided by the notion of _maximal sieve_.

**Definition 12** (Maximal sieve).: _A sieve \(S\) on \(v\) is said to be maximal (or principal) if it contains all the arrows of codomain \(v\). It is denoted \(\uparrow v\)._

On the other hand, the resulting sieve has too many arrows, since it includes all the precompositions of \(v\)-targeted morphisms. However, what counts for **Q8** are only the CVE-domain arrows.
To subtract the other arrows (i.e. arrows with CWE, CAPEC or Sub-technique domain), we need a notion of _differential sieve_. Let \(S\) be a sieve on \(v\) in ICAR and \(S^{\prime}\) a sieve on \(v\) in ICAR', where ICAR' is the subcategory of ICAR without CVEs. In other words, ICAR' consists of the sub-collection of objects of ICAR such that \(\operatorname{Ob}(ICAR^{\prime})=\operatorname{Ob}(ICAR)-\{w\in CVE\}\), and the sub-collection of morphisms of ICAR such that \(\operatorname{Mor}(ICAR^{\prime})=\operatorname{Mor}(ICAR)-\{e\in\operatorname{Mor}(ICAR)\,|\,src(e)\in CVE\}\). In this context, the object satisfying **Q8** for techniques is the differential sieve \(S^{T}=S\backslash S^{\prime}\). \(S^{T}\) therefore contains all the arrows whose domain is in the set CVE and whose codomain is the chosen technique. To give a clearer idea, Figure 5 represents the construction stages of **Q8** for technique T1499 (Endpoint Denial of Service), from the subcategory extracted "under technique T1499" (a), to the maximal sieve on T1499 (b), and finally to the differential sieve (c) answering **Q8**.

Figure 5: Construction stages of **Q8**

Obviously, the reasoning is the same for the list of vulnerabilities that can be exploited by a tactic. All we have to do is point the sieve construction \(S^{TA}\) to the tactic(s) we want, for example to tactic TA0040 (Impact), which is the tactic performed by technique T1499.

### List techniques and tactics related to a vulnerability

We now turn our attention to the bottom-up approach. From the point of view of the defender, it is natural to ask what attack techniques (and therefore tactics) are associated with its vulnerabilities. This knowledge enables the defender to focus on the vulnerabilities deemed most dangerous from the point of view of their tactical exploitation. This knowledge is also useful for the adversary if he knows some of the targeted assets, or even in the absence of any information about the attacked IS. We therefore have query **Q9**:

**Query 9** (**Q9**).: _List techniques and tactics related to a vulnerability._

This is essentially the dual request of **Q8**. Since category theory is an ideal framework for studying all kinds of dualities, we just have to use the duals of the two notions defined previously. We thus introduce a notion of _cosieve_.

**Definition 13** (Cosieve).: _Let \(v\) be a vulnerability. A cosieve on \(v\) is a collection \(coS\) of morphisms such that:_

1. \(e\in coS\Rightarrow\operatorname{dom}(e)=v\)_,_
2. \((e\in coS\wedge\operatorname{dom}(f)=\operatorname{cod}(e))\Rightarrow f\circ e\in coS\)_._

We then define the notions of maximal cosieve and differential cosieve as before. The differential cosieve \(\mathsf{coS}^{TA}\) corresponding to **Q9** is then given by the complement of the cosieve on \(v\) whose target is not a tactic: \(\mathsf{coS}^{TA}=\complement_{\mathsf{coS}}\,\mathsf{coS}^{\prime}=\mathsf{coS}\backslash\mathsf{coS}^{\prime}\), where \(\mathsf{coS}\) and \(\mathsf{coS}^{\prime}\) are cosieves on a vulnerability \(v\) in ICAR and ICAR' respectively. \(\mathsf{coS}^{TA}\) is the collection of arrows with source \(v\) and target in \(Tactics\). The construction is the same for techniques: simply define the set \(ICAR^{\prime\prime}=ICAR^{\prime}-Techniques\) and repeat the reasoning with the cosieves in ICAR' and ICAR''.
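On a finite instance, the differential sieves and cosieves reduce to reachability computations in the arrow graph: the cosieve on a vulnerability is obtained by following composite arrows upward, and the differential sieve under a technique is recovered by testing which CVEs reach it. The following minimal Python sketch illustrates **Q8** and **Q9** on a toy instance; the IDs are placeholders, and the string-prefix filtering is an assumption of this particular encoding, not of the model.

```python
# Minimal sketch (toy data): Q8/Q9 as reachability in the arrow graph.
# Edges follow the schema CVE -> CWE -> CAPEC -> Technique -> Tactic;
# the differential sieve under a technique keeps exactly the composite
# arrows whose domain is a CVE. IDs are placeholders.
edges = {                       # x -> set of direct codomains
    "CVE-1": {"CWE-20"}, "CVE-2": {"CWE-79"},
    "CWE-20": {"CAPEC-3"}, "CWE-79": {"CAPEC-63"},
    "CAPEC-3": {"T1499"}, "CAPEC-63": {"T1059"},
    "T1499": {"TA0040"}, "T1059": {"TA0002"},
}

def above(v):
    """All objects reachable from v (the targets of the cosieve on v)."""
    seen, stack = set(), [v]
    while stack:
        for w in edges.get(stack.pop(), set()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def q8(technique):
    """Q8: CVEs whose cosieve reaches the given technique."""
    return {v for v in edges if v.startswith("CVE") and technique in above(v)}

def q9(cve):
    """Q9: techniques and tactics reachable above a vulnerability."""
    return {w for w in above(cve) if w.startswith(("T1", "TA"))}

print(q8("T1499"))   # {'CVE-1'}
print(q9("CVE-1"))   # {'T1499', 'TA0040'}
```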
Figure 6 depicts the construction of the differential cosieve on vulnerability CVE-2006-5268 (administrative access to the RPC interface) for techniques, from (a) the sub-category of objects and morphisms above CVE-2006-5268 to (b) the final differential cosieve on CVE-2006-5268 satisfying **Q9**.

Figure 6: Construction of **Q9** for techniques associated with vulnerability CVE-2006-5268

### Measuring the threat surface of an IS

The "threat surface" is the set of techniques (or tactics) that an attacker can use to exploit the vulnerabilities of an IS. The threat surface is the counterpart of the attack surface on the threat management side.5

Footnote 5: The threat surface is strictly speaking an attack surface, but since this name is usually used to describe the vulnerabilities of the IS, we use the term "threat surface".

**Query 10** (**Q10**).: _Measure the threat surface of the IS \(X\)._

Formally, the threat surface is a simple extension of the differential cosieve used to list the techniques and tactics associated with a vulnerability. We just need to apply the differential cosieve to all the vulnerabilities of the IS, i.e. to the set \(\mathsf{Vuln}_{X}\).

## 4 Conclusion and future work

The aim of this article was to provide a mathematical foundation for common queries in cybersecurity management. The proposed categorical model, ICAR, thus covers vulnerability management, threat management and asset management in a unified framework. However, ICAR is not a method for enriching cybersecurity ontologies. In particular, it does not allow new relations between vulnerability management and threat management to be discovered. In this sense, the empirical results of the queries examined here depend on the quality of the data they use. Our model therefore underlines the importance of work aimed at more finely meshing the various dictionaries of NIST and the MITRE corporation. Generally speaking, it is clear that query and visualisation models will be enhanced by the AI-based works mentioned above. This article only gives an overview of possible queries for cybersecurity operations. Others could naturally have been envisaged, such as the search for the shortest attack path (i.e. the path with the fewest breaches to exploit); such queries will be considered later on. Future work will also address the algorithmic design of queries. In this sense, the ICAR model should also be seen as a mathematical foundation for establishing a database schema compatible with the defined categorical schema and associated categorical notions. In other words, the queries dealt with in this article will subsequently be extended in terms of query language (SQL), with the aim of providing a bidirectional dictionary between conceptual categorical queries and database queries.
2302.08937
Orthogonal Projection of Convex Sets with a Differentiable Boundary
Given an Euclidean space, this paper elucidates the topological link between the partial derivatives of the Minkowski functional associated to a set (assumed to be compact, convex, with a differentiable boundary and a non-empty interior) and the boundary of its orthogonal projection onto the linear subspaces of the Euclidean space. A system of equations for these orthogonal projections is derived from this topological link. This result is illustrated by the projection of the unit ball of norm $4$ in $\mathbb{R}^3$ on a plane.
Gustave Bainier, Benoit Marx, Jean-Christophe Ponsart
2023-02-17T15:22:18Z
http://arxiv.org/abs/2302.08937v4
# Orthogonal Projection of Convex Sets with a Differentiable Boundary

###### Abstract

Given a Euclidean space, this paper elucidates the topological link between the partial derivatives of the Minkowski functional associated to a set (assumed to be compact, convex, with a differentiable boundary and a non-empty interior) and the boundary of its orthogonal projection onto the linear subspaces of the Euclidean space. A system of equations for these orthogonal projections is derived from this topological link. This result is illustrated by the projection of the unit ball of norm \(4\) in \(\mathbb{R}^{3}\) on a plane.

_Keywords--_ orthogonal projection, Minkowski functional, convex analysis, topology, Euclidean space

_MSC codes--_ 52A20, 53A07

## 1 Introduction

In analytical geometry, given a family of curves \((\mathcal{C}_{t})_{t\in\mathbb{R}}\) defined on the plane \(\mathbb{R}^{2}\) by

\[\mathcal{C}_{t}:F(x,y,t)=0 \tag{1}\]

with \(F\) a differentiable function, the envelope of \((\mathcal{C}_{t})_{t\in\mathbb{R}}\) is defined as the set of points \((x,y)\in\mathbb{R}^{2}\) such that Eisenhart (1909); Pottmann and Peternell (2009)

\[\exists t\in\mathbb{R},\quad F(x,y,t)=0\quad\text{and}\quad\frac{\partial F}{\partial t}(x,y,t)=0 \tag{2}\]

The well-known envelope theorem, mainly used in economics and optimization Afriat (1971); Carter (2001); Milgrom and Segal (2002); Lofgren (2011), provides conditions for the envelope of a family of curves \((\mathcal{C}_{t})_{t\in\mathbb{R}}\) to coincide with a single curve tangent to all of the \(\mathcal{C}_{t}\). Under some circumstances, this curve is also the boundary of the region filled by \((\mathcal{C}_{t})_{t\in\mathbb{R}}\), and despite this characterization being visually clear (Figure 1), the authors have not been able to find a satisfying topological discussion on this matter in the literature Milnor (1997); Jottrand (2013). Now, given \(A\) a convex set of \(\mathbb{R}^{3}\) with a boundary characterized by \(F(x,y,z)=0\) where \(F\) is differentiable, one can intuitively see by the envelope theorem how characterizing the boundary of \(A\) projected along the \(z\)-axis onto the \(xy\)-plane relates to the partial derivative of \(F\) with respect to \(z\) vanishing (Figure 2). Moreover, the function \(F\) can be obtained from \(\mu_{A}\), the Minkowski functional associated with \(A\), usually with the relation \(F=\mu_{A}-1\) Luenberger (1968). In a more general setting, with \(E\) a Euclidean space and \(A\) a compact and convex set of \(E\) with a differentiable boundary and a non-empty interior, the aim of this document is to elucidate the link between the partial derivatives of \(\mu_{A}\) and the boundary of the orthogonal projection of \(A\) onto the linear subspaces of \(E\). Leveraging results from convex analysis Rockafellar (1970), a system of equations for the orthogonal projection of \(A\) onto any linear subspace of \(E\) is obtained. This is the main contribution of the document. The paper is organised as follows: first, in Section 2, the main definitions and notations used throughout the document are introduced. Then, in Section 3, preliminary results are derived from topology, convex analysis and properties of the Minkowski functional.
These results are applied in Section 4 to elucidate the topological link between the partial derivatives of \(\mu_{A}\) and the boundary of the projection of \(A\) onto the linear subspaces of \(E\), and a system of equations for the orthogonal projection of \(A\) onto the linear subspaces of \(E\) is obtained. Section 5 provides an illustrative example of the main result of this document by computing the projection of the unit ball of norm \(4\) in \(\mathbb{R}^{3}\) on a plane. Finally, Section 6 concludes the document with some application perspectives.

Figure 2: \(A\), the \(3\)-dimensional ellipsoid in red, is a convex and compact set of \(\mathbb{R}^{3}\). \(\partial p(A)\), the boundary of a \(2\)-dimensional ellipsoid with a blue outline, is the boundary of the projection of \(A\) along the \(z\)-axis onto the \(xy\)-plane represented in gray.

## 2 Definitions, Notations

\(\mathbb{R}\) denotes the field of real numbers. \(\mathbb{R}^{*}\) denotes \(\mathbb{R}\setminus\{0\}\). \(\mathbb{R}_{+}\) denotes \([0,+\infty)\). \(\mathbb{R}_{+}^{*}\) denotes \((0,+\infty)\). Let \(E\) denote a Hilbert space of finite dimension over \(\mathbb{R}\). \(E\) possesses an inner product \(\langle\cdot|\cdot\rangle\) which naturally induces a norm \(\|\cdot\|\) and a distance \(d(\cdot,\cdot)\) on \(E\). \(\mathcal{B}_{E}(x,r)\) denotes the open ball of \(E\) centered at \(x\) and of radius \(r\). Let \(A\) and \(B\) be two subsets of \(E\). \(A+B\) denotes the Minkowski sum of the two sets. \(\mathrm{conv}(A)\) denotes the convex hull of \(A\) in \(E\). \(\mathrm{span}(A)\) denotes the linear span of \(A\) in \(E\). \(\mathrm{int}_{E}(A)\), \(\mathrm{cl}_{E}(A)\) and \(\partial_{E}(A)\) denote respectively the interior, the closure and the boundary of \(A\) in \(E\). \(tA\) denotes the scaled set \(\{x\in E:x=ty,y\in A\}\). \(A\) is said to be absorbing if for all \(x\in E\) there exists \(t\in\mathbb{R}_{+}\) such that \(x\in tA\). Let \(\mathcal{V}\) and \(\mathcal{W}\) be two linear subspaces of \(E\). \(\mathcal{V}\oplus\mathcal{W}\) denotes the direct sum of \(\mathcal{V}\) and \(\mathcal{W}\). \(\mathcal{V}^{\perp}\) denotes the orthogonal complement of \(\mathcal{V}\) in \(E\). \(\mathrm{dim}(\mathcal{V})\) denotes the dimension of \(\mathcal{V}\). Let \(F\) be another Hilbert space of finite dimension over \(\mathbb{R}\) and let \(U\) be a subset of \(E\). \(\mathcal{C}^{0}(U,F)\) denotes the set of continuous maps from \(U\) to \(F\). \(\mathcal{C}^{1}(U,F)\) denotes the set of differentiable maps from \(U\) to \(F\) with a continuous derivative. Given \(L\) a linear map from \(E\) to \(F\), \(|||L|||\) denotes the operator norm of \(L\). Given \(f\in\mathcal{C}^{1}(U,\mathbb{R})\), \(\nabla f(x)\) denotes the gradient of \(f\) at \(x\). The Minkowski functional of \(A\) is the map \(\mu_{A}:E\rightarrow\mathbb{R}_{+}\) defined by \(\mu_{A}(x):=\inf\{t\in\mathbb{R}_{+}^{*}:x\in tA\}\). \(A\) has a differentiable boundary if \(\mu_{A}\in\mathcal{C}^{1}(E\setminus\{0\},\mathbb{R})\). Let \(z\in E\); the hyperplane \(H\) defined by \(H=\mathrm{Ker}(\langle z|\cdot\rangle)\) is called a supporting hyperplane of \(A\) at \(x\in\partial_{E}(A)\) if for all \(y\in E\), \(\mu_{A}(y)\geq\mu_{A}(x)+\langle z|y-x\rangle\). In the following, \(A\) always denotes a convex, bounded set of \(E\) with \(0\in\mathrm{int}_{E}(A)\) (hence \(A\) is absorbing).
For all \(x\in E\), there exists a unique \(x_{\mathcal{V}}\in\mathcal{V}\) and a unique \(x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\) such that \(x=x_{\mathcal{V}}+x_{\mathcal{V}^{\perp}}\). From now on, \(p_{\mathcal{V}}\) always denotes the map \(x\mapsto x_{\mathcal{V}}\), that is to say the orthogonal projection along \(\mathcal{V}^{\perp}\) onto \(\mathcal{V}\), with \(\mathcal{V}\neq\{0\}\).

## 3 Preliminary results

As stated in the introduction, the partial derivatives of the equation of the boundary of \(A\) (a notion of convex analysis) are related to the boundary of the orthogonal projection of \(A\) onto the linear subspaces of \(E\) (a topological consideration). The main purpose of these preliminary results is to draw a link from convex analysis to topology via the Minkowski functionals associated with \(A\). In particular, these preliminary results mainly focus on the link between the gradient of \(\mu_{A}\) and a topological characterization of the supporting hyperplanes of \(A\) (Corollary 3.1, Corollary 3.2 and Figure 3). The topological characterization of the supporting hyperplanes of \(A\) then provides a characterization of the boundary of the projection of \(A\) onto \(\mathcal{V}\) (Lemma 3.6 and Figure 5), which can finally be linked back to the gradient of \(\mu_{A}\). First, the following classical properties of the Minkowski functional are recalled.

**Property 3.1**.: _The Minkowski functional \(\mu_{A}\) satisfies:_

1. _For all_ \(x\in E\)_,_ \(0\leq\mu_{A}(x)<+\infty\)_,_
2. _For all_ \(x\in E\) _and_ \(t\in\mathbb{R}_{+}\)_,_ \(\mu_{A}(tx)=t\mu_{A}(x)\)_,_
3. _For all_ \(x_{1},x_{2}\in E\)_,_ \(\mu_{A}(x_{1}+x_{2})\leq\mu_{A}(x_{1})+\mu_{A}(x_{2})\)_,_
4. \(\mu_{A}\in\mathcal{C}^{0}(E,\mathbb{R}_{+})\)_,_
5. \(\mu_{A}^{-1}([0,1))=\mathrm{int}_{E}(A)\)_,_ \(\mu_{A}^{-1}([0,1])=\mathrm{cl}_{E}(A)\)_,_ \(\mu_{A}^{-1}(\{1\})=\partial_{E}(A)\)_._

Proof.: See Lemma 1 at pages 131-132 of Luenberger (1968).

In particular, items 2 and 3 combined provide the fact that \(\mu_{A}\) is a convex function on \(E\). Together with item 5, this establishes a first link between convex analysis and topology. Considering that \(A\) has a differentiable boundary (i.e. \(\mu_{A}\in\mathcal{C}^{1}(E\setminus\{0\},\mathbb{R})\)), uniqueness of the supporting hyperplanes of \(A\) is demonstrated using the following result from convex analysis.

**Property 3.2**.: _Let \(f\in\mathcal{C}^{1}(U,\mathbb{R})\) be a convex function. For all \(x\in E\), we have:_

\[\{z\in E\,:\,\forall y\in U,\,f(y)\geq f(x)+\langle z|y-x\rangle\}=\{\nabla f(x)\} \tag{3}\]

Proof.: See Theorem 25.1 at page 242 of Rockafellar (1970).

**Remark 3.1**.: _The set on the left-hand side of (3) contains the subgradients of \(f\) at \(x\) and is not necessarily a singleton when \(f\) is not differentiable at \(x\)._

Indeed, if \(A\) has a differentiable boundary, then \(\mu_{A}\) is a \(\mathcal{C}^{1}\) convex function, and if for all \(y\in E\), \(\mu_{A}(y)\geq\mu_{A}(x)+\langle z|y-x\rangle\), then \(\mathrm{Ker}(\langle z|\cdot\rangle)\) is by definition a supporting hyperplane of \(A\) at \(x\in\partial_{E}(A)\). Uniqueness of the supporting hyperplanes of \(A\) is obtained from the uniqueness of such a \(z\). This links the supporting hyperplanes of \(A\) with the gradient of \(\mu_{A}\) (Figure 3(a)).
**Corollary 3.1** (The gradient characterization).: _If \(A\) has a differentiable boundary, then there is only one supporting hyperplane of \(A\) at \(x\in\partial_{E}(A)\): it is the hyperplane orthogonal to \(\nabla\mu_{A}(x)\). From now on, this supporting hyperplane is denoted \(H_{x}(A)\). Formally, for all \(x\in\partial_{E}(A)\), the following holds:_

\[H_{x}(A)=\operatorname{Ker}(\langle\nabla\mu_{A}(x)|\cdot\rangle) \tag{4}\]

Figure 3: Illustration of the gradient characterization (Corollary 3.1) and of the topological characterization (Corollary 3.2) of the supporting hyperplane of \(A\) at \(x\in\partial_{E}(A)\) when \(A\) has a differentiable boundary.

Now that the supporting hyperplane of \(A\) at \(x\) is linked with the gradient of \(\mu_{A}\) at \(x\), the previous results are leveraged to obtain a topological characterization of the supporting hyperplanes of \(A\). For a convex shape with a differentiable boundary, the supporting hyperplane at a boundary point of this shape is the only hyperplane that, once translated to this point, does not intersect the interior of the shape (Figure 3(b)). Lemma 3.1 provides the fact that a supporting hyperplane of \(A\) never intersects the interior of \(A\), and Lemma 3.3 provides the fact that, if \(A\) has a differentiable boundary, then any affine vector line going through \(x\in\partial_{E}(A)\) that is not included in the supporting hyperplane of \(A\) at \(x\) will cross the interior of \(A\).

**Lemma 3.1**.: _If \(H\) is a supporting hyperplane of \(A\) at \(x\in\partial_{E}(A)\), then \((H+\{x\})\cap\operatorname{int}_{E}(A)=\emptyset\)._

Proof.: By definition of the supporting hyperplane, for all \(h\in H\) the inequality \(\mu_{A}(x+h)\geq\mu_{A}(x)\) holds. Moreover, since \(x\in\partial_{E}(A)\), then \(\mu_{A}(x)=1\), which provides \(\mu_{A}(x+h)\geq 1\), hence \((H+\{x\})\subseteq\mu_{A}^{-1}([1,+\infty))\), yet \(\operatorname{int}_{E}(A)=\mu_{A}^{-1}([0,1))\).

**Lemma 3.2**.: _By parallelism, if \((H+\{x\})\cap\operatorname{int}_{E}(A)=\emptyset\), then \((H+\{x\})\cap(H+\operatorname{int}_{E}(A))=\emptyset\) as well._

Proof.: This statement is proved by contraposition. If there exists \(y\in(H+\{x\})\cap(H+\operatorname{int}_{E}(A))\), then there exist \(z\in\operatorname{int}_{E}(A)\) and \(h_{1},h_{2}\in H\) such that \(y=x+h_{1}=z+h_{2}\), providing \(z=x+(h_{1}-h_{2})\) where \((h_{1}-h_{2})\in H\), hence \(z\in(H+\{x\})\cap\operatorname{int}_{E}(A)\).

**Lemma 3.3**.: _Suppose \(A\) has a differentiable boundary. If \(v\notin H_{x}(A)\), then \((\operatorname{span}(v)+\{x\})\cap\operatorname{int}_{E}(A)\neq\emptyset\)._

Proof.: This statement is proved by contraposition. Suppose \((\operatorname{span}(v)+\{x\})\cap\operatorname{int}_{E}(A)=\emptyset\) and consider the function \(\phi(t)=\mu_{A}(x+tv)\). Since \(\mu_{A}\in\mathcal{C}^{1}(E\setminus\{0\},\mathbb{R}_{+})\) is a convex function, then \(\phi\in\mathcal{C}^{1}(\mathbb{R},\mathbb{R}_{+})\) is convex as well. Moreover, since \((\operatorname{span}(v)+\{x\})\cap\operatorname{int}_{E}(A)=\emptyset\), then for all \(t\in\mathbb{R}\), \(\phi(t)\geq 1\). Yet \(\phi(0)=1\): \(t=0\) is therefore a minimum for \(\phi\), which implies \(\phi^{\prime}(0)=0\). However, \(\phi^{\prime}(0)=\langle\nabla\mu_{A}(x)|v\rangle\), hence \(v\in\operatorname{Ker}(\langle\nabla\mu_{A}(x)|\cdot\rangle)\).
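As a concrete illustration of Corollary 3.1 (this worked computation is ours and anticipates the example of Section 5), consider \(A\) the unit ball of norm \(4\) in \(\mathbb{R}^{3}\). A direct computation gives

\[\mu_{A}(x)=\left(x_{1}^{4}+x_{2}^{4}+x_{3}^{4}\right)^{1/4},\qquad\nabla\mu_{A}(x)=\frac{1}{\mu_{A}(x)^{3}}\begin{pmatrix}x_{1}^{3}\\ x_{2}^{3}\\ x_{3}^{3}\end{pmatrix}\quad(x\neq 0),\]

so that at a boundary point \(x\in\partial_{E}(A)\) (where \(\mu_{A}(x)=1\)) the supporting hyperplane is \(H_{x}(A)=\operatorname{Ker}\left(\langle(x_{1}^{3},x_{2}^{3},x_{3}^{3})|\cdot\rangle\right)\).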
From Lemmas 3.1 and 3.3, the following necessary and sufficient condition can be stated, providing a topological characterization of the supporting hyperplanes (Figure 3(b)) on top of their analytical one (obtained in Corollary 3.1):

**Corollary 3.2** (The topological characterization).: _If \(A\) has a differentiable boundary, then \(H_{x}(A)\) contains exactly the directions coming from \(x\) that never intersect the interior of \(A\). Formally, for all \(x\in\partial_{E}(A)\), the following holds:_

\[H_{x}(A)=\{v\in E\,:\,(\operatorname{span}(v)+\{x\})\cap\operatorname{int}_{E}(A)=\emptyset\} \tag{5}\]

Before linking the topological characterization of the supporting hyperplanes of \(A\) with the boundary of the projection \(p_{\mathcal{V}}(A)\) of \(A\) onto \(\mathcal{V}\), two topological results on the orthogonal projection of \(A\) are stated. The first one simply states that the interior of the projection of \(A\) is the projection of the interior of \(A\) (Lemma 3.4). The second one states that the projection of the closure of \(A\) is also the projection of the boundary of \(A\) (Lemma 3.5). Both are easy to understand visually with the help of Figure 4.

**Lemma 3.4**.: _If \(A\) has a differentiable boundary, then \(p_{\mathcal{V}}(\operatorname{int}_{E}(A))=\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A))\)._

Proof.: This statement is proved by double inclusion.

\(\subseteq\) This inclusion is a direct consequence of \(p_{\mathcal{V}}\) being an open map from \(E\) to \(\mathcal{V}\).

\(\supseteq\) Let \(y\in\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A))\), and \(x\in A\) such that \(p_{\mathcal{V}}(x)=y\). If \(x\in\operatorname{int}_{E}(A)\) there is nothing to prove. If \(x\in\partial_{E}(A)\), the following will show by contradiction that \((\mathcal{V}^{\perp}+\{x\})\cap\operatorname{int}_{E}(A)\neq\emptyset\), which, thanks to Lemma 3.3, is equivalent to the existence of \(v\in\mathcal{V}^{\perp}\) such that \(v\notin H_{x}(A)\). By contradiction, it is assumed that \(\mathcal{V}^{\perp}\subseteq H_{x}(A)\). By the hyperplane separation theorem, \(A\) is contained on one side of \(H_{x}(A)+\{x\}\), hence there is \(v\in H_{x}(A)^{\perp}\setminus\{0\}\) such that for all \(t\in\mathbb{R}_{+}^{*}\), \(x+tv\notin A+H_{x}(A)\), therefore \(x+tv\notin A+\mathcal{V}^{\perp}\), and finally \(p_{\mathcal{V}}(x+tv)\notin p_{\mathcal{V}}(A)\). However, since \(\mathcal{V}^{\perp}\subseteq H_{x}(A)\), then \(v\in\mathcal{V}\), hence \(p_{\mathcal{V}}(x+tv)=y+tv\). Since \(y\in\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A))\), by definition of the interior there exists \(\delta\in\mathbb{R}_{+}^{*}\) such that \(\mathcal{B}_{\mathcal{V}}(y,\delta)\subseteq p_{\mathcal{V}}(A)\), so in particular there exists \(\epsilon\in(0,\delta)\) such that \(p_{\mathcal{V}}(x+\epsilon v)=y+\epsilon v\in p_{\mathcal{V}}(A)\), which contradicts that for all \(t\in\mathbb{R}_{+}^{*}\), \(p_{\mathcal{V}}(x+tv)\notin p_{\mathcal{V}}(A)\). Finally, \((\mathcal{V}^{\perp}+\{x\})\cap\operatorname{int}_{E}(A)\neq\emptyset\).

**Lemma 3.5**.: _The following equality holds: \(p_{\mathcal{V}}(\mathrm{cl}_{E}(A))=p_{\mathcal{V}}(\partial_{E}(A))\)._

Proof.: This statement is proved by double inclusion.

\(\subseteq\) Let \(y\in p_{\mathcal{V}}(\mathrm{cl}_{E}(A))\), and \(x\in\mathrm{cl}_{E}(A)\) such that \(y=p_{\mathcal{V}}(x)\). If \(x\in\partial_{E}(A)\) there is nothing to prove.
If \(x\in\mathrm{int}_{E}(A)\), by definition of the interior there exists \(\epsilon\in\mathbb{R}_{+}^{*}\) such that \(\mathcal{B}_{E}(x,\epsilon)\subseteq A\). Let \(v\in\big(\mathcal{B}_{E}(0,\epsilon)\cap\mathcal{V}^{\perp}\big)\setminus\{0\}\), which guarantees \(x+v\in\mathrm{int}_{E}(A)\). Since \(A\) is bounded, there exists \(t\in(1,+\infty)\) such that \(x+tv\notin\mathrm{cl}_{E}(A)\). Considering the Minkowski functional \(\mu_{A+\{-x\}}\), \(x+v\in\mathrm{int}_{E}(A)\) translates to \(\mu_{A+\{-x\}}(v)<1\), and \(x+tv\notin\mathrm{cl}_{E}(A)\) translates to \(\mu_{A+\{-x\}}(tv)>1\). By continuity of \(\mu_{A+\{-x\}}\), the intermediate value theorem provides the existence of \(t^{*}\in(1,t)\) such that \(\mu_{A+\{-x\}}(t^{*}v)=1\), hence \(x+t^{*}v\in\partial_{E}(A)\). Moreover \(x+t^{*}v\in\mathcal{V}^{\perp}+\{y\}\), meaning \(p_{\mathcal{V}}(x+t^{*}v)=y\) (Figure 4).

\(\supseteq\) This inclusion is a direct consequence of the inclusion \(\partial_{E}(A)\subseteq\mathrm{cl}_{E}(A)\).

With the help of the previous results, the supporting hyperplanes relation to the boundary of the orthogonal projection of \(A\) onto \(\mathcal{V}\) can be formally stated. Intuitively, when \(y\in\mathcal{V}\) is at the boundary of \(p_{\mathcal{V}}(A)\), the supporting hyperplane at the pre-image of \(y\) by \(p_{\mathcal{V}}\) includes \(\mathcal{V}^{\perp}\), the direction of the projection. Reciprocally, when there is such an alignment, that is to say when \(\mathcal{V}^{\perp}\) is contained in the supporting hyperplane of the pre-image of \(y\) by \(p_{\mathcal{V}}\), then \(y\in\mathcal{V}\) is at the boundary of \(p_{\mathcal{V}}(A)\) (see Figure 5). More exactly, the following Lemma holds.

Figure 5: Illustration of the supporting hyperplanes relation to the orthogonal projection of a convex shape. This relation is formalized in Lemma 3.6.

**Lemma 3.6**.: _Let \(A\) be closed and have a differentiable boundary. If \(y\in p_{\mathcal{V}}(A)\), then the following statements are equivalent:_

1. \(y\in\partial_{\mathcal{V}}(p_{\mathcal{V}}(A))\)_,_
2. \(\{x\in\partial_{E}(A):p_{\mathcal{V}}(x)=y\}\) _is convex,_
3. \(\exists x\in\partial_{E}(A)\,|\,\begin{cases}p_{\mathcal{V}}(x)=y\\ \mathcal{V}^{\perp}\subseteq H_{x}(A)\end{cases}\)

Proof.: This statement is proved by a circular chain of implications. The notation \(B=\{x\in\partial_{E}(A):p_{\mathcal{V}}(x)=y\}\) is used in this proof as a shorthand.

\((1)\Rightarrow(2)\) This implication is proved by contraposition. Suppose \(B\) is not convex, hence there exists \(z\in\operatorname{conv}(B)\setminus B\). The following equalities hold:

\[\begin{split}\operatorname{conv}(B)\setminus B&=\operatorname{conv}\{x\in\partial_{E}(A):p_{\mathcal{V}}(x)=y\}\setminus\{x\in\partial_{E}(A):p_{\mathcal{V}}(x)=y\}\\ &=\{x\in\operatorname{cl}_{E}(A):p_{\mathcal{V}}(x)=y\}\setminus\{x\in\partial_{E}(A):p_{\mathcal{V}}(x)=y\}\\ &=\{x\in\operatorname{int}_{E}(A):p_{\mathcal{V}}(x)=y\}\end{split} \tag{6}\]

This provides \(z\in\operatorname{int}_{E}(A)\) with \(p_{\mathcal{V}}(z)=y\). By definition of the interior, there exists \(\epsilon\in\mathbb{R}_{+}^{*}\) such that \(\mathcal{B}_{E}(z,\epsilon)\subseteq A\). For all \(h\in\mathcal{B}_{E}(0,\epsilon)\), \(p_{\mathcal{V}}(z+h)=y+p_{\mathcal{V}}(h)\), and since \(|||p_{\mathcal{V}}|||=1\), then \(p_{\mathcal{V}}(h)\in\mathcal{B}_{\mathcal{V}}(0,\epsilon)\), hence \(p_{\mathcal{V}}(\mathcal{B}_{E}(z,\epsilon))\subseteq\mathcal{B}_{\mathcal{V}}(y,\epsilon)\subseteq p_{\mathcal{V}}(A)\).
This finally provides \(y\in\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A))\).

\((2)\Rightarrow(3)\) Since \(A\) is closed, then, by Lemma 3.5, \(y\in p_{\mathcal{V}}(\partial_{E}(A))\), hence \(B\neq\emptyset\). Let \(x\in B\) and \(v\in\mathcal{V}^{\perp}\). The following will show by contradiction that for all \(t\in\mathbb{R}\), \(x+tv\notin\operatorname{int}_{E}(A)\). Suppose without loss of generality that there exists \(t\in\mathbb{R}_{+}^{*}\) such that \(x+tv\in\operatorname{int}_{E}(A)\). Since \(A\) is bounded, with the help of the intermediate value theorem (similarly to Lemma 3.5), there exists \(t^{*}\in(1,+\infty)\) such that \(x+t^{*}tv\in\partial_{E}(A)\). This provides \(x\in B\), \(x+t^{*}tv\in B\), and \(x+tv\notin B\), yet \(B\) should be convex, so there is a contradiction (Figure 4). This provides \((\operatorname{span}(v)+\{x\})\cap\operatorname{int}_{E}(A)=\emptyset\), hence by Corollary 3.2, \(v\in H_{x}(A)\).

\((3)\Rightarrow(1)\) Let \(x\in\partial_{E}(A)\) be such that \(p_{\mathcal{V}}(x)=y\) and \(\mathcal{V}^{\perp}\subseteq H_{x}(A)\). Lemma 3.2 provides \((H_{x}(A)+\{x\})\cap(H_{x}(A)+\operatorname{int}_{E}(A))=\emptyset\), hence \((\mathcal{V}^{\perp}+\{x\})\cap(\mathcal{V}^{\perp}+\operatorname{int}_{E}(A))=\emptyset\). Moreover, the following equalities hold:

\[\begin{split}(\mathcal{V}^{\perp}+\{x\})\cap(\mathcal{V}^{\perp}+\operatorname{int}_{E}(A))&=p_{\mathcal{V}}^{-1}(\{y\})\cap p_{\mathcal{V}}^{-1}(p_{\mathcal{V}}(\operatorname{int}_{E}(A)))\\ &=p_{\mathcal{V}}^{-1}(\{y\})\cap p_{\mathcal{V}}^{-1}(\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A)))\quad\text{[Lemma 3.4]}\\ &=p_{\mathcal{V}}^{-1}(\{y\}\cap\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A)))\end{split} \tag{7}\]

Hence \(\{y\}\cap\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A))=\emptyset\), that is to say \(y\notin\operatorname{int}_{\mathcal{V}}(p_{\mathcal{V}}(A))\), providing \(y\in\partial_{\mathcal{V}}(p_{\mathcal{V}}(A))\).

Lastly, the projection of \(A\) onto \(\mathcal{V}\) can be seen as the union of the boundaries of the projections of \(tA\) onto \(\mathcal{V}\) with \(t\in[0,1]\) (see Figure 6). In the next section, the following Lemma will provide a way to go from a statement on the boundary of the projection to a statement on the whole projection \(p_{\mathcal{V}}(A)\).

**Lemma 3.7**.: _The following equality holds: \(p_{\mathcal{V}}(\operatorname{cl}_{E}(A))=\bigcup_{t\in[0,1]}\partial_{\mathcal{V}}(p_{\mathcal{V}}(\operatorname{cl}_{E}(tA)))\)._

Proof.: \(\mu_{p_{\mathcal{V}}(\operatorname{cl}_{E}(A))}\) denotes the Minkowski functional of \(p_{\mathcal{V}}(\operatorname{cl}_{E}(A))\) defined over \(\mathcal{V}\).
The following equalities hold: \[\begin{split} p_{\mathcal{V}}(\operatorname{cl}_{E}(A))&=\operatorname{cl}_{\mathcal{V}}(p_{\mathcal{V}}(\operatorname{cl}_{E}(A)))\qquad\qquad[p_{\mathcal{V}}\text{ continuous}]\\ &=\mu_{p_{\mathcal{V}}(\operatorname{cl}_{E}(A))}^{-1}([0,1])\\ &=\bigcup_{t\in[0,1]}\mu_{p_{\mathcal{V}}(\operatorname{cl}_{E}(A))}^{-1}(\{t\})\\ &=\bigcup_{t\in[0,1]}\partial_{\mathcal{V}}(tp_{\mathcal{V}}(\operatorname{cl}_{E}(A)))\\ &=\bigcup_{t\in[0,1]}\partial_{\mathcal{V}}(p_{\mathcal{V}}(t\operatorname{cl}_{E}(A)))\quad[\text{linearity of }p_{\mathcal{V}}]\\ p_{\mathcal{V}}(\operatorname{cl}_{E}(A))&=\bigcup_{t\in[0,1]}\partial_{\mathcal{V}}(p_{\mathcal{V}}(\operatorname{cl}_{E}(tA)))\end{split} \tag{8}\]

## 4 Characterization of the orthogonal projection of a convex set with a differentiable boundary

The main result of this document consists in obtaining a system of equations that characterizes the orthogonal projection of the closure of \(A\) on a linear subspace \(\mathcal{V}\neq\{0\}\) when \(A\) has a differentiable boundary. To obtain this system of equations, the following Minkowski functional of two variables is introduced: \[\begin{split}\eta_{A}:\mathcal{V}\times\mathcal{V}^{\perp}&\rightarrow\mathbb{R}\\ \left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)&\mapsto\mu_{A}\left(x_{\mathcal{V}}+x_{\mathcal{V}^{\perp}}\right)\end{split} \tag{9}\] From now on, \(\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}}}\) denotes the partial derivative of \(\eta_{A}\) with respect to \(x_{\mathcal{V}}\) and \(\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}\perp}}\) denotes the partial derivative of \(\eta_{A}\) with respect to \(x_{\mathcal{V}^{\perp}}\). The link between the partial derivatives of \(\eta_{A}\) and the boundary of the orthogonal projection of \(A\) onto the linear subspaces of \(E\) is explicitly written and leveraged in the proof of this characterization.

Figure 6: Illustration of Lemma 3.7, where \(A\) is assumed to be closed

**Theorem 4.1**.: _If \(A\) is a compact and convex set of \(E\) with a differentiable boundary and \(0\in\operatorname{int}_{E}(A)\), then, for every projection \(p_{\mathcal{V}}\) such that \(\mathcal{V}\neq\{0\}\), the following equality holds:_ \[p_{\mathcal{V}}(A)=\left\{x_{\mathcal{V}}\in\mathcal{V}\,:\,\exists x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\,|\,\begin{cases}\eta_{A}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)\leq 1\\ x_{\mathcal{V}}+x_{\mathcal{V}^{\perp}}\neq 0\Rightarrow\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)=0\end{cases}\right\} \tag{10}\] Proof.: The sets \(\partial_{\mathcal{V}}(p_{\mathcal{V}}(tA))\) are first characterized for each \(t\in[0,1]\). If \(t=0\) then \(tA=\{0\}=p_{\mathcal{V}}(tA)\), hence for all \(x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\), the equality \(\eta_{A}(0,x_{\mathcal{V}^{\perp}})=\mu_{p_{\mathcal{V}}(A)}(0)\) holds, and there is nothing to prove. If \(t\in\mathbb{R}_{+}^{*}\), thanks to Lemma 3.6, the following equivalence holds: \[y\in\partial_{\mathcal{V}}(p_{\mathcal{V}}(tA))\Leftrightarrow\exists x\in\partial_{E}(tA)\,|\,\begin{cases}p_{\mathcal{V}}(x)=y\\ \mathcal{V}^{\perp}\subseteq H_{x}(tA)\end{cases} \tag{11}\] For all \(x\in\partial_{E}(tA)\), \(H_{x}(tA)=\operatorname{Ker}(\langle\nabla\mu_{tA}(x)|\cdot\rangle)\), and since \(t\neq 0\), then \(\nabla\mu_{tA}(x)=\nabla\mu_{A}(x)\). 
Moreover for all \(h\in E\), \(x_{\mathcal{V}},h_{\mathcal{V}}\in\mathcal{V}\) and \(x_{\mathcal{V}^{\perp}},h_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\) such that \(x=x_{\mathcal{V}}+x_{\mathcal{V}^{\perp}}\) and \(h=h_{\mathcal{V}}+h_{\mathcal{V}^{\perp}}\), the following equality holds: \[\langle\nabla\mu_{A}(x)|h\rangle=\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}}}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)h_{\mathcal{V}}+\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)h_{\mathcal{V}^{\perp}} \tag{12}\] hence the following equivalences hold: \[y\in\partial_{\mathcal{V}}(p_{\mathcal{V}}(tA)) \Leftrightarrow\exists x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\,|\,\begin{cases}y+x_{\mathcal{V}^{\perp}}\in\partial_{E}(tA)\\ \frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}(y,x_{\mathcal{V}^{\perp}})=0\end{cases} \tag{13}\] \[\text{i.e.}\,\,y\in\partial_{\mathcal{V}}(p_{\mathcal{V}}(tA)) \Leftrightarrow\exists x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\,|\,\begin{cases}\eta_{A}(y,x_{\mathcal{V}^{\perp}})=t\\ \frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}(y,x_{\mathcal{V}^{\perp}})=0\end{cases}\] For \(t=1\), this last equivalence provides the link between the partial derivatives of \(\eta_{A}\) and the boundary of the orthogonal projection of \(A\) onto the linear subspaces of \(E\). Finally, Lemma 3.7 provides: \[\begin{split} p_{\mathcal{V}}(A)&=p_{\mathcal{V}}(\operatorname{cl}_{E}(A))\\ &=\bigcup_{t\in[0,1]}\partial_{\mathcal{V}}(p_{\mathcal{V}}(\operatorname{cl}_{E}(tA)))\\ &=\left\{x_{\mathcal{V}}\in\mathcal{V}\,:\,\exists t\in[0,1],\ \exists x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\,|\,\begin{cases}\eta_{A}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)=t\\ x_{\mathcal{V}}+x_{\mathcal{V}^{\perp}}\neq 0\Rightarrow\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)=0\end{cases}\right\}\\ p_{\mathcal{V}}(A)&=\left\{x_{\mathcal{V}}\in\mathcal{V}\,:\,\exists x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\,|\,\begin{cases}\eta_{A}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)\leq 1\\ x_{\mathcal{V}}+x_{\mathcal{V}^{\perp}}\neq 0\Rightarrow\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}\left(x_{\mathcal{V}},x_{\mathcal{V}^{\perp}}\right)=0\end{cases}\right\}\end{split} \tag{14}\] which concludes the proof. Given a compact and convex set of \(E\) with a differentiable boundary and a non-empty interior, there exists a translation so that the origin of \(E\) is in the interior of the translated set, hence this new set is absorbing. Given a suitable translation of \(A\), the main result of this document can therefore be extended without difficulty to a more general setting where \(A\) simply denotes a compact and convex set of \(E\) with a differentiable boundary and a non-empty interior. **Corollary 4.1**.: _Keeping the assumptions of Theorem 4.1, the following equality holds:_ \[\mu_{p_{\mathcal{V}}(A)}(x)=\begin{cases}\inf\left\{t\in\mathbb{R}_{+}^{*}\,:\,\exists x_{\mathcal{V}^{\perp}}\in\mathcal{V}^{\perp}\,|\,\begin{cases}\eta_{A}\left(x,x_{\mathcal{V}^{\perp}}\right)\leq t\\ x+x_{\mathcal{V}^{\perp}}\neq 0\Rightarrow\,\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}\left(x,x_{\mathcal{V}^{\perp}}\right)=0\end{cases}\right\} &\text{ if }x\in\mathcal{V}\\ +\infty&\text{ if }x\notin\mathcal{V}\end{cases} \tag{15}\] _Moreover, if \(V\) and \(V^{\perp}\) denote the matrices whose columns are resp. formed by \((v_{1},\ldots,v_{m})\) a basis of \(\mathcal{V}\) and \((v_{m+1},\ldots,v_{n})\) a basis of \(\mathcal{V}^{\perp}\), then the following equality holds:_ \[\mu_{PA}(y)=\mu_{p_{\mathcal{V}}(A)}(Vy) \tag{16}\] _with \(P=\left[\begin{array}{cc}I_{m}&0\end{array}\right]\left[\begin{array}{cc}V&V^{\perp}\end{array}\right]^{-1}\) and where \(y\in\mathbb{R}^{m}\) is expressed in the \((v_{1},\ldots,v_{m})\) basis._ Proof.: Equation (15) is easily derived by replacing the interval \([0,1]\) by the interval \([0,t]\) in the proof of Theorem 4.1. Equation (16) is a trivial consequence of \(p_{\mathcal{V}}(A)=VPA\). 
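Since \(\mu_{A}\) is convex, the stationarity condition \(\frac{\partial\eta_{A}}{\partial x_{\mathcal{V}^{\perp}}}=0\) at fixed \(x_{\mathcal{V}}\) selects the minimizer of \(\eta_{A}\) over the fiber \(x_{\mathcal{V}}+\mathcal{V}^{\perp}\), so the membership test of Theorem 4.1 reduces to a minimization over \(\mathcal{V}^{\perp}\). Below is a minimal numerical sketch of this test, assuming NumPy and SciPy are available; the ellipsoidal gauge and all names are our illustrative choices, not part of the original text.

```python
import numpy as np
from scipy.optimize import minimize

# Gauge (Minkowski functional) of an illustrative absorbing convex set A:
# an axis-aligned ellipsoid; any convex gauge with 0 in int(A) would do.
def mu_A(x):
    a = np.array([1.0, 2.0, 0.5])        # semi-axes of the ellipsoid
    return np.sqrt(np.sum((x / a) ** 2))

# Orthonormal bases of V (the projection target) and of V^perp.
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # V = xy-plane
Vp = np.array([[0.0], [0.0], [1.0]])                 # V^perp = z-axis

def eta_A(xV, xVp):
    # eta_A of Eq. (9), written in coordinates over the two bases
    return mu_A(V @ xV + Vp @ xVp)

def mu_proj(xV):
    # By convexity, the stationary point of Theorem 4.1 is the minimizer
    # of eta_A over the fiber xV + V^perp, so Eq. (15) reduces to a min.
    res = minimize(lambda w: eta_A(xV, w), x0=np.zeros(1))
    return res.fun

y = np.array([0.5, 0.5])
print(y, "in p_V(A):", mu_proj(y) <= 1.0)
```

For the ellipsoid above, the projection onto the \(xy\)-plane is the ellipse with semi-axes \(1\) and \(2\), so the printed test agrees with the closed-form answer.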
## 5 Illustrative example As an illustrative example of Theorem 4.1, this section of the document provides an implicit parametric equation for the projection of the unit ball of norm \(4\) of \(\mathbb{R}^{3}\) (denoted \(A\)) onto the plane \(H:x+y+z=0\). The Minkowski functional of \(A\) is given by \[\mu_{A}(x,y,z)=\sqrt[4]{x^{4}+y^{4}+z^{4}} \tag{17}\] After the orthonormal change of basis \[\left[\begin{array}{c}x\\ y\\ z\end{array}\right]=\left[\begin{array}{rr}0&\sqrt{2/3}&1/\sqrt{3}\\ 1/\sqrt{2}&-1/\sqrt{6}&1/\sqrt{3}\\ -1/\sqrt{2}&-1/\sqrt{6}&1/\sqrt{3}\end{array}\right]\left[\begin{array}{c}u\\ v\\ w\end{array}\right] \tag{18}\] where \(w\) is chosen such that \(H:w=0\), the function \(\eta\) is introduced \[\eta_{A}(u,v,w)=\mu_{A}\left(\sqrt{\frac{2}{3}}v+\frac{1}{\sqrt{3}}w,\frac{1}{\sqrt{2}}u-\frac{1}{\sqrt{6}}v+\frac{1}{\sqrt{3}}w,-\frac{1}{\sqrt{2}}u-\frac{1}{\sqrt{6}}v+\frac{1}{\sqrt{3}}w\right) \tag{19}\] For all \((u,v,w)\neq(0,0,0)\), its partial derivative with respect to \(w\) is given by \[\begin{split}\frac{\partial\eta_{A}}{\partial w}(u,v,w)&=\frac{\partial}{\partial w}\left[\sqrt[4]{\left[\sqrt{\frac{2}{3}}v+\frac{1}{\sqrt{3}}w\right]^{4}+\left[\frac{1}{\sqrt{2}}u-\frac{1}{\sqrt{6}}v+\frac{1}{\sqrt{3}}w\right]^{4}+\left[-\frac{1}{\sqrt{2}}u-\frac{1}{\sqrt{6}}v+\frac{1}{\sqrt{3}}w\right]^{4}}\right]\\ &=\left[\frac{1}{3}w^{3}+(u^{2}+v^{2})w-\frac{\sqrt{2}}{2}u^{2}v+\frac{\sqrt{2}}{6}v^{3}\right]\eta_{A}^{-3}(u,v,w)\end{split} \tag{20}\] Since \(\eta_{A}^{-3}(u,v,w)>0\), studying \(w\) such that \(\frac{\partial\eta_{A}}{\partial w}(u,v,w)=0\) is equivalent to the study of the solutions to the depressed cubic equation \[X^{3}+3(u^{2}+v^{2})X+\frac{\sqrt{2}}{2}v\left(v^{2}-3u^{2}\right)=0 \tag{21}\] whose discriminant is given by \[\Delta=-\left(108(u^{2}+v^{2})^{3}+\frac{27}{2}v^{2}\left(v^{2}-3u^{2}\right)^{2}\right) \tag{22}\] It is easily verified that \(\Delta\leq 0\), hence there is only one real root \(w^{*}\) satisfying (21), and it is given by Cardano's formula van der Waerden [2003] \[w^{*}=\sqrt[3]{-\frac{\sqrt{2}}{4}v\left(v^{2}-3u^{2}\right)-\sqrt{\delta(u,v)}}+\sqrt[3]{-\frac{\sqrt{2}}{4}v\left(v^{2}-3u^{2}\right)+\sqrt{\delta(u,v)}} \tag{23}\] where \[\delta(u,v)=\frac{1}{8}v^{2}\left(v^{2}-3u^{2}\right)^{2}+(u^{2}+v^{2})^{3} \tag{24}\] Finally, Theorem 4.1 provides that the projection of \(A\) onto \(H\) is given by the \((u,v)\in\mathbb{R}^{2}\) satisfying \[\eta_{A}\left(u,v,\sqrt[3]{-\frac{\sqrt{2}}{4}v\left(v^{2}-3u^{2}\right)-\sqrt{\delta(u,v)}}+\sqrt[3]{-\frac{\sqrt{2}}{4}v\left(v^{2}-3u^{2}\right)+\sqrt{\delta(u,v)}}\right)\leq 1 \tag{25}\] which is plotted in Figure 7 below.

Figure 7: Shape of the projection of the unit ball of norm 4 of \(\mathbb{R}^{3}\) onto the plane \(H:x+y+z=0\)

## 6 Conclusion In this study, the topological link between the partial derivatives of the Minkowski functional associated with \(A\) (a compact and convex set of a Euclidean space \(E\)) and the boundary of the projection of \(A\) onto the linear subspaces of \(E\) was elucidated. This topological link provided a system of equations for the orthogonal projection of \(A\) onto the linear subspaces of \(E\). Some applications of these results can be found in engineering, in particular in fault detection schemes for the diagnosis of dynamical systems. Indeed, model-based fault detection consists in identifying when a fault occurs in a dynamical system by analysing the discrepancies between the system inputs and outputs and their expected values provided by the model Ding [2013]. 
These discrepancies are generally used to generate residual signals for system diagnosis. However, these signals being subject to the system perturbations and to measurement noises, one of the challenges of fault detection is to distinguish the unavoidable noise from an actual fault in the process Basseville and Nikiforov (1993); Whalen (2013). For example, residuals obtained using a parity space approach to fault detection are generally affected simply by a projection of this noise; hence, knowing a shape bounding the noise, an exact threshold to detect a fault could be the boundary of the projection of this bounding shape.
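As a numerical complement to the Section 5 example, the closed-form root (23) can be evaluated on a grid to reproduce the region (25) plotted in Figure 7. The sketch below assumes NumPy; the grid bounds and resolution are arbitrary choices.

```python
import numpy as np

def eta(u, v, w):
    # Minkowski functional (17) after the orthonormal change of basis (18)
    x = np.sqrt(2.0 / 3.0) * v + w / np.sqrt(3.0)
    y = u / np.sqrt(2.0) - v / np.sqrt(6.0) + w / np.sqrt(3.0)
    z = -u / np.sqrt(2.0) - v / np.sqrt(6.0) + w / np.sqrt(3.0)
    return (x**4 + y**4 + z**4) ** 0.25

def w_star(u, v):
    # Unique real root (23) of the depressed cubic (21), via Cardano
    delta = v**2 * (v**2 - 3 * u**2) ** 2 / 8.0 + (u**2 + v**2) ** 3
    q = -np.sqrt(2.0) / 4.0 * v * (v**2 - 3 * u**2)
    return np.cbrt(q - np.sqrt(delta)) + np.cbrt(q + np.sqrt(delta))

# Boolean mask of the projected region (25) on a (u, v) grid
u, v = np.meshgrid(np.linspace(-1.5, 1.5, 501), np.linspace(-1.5, 1.5, 501))
inside = eta(u, v, w_star(u, v)) <= 1.0
print(inside.sum(), "grid points fall inside the projection")
```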
2306.14631
Dynamic structured illumination for confocal microscopy
Structured illumination enables the tailoring of an imaging device's optical transfer function to enhance resolution. We propose the incorporation of a temporal periodic modulation, specifically a rotating mask, to encode multiple transfer functions in the temporal domain. This approach is demonstrated using a confocal microscope configuration. At each scanning position, a temporal periodic signal is recorded. By filtering around each harmonic of the rotation frequency, multiple images of the same object can be constructed. The image carried by the $n^{\mathrm{th}}$ harmonic is a convolution of the object with a phase vortex of topological charge $n$, similar to the outcome when using a vortex phase plate as an illumination. This enables the collection of chosen high spatial frequencies from the sample, thereby enhancing the spatial resolution of the confocal microscope.
Guillaume Noetinger, Fabrice Lemoult, Sébastien M. Popoff
2023-06-26T12:08:48Z
http://arxiv.org/abs/2306.14631v1
# Dynamic structured illumination for confocal microscopy ###### Abstract Structured illumination enables the tailoring of an imaging device's optical transfer function to enhance resolution. We propose the incorporation of a temporal periodic modulation, specifically a rotating mask, to encode multiple transfer functions in the temporal domain. This approach is demonstrated using a confocal microscope configuration. At each scanning position, a temporal periodic signal is recorded. By filtering around each harmonic of the rotation frequency, multiple images of the same object can be constructed. The image carried by the \(n\)th harmonic is a convolution of the object with a phase vortex of topological charge \(n\), similar to the outcome when using a vortex phase plate as an illumination. This enables the collection of chosen high spatial frequencies from the sample, thereby enhancing the spatial resolution of the confocal microscope. The optical confocal microscope [1], an imaging device extensively utilized for decades, has proven invaluable for scientists investigating phenomena at the scale of hundreds of nanometers. These researchers, including biologists and material scientists, benefit from the device's ability to filter out-of-focus light. This is known as _optical sectioning_. This feature enables the capture of high contrast images even in diffusive samples such as biological tissue [2]. The high-resolution capabilities of the confocal microscope are particularly beneficial for fluorescent imaging [3]. Combined with a depletion beam in the STED configuration the device can achieve superresolution, yielding precise structural insights at the cellular level [4]. However, fluorescent markers present limitations, including their potential toxicity and the prerequisite treatment of the sample, making them unsuitable in some contexts. Consequently, the development of optical label-free superresolution microscopy would be highly advantageous in numerous practical applications [5; 6]. Broadly, the inclusion of time in an optical scheme opens new possibilities [7]. Mechanical scanning as used in STED, illumination and acquisition sequences as seen in STORM [8], and structured illumination [9], as well as the analysis of emission fluctuations in SOFI [10], all exemplify the prevalent use of time as an additional degree of freedom in numerous superresolution techniques. For example, recent work involving illumination modulation in a fluorescent sample with time-varying structured illumination has demonstrated remarkable precision in localization [11]. In this article, we address the challenge of label-free superresolution in the far-field utilizing an analogous approach that capitalizes on the temporal domain to enhance the volume of data gathered from the object for image reconstruction. To that end, we suggest incorporating wavefront shaping techniques into a standard confocal microscope to introduce a temporal modulation in the signal acquired at each scanning point. The resultant additional degrees of freedom could enhance the space-bandwidth product [12] of the confocal microscope, leading to an improved resolution. ## Concept In the absence of fluorescent probes, the confocal microscope demonstrates a modest improvement in lateral resolution compared to the full-field configuration. In a full-field microscope, the coherent point-spread function (PSF) corresponds to the 2D Fourier transform of the pupil function. With a circular pupil, it is recognized as the Airy function. 
Owing to the scanning process and under the approximation of a point-like detector, the coherent PSF of the confocal microscope can be expressed as the product of the illumination and collection PSFs [13]. In a symmetric configuration, its width is smaller than that of the full-field microscope. Using a Gaussian approximation of the PSF, the improvement in lateral resolution is estimated to be on the order of \(\sqrt{2}\), i.e., about 40%. In terms of spatial frequencies, the coherent transfer function (CTF) of a full-field microscope is dictated by the shape of the microscope objective's pupil. For a circular pupil, the CTF takes the form of a circular step function: spatial frequencies with a modulus greater than \(N\!A/\lambda\) are filtered out, where \(N\!A\) represents the numerical aperture and \(\lambda\) is the working wavelength. All spatial frequencies within this zone are transmitted with the same amplitude. In the confocal microscope, the CTF is the convolution of the illumination and collection CTFs [14]. Assuming a symmetric configuration for illumination and collection, the confocal possesses a well-known conical CTF of support twice as large as the full-field CTF [13]. This means that the highest transmitted spatial frequency, \(2N\!A/\lambda\), is twice that in the full-field configuration. However, the gain diminishes linearly, peaking at low spatial frequencies and reaching a minimum at the cut-off frequency. In the presence of noise, the low signal-to-noise ratio at high frequencies leads to a degraded resolution in practice. One straightforward strategy to offset this declining gain is to employ an annular pupil [15]. Its drawbacks are a deterioration of the optical sectioning and the presence of secondary lobes. In this paper, we aim to present an alternative approach using a temporal modulation. In a recent work, we demonstrated experimentally with acoustic waves that the use of spatiotemporal wavefront shaping allows multiplexing the acquisition for image reconstruction [16]. Using a rotating source for the illumination and a rotating receiver for the collection, we obtained different images corresponding to the convolutions of the object with different orthogonal PSFs. The PSFs present different topological phase structures. Owing to a periodic Doppler effect, a point-like object perceives a monochromatic wavefield at \(\omega_{0}\) only on the rotation axis and, elsewhere, a periodically modulated signal equivalent to a frequency-comb-patterned spectrum. For each frequency \(\omega_{0}+n\Omega\), the field forms a vortex with a vorticity \(n\), centered on the optical axis, as though a vortex plate were positioned in the pupil plane. During the backscattering process, the same phenomenon applies, resulting in a focal spot twice as small as that of the full-field microscope at \(\omega_{0}\), and also vortex patterns twice as small as those in the focal plane at other frequencies. This suggests that the rotating emitter and recorder function as a spatiotemporal filter, retaining only the information associated with the high spatial frequency content collected by the confocal microscope. Consequently, the presence of harmonic frequencies in the modulated signal perceived by the object, along with vortex-like features, depends solely on the presence of a rotating modulation. Importantly, this is independent of the speed of the modulation relative to the wave's speed or frequency. 
By exploiting the diversity of information by summing the images recovered from each PSF, an improvement of the confocal resolution by 70% is obtained. This improvement enabled the distinction between two point-like objects closer than \(\frac{\lambda}{4N\!A}\), surpassing the confocal limit. While the experiment described was implemented using acoustics as a proof of concept, the principles highlighted are broadly applicable to wave-based imaging [17]. In this article, we expand on this approach, applying it to an optical confocal scanning microscope. Nevertheless, it is crucial to acknowledge that optics possesses subtle differences from acoustics that drastically modify the implementation (see SI). Interestingly, this effect has already been studied in optics for the detection of rotating bodies in astronomy [18] but to our knowledge has never been applied in microscopy. To engineer a high-speed, time-varying illumination with optical waves that will not significantly impede the confocal acquisition process, we opt to utilize a Digital Micromirror Device (DMD) optically conjugated to the pupil plane of a microscope objective (Figure 1.**a**). This device enables amplitude modulation of the field at approximately 10 kHz.

Figure 1: **Schematic view of the experimental process with simulated data.** **(a) Acquisition.** A spatiotemporal image is acquired by scanning and temporally modulating the objective’s pupil with a rotating mask. **(b) Analysis.** Thanks to a Fourier series decomposition, the information is gathered in _harmonic_ complex images \(Im_{n}\). Each one is carried by a different frequency \(n\Omega\) with \(n\in\mathbb{Z}\). **(c) Reconstruction.** Each image \(Im_{n}\) is deconvolved using a computed PSF. A reconstruction of the sample \(Obj_{R}\) exhibiting sharper details than the confocal image is obtained.

The time-varying pattern displayed modulates the pupil function temporally and, as a result, the CTF. In this scenario, both the illumination and collection pupils are time-varying, which is equivalent to a rotation of the object via a change of frame. In the following, we focus on a particular type of illumination consisting of a pattern rotating about the optical axis. We demonstrate how it allows multiplexing the image acquisition process, enabling the efficient extraction of the highest spatial frequency components. We show the capacity of this approach to improve the image resolution. ## Numerical approach We choose a sequence on the DMD, \(M(x_{d},y_{d},t)\), consisting of the full pupil deprived of a \(45^{\circ}\) truncated sector as depicted in Figure 2.**a**. This preserves the optical sectioning as well as providing some signal amplification of the temporal backscattered signal associated with the rotating pattern (see SI). As a time-periodic pattern, it can be decomposed as a Fourier series with coefficients \(M_{n}(x_{d},y_{d})\). As seen in Figure 2.**b**, these coefficients are associated with vortices. Similarly, the corresponding temporal PSF or CTF can be computed and then again expressed as Fourier series (Figure 2.**c** & **d**). As a topological invariant [19], the vorticity seen on the DMD patterns is conserved and is seen on the PSFs, thus guaranteeing their orthogonality. Indeed, the PSF corresponding to the frequency \(n\Omega\) is a vortex of vorticity \(n\) with a radius increasing with \(|n|\). 
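To make the decomposition concrete, the temporal Fourier coefficients \(M_{n}(x_{d},y_{d})\) of a rotating binary mask can be computed with an FFT over one rotation period; away from the rotation axis, the phase of \(M_{n}\) winds \(n\) times around the origin. A minimal sketch follows, where the grid size, frame count, and the \(45^{\circ}\) sector are our assumptions rather than the exact experimental settings:

```python
import numpy as np

N, T = 256, 60                     # pupil grid size and frames per rotation
coords = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(coords, coords)
R, TH = np.hypot(X, Y), np.arctan2(Y, X)

# Rotating mask: full pupil minus a 45-degree sector, rotated frame by frame
masks = np.zeros((T, N, N))
for k in range(T):
    ang = (TH - 2.0 * np.pi * k / T) % (2.0 * np.pi)
    masks[k] = (R <= 1.0) & (ang >= np.pi / 4.0)

# Temporal Fourier coefficients M_n(x, y) over one rotation period
M = np.fft.fft(masks, axis=0) / T
M1 = M[1]                          # first harmonic, n = 1
# Off axis, the phase of M1 should wind once per turn (a charge-1 vortex);
# compare the phase at polar angles 0 and pi/2 (a step of about pi/2).
p0 = np.angle(M1[N // 2, 3 * N // 4])
p1 = np.angle(M1[3 * N // 4, N // 2])
print("phase step over a quarter turn:", (p1 - p0) % (2.0 * np.pi))
```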
\(CTF_{0}\) is roughly equivalent to the confocal CTF: the average pupil used during the illumination being almost the full microscope objective pupil, \(CTF_{0}\) is also roughly the autoconvolution of the objective's pupil. The other dynamic CTFs possess a vorticity which imposes a zero for the low spatial frequencies. \(CTF_{n\neq 0}\) carries information with a high gain only for the high spatial frequencies of the sample. This illustrates the benefit of using a rotating illumination in confocal microscopy since those frequencies are usually transmitted with a low gain, leading, in the presence of noise, to an effective cut-off below \(\frac{2N\!A}{\lambda}\). ## Experiment & Results Let us examine a practical implementation of the experiment (Figure 1) detailed in SI. It is made using a narrowband polarized laser Coherent Sapphire SF NX @488 nm. The beam is enlarged with beam expanders, filtered using a pinhole, sent to a 2 Mpx _Vialux_ DMD and then to the sample using an _Olympus_ MPLFLN40X microscope objective of numerical aperture \(N\!A=0.75\). With a quarter waveplate, the flux is sent to a _Thorlabs_ PDA10A2 photodiode after a second passage by the polarizing beam splitter cube. The latter is placed behind a pinhole whose equivalent size in the object plane is approximately 1 Airy unit. The current from the photodiode is amplified by a transimpedance amplifier and recorded by a _Picoscope_ electronic oscilloscope. Using a 1951 _USAF_ target, the full-field resolution in white light is near 388 nm, close to \(\lambda/2N\!A=325\) nm, and the confocal resolution is determined to be 244 nm, close enough to \(\frac{\lambda}{2\sqrt{2}N\!A}=230\) nm to consider the set-up to be diffraction-limited (see SI). For each scan point, the sequence of 60 masks \(M(x_{d},y_{d},t)\) is displayed first, followed by a circular pattern associated with the full aperture. In this way, dynamic and _standard_ confocal images are acquired with the same scan. We employ the sequence depicted in Figure 2.**a** to capture images of parallel lines from the USAF resolution test chart, each line featuring a thickness and separation distance of 244 nm. After temporal Fourier decomposition of the received signals on the photodiode, we are able to retrieve images at the different harmonics \(n\Omega\) with \(n\in[\![-2;2]\!]\) (Figure 3.**a**).

Figure 3: **Illustration of the deconvolution procedure with experimental images of a ReadyOptics USAF resolution target group 11 element 1 (244nm). (**a)** Dynamic confocal images obtained with the pattern of Figure 2.**a** at different harmonic frequencies \(n\Omega\) for \(n\in[\![-2;2]\!]\) with weighted phase and intensity. **(b)** Deconvolved confocal dynamic images with numerical pseudoinverse \(iPSF_{n}\). **(c)** Sum of \(Obj_{n}\) with \(n\in[\![-1;1]\!]\) to obtain a final reconstruction \(Obj_{R}\).

Figure 2: **PSF and CTF of the system associated to a given pattern** decomposed as a Fourier series on the frequencies \(n\Omega\) with \(n\in[\![-2,2]\!]\). **(a)** The rotating pattern \(M\) sent to the DMD. **(b) Decomposition of \(M(x_{d},y_{d},t)\)**. The pupil is a linear combination of vortex plates. **(c) PSF** in the sample’s plane. **(d) Coherent transfer functions** (\(CTF_{n}\)) in the spatial frequency space. The dotted line represents the confocal resolution limit \(2N\!A/\lambda\). **(e) Cross section view of \(CTF_{n}\)** modulus. 
The low spatial frequencies are not collected for values of \(n\) different from \(0\); \(CTF_{0}\) corresponds roughly to the classical confocal CTF.

Each image provides phase information that is reminiscent of the vortex nature of each PSF, except for the image at \(n=0\), which is equivalent to an intensity image. Each of these images carries different information about the same object, with its own noise. For absolute values of \(n\) exceeding 1, experimental data begin to diverge from the theoretical and numerical predictions. A possibility is that higher order vortices break into \(\pm 1\) vortices, which are the only ones existing naturally [10], when encountering the sample. Other possible explanations are the integrating effect of the pinhole [14], which is in fact of finite size, and the setup's susceptibility to misalignment, thermal instability, and mechanical vibrations as reported in a similar experiment [20]. To build a single image out of this series, we implement a simple inversion procedure. Each image at each harmonic is deconvolved by its own inverted PSF predicted by the numerical simulation. To ensure the procedure is resistant to noise, a Tikhonov regularization is employed during the inversion [21, 22]. The regularized inverse operator reads: \[iCTF_{n}=i\widehat{PSF}_{n}=(CTF_{n}^{*}\cdot CTF_{n}+\sigma)^{-1}\cdot CTF_{n}^{*} \tag{1}\] \(\sigma\) being the noise-to-signal ratio, \(\cdot^{*}\) denotes the complex conjugate and \(\widehat{\cdot}\) the 2D spatial Fourier transform. Note that the pseudo-inverse of a vortex-like PSF is also a vortex-like function with the opposite topological charge. Each image from each harmonic yields a deconvolved image referred to as \(Obj_{n}\), as depicted in Figure 3**.b)**. Each image resembles the target object, albeit with noticeable degradation for \(n=2\), aligning with the discrepancies observed earlier. The final image is achieved by summing all the deconvolved intensity images, resulting in a pattern of enhanced contrast and improved resolution. To draw a comparison with the standard confocal setup, we present in Figure 4 the confocal images, both with and without inversion, alongside the results from our approach, pertaining to lines that are now 218 nm thick, a bit smaller than in the previous Figure. The reconstructed image resulting from the temporally modulated wavefronts is the only one that successfully allows discriminating the individual lines. This represents a 10% improvement in resolution compared to the inverted confocal image. ## Conclusion & Perspectives In this article, we provide a proof-of-concept of adding a temporal modulation to an imaging scheme. As an example, we chose to use rotating wavefronts to enhance the lateral resolution of confocal microscopy. Displaying a pattern rotating about the optical axis leads to the same periodic modulation of the illumination, equivalent to the frequency comb observed with sound waves. For each frequency, a different image of the same sample is obtained. All the information can be summed to obtain an improved reconstruction of the object. Our results demonstrate this technique's capacity to improve the contrast and resolution of confocal imaging. Although our experiments suffer from the current flaws of the wavefront shaping tools as highlighted in [23], namely, the slowness of SLMs and the binary amplitude modulation of DMDs (not to mention the introduction of mechanical vibrations and aberrations), our approach opens new perspectives for confocal imaging. 
While it has already outperformed confocal imaging in terms of resolution, we anticipate that addressing stability concerns and implementing a more sophisticated reconstruction algorithm could lead to further enhancements in resolution and image quality. In the study presented, we leverage the vortex shape at each harmonic, which facilitates the selective reconstruction of information from different regions of the image's spatial spectrum to enhance the image quality. Notably, the implementation of spatiotemporal modulation unlocks these new degrees of freedom, while still necessitating only a single photodetector for the measurement part. This application is an example of PSF engineering, as we exploit the spatiotemporal modulation of the illumination field to generate controlled CTFs for different harmonics. It offers control over the reconstruction process, enabling one to concentrate, for example, on parts of the spectrum more sensitive to noise, or to selectively detect specific patterns. Moreover, these ideas could be applied to various optical setups in a full-field configuration, _e.g._ readily in image scanning microscopy [24] to further enhance the resolution. In astronomy, a possible application could be to replace the vortex plates used in coronagraphy [25] with more flexibility. Funding. This work has received support under the program "_Investissements d'Avenir_" launched by the French Government, and is partially supported by the Simons Foundation/Collaboration on Symmetry-Driven Extreme Wave Phenomena.

Figure 4: **Resolution limit.** **a)** Regular confocal image of group 11 element 2 of USAF target, linewidth is 218nm. **b)** Deconvolution of the confocal image. **c)** Reconstruction obtained by summing the deconvolved dynamic confocal images obtained with our method. This is a 10% improvement in resolution.
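For completeness, here is a compact sketch of the reconstruction pipeline described above: harmonic demultiplexing by temporal FFT, Tikhonov inversion as in Eq. (1), and summation of the deconvolved intensities. The signal stacking, the dictionary of simulated PSFs, and the noise level \(\sigma\) are placeholders rather than the authors' actual processing code.

```python
import numpy as np

def tikhonov_deconvolve(image, psf, sigma):
    # Regularized inverse of Eq. (1): (CTF* . CTF + sigma)^-1 . CTF*
    ctf = np.fft.fft2(np.fft.ifftshift(psf))
    ictf = np.conj(ctf) / (np.abs(ctf) ** 2 + sigma)
    return np.fft.ifft2(np.fft.fft2(image) * ictf)

def reconstruct(signals, psfs, omega_idx, sigma=1e-2):
    # signals: (T, Ny, Nx) temporal samples per scan point;
    # psfs: {n: simulated PSF_n}; omega_idx: index of the rotation
    # frequency Omega in the length-T temporal spectrum.
    spectrum = np.fft.fft(signals, axis=0) / signals.shape[0]
    recon = np.zeros(signals.shape[1:])
    for n, psf in psfs.items():
        idx = (omega_idx * n) % signals.shape[0]   # handles n < 0 too
        obj_n = tikhonov_deconvolve(spectrum[idx], psf, sigma)
        recon += np.abs(obj_n) ** 2                # sum deconvolved intensities
    return recon
```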
2307.14518
Bifurcation structure of interval maps with orbits homoclinic to a saddle-focus
We study homoclinic bifurcations in an interval map associated with a saddle-focus of (2, 1)-type in $\mathbb{Z}_2$-symmetric systems. Our study of this map reveals the homoclinic structure of the saddle-focus, with a bifurcation unfolding guided by the codimension-two Belyakov bifurcation. We consider three parameters of the map, corresponding to the saddle quantity, splitting parameter, and focal frequency of the smooth saddle-focus in a neighborhood of homoclinic bifurcations. We symbolically encode dynamics of the map in order to find stability windows and locate homoclinic bifurcation sets in a computationally efficient manner. The organization and possible shapes of homoclinic bifurcation curves in the parameter space are examined, taking into account the symmetry and discontinuity of the map. Sufficient conditions for stability and local symbolic constancy of the map are presented. This study furnishes insights into the structure of homoclinic bifurcations of the saddle-focus map, furthering comprehension of low-dimensional chaotic systems.
Carter Hinsley, James Scully, Andrey L. Shilnikov
2023-07-26T21:34:15Z
http://arxiv.org/abs/2307.14518v1
# Bifurcation structure of interval maps with orbits homoclinic to a saddle-focus ###### Abstract We study homoclinic bifurcations in an interval map associated with a saddle-focus of (2, 1)-type in \(\mathbb{Z}_{2}\)-symmetric systems. Our study of this map reveals the homoclinic structure of the saddle-focus, with a bifurcation unfolding guided by the codimension-two Belyakov bifurcation. We consider three parameters of the map, corresponding to the saddle quantity, splitting parameter, and focal frequency of the smooth saddle-focus in a neighborhood of homoclinic bifurcations. We symbolically encode dynamics of the map in order to find stability windows and locate homoclinic bifurcation sets in a computationally efficient manner. The organization and possible shapes of homoclinic bifurcation curves in the parameter space are examined, taking into account the symmetry and discontinuity of the map. Sufficient conditions for stability and local symbolic constancy of the map are presented. This study furnishes insights into the structure of homoclinic bifurcations of the saddle-focus map, furthering comprehension of low-dimensional chaotic systems. We begin with the acknowledgement that we are very grateful to the special editors who invited us to submit our recent research to this special issue. It is an honor for us to contribute to this volume of the Ukrainian Mathematical Journal, dedicated to the memory and the academic legacy of Olexander Sharkovsky, beginning with his seminal publications [1; 2] from the early 60s, through his reference book [3] co-authored with his students in the mid-80s of the previous century, and concluding with the collection [4] from just last year, 2022.

Figure 1: Two classes of high-dimensional and one-dimensional dynamics: L. P. Shilnikov and O. M. Sharkovsky (Kiev, 2005). During this visit L. P. Shilnikov was awarded the Lavrentiev medal by the National Academy of Sciences of Ukraine for his pioneering contributions to dynamical system theory.

One of our own, A.L.S., had the privilege of knowing Dr. Sharkovsky personally through various academic rendezvous, their initial encounter taking place at a meeting in Jurmala, Latvia in 1989. Predominantly, these encounters were facilitated by Yuri and Volodimir Maistrenko at their scholarly gatherings held in the serene setting of peaceful Crimea. Moreover, A.L.S. had a couple of occasions to interact with Dr. Sharkovsky at his parents' abode, the residence of Leonid and Ludmila Shilnikov. It is worth mentioning that Olexander and Leonid shared an enduring friendship and academic kinship, extending over half a century, marked by mutual respect and admiration. Each held the other's original scientific school, founded in Kiev and Nizhny Novgorod (formerly known as Gorky) respectively, in the highest esteem. ## I Introduction We aim to scrutinize and computationally illustrate the structure of bifurcation unfoldings of periodic and homoclinic orbits in one-dimensional saddle-focus return maps, especially with regard to the Shilnikov saddle-focus in the mirror-symmetric case. These occurrences emerge near the primary figure-8 connection in a fully \(\mathbb{Z}_{2}\)-symmetric system. Figure 2 offers a glimpse of such intricate dynamics, portraying the chaotic trajectories recurrently returning near the saddle-focus only to spiral away again in the three-dimensional phase space of the characteristic model [5; 6] with reflective \(\mathbb{Z}_{2}\)-symmetry: \[\dot{x}=y,\quad\dot{y}=z,\quad\dot{z}=-bz+cy+ax-x^{3},\quad\text{with}\quad a,b,c>0. 
\tag{1}\] In his seminal works on the saddle-focus, L. P. Shilnikov convincingly demonstrated that the presence of a single homoclinic orbit of the Shilnikov saddle-focus instigates the onset of chaotic dynamics, involving a countable number of periodic orbits in the phase space of such systems. His pioneering theories from the 1960s firmly established and underscored the critical role of homoclinic orbits within the hierarchy of deterministic chaos in its entirety [7; 8; 9]. Before proceeding, it seems prudent to recapitulate some fundamental elements of the Shilnikov saddle-focus theory. For a comprehensive understanding, one can refer to his original papers, review articles [10; 11; 12; 13; 14; 15; 16], and textbooks [17; 18]. Relevant insights can also be gleaned from previous studies [19; 20; 21; 22; 23; 24; 25; 26; 27] that are pertinent to both the theory and the focus of this paper. The Shilnikov saddle-focus homoclinic bifurcation serves as a fundamental and visually accessible example of chaotic dynamics within low-dimensional systems of differential equations. Requiring a mere three dimensions for depiction, its homoclinic orbit and adjacent trajectories lend themselves to convenient visualization. Further, this structure's compatibility with one-dimensional return maps enhances its value as a paradigm for the evolution of mathematical and computational tools within the realm of chaotic systems. Figure 3A illustrates the primary homoclinic orbit to a saddle-focus of the differential (2,1)-type. The designation (2,1)-type implies that the saddle-focus possesses a pair of complex conjugate characteristic exponents, denoted as \(\lambda_{1,2}=-\alpha\pm i\omega\), \(\alpha,\omega>0\) (small green dots in the inset of fig. 3A), residing in the open left half of the complex plane, alongside a single positive real exponent \(\lambda_{3}\) (red dot).

Figure 2: The complex chaotic dynamics governed by the Shilnikov saddle-focus at the origin with the characteristic homoclinic figure-8 (in black) in the three-dimensional phase space of the \(\mathbb{Z}_{2}\)-symmetric model (1) at \(a=2.1593\), \(b=0.7\), and \(c=1.95\).

It is important to stress that, for the Shilnikov saddle-focus classification, the complex pair should be the closest to the imaginary axis; this corresponds to chaos due to the existence of countably many saddle periodic orbits intersecting any small neighborhood of the saddle-focus. On the other hand, if the Shilnikov condition is not met (i.e., if the real eigenvalue is closest to the imaginary axis), then there exists a neighborhood of the saddle-focus not intersecting any periodic orbits [14]. System trajectories passing near the saddle-focus effectively map a local cross-section \(\Pi_{1}^{+}\) (transverse to flow in the two-dimensional stable manifold \(\mathbb{W}_{\rm loc}^{s}\)) onto another cross-section \(\Pi_{2}\) (transverse to the one-dimensional unstable separatrix \(\Gamma_{1}\)). Consequently, three colored stripes delineated on \(\Pi_{1}^{+}\) morph into a correspondingly colored spiral on \(\Pi_{2}\). The global map \(\Pi_{2}\to\Pi_{1}\) transposes the spiral back onto the original section as depicted in figs. 3B\({}_{1}\) and 3B\({}_{2}\). The saddle index \(\rho=\alpha/\lambda_{3}\) being less or greater than 1 engenders two distinct outcomes of such a homoclinic bifurcation. When \(\rho>1\), i.e., local stability "dominates" local instability at the saddle-focus, the resulting two-dimensional map is a contraction (fig. 3B\({}_{1}\)). 
Its one-dimensional projection is visually represented in the Lamerey cobweb diagram presented in fig. 3C\({}_{1}\), capturing the essential details of the map. In accordance with Ref. [17], we can adopt the following truncated form of the generic one-dimensional saddle-focus map: \[x_{n+1}=\mu+x_{n}^{\rho}\cos(\omega\ln(x_{n})+\phi)\quad\text{with}\quad x_{n}\geq 0. \tag{2}\] In the \(\mathbb{Z}_{2}\)-symmetric case, the map becomes discontinuous for \(\mu\neq 0\): \[x_{n+1}=\text{sign}(x_{n})\left[\mu+\left|x_{n}\right|^{\rho}\cos(\omega\ln\left|x_{n}\right|+\phi)\right]. \tag{3}\] Note that the \(x\) coordinate in this system does not correspond to \(x\) in system (1). The parameters of this system correspond to geometric properties of the saddle-focus in the differential system: \(\rho\) is the saddle index, \(\omega\) is the focal frequency, and \(\mu\) is the splitting parameter. In particular, \(\mu=0\) when there is a homoclinic orbit to the saddle-focus passing once through \(\Pi_{1}\), while \(\mu\neq 0\) corresponds to the distance from the stable manifold \(\mathbb{W}_{\rm loc}^{s}\) to the image of the origin (corresponding to the first intersection of \(\Gamma_{1}\) with \(\Pi_{2}\)) under the map \(\Pi_{2}\to\Pi_{1}\) given by the flow. This allows us to track the system's behavior as it undergoes a primary homoclinic bifurcation as \(\mu\) crosses 0, as well as to study secondary, tertiary, and countably many other ancillary homoclinic bifurcations of the saddle-focus as it merges with the corresponding nearby periodic orbits for \(\mu\neq 0\) in the Shilnikov case \(\rho<1\). The origin \(x=0\) in the one-dimensional map always corresponds to the saddle-focus of the three-dimensional system. For \(\mu=0\) and \(\rho>1\) (when the two-dimensional return map \(T:\Pi_{1}\to\Pi_{1}\) sends small neighborhoods of the origin into themselves), the fixed point \(x^{*}=0\) of the one-dimensional map (3) is superstable. In contrast, the scenario when \(\rho<1\) is an expansion, as depicted in fig. 3B\({}_{2}\). In this case, the colored (green, blue, and red) stripes do not bound or exceed their images in the expanding spiral in distance from the origin, but instead intersect their image sets.

Figure 3: (A) Three-dimensional phase space showing the primary homoclinic orbit of a saddle focus of (2,1)-type, i.e., with two-dimensional stable manifold \(\mathbb{W}^{s}\) and one-dimensional unstable manifold \(\mathbb{W}^{u}\). Three colored stripes painted on a two-dimensional cross-section \(\Pi_{1}\), locally transverse to \(\mathbb{W}^{s}\), are morphed along trajectories passing by the saddle focus into a colored spiral on the top cross-section \(\Pi_{2}\), transverse to \(\mathbb{W}^{u}\). (B\({}_{1}\)) The two-dimensional Poincaré return map \(T:\Pi_{1}\to\Pi_{1}\) is a contraction when the saddle index \(\rho>1\); the corresponding one-dimensional map is shown in C\({}_{1}\). (B\({}_{2}\)) When the Shilnikov condition \(\rho<1\) is fulfilled, the map is an expansion with overlaps \(T\Sigma_{k}\cap\Sigma_{k}\) that give rise to countably many Smale horseshoes and saddle periodic orbits corresponding to repelling fixed points in the respective one-dimensional map in panel C\({}_{2}\); courtesy of Ref. [14]. 
Such intersections are interpreted as the mechanism instigating the formation of countably many Smale horseshoes, resulting in countably many unstable periodic orbits and the onset of complex dynamics in close proximity to the primary homoclinic orbit in the phase space of the differential system. The corresponding one-dimensional return map illustrated in fig. 3C\({}_{1,2}\) locally exhibits countably many characteristic oscillations, resulting in countably many unstable fixed points at the intersections of the graph with the identity line. It is worth mentioning that (i) these correspond to periodic orbits near the saddle-focus in the phase space of the corresponding differential system, and (ii) certain "oscillations" of the map graph will become tangent to the identity line as the parameters are varied, leading to new crossings or their elimination. Such a tangency triggers a saddle-node bifurcation through which a pair of periodic orbits - one stable and one saddle - emerge. It can be readily inferred that the stable orbit will soon undergo a period-doubling bifurcation when its slope in the map exceeds 1 in absolute value; this will be succeeded by a period-doubling cascade, and so on. This pattern is a primary reason why the Shilnikov bifurcation in three-dimensional systems is associated with the notion of the quasi-chaotic attractor [9], where a hyperbolic subset can coexist with stable periodic orbits emerging through saddle-node bifurcations [28; 29] in a variety of models and applications [27; 30; 31; 32]. This phenomenon is not necessarily observable in higher dimensions, where such homoclinic tangencies may instigate saddle-saddle bifurcations instead, as detailed in [15; 33; 34], no longer giving rise to stable periodic orbits within a chaotic attractor in the phase space. In what follows we will examine the global organization of bifurcation unfoldings with biparametric sweeps of the above one-dimensional return maps (2) and (3) to reveal the organization of stability windows, also known as shrimps [35; 36; 37; 38], uniformly emerging in diverse applications, including models with the Shilnikov saddle focus [27; 30]. We will also study the fine organization of secondary and higher-order homoclinic bifurcations in such maps. Of special consideration is the borderline codimension-2 case when the dilation map with \(\rho<1\) becomes a contraction map with \(\rho>1\). This transition was first analytically studied by L.A. Belyakov [22]; see his bifurcation diagram presented in fig. 5, where \(\mu_{1}=1-\rho\), while \(\mu_{2}\) can be either the frequency \(\omega\) or the splitting parameter \(\mu\) shifting the maps given by Eqs. (2) and (3) up and down. Here, a curve with a cusp corresponds to the two closest saddle-node or tangent bifurcations in the one-dimensional maps shown in figs. 3C\({}_{1}\) and C\({}_{2}\). To the right of it, there are loci of U-shaped curves in the bifurcation diagram which correspond to secondary, tertiary, and higher-order homoclinic bifurcations in the differential system.

Figure 4: Snapshots of the symmetric discontinuous map (3) with \(\rho=0.5\) and \(\omega=10\) depicting (A) chaotic dynamics at \(\mu=0.05\), (B) a stable period-2 orbit at \(\mu=0.125\), and a transition from one-sided chaos at \(\mu=1.65\) in (C) to symmetric chaos at \(\mu=1.6\) in (D) after the “boundary crisis” when a critical point lowers below the horizontal axis. 
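For reference in what follows, here is a minimal Python sketch of the symmetric map (3); the convention \(f(0)=0\) and the parameter values (taken from fig. 4A) are illustrative choices:

```python
import numpy as np

def f(x, rho, omega, mu, phi=0.0):
    # One iterate of the Z2-symmetric saddle-focus map (3); the origin is
    # treated as a fixed point, matching the saddle-focus itself.
    if x == 0.0:
        return 0.0
    return np.sign(x) * (mu + abs(x) ** rho
                         * np.cos(omega * np.log(abs(x)) + phi))

def orbit(x0, rho, omega, mu, n_steps=100):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(f(xs[-1], rho, omega, mu))
    return np.array(xs)

# Chaotic regime of fig. 4A: rho = 0.5, omega = 10, mu = 0.05
print(orbit(0.05, 0.5, 10.0, 0.05, n_steps=10))
```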
To detect and differentiate such longer orbits, we employ a symbolic description, following our previous work [6; 39; 40; 41; 42; 43; 44; 45]. The codes [11] and [111] for the double and triple loops signify that the unstable separatrix returns to the saddle focus to complete the orbit after two and three large swings or excursions, respectively; these orbits in the differential system are secondary and tertiary homoclinics. The respective orbits for the one-dimensional maps are demonstrated in figure panels 6A\({}_{2}\), A\({}_{3}\), and B\({}_{2}\). For the double loop [11] in the map (3), the sequence of iterates follows the pattern: \(0\mapsto\mu\mapsto 0\); whereas the triple loop requires one more iterate: \(0\mapsto\mu\mapsto x_{2}\mapsto 0\). The oscillatory structure of the one-dimensional map allows such homoclinic orbits to emerge at different zeros or oscillatory branches as depicted in figs. 6A\({}_{2,3}\), though all such double orbits share the same symbolic code [11].

Figure 5: A fragment of the Belyakov homoclinic bifurcation set [22] near the borderline transition from the Shilnikov saddle-focus for \(\mu_{1}>0\) (i.e., \(\rho<1\)) to a stable contraction for \(\mu_{1}<0\) (\(\rho>1\)).

Figure 6: Comparison of _one-sided_ secondary and tertiary homoclinic orbits. (A1) A secondary (one-sided) homoclinic orbit (coded as [11]) to the Shilnikov saddle-focus (\(\rho<1\)) with its two variants in the one-dimensional return maps (A2 and A3) ending at different zeros. (B\({}_{1}\)) A tertiary (one-sided) homoclinic orbit coded as [111] and its representation in the one-dimensional return map (B2) where the right forward iterates of the origin attain a critical point touching the horizontal axis – the so-called homoclinic tangency.

In Eqs. (2) and (3), varying \(\rho\) changes the envelope of the map from convex if \(\rho>1\) to non-convex when \(\rho<1\), while the frequency parameter \(\omega\) stretches and shrinks the map graph horizontally, and the splitting parameter \(\mu\) shifts the graph of the one-sided map up and down. We illustrate possible homoclinic orbits in a mirror-symmetric map in figs. 7A,B, showing that such orbits are inherent in \(\mathbb{Z}_{2}\)-symmetric systems like the chaotic model (1) above. This introduction concludes with snapshots showcasing the fractal organization of some global bifurcation unfolding representing a rich variety of homoclinic orbits to the saddle-focus in the system (1). Figure 8A displays numerous U-shaped curves corresponding to one-sided homoclinics, while two-sided homoclinics populate the spaces bounded by these U-shaped curves (fig. 8B). The subsequent analysis will offer a more granular examination of these structures using a computationally efficient symbolic approach. ## II Symbolic representation and homoclinic bifurcation unfoldings ### Partitioning the one-dimensional map The saddle-focus in the system corresponds to the origin \(x=0\) of the one-dimensional map, and the homoclinics in the differential system correspond to successive forward iterates of the map beginning and ending at the origin.

Figure 8: (A) Bifurcation diagram of model (1) populated by self-similar U-shaped bifurcation curves corresponding to the one-sided homoclinic orbits coded as [11], [111],... along the boundaries of solid-color regions. (B) Fractal organization of bifurcation structures corresponding to one- and two-sided homoclinic orbits coded with all symbolic sequences. 
Figure 7: _Two-sided_ secondary and tertiary homoclinic orbits and their representations in the symmetric one-dimensional map. (A1) Depicted is a secondary homoclinic orbit symbolically encoded as [10], while panel (B1) illustrates a triple homoclinic loop encoded as [10]. The corresponding return maps are displayed in (A2) and (B2) respectively.

The map is generally discontinuous at 0, and there are three possible behaviors at the discontinuity. Firstly, the origin may be treated as a fixed point corresponding to the saddle focus. The second and third possibilities involve the trajectory leaving the saddle-focus in either direction along the one-dimensional unstable manifold, corresponding to sending \(0\mapsto\mu\) and \(0\mapsto-\mu\) respectively. We construct a binary sequence which encodes the sequence of positive and negative excursions a trajectory of the differential system takes; for each choice of parameters of the maps there correspond two such symbolic sequences. The first element of the sequence is "1" for a positive excursion corresponding to \(x_{1}=\mu\) and "0" for a negative excursion corresponding to \(x_{1}=-\mu\). The rest of the sequence is generated from the signs of successive iterates \(x_{n}\), \(n\geq 0\), of the chosen initial point, with "0" corresponding to \(\mathrm{sign}(x_{n})=-\mathrm{sign}(\mu)\) and "1" to \(\mathrm{sign}(x_{n})=\mathrm{sign}(\mu)\). Due to the symmetry of the system there is a mirror image of each sequence, but we will in this paper always follow the sequence originating on the right branch of the symmetric one-dimensional map (\(x_{1}>0\)). Consider the mappings from the \((\rho,\mu^{+})\)-parameter half-plane to the \(n^{\mathrm{th}}\) iterates starting with the initial point \(x_{1}=\mu>0\). It is precisely the zeros of these mappings (where \(x_{n}=0\)) that define corresponding bifurcation curves of the homoclinic orbits of the \(n^{\mathrm{th}}\) degree in the parameter space. Reaching \(x_{n}=0\) is encoded symbolically as a termination of the sequence. This sequence constitutes a binary representation of the dynamical behavior at each point, providing a comprehensive description of the homoclinic bifurcation structures. As such, this method transforms the intricate problem of calculating homoclinic orbits in continuous-time dynamical systems into the simpler problem of finding zeros of iterates in discrete maps. This transformation considerably simplifies the analysis and enables efficient computation of homoclinic structures. ### Basic use of the symbolic trajectory representation Two procedures are used to process the binary sequences. The first procedure is to select particular sequences which illustrate particular aspects of the homoclinic structure. The zeros of the first iterate of \(\mu\) correspond to the boundary between various sequences [XX1...] and [XX0...] (here the Xs denote various identical initial substrings in such sequences), as well as to secondary homoclinic curves in the ODE system. Similarly, the zeros of the second iterate of \(\mu\) correspond to all bifurcation curves of tertiary homoclinic orbits. For asymmetric systems with one-dimensional return map (2), only positive \(x\) values are relevant, so the only homoclinics to consider are one-sided and correspond to sequences of repeated "1"s. For one-sided orbits with \(\mu>0\), it is necessary to truncate sequences just before their first zero entries. 
Although in this case one cannot distinguish homoclinic orbits from non-homoclinic orbits symbolically, the boundaries of regions in parameter space corresponding to particular symbolic sequences do form homoclinic bifurcation curves. The second procedure is to compute an embedding of binary sequences of arbitrary length into the interval \([0,1]\). For a binary sequence \([S_{1},S_{2},\ldots,S_{N}]\) of length \(N\), this is computed as a partial power series with the factor \(\frac{1}{2}\): \[K(\rho,\,\mu)=\sum_{i=1}^{N}S_{i}\frac{1}{2^{i}}. \tag{4}\] ### Bifurcation unfoldings in the \((\rho,\mu)\)-plane of the interval map The overarching structure of parameter sets for one- and two-sided sequences up to order 6 is summarized in fig. 9, with several panels presented for side-by-side comparison. Panel A reveals a collection of U-shaped bifurcation curves of secondary homoclinic orbits accumulating to the primary homoclinic at \(\mu=0\) from above. The top and bottom branches of a secondary homoclinic bifurcation curve correspond to [11]-encoded double loops occurring in the one-dimensional map as illustrated in figs. 6A\({}_{2,3}\): the forward iterates of the origin come back after two steps: \(0\mapsto\mu\mapsto 0\). The peak of this U-shaped bifurcation curve at \(\rho=1\) corresponds to the case when the orbit involves a critical point of the map touching the horizontal axis, producing a homoclinic tangency much like the case illustrated in fig. 6B\({}_{2}\) for the tertiary homoclinic orbit. For fixed \(\rho\) and varying \(\mu\) values, secondary homoclinic orbits may form at the various oscillatory branches of the one-dimensional map positioned some distances away from the origin. This accounts for the shape and multiplicity of such U-shaped bifurcation curves, which become narrower as \(\mu\) decreases, accumulating to the primary homoclinic bifurcation at \(\mu=0\). Also noteworthy is that these peaks lie exclusively on the line \(\rho=1\), with no secondary homoclinic bifurcations in the \(\rho>1\) half plane. This implies that the secondary one-sided homoclinic tangencies are exclusive to the Shilnikov saddle-focus; i.e., where \(\rho\leq 1\). However, this is not the case for the one-sided tertiary and higher-order homoclinics, nor is it the case for two-sided homoclinic bifurcations in general, all of which will be discussed in later sections. While only small values of \(\mu\) are relevant to the study of systems in a neighborhood of the primary homoclinic bifurcation, the behavior of the map for arbitrary \(\mu\) is interesting in its own right. In figs. 10A and B, we explore the impact of larger values of \(|\mu|>1\) on homoclinic orbits. When \(|\mu|\) exceeds 1, the relationship between the envelope (due to the term \(\left|x_{n}\right|^{\rho}\) in (3)) and the image of \(\mu\) changes. At \(\mu=1\), the envelope has a root at \(x=\mu\), and thus homoclinic tangencies relevant to the flow arise only for \(|x|\geq\left|\mu\right|\), so that homoclinic bifurcation curves are seen for large \(\rho\) but cannot be found for \(\rho\) small. This changes the position of the homoclinic U-shaped curves, from being contained mostly within the left half of the parameter plane, to being found predominantly within the right half as depicted in these two figures. The left panel demonstrates this effect in the case of one-sided homoclinic orbits, while the middle panel exhibits the structure of such homoclinic bifurcation curves in the two-sided case. 
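The two procedures admit a direct implementation; a sketch is given below, reusing the map \(f\) from the earlier snippet (the truncation tolerance is our choice):

```python
def symbolic_sequence(rho, omega, mu, n_max=20, tol=1e-12):
    # Binary code of the right orbit x1 = mu > 0; the code terminates
    # when an iterate returns to the origin (a homoclinic orbit).
    seq, x = [], mu
    for _ in range(n_max):
        if abs(x) < tol:
            break                       # x_n = 0: sequence terminates
        seq.append(1 if x * mu > 0 else 0)
        x = f(x, rho, omega, mu)
    return seq

def K(seq):
    # Embedding (4) of a binary sequence into the interval [0, 1]
    return sum(s / 2 ** (i + 1) for i, s in enumerate(seq))

print(K(symbolic_sequence(0.5, 10.0, 0.05)))
```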
Figure 10C demonstrates the order of homoclinic orbits and their bifurcation curves for small values of \(\mu\) in the bifurcation diagram near the demarcation line \(\rho=1\) in the one-dimensional saddle-focus map, to be compared with the sketch in fig. 5 from the original Belyakov theory [22]. In this case, the map exhibits fractal structure organized about the codimension-2 Belyakov point (\(\rho=1\), \(\mu=0\)), with bifurcation curves of homoclinic orbits of higher orders drawn into a front at \(\rho=1\). This observation provides an intricate look into the dynamics of the system and the fractal nature of orbits homoclinic to saddle-foci and periodic orbits in neighborhoods thereof. by taking the mean of the logarithm of the absolute derivatives of the map along the trajectory as follows: \[LE(\rho,\,\mu)=\frac{1}{N}\sum_{i=1}^{N}\log\left|\frac{dx_{n+1}}{dx_{n}}(x_{i}) \right|\,. \tag{5}\] Figure 12A visualizes the \((\rho,\mu)\)-parameter plane of the given saddle-focus map: the color-coded heatmap reveals chaoslands in red, where \(LE>0\), and stability windows in blue and white, where \(LE\leq 0\). It is worth noting that the presence of many multistability regions is a complex aspect that the Lyapunov exponent computed from a single initial value does not address directly. The accurate and in-depth exploration and understanding of multistability principles in systems with saddle-foci remains yet an open challenge. However, this visualization still offers insightful glimpses into the chaotic region and aids in understanding the overall stability landscape of the system. ### Stability in the absence of homoclinic interference It will be useful to note going forward that the derivative of the map (3) is given by the expression \[\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}=\frac{\left|x_{n}\right|^{\rho}}{x_{ n}}\sqrt{\rho^{2}+\omega^{2}}\cos\left(\omega\ln\left|x_{n}\right|+\tan^{-1} \frac{\omega}{\rho}\right). \tag{6}\] When the Shilnikov condition \(\rho<1\) is accompanied by the existence of a primary homoclinic (that is, when the splitting parameter \(\mu=0\) so that \(x=0\) is a fixed point of the map), chaotic behavior is observed in a neighborhood of the origin, associated with the existence of countably many unstable periodic orbits. However, for nonzero splitting parameter \(\mu\) there exist ancillary homoclinic orbits to the saddle focus, with tertiary and higher-order homoclinic orbits present even for \(\rho>1\). The curve \(\gamma_{b}\) in the \((\rho,\mu)\)-parameter plane, seen in fig. 11, serves as an upper bound on \(\rho\) for which homoclinic bifurcations can occur given \(\left|\mu\right|\ll 1\). Figure 10: (A, B) Homoclinic bifurcations of the one-dimensional map for large \(\mu\). The left panel shows the one-sided orbits and the right panel depicts the two-sided orbits. The orientation of the homoclinic orbits in the parameter plane switches due to the changing relationship between the envelope \(\mu\pm\left|x\right|^{\rho}\) and the image of \(\mu\). (C) Low-order homoclinic bifurcations of the saddle-focus at the transitions of colors for small \(\mu\) densely organized about the primary homoclinic orbit at the origin of the map at \(\mu=0\). The color bar corresponds to the embedding of a symbolic sequence at a given parameter value into the interval \([0,1]\). Countably many homoclinic U-shaped curves of a particular color lie tangent to each of countably many monotonic curves originating at \(\rho=1,\mu=0\). 
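Since a secondary homoclinic orbit [11] corresponds to \(x_{2}=0\) with \(x_{1}=\mu\), the U-shaped curves discussed above can be approximated by locating sign changes of the second iterate over a parameter grid. A sketch reusing \(f\) from the earlier snippet follows; the grid ranges and the value \(\omega=3.6\) (matching fig. 11) are arbitrary choices:

```python
import numpy as np

def x2(rho, mu, omega=3.6):
    # Second iterate of the right orbit: zeros of x2 trace the U-shaped
    # secondary homoclinic curves [11] in the (rho, mu) parameter plane.
    return f(mu, rho, omega, mu)

rhos = np.linspace(0.2, 1.2, 400)
mus = np.linspace(1e-4, 0.5, 400)
res = np.array([[x2(r, m) for r in rhos] for m in mus])
# Sign changes between neighboring grid points approximate the curves:
curves = np.diff(np.sign(res), axis=0) != 0
print(curves.sum(), "grid cells straddle a secondary homoclinic curve")
```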
### Stability in the absence of homoclinic interference

It will be useful to note going forward that the derivative of the map (3) is given by the expression \[\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}=\frac{\left|x_{n}\right|^{\rho}}{x_{ n}}\sqrt{\rho^{2}+\omega^{2}}\cos\left(\omega\ln\left|x_{n}\right|+\tan^{-1} \frac{\omega}{\rho}\right). \tag{6}\] When the Shilnikov condition \(\rho<1\) is accompanied by the existence of a primary homoclinic (that is, when the splitting parameter \(\mu=0\), so that \(x=0\) is a fixed point of the map), chaotic behavior is observed in a neighborhood of the origin, associated with the existence of countably many unstable periodic orbits. However, for nonzero splitting parameter \(\mu\) there exist ancillary homoclinic orbits to the saddle-focus, with tertiary and higher-order homoclinic orbits present even for \(\rho>1\). The curve \(\gamma_{b}\) in the \((\rho,\mu)\)-parameter plane, seen in fig. 11, serves as an upper bound on \(\rho\) for which homoclinic bifurcations can occur given \(\left|\mu\right|\ll 1\). The U-shaped regions become uniform in this area, illustrating the fractal nature of the dynamics of the saddle-focus map and its bifurcation diagram.

\(\gamma_{b}\) is determined in part by the explicit solution of the system of equations \(\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}(x_{1})=x_{2}=0\) for the parameter \(\mu\). This admits countably many solutions \[\mu=(-1)^{k}\frac{\omega}{\sqrt{\rho^{2}+\omega^{2}}}\exp\left(\frac{\rho}{ \omega}\left(\pi\left(-k-\frac{1}{2}\right)-\tan^{-1}\frac{\omega}{\rho}\right)\right) \tag{7}\] indexed by \(k\geq 0\), lying in the upper half parameter plane \(\mu>0\) for \(k\) even and in the lower half plane \(\mu<0\) for \(k\) odd. The rest of \(\gamma_{b}\) is determined by the implicit solution of the system of equations \(\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}(x_{1})=x_{3}=0\) in the \((\rho,\mu)\)-plane. Again, there are countably many solutions \[\begin{split} x_{2}&=(-1)^{k}\mu+\frac{\omega}{ \sqrt{\rho^{2}+\omega^{2}}}\exp\left(\frac{\rho}{\omega}\left(\pi\left(-k- \frac{1}{2}\right)-\tan^{-1}\frac{\omega}{\rho}\right)\right),\\ 0&=\mu+\left|x_{2}\right|^{\rho}\cos\left(\omega \ln\left(\left|x_{2}\right|\right)\right)\end{split} \tag{8}\] indexed by \(k\geq 0\), this time lying in the lower half parameter plane \(\mu<0\) for \(k\) even and in the upper half plane \(\mu>0\) for \(k\) odd; the solution sets of these equations do belong to each half plane, but serve to bound homoclinic bifurcation sets only in one half plane or the other. Only a certain restriction of these solution sets within the \((\rho,\mu)\)-plane corresponds to \(\gamma_{b}\), although the equations involved do govern the organization of homoclinic bifurcations internal to the region bounded above in \(\rho\) by \(\gamma_{b}\). Moreover, there exist conditions corresponding to higher-order iterates of the map which serve to further organize the homoclinic bifurcation structure; in general, these conditions correspond to systems of equations for which only implicit solutions may be obtained.

Figure 11: Low- and high-order homoclinic bifurcation structure of the one-dimensional saddle-focus map with \(\omega=3.6\) in an 8000x8000-pixel scan. Outside of the cone-shaped region bounded by the green curves \(\gamma_{g}\) for \(\rho>1\) there exists no invariant interval in the map (3), as its iterates may diverge for some choice of \(\omega\) given \(\rho,\mu\) in this region. The region bounded by the cusp-like purple curves \(\gamma_{p}\) comprises parameters for which the derivative remains less than \(1\) in absolute value within the invariant interval, so orbits converge to the unique fixed point of the map. The black curves \(\gamma_{b}\) are solutions to systems of equations corresponding to critical zeroes of the iterated map, and for sufficiently small \(\mu\) serve as upper bounds on the values of \(\rho\) for which such ancillary homoclinic bifurcations may be found.

Through geometric analysis of the one-dimensional map, parameter values associated with the existence of a fixed point are determined. Additionally, some conditions under which bounds on trajectories can be established are identified. As our analysis concerns the behavior of the map (3) in a small neighborhood of \(x=0\), it is useful to note that in many cases a compact invariant interval containing the origin can be given. For \(\rho>1\) and \(\mu=0\), a small neighborhood of \(x=0\) cannot contain any fixed points of the map other than the origin \(x=0\) itself.
As \(\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}(0)=0\), the origin is stable. However, for \(\mu\neq 0\), orbits may wander chaotically, and the non-convexity of the envelope \(\left|x_{n+1}-\frac{x_{n}}{\left|x_{n}\right|}\mu\right|\leq\left|x_{n}\right|^ {\rho}\) can lead to exploding trajectories. In treating these issues it is enough to consider only \(x>0\), \(\mu>0\), due to the map's odd symmetry. A sufficient condition for a trajectory beginning at \(x_{1}=\mu\) to be bounded is that the upper envelope \(x_{n+1}\leq\mu+x_{n}^{\rho}\) intersect the identity line; that is, that \(\beta^{\rho}-\beta+\mu=0\) has a solution \(\beta>0\). Noting that \(F(x)=x^{\rho}-x+\mu\) attains its minimum value \(\rho^{\frac{1}{1-\rho}}\left(\frac{1}{\rho}-1\right)+\mu\) and that \(F(0)=\mu>0\), one sees that such a solution \(\beta\) exists if \(\mu\leq\rho^{\frac{1}{1-\rho}}\left(1-\frac{1}{\rho}\right)\); this region of parameter space corresponds to the region bounded by the green curve \(\gamma_{g}\) in fig. 12A. As evidenced by the existence of a positive Lyapunov exponent within this region, these bounded trajectories can nevertheless behave chaotically.

We now seek to prove that a trajectory \(x_{n}\) with \(x_{1}=\mu\) converges to a stable fixed point when the map is an expansion (\(\rho>1\)) and the splitting parameter \(\mu\) is small. One method to guarantee that a trajectory beginning at \(x_{1}=\mu\) converges to a fixed point is to establish a bound \(x_{n}\leq\beta\) as before, subject to the additional constraint that \(\left|\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}(x_{n})\right|<1\) for all \(0<x_{n}<\beta\). Using the Brouwer fixed point theorem alongside the established bounds on the map's derivative, the existence of a unique fixed point \(x^{*}\) of the map is verified within the interval \(0<x^{*}\leq\beta\), as \(x_{n+1}(x_{n})-x_{n}\) is monotone decreasing for \(0<x_{n}\leq\beta\). Furthermore, this fixed point is determined to be stable.

Figure 12: (A) Chaos and stability windows (“shrimps”) in a 4000x4000-pixel scan of the \((\rho,\mu)\)-parameter plane of the one-dimensional saddle-focus map with \(\omega=5\). The heatmap represents the magnitudes (color bar on the right) of Lyapunov exponents computed over trajectories of length 5000, with the red color indicating chaos and the blue/white colors signifying stability. Discontinuities in the color grading correspond to branch-switching in areas of multistability, mostly well-organized for \(\rho>1\). (B) 4000x4000-pixel bifurcation diagram of the map viewed through the lens of Lempel-Ziv complexities of the symbolic binary sequences of long orbits beginning with the same initial point \(x_{1}=\mu\): lighter shades signify higher symbolic complexity (color bar on the right). The overlaid curves \(\gamma_{b}\), \(\gamma_{g}\), and \(\gamma_{p}\) are the same as in fig. 11. Despite unbounded and Lyapunov-positive trajectories in much of the \(\rho>1\) half plane, the symbolic representations of these orbits are very simple for small enough \(\left|\mu\right|\), undisturbed by distant homoclinic structures. The shrimp structures from panel (A) appear as windows of relatively lower complexity.
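Both the explicit branches (7) of \(\gamma_{b}\) and the invariant-interval bound \(\beta\) are straightforward to evaluate numerically. A sketch under the same assumptions as the snippets above (the truncation \(k_{\max}\) and the root bracket are our choices; SciPy is used only for the root search):

```python
import numpy as np
from scipy.optimize import brentq

def gamma_b_explicit(rho, omega, k_max=10):
    """Explicit solutions (7) for mu, indexed by k >= 0, alternating
    between the upper (k even) and lower (k odd) half planes."""
    k = np.arange(k_max + 1)
    return ((-1.0)**k * omega / np.sqrt(rho**2 + omega**2)
            * np.exp(rho / omega * (np.pi * (-k - 0.5) - np.arctan(omega / rho))))

def invariant_bound(rho, mu):
    """Smallest positive root beta of F(x) = x**rho - x + mu for rho > 1
    and 0 < mu below the gamma_g threshold; returns None otherwise."""
    if mu > rho**(1.0 / (1.0 - rho)) * (1.0 - 1.0 / rho):
        return None                          # no root: orbits may escape
    x_star = rho**(1.0 / (1.0 - rho))        # minimizer of x**rho - x
    return brentq(lambda x: x**rho - x + mu, 0.0, x_star)
```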
To determine how large \(\mu\) may be while a suitable \(\beta\) still exists, note that \(\left|\frac{\mathrm{d}x_{n+1}}{\mathrm{d}x_{n}}\right|\leq x_{n}^{\rho-1}\sqrt{ \rho^{2}+\omega^{2}}\): it is enough to satisfy \(x_{n}^{\rho-1}\sqrt{\rho^{2}+\omega^{2}}<1\) by choosing \(\beta\) such that \(x_{n}\leq\beta<\left(\rho^{2}+\omega^{2}\right)^{\frac{1}{2(1-\rho)}}\). As \(F(x)\) has its smallest positive root at \(x=\beta\) and is a convex function, we can obtain an upper bound on \(\beta\) by Jensen's inequality applied via the chord through \(\left(0,F(0)\right)=\left(0,\mu\right)\) and \(F\)'s minimum \(\left(\rho^{\frac{1}{1-\rho}},\rho^{\frac{1}{1-\rho}}\left(\frac{1}{\rho}-1 \right)+\mu\right)\): certainly \(\beta\leq\frac{\mu}{1-\frac{1}{\rho}}\). Hence a suitable bound \(x_{n}\leq\beta\) exists if \(\mu<\left(1-\frac{1}{\rho}\right)\left(\rho^{2}+\omega^{2}\right)^{\frac{1}{2( 1-\rho)}}\); equality here yields the purple curve \(\gamma_{p}\) in fig. 12A. It is easy to see by the symmetry of the map that these stability conditions are nearly identical if \(\mu<0\); one need only establish the same bounds on the absolute value of \(\mu\) instead. In the case of \(\rho<1\), the one-sided envelopes are convex and thus an invariant interval containing \(x=\mu\) always exists. An upper bound \(\beta\) on trajectories in this case is given by the sufficient constraint \(\left|x_{n}\right|\leq\left(\left|\mu\right|+1\right)^{\frac{1}{1-\rho}}\leq\beta\).
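The threshold defining \(\gamma_{p}\) has the closed form just derived; a one-function sketch with a sample evaluation (the specific parameter values are ours):

```python
def gamma_p(rho, omega):
    """Threshold on mu below which (for rho > 1) the bound beta lies where
    |f'(x)| <= x**(rho - 1) * sqrt(rho**2 + omega**2) < 1, so the mu-orbit
    converges to the unique fixed point."""
    return (1.0 - 1.0 / rho) * (rho**2 + omega**2)**(1.0 / (2.0 * (1.0 - rho)))

print(gamma_p(1.5, 5.0))   # ~0.0122: a thin stability strip at rho = 1.5
```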
### Shrimp tails and symbolic robustness

Figure 13 presents a detailed exploration of a "shrimp" structure identified in the Lyapunov-exponent scan (fig. 13A) of the one-dimensional map with \(\omega=10\). These regions arise from saddle-node bifurcations and exhibit periodic orbits robust to perturbations both in parameter space and in the one-dimensional interval map (3). One key observation is the presence of period-doubling cascades, a common indicator of the emergence of chaotic dynamics. Moreover, from the orbit diagram in fig. 13B we observe that the periods of these orbits appear to progress monotonically through Sharkovsky's order [4]. The "tails" of these shrimp, the long negative-Lyapunov-exponent regions along decreasing \(\rho\), carry on all the way to \(\rho=0\) and beyond, though the shrimp may be partially obscured by multistability. The existence of these features, keeping multistability in mind, expands our understanding of the chaotic nature of the saddle-focus map and sets the stage for more in-depth study. Figure 13B depicts on the vertical axis the branches of stable periodic orbits originating at the boundary of the shrimp, plotted against the bifurcation parameter \(\rho\) at fixed \(\mu=0.35\). These periodic orbits develop in a manner reminiscent of saddle-node bifurcations and their further development in unimodal maps. Although the stable periodic orbits within the shrimp appear to progress through the Sharkovsky order in their periodicities as \(\rho\) decreases, period-3 orbits can be easily identified throughout the windows, as seen in the juxtaposed red curves in fig. 13B corresponding to a persistent period-3 orbit; a cobweb diagram of another period-3 orbit within the shrimp is depicted in fig. 13C. This is important to keep in mind going forward, as the existence of a period-doubling cascade and a subsequent progression to odd-period cycles does not, by the Sharkovsky theorem, imply the nonexistence of period-3 orbits. At the same time, the existence of the negative-Lyapunov-exponent shrimp structure tells one nothing about the existence or absence of chaotic sets within intervals bounded by period-two orbits; multistability is prevalent throughout saddle-focus systems.

Figure 13: (A) A two-dimensional Lyapunov-exponent sweep highlighting a “shrimp” structure, indicative of a saddle-node bifurcation in the one-dimensional saddle-focus map with \(\omega=10\). This region contains cascades of period-doubling bifurcations of stable orbits of minimal period progressing in agreement with the Sharkovsky ordering [4]. (B) The orbit diagram of a horizontal slice through the shrimp from (A) at \(\mu=0.35\) (dotted interval). Observe the occurrence of a period-doubling cascade in decreasing \(\rho\) and a subsequent progression through orbits of periods with odd factors, signifying alongside the negative Lyapunov exponents in panel A that trajectories near the saddle-focus exhibit stable behavior. Nevertheless, there still exist robust orbits of period 3 in intervals containing \(\mu\) throughout the shrimp. Such a period-3 orbit has been overlaid in red. Varying \(\rho\) throughout the shrimp continuously deforms this red orbit, preserving its 3-periodicity. (C) Juxtaposition of cobwebs of a period-3 orbit and an \(x_{1}=\mu\) orbit (from the saddle-focus) within the shrimp from panel A at a choice of \(\rho\) where the \(x_{1}=\mu\) orbit converges to a period-2 orbit.

Our computations of the Lempel-Ziv complexity [46] for a symbolic sequence at each parameter value in the \((\rho,\mu)\)-plane are showcased in fig. 12B. The Lempel-Ziv complexity is a measure of the complexity of binary sequences, related in purpose to the notion of Kolmogorov complexity; it is defined as the length of a partition of a finite binary sequence such that each element of the partition is the shortest substring not having already occurred, less the final element if it happens to be a duplicate. For instance, the binary sequence [010110010111] is partitioned as \(\{0,1,01,10,010,11,1\}\), so it has a Lempel-Ziv complexity of 6. After computing the Lempel-Ziv complexity \(C\) of a symbolic sequence of length \(N\), we normalize by taking \(\overline{C}=\frac{\ln(N)}{N}C\), as is done in our recent publication [32]. The region confined by the purple curve in the two-dimensional LZ-sweep, as shown in fig. 12B, displays sequences of minimal complexity, with quick convergence to unique fixed points for \(\mu\geq 0\) or period-2 orbits for \(\mu<0\). However, substantial regions associated with positive Lyapunov exponents (refer to fig. 12A) similarly exhibit low symbolic complexity: the chaotic trajectories there do not change sign, and thus do not interact with homoclinics. Within the region populated by homoclinics in the complexity scan, there is a "sheet" of high complexity interspersed with stability windows. These windows align with the tails of the shrimp structures visible in the Lyapunov-exponent scan in fig. 12A. The sheet appears as noise, seemingly induced by the sensitivity of symbolic sequences to perturbations of their generating trajectories, while the windows of reduced complexity indicate robust convergence to specific symbolic sequences. Furthermore, the geometric organization of the level sets of very small symbolic complexities within these shrimp tails - and also across much of the boundary of the region of nontrivial symbolic complexity - mirrors that of the homoclinic curves seen in the symbolic sequence scans from fig. 11, due to transients.
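The complexity measure used in these scans is compact to implement. A minimal sketch that reproduces the worked example above, together with the normalization \(\overline{C}=\frac{\ln(N)}{N}C\):

```python
from math import log

def lz_complexity(seq):
    """Lempel-Ziv complexity: length of the parsing of seq into shortest
    substrings not previously seen, minus one if the last word duplicates."""
    words, i, n, count = set(), 0, len(seq), 0
    while i < n:
        j = i + 1
        while j <= n and seq[i:j] in words:
            j += 1
        # seq[i:j] is the shortest new substring starting at i (or a
        # duplicate remainder if we ran off the end of the sequence)
        count += 1
        if j > n and seq[i:n] in words:
            count -= 1                 # final element is a duplicate
            break
        words.add(seq[i:j])
        i = j
    return count

def normalized_lz(seq):
    """Normalization used in the scans: C_bar = ln(N)/N * C."""
    n = len(seq)
    return log(n) / n * lz_complexity(seq)

assert lz_complexity("010110010111") == 6   # matches the worked example
```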
## IV Conclusions and future directions

In this study, we delved into the heart of chaos, exploring the rich dynamics inherent in low-dimensional systems of ODEs, particularly the map associated with the Shilnikov saddle-focus homoclinic bifurcation. Inspired by the foundational work of Sharkovsky on one-dimensional maps, our research adopted two primary approaches: the generation of binary sequences to symbolically represent the dynamical behavior at each point in the parameter space, and the subsequent geometric analysis of the one-dimensional map (3) to elucidate the homoclinic bifurcation structure. These techniques unveiled the intricate details of the homoclinic bifurcation structures relating to the saddle-focus, shedding light on the complex organization of these orbits. Utilizing the Lyapunov exponent enabled us to illustrate the chaotic regions and stability zones within the saddle-focus map's parameter plane. However, this method falls short when addressing multistability. Our research revealed that the stability region of the saddle-focus map dramatically narrows near the codimension-two point (\(\mu=0\), \(\rho=1\)), representative of the Belyakov case [22]. This discovery raises profound questions about the nature of chaos at nonzero \(\mu\), particularly as the \(\rho=1\) case corresponds to a nonhyperbolic saddle-focus, delicately balanced between the map's expansive and contractive behaviors. Moreover, the relationship between the one-dimensional saddle-focus map and the corresponding two-dimensional return map has nuances that may result in obscuring chaotic behavior in the full saddle-focus ODE system. The exploration of this theoretical frontier warrants deeper examination, and the one-dimensional map framework presents a promising avenue for this future endeavor, further building upon the pioneering work of L. P. Shilnikov in the study of two- and higher-dimensional return maps. Beyond this, there are additional aspects of both the stability and homoclinic structure that await scrutiny. The occurrence of multistability within the map and its relationship to periodic orbits, as well as their corresponding homoclinics in systems of ODEs featuring a saddle-focus, represent fertile ground for future investigation. In future research on these topics, we would like to:

* produce tools to extend our Lyapunov-exponent scans along stability branches,
* visualize 2-dimensional homoclinic submanifolds of the \((\rho,\mu,\omega)\)-parameter space by a method similar to the symbolic method we showcase in this paper, followed by an investigation of the homotopy types of these submanifolds, and
* develop a computational method for efficiently scanning the \((\rho,\mu)\)-parameter plane for the Sharkovsky-largest minimal-period orbit exhibited at each parameter choice, demonstrating the level of periodicity within the Sharkovsky order.

Investigation into these areas will not only enhance our understanding of the rich dynamics in such systems but also contribute to the broader theoretical framework for analyzing complex dynamical systems. In conclusion, our research stands as a testament to the enduring impact of Sharkovsky's groundbreaking work [1; 2; 3; 4] on one-dimensional maps. Our methods, influenced by his research, not only simplify the analysis of intricate dynamical structures but also offer a promising avenue for future investigations into similar low-dimensional systems.
The broad applicability of these techniques makes a significant contribution to the mathematical toolbox for studying complex dynamics, underscoring their potential to advance our understanding of chaos and complex dynamical systems.

## Acknowledgments

We thank the Brains & Behavior initiative of Georgia State University for the B&B graduate fellowship awarded to J. Scully.
2302.03862
CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks
Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications. With the rapid development of DNNs, efficient hardware architectures for deploying DNN-based applications on edge devices have been extensively studied. Emerging Non-Volatile Memories (NVMs), with their better scalability, non-volatility and good read performance, are found to be promising candidates for deploying DNNs. However, despite the promise, emerging NVMs often suffer from reliability issues such as stuck-at faults, which decrease the chip yield/memory lifetime and severely impact the accuracy of DNNs. A stuck-at cell can be read but not reprogrammed; thus, stuck-at faults in NVMs may or may not result in errors depending on the data to be stored. By reducing the number of errors caused by stuck-at faults, the reliability of a DNN-based system can be enhanced. This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques, to enhance the reliability of NVM-based DNNs in the presence of stuck-at faults. A data block remapping technique is used to reduce the impact of stuck-at faults on DNN accuracy. Additionally, by performing bit-level criticality analysis on various DNNs, the critical-bit positions in network parameters that can significantly impact the accuracy are identified. Based on this analysis, we propose an encoding method which effectively swaps the critical bit positions with those of non-critical bits when more errors (due to stuck-at faults) are present in the critical bits.
Thai-Hoang Nguyen, Muhammad Imran, Jaehyuk Choi, Joon-Sung Yang
2023-02-08T03:39:11Z
http://arxiv.org/abs/2302.03862v1
CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques for Emerging Memories-Based Deep Neural Networks

###### Abstract

Deep Neural Networks (DNNs) have emerged as the most effective programming paradigm for computer vision and natural language processing applications. With the rapid development of DNNs, efficient hardware architectures for deploying DNN-based applications on edge devices have been extensively studied. Emerging Non-Volatile Memories (NVMs), with their better scalability, non-volatility and good read performance, are found to be promising candidates for deploying DNNs. However, despite the promise, emerging NVMs often suffer from reliability issues such as stuck-at faults, which decrease the chip yield/memory lifetime and severely impact the accuracy of DNNs. A stuck-at cell can be read but not reprogrammed; thus, stuck-at faults in NVMs may or may not result in errors depending on the data to be stored. By reducing the number of errors caused by stuck-at faults, the reliability of a DNN-based system can be enhanced. This paper proposes CRAFT, i.e., Criticality-Aware Fault-Tolerance Enhancement Techniques, to enhance the reliability of NVM-based DNNs in the presence of stuck-at faults. A data block remapping technique is used to reduce the impact of stuck-at faults on DNN accuracy. Additionally, by performing bit-level criticality analysis on various DNNs, the critical-bit positions in network parameters that can significantly impact the accuracy are identified. Based on this analysis, we propose an encoding method which effectively swaps the critical bit positions with those of non-critical bits when more errors (due to stuck-at faults) are present in the critical bits. Experiments with the CRAFT architecture on various DNN models indicate that the robustness of a DNN against stuck-at faults can be enhanced by up to \(10^{5}\) times on the CIFAR-10 dataset and up to 29 times on the ImageNet dataset with only a minimal amount of storage overhead, i.e., 1.17%. Being orthogonal, CRAFT can be integrated with existing fault-tolerance schemes to further enhance the robustness of DNNs against stuck-at faults in NVMs.

Deep learning hardware, Emerging Memories, Fault-Tolerance, Neural Networks, Stuck-at Faults

## I Introduction

Deep Neural Networks (DNNs), a subset of Machine Learning (ML) algorithms, have demonstrated impressive effectiveness in various applications such as computer vision, natural language processing, and big data analysis. A typical DNN consists of multiple hidden layers sandwiched between an input layer and an output layer. This hierarchical design allows DNNs to solve complex programming tasks that appear to be infeasible with conventional programming approaches. However, despite their potential, DNNs often require an enormous amount of computational power and hardware overhead, which makes it difficult to deploy them in real-time computing applications often running on mobile devices. As a result of the rapid development of DNNs, there is an enormous increase in the demand for efficient and scalable hardware architectures for DNN deployment. To address the high computational cost of DNNs, various methodologies have been proposed to achieve hardware-efficient architectures [1, 2]. These techniques often focus on reducing the storage required by DNNs through network compression [1] and precision reduction [2].
Such methods have proven to be efficient; however, DNNs often need to sacrifice accuracy in exchange for a reduced implementation cost in resource-constrained devices. Memory plays a key role in applications involving large amounts of data, such as DNNs. Current charge-based memory technologies such as Dynamic Random-Access Memory (DRAM), Static Random-Access Memory (SRAM) and Flash are facing challenges in continuing technology scaling [4]. Moreover, as the technology scales down, conventional memory technologies become highly prone to charge leakage, which makes them a less attractive choice for data-intensive DNN applications. To cope with the issues posed by the conventional technologies, several emerging non-volatile memory technologies (NVMs) such as Resistive Random-Access Memory (ReRAM) and Phase Change Memory (PCM) have been extensively investigated over the past decade. With better scaling potential, better read performance and non-volatility [5], emerging NVMs are considered to be a potential replacement for the current charge-based memory technologies. Besides being used for storage, thanks to their analog characteristics, emerging NVMs have also played a major role in designing high-performance and energy-efficient In-Memory Computing (IMC) based accelerators for DNNs [6, 7, 8]. Such accelerators use emerging NVM cells (e.g., ReRAM, PCM) to store the network's parameters and perform the matrix-vector multiplication in-place by organizing the cells in a crossbar manner. With in-place computation, an NVM-based IMC architecture eliminates the data movement between memory and separate computing units, which is found to be very costly in conventional von Neumann architectures. These features make the emerging memories an ideal choice for future hardware implementations of DNNs. Despite their promising features, emerging NVMs often suffer from hard errors (i.e., stuck-at faults) [9, 10, 11] due to their low endurance and immature manufacturing process. A stuck-at fault occurs when the resistance/conductance state of an emerging NVM cell cannot be changed by a write operation. A stuck-at cell can still be read but not reprogrammed; thus, errors caused by stuck-at faults only arise when the stuck-at cell's state is not aligned with the desired data. Building on this insight, several fault-tolerance techniques have been proposed to increase the lifetime of an emerging NVMs-based memory system [12, 13, 14, 15, 16]. In NVMs-based DNN architectures, despite the inherent fault tolerance of DNNs, a small number of stuck-at NVM cells (especially those corresponding to critical bits) can still cause a catastrophic loss in DNN accuracy [17, 18]. Therefore, it is necessary to develop effective fault-tolerance enhancement techniques to mitigate such errors in NVMs-based DNN architectures. Existing works on tolerating stuck-at faults in emerging NVMs have targeted neuromorphic applications [19, 20, 21, 22, 23]. Despite being effective, these techniques often rely on an expensive retraining process for DNNs or on frequent auxiliary bits, leading to a high storage overhead. On the other hand, several architectural techniques have also been proposed to tackle the problem of stuck-at errors in emerging NVMs in general [12, 13, 14, 15, 16]. Such techniques also require a large amount of hardware storage and complex encoding/decoding mechanisms, making them infeasible for resource-constrained hardware with real-time performance requirements.
To address the problems of existing works, we propose multiple lightweight yet effective techniques, collectively named CRAFT, to tolerate errors caused by stuck-at faults in NVMs-based DNN architectures. The first technique, called Intra-Block Address Remapping, effectively remaps the weights inside a block of data so that the impact of stuck-at faults on DNN accuracy is minimized. The second method addresses the problem of single-bit errors by simply inverting the data in the data block. Results of these two techniques have been presented in our earlier work [24]. To further enhance the robustness of NVMs-based DNNs, a novel Criticality-Aware Bits Switching method is proposed, which further enhances the DNN's accuracy in the presence of stuck-at faults by addressing bit criticality in DNNs.

The rest of the paper is organized as follows. Section II covers the background of DNNs, emerging NVMs and stuck-at faults in emerging NVMs. Related works are presented in Section III. Section IV introduces the proposed Criticality-Aware Fault-Tolerance Enhancement Techniques (CRAFT). Finally, we evaluate the effectiveness of CRAFT against existing techniques in Section V. Section VI concludes the paper.

## II Background

### _Deep Neural Networks (DNNs)_

Artificial neural networks (ANNs) are computer algorithms inspired by the biological brains of animals. A layer of an ANN often consists of multiple nodes (i.e., neurons) connected to the next layer through multiple connections (i.e., synapses/weights). Typical ANNs are made up of an input layer, an output layer and multiple hidden layers in between. A subset of ANNs, the Deep Neural Network (DNN), is an ANN with a large number of hidden layers (hence the name _"Deep"_ Neural Network). Over the last decade, DNNs have made major breakthroughs in various fields, rendering the conventional programming approaches in these domains obsolete. Especially in the field of computer vision, Convolutional Neural Networks (CNNs) [3, 25] have attracted a lot of interest due to their exceptional effectiveness. Fig. 1 shows a typical CNN (ResNet-18) architecture. A CNN is often composed of four types of layers: convolutional, fully connected (FC), pooling and normalization. The convolutional layer is often used for extracting features of the input data by convolving the input with multiple relatively small filters. The output of the convolutional layer is then fed into the pooling layer to reduce the spatial size of the representation. In the ResNet architecture, as shown in the figure, the output data is propagated through multiple residual blocks consisting of two convolutional layers and a shortcut connection. Such blocks allow the CNN to increase its depth (i.e., number of layers) while preventing the vanishing/exploding gradient effect. At the end of the network, data undergoes a fully connected layer for classification followed by a softmax layer which outputs the probability of each class. The hierarchical structure of DNNs allows them to outperform conventional programming algorithms in solving complex problems by breaking them into simpler ones. However, DNNs require immense hardware resources for storing parameters and performing computations. This makes it extremely challenging to deploy a large-scale DNN on resource-constrained hardware like mobile devices.

Fig. 1: Typical Convolutional Neural Network architecture (ResNet-18 [3])
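As a concrete illustration of the residual block just described, here is a minimal PyTorch sketch; the stride-1, equal-channel configuration is our simplification of the general block:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convolutions with a shortcut connection, as in ResNet-18."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # shortcut connection bypasses the convs
```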
To tackle this challenge, several techniques have been proposed to reduce the network size, thus making DNNs easier to deploy [1, 26]. The cost of using these techniques is the accuracy loss of DNNs. Depending on the application, the accuracy loss may or may not be acceptable. In parallel with these approaches, several researchers have investigated emerging non-volatile memories (NVMs) to provide high bandwidth, storage density and non-volatility for DNN deployment. With such advantages over traditional charge-based memories, emerging NVMs are seen as ideal candidates for efficient and high-performance DNN applications.

### _Emerging Non-Volatile Memories (NVMs)_

Prominent emerging NVMs that are well-suited for DNN-based applications include Phase Change Memory (PCM) and Resistive RAM (ReRAM) [10]. Phase Change Memory consists of a chalcogenide phase-change material (e.g., Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\)) sandwiched between two electrodes. Data can be stored in PCM by modulating the phase-change material's state, which is either crystalline or amorphous. The PCM cell has a low resistance in the crystalline state and a high resistance in the amorphous state. ReRAM, on the other hand, consists of a metal-oxide switching layer (typically HfO\({}_{2}\)) placed in between two electrodes [27]. The resistance state of a ReRAM cell can be changed by altering the concentration of defects in the conductive filament. A high defect concentration toward the bottom electrode changes the state of the ReRAM cell to the low-resistance state, and a high defect concentration towards the top electrode leads to the high-resistance state. Both PCM and ReRAM have the promising features of non-volatility, high switching speed and better endurance compared to existing Flash memory [10]. However, due to certain intrinsic characteristics of the underlying technology and an immature manufacturing process, these memories often face reliability issues such as hard errors (stuck-at faults) [12], resistance drift [28, 29, 30, 31] and write disturbance [19, 32]. This poses a challenge when employing emerging NVMs for DNN applications.

### _Stuck-at Faults and DNNs Accuracy_

#### II-C1 Stuck-at Faults in Emerging NVMs

Stuck-at faults are a type of hard fault in emerging NVMs where the resistance state of an NVM cell is locked in a certain state. When the cell resistance is fixed at the low-resistance state, the fault is regarded as Stuck-at-Zero (SA0); if the cell resistance is stuck at the high-resistance state, the fault is considered to be Stuck-at-One (SA1). Depending on the data to be stored and the stuck-at state, a stuck-at fault may or may not cause an error in the system. Fig. 2 depicts the phenomenon of stuck-at faults in emerging NVMs. The first row shows the correct data expected to be stored and read from the memory. The second row shows the location and state of the stuck-at faults in the memory, and the third row indicates the erroneous data read from the memory. As shown in the figure, the last two stuck-at fault locations do not introduce any error in the data because these cells' desired data is in line with their stuck-at states. On the other hand, a mismatch between the desired data and the stuck-at state causes an error, which unintentionally flips the corresponding bit. Therefore, by aligning the desired data with the stuck-at resistance states, the number of readout errors in the system can be reduced. Previous works have used this property of stuck-at faults to mitigate their impact on the system [14].
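The read-out behavior depicted in Fig. 2 can be modeled in a few lines of Python; the bit-vector toy model below, including the mapping of SA0/SA1 to logical 0/1, is our own simplification:

```python
import numpy as np

def readout(written, sa0_mask, sa1_mask):
    """Cells stuck at 0 (SA0) or 1 (SA1) ignore the written bit;
    all other cells return what was written."""
    read = np.where(sa0_mask, 0, written)
    return np.where(sa1_mask, 1, read)

# A stuck-at cell causes an error only when its fixed state disagrees
# with the data we intended to store.
written  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sa0_mask = np.array([1, 0, 0, 0, 0, 0, 0, 1], dtype=bool)  # stuck low
sa1_mask = np.array([0, 0, 1, 0, 0, 0, 1, 0], dtype=bool)  # stuck high
errors = int(np.sum(readout(written, sa0_mask, sa1_mask) != written))
print(errors)  # -> 2: only the misaligned stuck cells corrupt the data
```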
According to the experiments using a real fabricated ReRAM array in [33], stuck-at zeros (SA0) and stuck-at ones (SA1) can be clustered in an entire column/row or distributed randomly in a ReRAM array. This stochastic nature of SAFs makes them hard to model with any specific distribution; thus, many previous studies [19, 23] have chosen the uniform distribution to model SAFs to reduce complexity during the fault estimation process. The same consideration is applied in this paper. Furthermore, since the proposed method does not focus on any specific type of eNVMs or SAFs, using a uniform distribution for the evaluation of SAFs allows CRAFT to be generalized and applicable to any use case.

#### II-C2 Impact of Stuck-at Faults on Accuracy of DNNs

A small number of stuck-at faults, especially in the critical bits, can cause a catastrophic change in DNN model accuracy [18, 23]. Fig. 3 shows the impact of stuck-at faults on different DNN models' accuracy evaluated on CIFAR-10 (a popular image dataset for computer vision). As illustrated in the figure, when the Bit Error Rate (BER) for stuck-at faults increases beyond a certain point, the classification error of the DNN model increases exponentially. For example, for ResNet-18 (a state-of-the-art Convolutional Neural Network) [3], when the BER increases beyond \(2\times 10^{-6}\), the classification error stays at around the 90% level, which is the same as if the network were randomly guessing the results regardless of the input and trained parameters. Similar results are observed when considering different DNN models or datasets. Experimental details with additional results are discussed in Section V.

## III Related Works

Several memory-centric works have been proposed to address the problem of stuck-at faults in emerging NVMs. Error Correcting Pointers (ECP) [12] detect and correct stuck-at faults by keeping the stuck-at cell address and data in additional storage. [14] presents a method to enhance the correction capability of an ECC by a simple inversion operation. SAFER [13] dynamically partitions the data such that only a single error is present in each partition and then uses a single-bit error correction code for recovery. [34] proposes a method to reduce the storage overhead of ECP by allocating a different number of error correction entries to different lines according to the number of hard errors present in the line. Although effective in tolerating stuck-at faults in eNVMs-based systems, these works often incur a large storage/hardware overhead, which is infeasible in the case of most resource-constrained edge devices. The proposed encoding techniques add minimal hardware overhead (1.17% in terms of storage) to the system yet still efficiently enhance the robustness of DNNs against SAFs.

Fig. 2: Example of stuck-at faults in emerging NVMs

Apart from memory-centric approaches, several works have been proposed to enhance the stuck-at fault tolerance of DNN-based systems. The work in [19] exploits the self-healing capability of DNNs and proposes a retraining method to reduce the impact of stuck-at cells in DNN accelerators. Such a scheme can be effective in recovering the accuracy degradation from stuck-at errors; however, re-training is required when implementing these techniques, which is difficult to do once the DNN has been deployed to edge devices. [35] redesigns the traditional error-correcting output code of DNNs using a collaborative logistic classifier, thus enhancing DNN robustness against stuck-at faults.
Despite being effective, this work also requires re-training (i.e., fine-tuning) to recover the accuracy impacted by SAFs. The need for re-training does not apply to the fault-tolerance technique proposed in this paper, since it is designed specifically for DNN inference on resource-constrained edge devices. More relevant to our proposed techniques, many data remapping and redundancy-based techniques have been proposed [20, 21, 22]. Specifically, [20] introduces a mapping technique and redundant crossbar arrays to compensate for the accuracy loss of a DNN model caused by stuck-at faults in the ReRAM crossbar array. Since this technique utilizes a redundant crossbar array that has the same size as the original array, its storage overhead and energy consumption are considerably large. [21] classifies weights according to their criticality to the model's accuracy, remaps the significant weights to fault-free memory cells and fine-tunes the DNN model to enhance the accuracy. By relying on the criticality of DNNs to address SAFs, such work is able to ease the re-training process and reduce storage overhead. Nonetheless, the storage overhead caused by such a technique can still be as large as 5%, which is much higher than that of our proposed technique. The method in [23] uses matrix transformations to make the weights in the ReRAM crossbar array more robust to stuck-at faults. Similar to other redundancy-based methods, [23] also adds a significant amount of hardware overhead compared to the proposed technique. Specifically, such a scheme can come at the expense of 8.19\(\times\) power consumption and 9.23\(\times\) area overhead. The proposed technique in this paper only adds six additional bits for encoding/decoding, making it highly efficient in terms of storage/energy overhead.

The existing memory-centric methods as well as DNN-focused techniques often require either a large amount of additional hardware overhead or a costly retraining process. In this paper, we propose a set of techniques that incur only minimal hardware overhead yet effectively enhance the fault-tolerance capability of DNNs in the presence of stuck-at faults. Moreover, the proposed techniques are orthogonal to the existing methods and can be implemented together with them to further enhance the robustness of DNNs.

## IV CRAFT: Criticality-Aware Fault-Tolerance Enhancement Techniques

The state of a stuck-at cell can be detected (by a read operation) but cannot be re-programmed to a different state. Leveraging this fact, we present multiple remapping and encoding techniques, collectively named CRAFT, to reduce the number of stuck-at errors in the DNN parameters (weights and biases). The proposed fault-tolerance techniques include _Intra-Block Address Remapping_, _Weight Inversion_ and _Criticality-Aware Bits Switching_.

### _Intra-Block Address Remapping_

The parameters (weights and biases) of a DNN are often stored as a group in a data block. For instance, a typical 64B cache-line-sized data block can store up to sixteen 32-bit floating-point weights/biases. The proposed remapping method operates within the typical data block to preserve memory access locality. Fig. 4 shows a simple example of 2-bit address remapping using the proposed _Intra-Block Address Remapping_ technique: an XOR operation on the address of each weight within a data block remaps the weight to a different location within the same block, as sketched in the code below.
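A sketch of the remapping mechanics in Python; the block geometry is arbitrary, and the mismatch count is used here only as a stand-in objective (the actual selection criterion, deviation minimization, is formalized in (1) below):

```python
import numpy as np

def remap(weights_bits, mask):
    """Intra-block address remapping: the weight at index i is stored at
    index i XOR mask, a permutation of the rows of the block."""
    idx = np.arange(weights_bits.shape[0])
    out = np.empty_like(weights_bits)
    out[idx ^ mask] = weights_bits
    return out

def best_mask(weights_bits, sa_mask, sa_value, n_addr_bits=4):
    """Try all 2**n XOR masks and keep the one whose layout leaves the
    fewest stored bits misaligned with the stuck-at cells.
    All arguments are (n_weights, bits_per_weight) arrays."""
    def mismatches(m):
        return int(np.sum(sa_mask & (remap(weights_bits, m) != sa_value)))
    return min(range(2**n_addr_bits), key=mismatches)
```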
This remapping makes it possible to reduce stuck-at errors by increasing the number of stuck-at cell states aligned with the desired bits. As shown in the figure, without remapping, the number of errors due to stuck-at cells is five. After the proposed remapping technique (XOR of the address with 01) is applied, the number of errors due to stuck-at faults is reduced to only one. By considering different XOR operations, the number of errors can be further reduced. The proposed method finally chooses the mapping which causes minimal impact (instead of the fewest errors, as explained in the next section) on DNN accuracy. Fig. 5 illustrates the proposed remapping technique using a 4-bit XOR operation; as shown in the figure, sixteen different mappings can be obtained.

Fig. 3: Impact of stuck-at faults on accuracy of different DNN models (ResNet-18, VGG-19 and MobileNet-V2 on CIFAR-10)

Fig. 4: Example of the Intra-Block Address Remapping technique

#### IV-A1 Minimizing the Impact of Faults on DNN's Accuracy

Errors that occur in significant bits of DNN parameters are more harmful to the network accuracy than errors in insignificant bits. This can be easily understood using the simple example shown in Fig. 6. As illustrated in the figure, frequent stuck-at faults in the insignificant bit positions result in only a small change from the actual weight value, while fewer faults in the significant bit positions cause a greater change to the weight value. This is indicated by measuring the deviation in decimal value of the erroneous weight from that of the actual weight. As shown in Fig. 6, three stuck-at faults in the insignificant bit positions result in a deviation of 1 (in decimal), while a single stuck-at fault in a significant bit position causes a deviation of 32 (in decimal). In floating-point representation, this criticality difference is even more evident due to the greater difference in the significance of exponent bits as compared to the other bit positions. Furthermore, in the case of DNNs, certain weights are more critical than others; thus, merely minimizing the number of faults in the memory would not always be helpful. Therefore, the proposed remapping method chooses to minimize the deviation (from the original value) of weights instead of simply minimizing the number of stuck-at faults present in memory. The proposed deviation minimization technique can be formulated as: \[w^{\prime}\gets w_{ri}\text{ s.t. }\delta=\min_{\delta\in\Delta}\sum_{i=0}^{N}|w_{ri} -w_{oi}| \tag{1}\] where \(w^{\prime}\) is the final weight that is used for inference, and \(w_{ri}\) and \(w_{oi}\) refer to the new weight after remapping (obtained while considering stuck-at faults) and the original weight, respectively. \(N\) is the number of weights within a selected data block, \(\delta\) is the minimum net deviation, and \(\Delta\) is the set of all possible net deviations for different remappings of the weights. The proposed _Intra-Block Address Remapping_ technique with the deviation minimization algorithm is illustrated in Fig. 7. For simplicity of illustration, four 4-bit weights with a 2-bit XOR operation for address remapping are depicted in the example. Five stuck-at faults (three SA1 and two SA0) are randomly distributed across the memory block, as shown in the figure. Initially, without any remapping, the readout data results in five errors, which leads to a net deviation of 13 (in decimal). Using a 2-bit XOR operation, four possible mappings can be obtained.
As illustrated in the figure, the mapping which uses the XOR 11 operation leads to the minimum net deviation (= 2 in decimal) from the actual weights. Therefore, the proposed technique chooses this mapping as the minimal mapping to store weights in the stuck-at-fault-prone emerging NVMs.

Fig. 5: Intra-Block Address Remapping for 16 weights using a 4-bit XOR operation

Fig. 6: Example of fault significance in a data block. Three errors in insignificant bit positions result in an insignificant change in value, while one error in a significant bit position causes a large deviation from the actual value.

Fig. 7: An example of the _Intra-Block Address Remapping_ technique with deviation minimization. The net deviation for each mapping is calculated by summing up all weight value deviations in the data block. The mapping with minimum net deviation is chosen as the minimal mapping for inference.

### _Weight Inversion_

When a data block contains only a single stuck-at fault, the error caused by this fault can be tolerated by simply inverting the data block. Based on this intuition, to further enhance the error tolerance of a DNN architecture, a simple weight inversion encoding is proposed. Fig. 8 illustrates an example of the proposed _Weight Inversion_ technique. As shown in the figure, an inversion operation is incorporated when there is a single stuck-at fault in the data block. After inversion and decoding, the read data results in zero errors and zero deviation from the actual weights. The _Weight Inversion_ method is combined with the _Intra-Block Address Remapping_ technique by considering the possible mappings with the original weight values as well as with the inverted weight values.

### _Criticality-Aware Bits Switching_

The proposed _Intra-Block Address Remapping_ and _Weight Inversion_ techniques efficiently enhance the fault tolerance of DNNs by orders of magnitude, as shown by the evaluation results in Sec. V. However, by only minimizing the net deviation between the erroneous and the actual weight values, errors that are critical to DNNs and unmaskable using these techniques can still significantly impact DNN accuracy. To address this, we introduce another encoding technique, incorporated on top of the first two techniques, that focuses on minimizing errors in the critical bit positions of DNNs. The bit-position criticality of DNNs against stuck-at faults is first analysed; based on the results of this analysis, the _Criticality-Aware Bits Switching_ technique is proposed.

#### IV-C1 Bit Position Criticality in DNNs

Certain bit positions in DNN parameters are more significant to the accuracy than the rest of the bit positions. In general, as discussed in Sec. IV-A1, errors in the higher-order bit positions (MSBs) have a greater impact on DNN accuracy than errors in lower-order bit positions (LSBs). However, in order to design an efficient error-tolerance mechanism for DNNs, a quantification of bit-position criticality is also necessary. Fig. 9 shows an experiment in which each bit position of the DNN's parameters is randomly disturbed with stuck-at faults. The experiments are performed on ResNet-18 using the CIFAR-10 dataset, with the classification error corresponding to each bit position averaged over 100 iterations. DNN parameters in both floating-point and quantized precisions are considered. Fig. 9(a) shows the bit-position criticality of the full-precision (32-bit) network and Fig. 9(b) illustrates the bit sensitivity of the unsigned 8-bit quantized network.
The stuck-at bit error rate (BER) is fixed (arbitrarily) at \(10^{-3}\) for both networks. As seen in the figures, errors in only a few MSBs can cause a high impact on DNN accuracy, while errors in the other bit positions have a negligible impact on the accuracy. Specifically, for the network with 32-bit floating-point parameters (Fig. 9(a)), the two MSB positions that cause a significant impact on the DNN's accuracy are the 30th and 26th bit positions. The reason for this can be explained by considering the DNN's weight distribution. As shown in previous work [36], all weights in the DNN have a value less than 1; hence, the value of the 30th bit is always fixed at 0. This explains why a bit-flip error in the 30th bit causes a catastrophic accuracy degradation. Another bit position that can cause a severe impact on the DNN's accuracy is the second-zero-occurrence bit (SZOB) position (considering that the 30th bit is the first-zero-occurrence position). As reported in [36], the SZOB can be in the 25th, 26th or 27th bit position depending on the network; e.g., the SZOB is found at the 26th bit position in 96% of the weights in AlexNet and VGG16. As illustrated in Fig. 9, the SZOB of ResNet-18 is found to be at the 26th position, and thus causes a severe impact on the DNN's accuracy compared to other bit positions. It is also worth noting that bit-flip errors occurring in the sign bit (bit 31) show an insignificant influence on the accuracy. This is because most weights in a DNN have small values and thus cause only a small change in deviation when the sign is flipped. For example, if a weight value is equal to 0.05 and a sign-bit error changes the weight to -0.05, the deviation in terms of absolute value is only 0.1, which can be tolerated by the DNN. The same applies when errors occur in the mantissa bits (bits 0-22), which can only produce a maximum deviation of 1.5. On the other hand, if an error occurs in an exponent bit of the weight, the deviation between the erroneous weight and the original weight can be as large as \(3.40\times 10^{38}\), which can severely impact the accuracy of the DNN, as in the case of the 30th bit shown in Fig. 9(a). For the 8-bit quantized network, because an unsigned 8-bit representation is used in this experiment, it is seen that only the 7th bit position (MSB) causes a severe degradation to the network accuracy. Note that the 32-bit floating-point network's accuracy deteriorates much more than that of the 8-bit quantized network. This is due to the difference in the range of representation for each parameter in the network: parameters in 8-bit quantized networks have a much smaller dynamic range than those of the full-precision network, whose parameters can therefore deviate further and suffer a higher drop in network accuracy. Similar observations have also been made in several previous works such as [17, 18], as well as in our evaluation results in Sec. V. The results shown in Fig. 9 are found to be consistent across different DNN models and datasets; therefore, the aforementioned observations are applicable to any configuration of DNNs. Based on this analysis of bit criticality, in the next section we propose a novel _Criticality-Aware Bits Switching_ technique for enhancing the stuck-at fault tolerance of DNNs.

Fig. 8: An example of the proposed Weight Inversion encoding technique

Fig. 9: Bit position criticality of DNNs against stuck-at faults. Each experiment is performed on ResNet-18 using the CIFAR-10 dataset. The classification error of each bit position is averaged over 100 iterations. (a) 32-bit Floating-Point network (b) 8-bit Quantized network
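The criticality experiment in Fig. 9 amounts to forcing a fixed bit position in a random fraction of the weights; a hedged sketch for float32 parameters (injecting SA0/SA1 as random forced bit values, and the function name itself, are our simplifications):

```python
import numpy as np

def inject_bit_faults(weights, bit, ber, seed=0):
    """Force bit `bit` of a random fraction `ber` of float32 weights to a
    random stuck value (0 or 1), mimicking SA0/SA1 at one bit position."""
    rng = np.random.default_rng(seed)
    w = np.ascontiguousarray(weights, dtype=np.float32).copy()
    bits = w.view(np.uint32)                      # shares memory with w
    hit = rng.random(bits.shape) < ber            # cells carrying a fault
    stuck = rng.integers(0, 2, bits.shape).astype(np.uint32)
    mask = np.uint32(1) << np.uint32(bit)
    bits[:] = np.where(hit, (bits & ~mask) | (stuck * mask), bits)
    return w                                      # faults are now in w

# e.g., inject_bit_faults(w, bit=30, ber=1e-3) is catastrophic,
# while bit=5 barely matters, in line with fig. 9(a).
```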
Based on the analysis of bit criticality, in the next section, we propose a novel _Criticality-Aware Bits Switching_ technique for enhancing the stuck-at fault-tolerance in DNNs. #### Iv-C2 Criticality-Aware Bits Switching Since the previously introduced techniques address stuck-at fault on a data block level, it is not guaranteed that errors in critical bits of each weight will be tolerated. Therefore, the proposed _Criticality-Aware Bits Switching_ technique operates on each individual weight in order to minimize the number of critical stuck-at errors within each weight. Since applying bits switching to each individual weight would require more auxiliary bits to encode/decode the data, the proposed method uses one unified switching operation for all weights within a data block to keep the storage overhead minimal. By doing this, the method only requires one bit per (512-bit) block for encoding/decoding which is a negligible hardware overhead. Nevertheless, the proposed method is flexible and can be applied in a more fine-grained manner with additional auxiliary bits to achieve greater robustness. Fig. 10 depicts an example of the proposed _Criticality-Aware Bits Switching_ technique. For the sake of illustration, three 8-bit weights with three critical bit positions being stuck-at faults are shown. The proposed method is incorporated by rotating the four MSBs (\(7^{\text{th}}\), \(6^{\text{th}}\), \(5^{\text{th}}\), \(4^{\text{th}}\)-bit) of each weight to the left and four LSBs (\(3^{\text{rd}}\), \(2^{\text{nd}}\), \(1^{\text{st}}\), \(0^{\text{th}}\)-bit) to the right. For example, in the first weight, the original data is "0111 0101" (in binary). When programmed to emerging NVMs, a stuck-at cell in the \(5^{\text{th}}\) bit position will cause a net deviation of \(2^{5}=32\) (in decimal) which in turns causes a severe impact on DNNs accuracy. After criticality-aware bit switching, the encoded data becomes "0101 0111" and the error caused by stuck-at fault in the critical cell is eliminated when reading from the data block. Intuitively, by switching MSBs with LSBs, errors in the MSBs and LSBs are also switched and thus, the method reduces the impact of errors in the MSBs significantly. It is important to note that, by doing intra-weight rotation, errors in the LSBs can also be increased. However, as discussed in the previous section, these errors does not cause a high impact on DNN's accuracy and the overall deviation in weight value would still be smaller. Based on the observation in the previous section, for the networks that uses 8-bit quantized parameters, four MSBs are switched with four LSBs. For 32-bit floating point networks, any MSBs rotation greater than five bits can be effective for MSBs fault-tolerance. In the proposed method, we choose to rotate ten MSBs with the LSBs in 32-bit weights to provide enough safety of margin. By combining the proposed _Criticality-Aware Bits Switching_ technique with the net deviation minimization, the robustness of the DNNs can be enhanced significantly, as shown in the evaluation results in Sec. V. ### _Implementation and Overhead_ Since most of DNN applications are often trained once and then deployed to multiple edge devices, the emerging NVMs can be programmed after the training process is done. This eliminates a deployment-time overhead of the proposed system. 
### _Implementation and Overhead_

Since most DNN applications are trained once and then deployed to multiple edge devices, the emerging NVMs can be programmed after the training process is done. This eliminates any deployment-time overhead of the proposed system. When used only for inference, the proposed techniques do not have any significant impact on performance because the remapping and encoding operations add only a few logic gate delays to the critical path. Fig. 11 illustrates the implementation of the proposed CRAFT architecture. Specifically, Fig. 11(a) and Fig. 11(b) show the overall architecture and the remapping+encoding logic of CRAFT, respectively. During inference, the DNN parameters read from the emerging NVMs are remapped/decoded based on the mapping/encoding that leads to the minimal impact of stuck-at faults. As shown in Fig. 11(b), the _Intra-Block Address Remapping_ technique uses simple XOR logic to map the address to the new address, while the _Weight Inversion_ method uses a NOT operation to flip the data. The _Criticality-Aware Bits Switching_ technique uniformly rotates the weight data with simple rewiring logic. These remapping and encoding operations, being trivial, add negligible timing overhead during inference. The exact storage overhead of CRAFT can be calculated by dividing the number of auxiliary bits by the total number of data bits. For a typical 64B data block size and 32-bit floating-point DNN weights, CRAFT adds four auxiliary bits for address remapping, one bit for weight inversion and one bit for weight rotation (bits switching). Thus, the proposed techniques in CRAFT incur only 1.17% storage overhead, which is negligible. The effectiveness of CRAFT can be further improved with a more fine-grained implementation using smaller data blocks for remapping/encoding, at the cost of an increase in the number of auxiliary bits.

## V Evaluation

The proposed stuck-at fault-tolerance techniques are evaluated using different experiments that consider various DNN models, datasets and parameter configurations. In the following, details of the experimental setup are presented, followed by the evaluation results and discussion.

### _Experimental Setup_

The simulations of stuck-at faults in DNNs are performed using the PyTorch framework [37]. All simulations are performed on an Intel(r) Xeon(r) CPU E5-1650 v4 with two Nvidia Titan XP GPUs having 24 GB of RAM. Table I lists the specifications of the evaluated DNN models and datasets. The proposed fault-tolerance enhancement techniques of the CRAFT architecture are evaluated for various state-of-the-art DNNs using popular datasets. The DNN models used in the experiments include _ResNet_, _VGG_, _MobileNet_ and _Inception_. The two datasets considered are _CIFAR-10_ and _ImageNet_. The _CIFAR-10_ dataset contains 60,000 RGB images, of which 50,000 are for training and 10,000 are for testing. Training set images are preprocessed by padding 4 pixels along the height and width and randomly cropping to a 32\(\times\)32 patch. Furthermore, a random horizontal flip operation is also performed on the training set images. DNN models evaluated on the _CIFAR-10_ dataset (i.e., ResNet-18, VGG-19 and MobileNet-V2) are trained using stochastic gradient descent with 0.9 momentum. The cross-entropy loss function is used as the objective function to classify the ten classes of input images [3]. The learning rate is kept constant at 0.1 during the training process. For comparison, CRAFT is also benchmarked against existing error-correction solutions for the same purpose.
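As a quick sanity check on the overhead figures, the following snippet evaluates both the CRAFT overhead from Sec. IV-D and the \(ECP_{n}\) expression used in the comparison below:

```python
from math import ceil, log2

d = 512                                  # bits per data block (64 B)
craft_overhead = (4 + 1 + 1) / d         # remap + inversion + rotation bits
ecp_overhead = lambda n: (1 + n + n * ceil(log2(d))) / d
print(f"CRAFT: {craft_overhead:.2%}, ECP_1: {ecp_overhead(1):.2%}")
# -> CRAFT: 1.17%, ECP_1: 2.15%
```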
As discussed earlier, the proposed techniques are flexible in terms of the trade-off between storage overhead and the degree of robustness against stuck-at faults (more auxiliary bits can be used to enhance the fault-tolerance capability of the system at the cost of increased storage overhead). Therefore, we compare the proposed method with existing ECC techniques of similar storage overhead. In conventional memory systems, ECC codes such as the (72, 64) Hamming code are often used for addressing soft errors. A typical (72, 64) Hamming code normally incurs more than 10% storage overhead, which is 10\(\times\) larger than the combined overhead of all of the proposed techniques of CRAFT. Moreover, ECC schemes for addressing soft errors in conventional memory systems are not as effective in emerging NVMs, as shown and discussed in [12]. Instead, Error Correcting Pointers (ECP) are often preferred for tolerating hard errors in emerging NVMs. For a fair comparison, we benchmark the proposed methods against the ECP variant that incurs a comparable storage overhead. For \(d\)-bit data, \(ECP_{n}\) is able to correct up to \(n\) bits with a storage overhead of \(\frac{1+n\left(\lceil\log_{2}d\rceil+1\right)}{d}\). For our experiments, we consider \(ECP_{1}\), which can correct a single bit error in a 512-bit data block at the cost of \(2.15\%\) storage overhead, which is still higher than that of CRAFT. As discussed in Sec. IV, for 32-bit data (one full-precision weight or four quantized weights), the proposed techniques together require approximately 1.17% storage overhead. This amount is about 2\(\times\) lower than that of the evaluated ECP. Despite having a minimal storage overhead, CRAFT is found to be more effective compared to ECP, regardless of the DNN model type or dataset size. The detailed results are presented in the next section. ### _Results and Discussion_ The evaluation results of the proposed methods for CIFAR-10 and ImageNet are shown in Fig. 12 and Fig. 13, respectively. The solid black line shows the baseline models (considering stuck-at faults) without any error-correction scheme. The dotted blue line indicates the models which incorporate ECP to mitigate stuck-at faults. The red dashed line illustrates models that use the _Intra-Block Address Remapping_ and _Weight Inversion_ techniques for stuck-at fault-tolerance. The green dash-dotted line shows results for the CRAFT architecture, which includes _Weight Remapping + Weight Inversion + Criticality-Aware Bits Switching_. As seen from the results, CRAFT improves the robustness of the baseline models significantly and outperforms the ECP method by a significant margin. Specifically, Fig. 12 shows the evaluation results of different fault-tolerance methods for ResNet-18 using the CIFAR-10 dataset. The baseline and ECP classification errors start to increase exponentially at bit error rates (BER) of \(1.5\times 10^{-7}\) and \(5\times 10^{-6}\), respectively. On the other hand, the classification error can be maintained below 10% at around \(5\times 10^{-5}\) BER when Weight Remapping and Weight Inversion are applied, and at \(2\times 10^{-4}\) when they are used together with Criticality-Aware Bits Switching in CRAFT. Fig. 12: Comparison of different fault-tolerance techniques for various DNNs using the CIFAR-10 dataset and 32-bit floating-point parameters (a) ResNet-18 (b) VGG-19 (c) MobileNet-V2. Fig. 13: Comparison of different fault-tolerance techniques for various DNNs using the ImageNet dataset and 8-bit quantized parameters (a) ResNet-50 (b) VGG-19 (c) Inception-V4.
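The storage-overhead comparison above is simple arithmetic; a short sketch (our code, following the formulas quoted above):

```python
from math import ceil, log2

def ecp_overhead(d: int, n: int) -> float:
    """ECP_n on a d-bit block: one full/enable bit plus n pointers, each of
    ceil(log2 d) address bits and one replacement bit."""
    return (1 + n * (ceil(log2(d)) + 1)) / d

def craft_overhead(d: int = 512) -> float:
    """CRAFT auxiliary bits per 512-bit block: 4 (address remapping)
    + 1 (weight inversion) + 1 (bits switching)."""
    return (4 + 1 + 1) / d

print(f"ECP_1: {ecp_overhead(512, 1):.2%}")  # 2.15%
print(f"CRAFT: {craft_overhead():.2%}")      # 1.17%
```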
In other words, these BER thresholds mean that CRAFT can increase the fault-tolerance by up to more than 1200\(\times\) compared to the baseline, and 3\(\times\) compared to using only the Weight Remapping + Inversion methods. A similar trend can also be observed for other networks using the CIFAR-10 dataset, such as VGG-19 and MobileNet-V2. For example, in VGG-19, while ECP can only improve the robustness of the model by up to 284\(\times\), CRAFT can increase the robustness to 12,320\(\times\) compared with the baseline model, which is orders of magnitude higher than ECP. The robustness improvement of CRAFT over the baseline model when using MobileNet-V2 is found to be 231\(\times\), which is 10\(\times\) higher than ECP and 4\(\times\) better than previously proposed methods [24]. To confirm the general applicability of the proposed techniques, we also perform experiments on larger DNNs (e.g., ResNet-50 and Inception-V4) with a larger dataset (i.e., ImageNet) (Fig. 13). As discussed in Sec. IV-C1, the 8-bit quantized networks show a significant improvement in robustness compared to the full-precision networks in Fig. 12 due to their smaller dynamic range. Regardless of this property, the proposed techniques still ensure a significant increase in robustness against stuck-at faults, as shown in Fig. 13. For example, for ResNet-50 using ImageNet (Fig. 13(a)), the network accuracy starts to drop at around \(2\times 10^{-4}\) BER, while CRAFT can maintain the accuracy up to \(5\times 10^{-3}\) BER, indicating 29\(\times\) more robustness than the baseline model. This trend is found to be consistent across the other DNNs evaluated on the ImageNet dataset. Specifically, CRAFT can enhance the robustness of VGG-19 and Inception-V4 by up to 12\(\times\) and 23\(\times\), respectively. A summary of the robustness improvement over the baseline configurations using different fault-tolerance techniques is given in Table II. The improvement in robustness is obtained by measuring the BER at which the classification error increases by more than 5% under each technique. As shown in the table, CRAFT outperforms the existing techniques by orders of magnitude (up to \(10^{4}\) times over the baseline). While achieving a significant improvement in robustness against SAFs, as mentioned in Sec. IV-D, CRAFT incurs a minimal amount of storage overhead (\(\approx\)1.17%), which makes it highly practical for implementation in resource-constrained edge devices. Stuck-at faults in emerging NVMs are expected to become more frequent as technology scales to smaller nodes. The two distinguishing features of CRAFT are its consideration of the criticality of errors and its flexibility to address more errors by employing fine-grained remapping with smaller block sizes. This approach makes CRAFT a robust and scalable method which can tackle future trends of hard errors in NVMs. Finally, the proposed techniques of CRAFT are orthogonal to existing error-correcting techniques; therefore, further enhancement in fault-tolerance can be achieved by implementing CRAFT alongside conventional techniques. ## VI Conclusion Hard errors such as stuck-at faults can severely impact the accuracy of Deep Neural Network based systems which use emerging Non-Volatile Memories. This paper introduces a set of robust techniques, collectively named CRAFT, to enhance the error-tolerance of DNNs against stuck-at faults. The proposed techniques are simple and lightweight, yet effective in tackling the problem of stuck-at faults in DNN-based systems.
Working in a hierarchical manner, the proposed CRAFT architecture remaps the weights, encodes them using a simple inversion method and switches the bits within the weights based on their criticality, thus minimizing the impact of stuck-at faults on the neural network's accuracy. The evaluation results show that CRAFT is able to enhance the robustness of the system by orders of magnitude. Specifically, for DNNs evaluated on CIFAR-10, CRAFT can increase the robustness by up to \(10^{4}\) times compared to the baseline model. For DNNs using ImageNet, CRAFT enhances the robustness of the model by up to 29 times. Being orthogonal, the proposed techniques of CRAFT can be easily combined with other existing methods to further increase the fault-tolerance of DNNs.
2305.08990
A Bi-CMOS electronic-photonic integrated circuit quantum light detector
Complementary metal-oxide-semiconductor (CMOS) compatible quantum technology enables scalable integration with the classical readout and control electronics needed to build quantum computers. Homodyne detectors have applications across quantum technologies including quantum computers, and they comprise photonics and electronics. Here we report a quantum noise limited monolithic electronic-photonic integrated homodyne detector, with an overall footprint of $80~\mu\mathrm{m} \times 220~\mu\mathrm{m}$, fabricated in a 250~nm lithography bipolar CMOS process. By monolithic integration of the electronics and photonics, overall capacitance is suppressed -- this is the main bottleneck to high bandwidth measurement of quantum light. We measure a 3~dB bandwidth of 19.8~GHz and a maximum shot noise clearance of 15~dB. This exceeds bandwidth limits of detectors with macroscopic electronic interconnects, including wirebonding and flip-chip bonding. This demonstrates CMOS electronic-photonic integration enhancing performance of quantum photonics.
Joel F. Tasker, Jonathan Frazer, Giacomo Ferranti, Jonathan C. F. Matthews
2023-05-15T20:08:48Z
http://arxiv.org/abs/2305.08990v1
# A Bi-CMOS electronic-photonic integrated circuit quantum light detector ###### Abstract Complementary metal-oxide-semiconductor (CMOS) compatible quantum technology enables scalable integration with the classical readout and control electronics needed to build quantum computers. Homodyne detectors have applications across quantum technologies including quantum computers, and they comprise photonics and electronics. Here we report a quantum noise limited monolithic electronic-photonic integrated homodyne detector, with an overall footprint of 80 \(\mu\)m\(\times\)220 \(\mu\)m, fabricated in a 250 nm lithography bipolar CMOS process. By monolithic integration of the electronics and photonics, overall capacitance is suppressed--this is the main bottleneck to high bandwidth measurement of quantum light. We measure a 3 dB bandwidth of 19.8 GHz and a maximum shot noise clearance of 15 dB. This exceeds bandwidth limits of detectors with macroscopic electronic interconnects, including wirebonding and flip-chip bonding. This demonstrates CMOS electronic-photonic integration enhancing performance of quantum photonics. Photonic integrated circuits (PIC) are a compelling approach to develop quantum technology [1; 2] and they underpin proposed architectures for optical quantum computing [3; 4]. CMOS compatible PIC platforms, such as silicon-on-insulator photonics [5; 6], offer paths to scaling up the manufacture of photonic devices for quantum technology in commercial foundries. This may prove critical in the construction of universal quantum computers, because the scale and performance required of components to build quantum computers is beyond anything yet constructed in information technologies [3]. Since initial experiments with silicon quantum photonic circuits [7; 8], CMOS compatibility for electronic-photonic integration has been a clear goal for quantum photonics. This is because it would enable integration at scale of components generating and utilising quantum states of light with the required high-performance classical readout and control electronics. But to date, the development of foundry ePIC platforms [6; 9] has been driven by the performance demands of classical applications, with demonstrations including 56 Gb/s direct detection receivers [10] and 128 Gb/s coherent receivers [11] for fibre optics telecommunications, and coherent detector arrays with active pixel amplifiers for 3D imaging [12]. Here we demonstrate electronic-photonic integration can be applied to enhance quantum technologies. We report integration in one monolithic ePIC chip (Figure 1) of all the electronics and silicon photonics needed for homodyne detection of quantum optical signatures [13]. The detector has a measured 3-dB bandwidth of \(19.8\pm 0.1\) GHz and a maximum measured shot noise clearance of 15 dB. By extrapolating the measured clearance, we infer shot noise limited performance beyond the bandwidth of our analysis equipment, measuring more than 10 dB at 26.5 GHz. Homodyne detectors can measure weak signals by interfering them with a local oscillator at an optical beam splitter. The resulting interference is observed in the subtraction of photocurrents from a pair of photodiodes placed at the two beamsplitter outputs. This subtraction current requires amplification, and when the amplification electronics are of sufficiently low noise, the homodyne detector is sensitive enough to reveal quantum noise signatures in the input.
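As a toy numerical illustration of this balanced subtraction (our sketch, not the device model): splitting a strong LO onto two photodiodes and subtracting the Poisson-distributed photocounts yields a difference signal whose variance equals the mean LO photon number, the shot-noise signature the detector is built to resolve.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 1e6                           # LO photons per sample window (illustrative)
n1 = rng.poisson(mean_photons / 2, 100_000)  # photodiode 1 counts
n2 = rng.poisson(mean_photons / 2, 100_000)  # photodiode 2 counts
diff = n1.astype(float) - n2                 # balanced subtraction
print(diff.var() / mean_photons)             # ~1.0: shot-noise-limited variance
```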
This is quantified by the clearance between the optical shot noise and the electronic noise of the detector. Quantum technology applications of homodyne detectors include squeezed-light-enhanced gravitational wave detection [14; 15], quantum state tomography [13], measuring continuous variables cluster states [16; 17] for quantum computing and for continuous variables quantum communication [18]. Waveguide integrated beamsplitters have been used for homodyne detection with silica-on-silicon [19] and lithium niobate PICs [20]. In silicon-on-insulator photonics, on-chip germanium p-i-n photodiodes have been integrated with waveguides and interfaced with discrete amplifier electronics for quantum random number generation and coherent state tomography [21], and as a chip-scale receiver for continuous variables quantum key distribution [22]. In these cases, the detector bandwidths were limited to \(\sim\)100 MHz and \(\sim\)10 MHz, respectively, by discrete electronics mounted on printed circuit boards (PCB). Consequently, micro-electronic amplifiers were wirebonded to silicon PICs, and the resulting detectors demonstrated 3-dB bandwidths of 1.7 GHz [23] and 1.5 GHz [24] - these detectors were respectively used to measure squeezing over a 9 GHz bandwidth and observe shot noise clearance out to 20 GHz. A remaining limiting factor in the speed of these detectors is the 20 fF - 100 fF capacitance overhead of the electrical bondpad interconnection [25] that interfaces the PIC with the integrated electronics. Flip-chip interfaces introduce similar capacitance overheads, and so also restrict the possible bandwidth of hybrid integration using macroscopic interconnects. In order to increase bandwidth further, monolithic integration is required. The reported single-chip homodyne detector is illustrated in Figure 1. It was designed and characterised in-house, with fabrication outsourced to the Leibniz Institute for High Performance Microelectronics (IHP). We chose IHP's SG25H5_EPIC process, which features a 250 nm silicon node, germanium-based photodiodes with \(f_{\rm 3dB}>60\) GHz and vertically integrated heterojunction bipolar transistors (HBTs) for RF applications using 250 nm lithography, with a specified transition frequency \(f_{T}=220\) GHz and a breakdown voltage of 1.7 V [9]. The RF performance of these HBTs is comparable to the lateral n-channel MOSFET transistors in references [26; 27; 28]. This is due to the vertical carrier transport of the HBT, meaning speed is less dependent on the lithography resolution, allowing vertical bipolar transistors to outperform NMOS devices at the same process node [29]. The HBTs are integrated in the same front-end-of-line process as the silicon-on-insulator waveguides and active optical components, such as modulators and photodiodes. This approach removes all bondpad and packaging parasitics, with connections between photonics and electronics made in the metal interconnect layers of the back-end-of-line (BEOL). The IHP fabrication process begins with an SOI wafer optimised for photonics, with a 220 nm silicon layer thickness and a 2 \(\mu\)m thick buried oxide layer. A 'local-SOI' approach is employed in which SOI regions that are to be used for BiCMOS devices are etched down to the silicon substrate. Bulk silicon is selectively regrown epitaxially in these regions and is subsequently planarised using chemical-mechanical planarisation.
Patterning of electronic and photonic structures is conducted in parallel and the electrical contacts to the photodiodes and transistors are made with the same process step [30]. Devices are then connected through a single shared BEOL with five metal layers. The transimpedance amplifier (TIA) used consists of an HBT common-emitter amplifier in shunt-feedback configuration, followed by a 50 \(\Omega\) buffer amplifier for interfacing with standard radio frequency (RF) test equipment (see Figure 1). The bandwidth of a single-stage shunt-feedback TIA with an ideal second-order Butterworth response is given by [31] \[f_{\rm 3dB}=\sqrt{\frac{A_{0}f_{A}}{2\pi C_{\rm in}R_{F}}}, \tag{1}\] where \(C_{\rm in}\) is the total capacitance at the amplifier input, \(R_{F}\) is the feedback resistance and \(A_{0}f_{A}\) is the gain-bandwidth product. A monolithic design reduces \(C_{\rm in}\) by minimising the stray capacitance between the photodiodes and amplifier due to bondpads or other wiring-related sources. This comes in addition to the already low capacitance associated with integrated photodiodes and high performance HBTs - integrated photodiodes with capacitances as low as 9 fF and amplifier input capacitances of order 100 fF have been reported [32; 33]. This is in stark contrast to the packaging- and layout-associated parasitic capacitance on a PCB of up to tens of picofarads [21; 34]. Eq. 1 demonstrates the fundamental trade-off between the detector bandwidth and the transimpedance gain from the subtraction photocurrent to the output voltage. Larger transimpedance gains are desirable to ensure the detector noise lies above the noise floor of any subsequent equipment and to provide the maximum shot noise clearance when a local oscillator field is applied. However, the practically usable \(R_{F}\) and achievable bandwidth are constrained by the total input capacitance and the gain-bandwidth product. In the case of a single-transistor amplifier, the gain-bandwidth product is proportional to the transistor transition frequency \(f_{T}\) via \(A_{0}f_{A}\approx(C_{I}/C_{L})f_{T}\), where \(C_{I}/C_{L}\) is the ratio of transistor input and load capacitances [35]. Figure 1: **A Bi-CMOS integrated homodyne detector for measuring quantum light.** **a** The detector schematic. The photonics include grating couplers (G), mode converters, strip waveguides, a multi-mode interference coupler beamsplitter (MMI) and germanium-silicon photodiodes (PDs). The electronics are a two-stage TIA design. The first transistor (Q\({}_{1}\)) forms a common-emitter shunt-feedback TIA; the second, (Q\({}_{2}\)), constitutes a 50 \(\Omega\) output buffer amplifier. R\({}_{\rm F}\), R\({}_{\rm C}\), R\({}_{\rm E}\) label the feedback, load and emitter resistors. **b** A 3D illustration of connections between components using three of the five metal layers in the SG25H5 EPIC process [9] used to fabricate the device. Light grey indicates silicon-on-insulator, dark grey indicates bulk silicon. **c** A microscope image of the detector illustrates scale. AMP labels the TIA and buffer amplifier stages. This device fits within a 80 \(\mu\)m \(\times\) 220 \(\mu\)m footprint. The input-referred current noise power spectral density
is given by \[I_{n,\rm TIA}^{2}(f)=\frac{4k_{B}T}{R_{F}}+\frac{2qI_{C}}{\beta}+2qI_{C}\frac{\left(2\pi C_{T}\right)^{2}}{g_{m}^{2}}f^{2}+4k_{B}TR_{b}\left(4\pi C_{PD}\right)^{2}f^{2}, \tag{2}\] where \(I_{C}\) is the HBT collector current, \(\beta\) is the DC current gain, \(g_{m}\) is the transistor transconductance and \(R_{b}\) the base resistance [35]. The first two terms are white noise terms, specifically the feedback-resistor Johnson noise and the base-current shot noise, respectively. The latter terms scale quadratically with frequency to a limit set by the photodiode junction capacitance and the total capacitance, including parasitics, presented to the amplifier input. The amplifier is implemented with two n-p-n transistors as shown in Figure 1 a, the design of which is provided as part of the SG25H5_EPIC process development kit (PDK). The transition frequency \(f_{T}\) is maximised for a particular collector current density--for our collector area, this corresponds to an optimal bias current \(I_{C}\) of 4.5 mA. Achieving the optimal collector current requires careful tuning of the biasing resistors \(R_{C}\) and \(R_{E}\) for a given transimpedance gain \(R_{F}\). We perform lumped-element SPICE simulations of the amplifier to optimise resistances, with \(V_{cc1}=2.2\) V and \(V_{cc2}=1.7\) V dictated by the transistor breakdown voltage. The chosen resistors are \(R_{F}=600\ \Omega\), \(R_{C}=250\ \Omega\) and \(R_{E}=35\ \Omega\), where the feedback resistance has been chosen to provide sufficient clearance above the fundamental thermal noise floor of the 50 \(\Omega\) termination resistor in RF test equipment. Photonic layout was performed using IPKISS and Cadence Virtuoso. Simulations, electronic design, layout and post-layout electronic simulations were performed using Cadence Virtuoso with PDK SPICE models provided by IHP. The gain spectrum of an ideal shunt-feedback TIA is that of a second-order Butterworth filter, given by \[G(f)=\frac{A_{0}^{2}}{1+\left(f/f_{\rm 3dB}\right)^{4}} \tag{3}\] where \(A_{0}^{2}\) is the absolute gain at zero frequency. The second-stage buffer operates as a unity-gain amplifier and can be assumed to have a bandwidth approximately equal to the transistor transition frequency [35]. A 3D model and a microscope image of the detector are shown in Figure 1 b & c. A 20 \(\mu\)m trace connects the photodiodes' subtraction signal to the amplifier input. Our parasitic extraction simulations estimate the parasitic capacitance of this interface at 7 fF, compared with 105 fF when simulating a single bondpad at the amplifier input. The ePIC is bonded to a purpose-made PCB designed for high-frequency operation. Vertical silicon capacitors (Murata UWSC, 1 nF) are used on the PCB for power supply decoupling on transistor and photodiode biases. The ePIC itself contains additional vertical metal-insulator-metal capacitors located next to each component for additional supply filtering (not shown in Figure 1 a). The TIA output wirebond is kept short to minimise parasitic inductance. We characterise the bandwidth, common-mode rejection ratio (CMRR), linearity and responsivity of the device. Light is coupled into the chip using grating couplers, and multimode interferometers (MMIs) are used as beamsplitters. A continuous-wave (CW) tuneable laser (PurePhotonics PPCL550) at 1550 nm, amplified with an erbium-doped fibre amplifier (PriTel), is used as the local oscillator (LO). A variable optical attenuator (VOA, OzOptics) adjusts the LO power.
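Plugging representative numbers into Eq. 1 reproduces the measured bandwidth's order of magnitude; in the sketch below, \(R_{F}\) is the design value quoted above, while \(C_{\rm in}\) and the gain-bandwidth product are our illustrative assumptions (the paper does not quote them directly):

```python
import math

R_F  = 600.0     # feedback resistance, ohms (design value above)
C_in = 120e-15   # total input capacitance, farads (assumed: PDs + HBT input + 7 fF trace)
A0fA = 177e9     # gain-bandwidth product, Hz (assumed, somewhat below f_T = 220 GHz)

f_3dB = math.sqrt(A0fA / (2 * math.pi * C_in * R_F))
print(f"f_3dB = {f_3dB / 1e9:.1f} GHz")  # ~19.8 GHz under these assumptions
```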
Noise measurements are recorded using a Keysight N9020B MXA electronic spectrum analyser (ESA) with a 26.5 GHz bandwidth. Photodiode and transistor biases are supplied from sourcemeters (Keysight U2722A & Keithley 2450), which are also used to monitor the two individual photocurrents of the diodes. We compare measured photocurrents when injecting the LO at the top and bottom MMI ports, finding a splitting ratio of 42:58 transmission to reflection. This imbalance results in a net photocurrent at the amplifier input and excess electronic noise at the amplifier output (see Appendix). We offset this effect by reducing the bias on the bottom photodiode until the photocurrents are matched, with a maximum difference of 80 \(\mu\)A at the maximum LO power. This reduces the quantum efficiency of the bottom diode to 72% of its maximum value. The top and bottom photodiodes are reverse biased at 2 V and -0.3 V, respectively, relative to an amplifier input voltage of 0.9 V. The transistor supplies, \(V_{cc1}\) and \(V_{cc2}\) (see Figure 1 a), are set to 2.2 V and 1.65 V. To account for signal loss from PCB transmission lines and coaxial cables, we measure the S21 parameters of a PCB co-planar waveguide test structure and of the coaxial cable used in the experiment using a Keysight N5225A network analyser. We perform a bandwidth measurement by optimising coupling at maximum power using the monitored photocurrent, then recording a series of spectra on the ESA as the VOA adjusts the input power from 13.5 dBm to -26.5 dBm. We also record the ESA displayed average noise level (DANL, the intrinsic ESA noise) for later subtraction from the data. All spectra are recorded at 100 kHz RBW over a 26.5 GHz span. The results are plotted in Figure 2. By fitting the detector response to a second-order Butterworth response, we obtain a 3-dB bandwidth of \(19.8\pm 0.1\) GHz. As the clearance of the detector extends beyond the bandwidth of our ESA, we estimate the shot noise bandwidth using Eq. 2. We fit the clearance of the detector with \(A/(B+Cf^{2})+1\), where \(A\) describes the optical shot noise, \(B\) the white noise terms of Equation 2 and \(C\) the latter frequency-dependent terms (see Appendix). The fit suggests that the shot noise clearance extends far beyond the measured bandwidth, vanishing beyond 100 GHz. In practice we anticipate the photodiode transit-time bandwidth to become limiting [36]. Grating coupler losses are measured using grating-to-grating test structures that we included on the ePIC chip. This yields an average of approximately 4.0 dB per coupler. We characterise the photodiode responsivity by comparing the sum of the measured photocurrents to the off-chip LO power and correcting for grating coupler losses. From this, we obtain a maximum photodiode responsivity of 0.47 A/W at 2 V bias, including MMI insertion loss. CMRR measurements are made by intensity-modulating the LO using an electro-optic modulator and comparing the signal with either both photodiodes biased as above, or one biased and the other disconnected to eliminate its photocurrent contribution. The ESA is set to 10 kHz RBW and a span of \(\pm 0.1\%\) of the modulation frequency. We observe a CMRR of 27 dB at 500 MHz (Figure 3), which is limited by the intrinsic splitting ratio of our MMI. In future devices, this value can be improved by substituting the current static MMIs with thermoelectrically tuneable Mach-Zehnder interferometers [23].
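The clearance fit just described is a standard least-squares problem; below is a minimal sketch in which the data arrays are placeholders standing in for the measured spectra, with the model following the quoted form \(A/(B+Cf^{2})+1\):

```python
import numpy as np
from scipy.optimize import curve_fit

def clearance_model(f_ghz, A, B, C):
    # Shot noise (A) over white (B) and f^2-scaling (C) electronic noise terms of Eq. 2
    return A / (B + C * f_ghz**2) + 1.0

f_ghz = np.linspace(0.5, 26.5, 200)                # ESA span (placeholder grid)
meas = clearance_model(f_ghz, 30.0, 1.0, 0.002)    # stand-in for measured clearance data
popt, _ = curve_fit(clearance_model, f_ghz, meas, p0=[10.0, 1.0, 0.01])
A, B, C = popt
f_3dB_clearance = np.sqrt((A - B) / C)             # frequency where clearance falls to 3 dB
```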
An ePIC quantum light detector is reported, combining photonics and readout electronics within a 80 \(\mu\)m \(\times\) 220 \(\mu\)m footprint. This was achieved thanks to the CMOS compatibility of silicon photonics, which can benefit the scalability and manufacturability of photonic quantum information processors and could be a potential necessity when considering the stringent timing limits imposed by feed-forward and delay lines [37]. The detector's \(19.8\pm 0.1\) GHz 3-dB bandwidth is an order of magnitude greater than the previous fastest demonstrations and surpasses the speed limits of homodyne detectors constructed with macroscopic wirebond interconnects [23]. The demonstration maintained shot noise efficiencies of at least 95%. Higher gains, and thus higher efficiencies, will be possible in future devices through multi-stage amplifier designs without sacrificing bandwidth [31]. Higher responsivity photodiodes have been demonstrated in silicon photonics, achieving 95% quantum efficiency with 30 GHz bandwidths in classical applications [38], and fibre-coupling efficiencies of 95% have been observed with edge couplers [39]. Incorporating such improvements will enable ePIC detectors to simultaneously meet all of the performance requirements of future quantum technologies. We believe the current detector's footprint and performance already open up applications of ePIC homodyne detectors to miniaturised, high-speed receivers for quantum communications [18; 22], higher-clock-rate cluster state characterisation [16; 17] and large arrays of coherent receivers for continuous variables photonic quantum computing [4] and photonic neural networks operating below the Landauer limit [40]. Figure 2: **Homodyne detector characterisation.** **a**, Power spectral density (PSD) of the detector where the ESA noise and the amplifier dark noise have been subtracted, in addition to cable and transmission line loss corrections. The legend represents the total photocurrent measured on both photodiodes. The dashed line shows a fit to Eq. 3 and gives a 3 dB bandwidth of \(19.8\pm 0.1\) GHz. **b**, PSD of the detector normalised to the amplifier electronic noise. **c**, Raw and electronic-noise-subtracted detector noise variance at 1 GHz against total photocurrent. The horizontal line represents the electronic noise level. A linear fit to the data (dashed) indicates a gradient of \(0.99\pm 0.01\), demonstrating the presence of vacuum shot noise up to a maximum clearance of 15 dB. Figure 3: **Detector common mode rejection ratio at 500 MHz.** The LO power is set to generate 10 \(\mu\)A of total photocurrent and the noise power is recorded with one or both photodiodes reverse biased. We observe a maximum of 27 dB CMRR at 500 MHz. Beyond detectors, we anticipate future applications of ePICs to increase the performance of quantum device control, including increasing the number of simultaneously controlled phase shift parameters beyond O(\(10^{2}\)) in highly programmable quantum devices [41]. We expect the combination of miniaturised readout and control within ePICs will reduce the requirements on optical delay lines for quantum technologies utilising state measurement and feedforward [37]. This is important for large-scale implementations of quantum technology including multiplexed sources of quantum states [42], quantum state engineering [43], and measurement-based and time-multiplexed quantum computing [4; 44].
2304.01323
A Random Group with Local Data Realizing Heuristics for Number Field Counting
We define a group with local data over a number field $K$ as a group $G$ together with homomorphisms from decomposition groups ${\rm Gal}(\overline{K}_p/K_p)\to G$. Such groups resemble Galois groups, just without global information. Motivated by the use of random groups in the study of class group statistics, we use the tools given by Sawin-Wood to construct a random group with local data over $K$ as a model for the absolute Galois group ${\rm Gal}(\overline{K}/K)$ for which representatives of Frobenius are distributed Haar randomly as suggested by Chebotarev density. We utilize Law of Large Numbers results for categories proven by the author to show that this is a random group version of the Malle-Bhargava principle. In particular, it satisfies number field counting conjectures such as Malle's Conjecture under certain notions of probabilistic convergence including convergence in expectation, convergence in probability, and almost sure convergence. These results produce new heuristic justifications for number field counting conjectures, and begin bridging the theoretical gap between heuristics for number field counting and class group statistics.
Brandon Alberts
2023-04-03T19:41:08Z
http://arxiv.org/abs/2304.01323v1
# A random group with local data ###### Abstract. We define a group with local data over a number field \(K\) as a group \(G\) together with homomorphisms from decomposition groups \(\operatorname{Gal}(\overline{K}_{p}/K_{p})\to G\). Such groups resemble Galois groups, just without global information. Motivated by the use of random groups in the study of class group statistics, we use the tools given by Sawin-Wood to construct a random group with local data over \(K\) as a model for the absolute Galois group \(\operatorname{Gal}(\overline{K}/K)\) for which representatives of Frobenius are distributed Haar randomly as suggested by Chebotarev density. We utilize Law of Large Numbers results for categories proven by the author to show that this is a random group version of the Malle-Bhargava principle. In particular, it satisfies number field counting conjectures such as Malle's Conjecture under certain notions of probabilistic convergence including convergence in expectation, convergence in probability, and almost sure convergence. These results produce new heuristic justifications for number field counting conjectures, and begin bridging the theoretical gap between heuristics for number field counting and class group statistics. ## 1. Introduction Class groups and Galois groups of unramified extensions of number fields \(K\) are predicted to be distributed along families of number fields according to certain random groups; that is, there exists a probability measure \(\mu_{\mathcal{F},\mathscr{C}}\) on the space of profinite groups such that conjecturally \[\lim_{X\to\infty}\frac{\#\{K\in\mathcal{F}:\mathscr{C}(K)\cong G,\ \operatorname{disc}(K)\leqslant X\}}{\#\{K\in\mathcal{F}:\operatorname{disc}(K)\leqslant X\}}=\mu_{\mathcal{F},\mathscr{C}}(G), \tag{1}\] where \(\mathcal{F}\) is a family of number fields and \(\mathscr{C}\) could be the class group, \(\operatorname{Gal}(K^{\text{un}}/K)\), or a similar construction such as the Galois group of the maximal unramified prime-to-\(2|\operatorname{Gal}(K/\mathbb{Q})|\) extension of \(K\) [10, 11, 12]. The classical version of this principle is the Cohen-Lenstra heuristics [13], which are shown by Friedman-Washington to be equivalent to predicting that the \(p\)-parts of class groups of quadratic fields are distributed as the cokernels of certain random \(p\)-adic matrices [10]. These heuristics have led to a greater understanding of the structure of unramified extensions and highlight interesting equidistribution properties in the absolute Galois group. Malle's conjecture and generalizations for the distributions of number fields are closely related to the distributions of unramified extensions, as they ask more general questions about the rate of growth of functions like \[\#\{K\in\mathcal{F}:\text{other conditions, disc}(K)\leqslant X\} \tag{2}\] as \(X\to\infty\). The classical example studied by Malle [14, 15] is \[N(K,G;X):=\#\{L/K:[L:K]=n,\ \operatorname{Gal}(L/K)\cong G,\ \operatorname{Nm}_{K/\mathbb{Q}}\!\operatorname{disc}(L/K)\leqslant X\}\] for \(G\subset S_{n}\) a transitive subgroup and \(\operatorname{Gal}(L/K)\subset S_{n}\) the Galois group of the Galois closure \(\widetilde{L}/K\) together with the action on the \(n\) embeddings \(L\hookrightarrow\widetilde{L}\). Conjectural rates of growth for these counting functions are obtained by appealing to local information and presuming an "average local-to-global principle".
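For orientation, the simplest instance of this counting problem is classical (a sanity check we add here, not taken from the paper): for \(G=C_{2}\subseteq S_{2}\) the nontrivial element has a single orbit on two points, so \(a(C_{2})=2-1=1\), there is one minimal-index class, and counting fundamental discriminants gives \[N(\mathbb{Q},C_{2};X)=\#\{L/\mathbb{Q}\text{ quadratic}:|\operatorname{disc}(L)|\leqslant X\}\sim\frac{6}{\pi^{2}}X,\] matching the shape \(cX^{1/a(G)}(\log X)^{b-1}\) with \(b=1\) that appears in the conjectures below.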
Despite the clear similarities between Malle's counting function and the counting functions appearing in class group statistics, no association between Malle's conjecture and random groups currently exists in the literature. The goal of this paper is to bridge the gap between the theory of distributions of unramified extensions and distributions of other families of number fields by constructing a "random object model" for the absolute Galois group \(G_{K}:=\operatorname{Gal}(\overline{K}/K)\), which can be used to witness Malle's conjecture. Towards this end, we define two categories on which we will build a random object modeling the absolute Galois group. Let \(G_{K_{p}}:=\operatorname{Gal}(\overline{K}_{p}/K_{p})\) be the decomposition group at the place \(p\) of \(K\). 1. the category of groups with local data, \(\operatorname{proGrp}(K)\), whose objects are pairs \((G,\phi)\) of a profinite group and a tuple \(\phi=(\phi_{p})\) of continuous homomorphisms \(\phi_{p}:G_{K_{p}}\to G\) for each place \(p\) of \(K\). See Definition 2.1 for the full definition, including morphisms. 2. the category of finite groups with finite local data \(\operatorname{Grp}(K)\) whose objects are triples \((G,S,\phi)\) of a finite group \(G\), a finite set of places \(S\) of \(K\), and \(\phi=(\phi_{p})_{p\in S}\) a family of continuous homomorphisms \(\phi_{p}:G_{K_{p}}\to G\) for each place \(p\in S\). See Definition 2.2 for the full definition, including morphisms. We will prove that \(\operatorname{proGrp}(K)\) is (up to a null set) isomorphic to the category of pro-objects of \(\operatorname{Grp}(K)\). The moment problem over categories of pro-objects has been solved in a wide class of cases by recent work of Sawin-Wood [14] for very general sequences of finite moments, subject to some mild conditions. We will prove that \(\operatorname{Grp}(K)\) satisfies these conditions and give a family of well-behaved sequences of finite moments in the sense defined by Sawin-Wood, see Proposition 2.4. In particular, using the main results of [14] we prove the following: **Theorem 1.1**.: _Let \(K\) be a number field. Then there exists a unique probability measure \(\mu_{K}^{\operatorname{MB}}\) on the isomorphism classes of \(\operatorname{proGrp}(K)\) such that_ \[\int_{\operatorname{proGrp}(K)}\#\operatorname{Epi}(\mathscr{G},(G,S,\phi))\ d\mu_{K}^{\operatorname{MB}}(\mathscr{G})=|G^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}\] _for each \((G,S,\phi)\in\operatorname{Grp}(K)\). Here, \(\mu(K)\) is the group of roots of unity in \(K\) and \(P_{\infty}\) is the set of infinite places of \(K\)._ The superscript "\(\operatorname{MB}\)" stands for "Malle-Bhargava", as we will show that \(\mu_{K}^{\operatorname{MB}}\) is, in some sense, a random group analog of the Malle-Bhargava principle [1, 1]. The finite moments of \(\mu_{K}^{\operatorname{MB}}\) are constructed from the Chebotarev density theorem and heuristic predictions for class group statistics. We will show in Lemma 5.4 that these finite moments agree with those predicted by the Malle-Bhargava local series. Malle's counting function is, up to the Galois correspondence, counting surjections from the absolute Galois group to \(G\) with bounded discriminant. Given a group with local data \(\mathscr{G}\), we can define the discriminant of a surjection \(\pi:\mathscr{G}\to G\) via the local discriminants \(\operatorname{disc}(\pi|_{G_{K_{p}}})\). Thus, Malle's counting function can be extended to groups with local data.
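As a concrete arithmetic check of the moment formula in Theorem 1.1 (our toy instance, not from the paper): take \(K=\mathbb{Q}\), so \(|\mu(K)|=2\) and \(|P_{\infty}|=1\), and take \(G=S_{3}\) with \(G^{\operatorname{ab}}\cong C_{2}\) and local data at the finite places \(S=\{2,3\}\); then the predicted expected number of epimorphisms is \[|C_{2}[2]|^{-1}\cdot|S_{3}|^{-|S\cup P_{\infty}|+1}=\frac{1}{2}\cdot 6^{-2}=\frac{1}{72}.\]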
The author recently proved a version of the Law of Large Numbers for counting functions on random objects in a category [1], showing that these functions often have a particular growth rate with probability \(1\). We convert the discriminant ordering for number fields to this context, and using the main results of [1] we prove that \(\mu_{K}^{\mathrm{MB}}\) satisfies Malle's conjecture _with a leading constant given as a convergent Euler product_ in the naively expected cases with probability \(1\). **Theorem 1.2**.: _Let \(K\) be a number field, \(G\subset S_{n}\) be a transitive group, and \(\mu_{K}^{\mathrm{MB}}\) the constructed distribution of groups with local data. With \(\mathscr{G}\) distributed according to \(\mu_{K}^{\mathrm{MB}}\), it follows that_ 1. _For any_ \(\epsilon>0\)_,_ \[\frac{\#\{\pi\in\mathrm{Surj}(\mathscr{G},G):|\mathrm{disc}(\pi)|\leq X\}}{X^{1/a(G)+\epsilon}}\xrightarrow{\text{a.s.}}0\] _as_ \(X\to\infty\)_, where the "a.s." stands for "converges almost surely"._ 2. _If_ \(G=\langle g\in G:\mathrm{ind}(g)=a(G)\rangle\) _is generated by minimal index elements then_ \[\frac{\#\{\pi\in\mathrm{Surj}(\mathscr{G},G):|\mathrm{disc}(\pi)|\leq X\}}{c(K,G)X^{1/a(G)}(\log X)^{b(K,G)-1}}\xrightarrow{\text{p.}}1\] _as_ \(X\to\infty\)_, where the "p." stands for "converges in probability"._ 3. _If every proper normal subgroup_ \(N\trianglelefteq G\) _satisfies one of_ 1. \(N\) _contains no minimal index elements, or_ 2. \(G\backslash N\) _contains at least two_ \(K\)_-conjugacy classes of minimal index, then_ \[\frac{\#\{\pi\in\mathrm{Surj}(\mathscr{G},G):|\mathrm{disc}(\pi)|\leq X\}}{c(K,G)X^{1/a(G)}(\log X)^{b(K,G)-1}}\xrightarrow{\text{a.s.}}1\] _as_ \(X\to\infty\)_, where the "a.s." stands for "converges almost surely"._ _Here_ \(a(G)\)_,_ \(b(K,G)\)_, and_ \(K\)_-conjugacy classes are defined as in Malle's conjecture (Conjecture 1.3), and_ \[c(K,G)=\frac{(\mathrm{Res}_{s=1}\zeta_{K}(s))^{b(K,G)}}{a(G)^{b(K,G)-1}(b(K,G)-1)!|G^{\mathrm{ab}}[|\mu(K)|]|\cdot|G|^{u_{K}}}\prod_{p\mid\infty}\left(\sum_{f\in\mathrm{Hom}(G_{K_{p}},G)}1\right)\cdot\prod_{p\nmid\infty}\left[\left(1-p^{-1}\right)^{b(K,G)}\left(\frac{1}{|G|}\sum_{f\in\mathrm{Hom}(G_{K_{p}},G)}p^{-\nu_{p}(\mathrm{disc}(f))/a(G)}\right)\right]\] _for \(u_{K}=\mathrm{rk}\,\mathcal{O}_{K}^{\times}\) the unit rank of \(K\)._ These methods are very robust, and can be applied to a number of generalizations of Malle's counting function. For the sake of clarity, we leave the most general version of this statement (such as restricting local conditions at infinitely many places) for a future paper. However, in the course of our proof we will require a more general version of Theorem 1.2. This is due to natural relationships between discriminant orderings and non-discriminant orderings that we take advantage of in the proof. This more general result is stated in Theorem 6.1 and includes, in particular, the product of ramified primes ordering. Theorem 1.2 also highlights the fact that \(\mu_{K}^{\mathrm{MB}}\) is a random group analog of the Malle-Bhargava principle and Malle's original conjecture - as it agrees with Malle's conjecture even in cases where Malle's conjecture is _wrong_. Structuring these predictions as a random group will help us to highlight what is going wrong with the Malle-Bhargava principle and look for ways to fix it. In Section 7, we discuss the known obstructions to Malle's conjecture and how they interact with the random group with local data \(\mu_{K}^{\mathrm{MB}}\).
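The invariants entering Theorem 1.2 are mechanical to compute; the sketch below (our code, assuming sympy is available) computes \(a(G)\) and the minimal-index conjugacy classes for \(G=S_{4}\) in its natural degree-4 action. Since in \(S_{n}\) conjugacy classes are cycle types, and classes of elements of order 2 are fixed by the cyclotomic action, the class count below also gives \(b(\mathbb{Q},S_{4})=1\):

```python
from sympy.combinatorics.named_groups import SymmetricGroup

def ind(g):
    """Malle's index: ind(g) = n - #{orbits of g} for g acting on n points."""
    return g.size - g.cycles  # .cycles counts cycles including fixed points

G = SymmetricGroup(4)                     # G = S_4 in its natural degree-4 action
nonid = [g for g in G.elements if not g.is_Identity]
a_G = min(ind(g) for g in nonid)          # a(S_4) = 1, attained by transpositions
min_classes = {tuple(sorted(g.cycle_structure.items()))
               for g in nonid if ind(g) == a_G}
print(a_G, len(min_classes))              # 1 1 -> predicted growth c * X * (log X)^0
```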
We refrain from making conjectures, but instead focus on how phrasing Malle's prediction in terms of a random group with local data clarifies these obstructions and gives an indication of how to produce improved predictions. ### Historical Background and Motivation For \(K\) a number field and \(G\subseteq S_{n}\) a transitive subgroup, Malle's conjecture can be rephrased via the Galois correspondence to be about counting surjections from the absolute Galois group \[\mathrm{Surj}(G_{K},G;X)=\{\pi:G_{K}\twoheadrightarrow G:\mathrm{Nm}_{K/\mathbb{Q}}\mathrm{disc}(\pi)\leq X\},\] where \(G_{K}\) denotes the absolute Galois group of \(K\) and \(\mathrm{disc}(\pi)\) is the discriminant of the field fixed by \(\pi^{-1}(\mathrm{Stab}_{G}(1))\). The Galois correspondence implies \(\#\mathrm{Surj}(G_{K},G;X)=|\mathrm{Aut}(G)|\cdot N(K,G;X)\), so determining the rate of growth of this function is an equivalent problem. **Conjecture 1.3** (Malle [13, 13]).: _Let \(G\subset S_{n}\) be a finite transitive group. Let \(a(G)=\min_{g\neq 1}\mathrm{ind}(g)\), where the index of an element is given by \(\mathrm{ind}(g)=n-\#\{\text{orbits of }g\}\). Then_ 1. _(Strong form) Let_ \(\chi:G_{K}\to\hat{\mathbb{Z}}^{\times}\) _act on_ \(G\) _by_ \(x.g=g^{\chi(x)}\)_, and let_ \(b(K,G)\) _be the number of orbits under the cyclotomic action of conjugacy classes_ \(c\subset G\) _for which_ \(\mathrm{ind}(c)=a(G)\) _is minimal. Then there exists a positive constant_ \(c(K,G)\) _for which_ \[\#\mathrm{Surj}(G_{K},G;X)\sim c(K,G)X^{1/a(G)}(\log X)^{b(K,G)-1}\] _as_ \(X\to\infty\)_._ 2. _(Weak form)_ \[X^{1/a(G)}\ll\#\mathrm{Surj}(G_{K},G;X)\ll_{\epsilon}X^{1/a(G)+\epsilon}\] _as_ \(X\to\infty\)_._ The strong form is known to be false in some cases, in particular \(C_{3}\wr C_{2}\subseteq S_{6}\) as shown by Klüners [13]. Yet, it is known to be true in many other cases including * abelian groups [13, 14], * \(S_{3}\) in degree 3 [15, 16] and degree 6 [17], * \(S_{4}\) and \(S_{5}\) in degree 4 and 5 respectively [1, 18], * \(D_{4}\) in degree 4 [1], * generalized quaternion groups [13], * most wreath products \(C_{2}\wr H\) [13], * \(A\times S_{n}\) for \(n=3,4,5\) and \(A\) an abelian group without certain small prime divisors [14, 15], and * \(\mathrm{Heis}_{3}\subseteq S_{9}\) with \(K=\mathbb{Q}\) [10]. The value of the constant \(c(K,G)\) is also the subject of investigation, but much less is known about what value to expect here. Bhargava originally formulated the Malle-Bhargava principle in part to predict the value of this constant when \(G=S_{n}\) is the symmetric group [1]. See [11] for a broad investigation in the case that \(G\) is abelian. Theorem 1.2 states that a group with local data \(\mathscr{G}\) distributed according to \(\mu_{K}^{\mathrm{MB}}\) satisfies Malle's conjecture for \(G\)-extensions with probability 1 (under some mild conditions on \(G\)). The category of groups with local data is built to resemble the absolute Galois group \(G_{K}\), being profinite groups with decomposition subgroups. In fact \(\mu_{K}^{\mathrm{MB}}\) was built out of properties of the absolute Galois group like Chebotarev density. It stands to reason that we could heuristically infer information about \(G_{K}\) from information about probability 1 events in \(\mu_{K}^{\mathrm{MB}}\), even though \(G_{K}\) is a deterministic object. Using a random model for a deterministic object has precedent, notably with the Cramér random model for the set of prime numbers [10, 11].
With such models, behavior that occurs \(100\%\) of the time is said to provide evidence that we should expect the same behavior for the corresponding deterministic object. For prime numbers, such random models are used to justify predictions like the Hardy-Littlewood conjecture, Goldbach's conjecture, and many other conjectures involving prime gaps. Along this line of thinking, Theorem 1.2 gives behavior for groups with local data with probability \(1\), which can be considered good evidence for \(G_{K}\) to share those properties. This form of justification is stated as a Vast Counting Heuristic in [1, Heuristic 1.7], where we can make predictions if we expect \(G_{K}\) to be "typical" among groups with local data distributed according to \(\mu_{K}^{\text{MB}}\). Of course, it is well known that Malle's conjecture is false as stated - Klüners provided the first counterexample in \(C_{3}\wr C_{2}\subseteq S_{6}\), for which Malle's predicted \(b\)-invariant is too small [13]. Klüners' counterexample witnesses some atypical behavior for \(G_{\mathbb{Q}}\) among groups with local data distributed according to \(\mu_{K}^{\text{MB}}\), specifically the behavior that \(\operatorname{Gal}(\mathbb{Q}(\zeta_{3})/\mathbb{Q})\) is a quotient of \(G_{\mathbb{Q}}\). For this reason, we do not attempt to make any conjectures in this paper. Our intention is to get the ball rolling on modeling counting functions in the style of Malle's conjecture with random objects, but we recognize that the absolute Galois group is known to have some atypical behaviors. At the end of the paper, we include a discussion of how Theorem 1.2 compares to the known cases of (and counterexamples for) Malle's conjecture. We do not attempt to solve these problems in this paper, but rather focus on explaining how to interpret these issues in the world of random groups with local data. It will be a goal of future work to use this framework to better capture behaviors of the absolute Galois group and create concrete predictions that are more accurate than the Malle-Bhargava principle. ### Layout of the Paper In Section 2 we construct the category of groups with local data and show that this is the category of pro-objects of a diamond category in the sense of [12]. Additionally, we prove Proposition 2.4 giving a family of "well-behaved sequences" as defined in [12]. These results prepare the category for solving the moment problem, i.e., constructing a probability measure with given finite moments, using the main results in [12]. This is precisely what we do in Section 3 for a particular sequence of finite moments modeling the absolute Galois group, constructing the measure \(\mu_{K}^{\text{MB}}\) to prove Theorem 1.1. In Section 5 we translate the discriminant ordering for number fields into the language of [1], that is, a sequence of \(L^{1}\)-functions \(f_{n}\) on the underlying category of finite objects. We then determine the moments of the ordering, which agree with the sum of coefficients of the Malle-Bhargava local series. The main results of [1] are then used to prove Theorem 1.2 in Section 6, as well as some suitable generalizations including the product of ramified primes ordering. In Section 7 we interpret the known issues of the Malle-Bhargava principle in the language of groups with local data.
In some cases, we show that these issues occur with probability \(0\) in \(\mu_{K}^{\text{MB}}\), suggesting that in these cases \(G_{K}\) is "not typical enough" to use the Vast Counting Heuristic as justification for the predicted growth rates. We do not make any conjectures in this section, but we do use this information to point towards what adjustments to the random model are likely to produce more accurate predictions. ### Notation \[G_{K} =\operatorname{Gal}(\overline{K}/K)\text{ the absolute Galois group of }K\] \[P_{K} =\{\text{places of }K\}\] \[P_{\infty} =\{p\in P_{K}|p\mid\infty\}\] \[G_{K_{p}} =\operatorname{Gal}(\overline{K}_{p}/K_{p})\text{ the absolute Galois group of }K_{p}\text{, where }p\text{ is a place of }K\] \[I_{p} =\text{ the inertia group of }\overline{K}_{p}/K_{p}\text{, where }p\text{ is a place of }K\] \[\operatorname{Fr}_{p} =\text{ a representative of the Frobenius element in }G_{K_{p}}\] \[I_{K} =\text{ the group of fractional ideals of }K\] \[|\mathfrak{a}| =\text{ the norm down to }\mathbb{Q}\text{ of a fractional ideal }\mathfrak{a}\in I_{K}\] \[\operatorname{disc}(\pi) =\prod_{p}p^{\operatorname{ind}(g_{p})}\text{ where }g_{p}\text{ generates }\pi(I_{p})\] \[\operatorname{ind}(g) =n-\#\{\text{orbits of }g\}\text{, where }g\in G\subseteq S_{n}\] \[\operatorname{inv} =\text{called an invariant, is some map }\prod_{p}\operatorname{Hom}(G_{K_{p}},G)\to I_{K}\] \[\operatorname{MB}_{\operatorname{inv}}(K,\Sigma,s) =\text{ the Malle-Bhargava local series, see Lemma 5.4}\] \[\operatorname{Grp}(K) =\text{ the category of finite groups with finite }K\text{-local data, see Definition 2.2}\] \[(G,S,\phi) \text{ denotes an object in }\operatorname{Grp}(K)\] \[\operatorname{proGrp}(K) =\text{ the category of groups with }K\text{-local data, see Definition 2.1}\] \[\mathscr{G} \text{ denotes an object in }\operatorname{proGrp}(K)\text{ with implicit local data given by }\phi_{\mathscr{G}}\] \[N(\mathscr{G},f_{n}) =\sum_{(G,S,\phi)\in\operatorname{Grp}(K)}f_{n}(G,S,\phi)\#\operatorname{Epi}(\mathscr{G},(G,S,\phi))\text{, see [Abb22, Definition 1.1]}\] \[M =\text{ a discrete measure on }\operatorname{Grp}(K)\text{ given by a sequence of finite moments}\] \[M(\{(G,S,\phi)\}) =M_{(G,S,\phi)}\] \[\mu_{K}^{\operatorname{MB}} =\text{ the unique measure determined by Theorem 1.1}\] \[M^{(j)} =\text{ the mixed moment induced by }\mu\text{, see Subsection 6.3}\] \[\xrightarrow{p.} \text{ converges in probability}\] \[\xrightarrow{a.s.} \text{ converges almost surely, i.e. converges on a measure 1 set}\] \[f(X)\ll g(X) \text{ there exists a constant }C\text{ such that }f(X)\leq Cg(X)\text{ for all }X\] \[f(X)=O(g(X)) \text{ there exists a constant }C\text{ such that }f(X)\leq Cg(X)\text{ for all }X\] \[f(X)=o(g(X)) \text{ means }\frac{f(X)}{g(X)}\to 0\text{ as }X\to\infty\] ## Acknowledgments The author would like to thank Melanie Matchett Wood for numerous discussions on the topic and direction of this paper over the course of several years. The author also thanks Nigel Boston, Yuan Liu, Peter Koymans, and Frank Thorne for helpful conversations and feedback. ## 2. The category of groups with local data In this section we define the category of groups with local data and realize this as a category of pro-objects in the language of [10]. By utilizing a number of tools proven in [10], we prove that this category satisfies the necessary hypotheses to apply Sawin-Wood's main results.
We give a family of well-behaved sequences in Proposition 2.4 in preparation for solving the moment problem in Section 3. ### The categories of groups with local data We make the following precise definition for groups with local data: **Definition 2.1**.: _We let \(\mathrm{proGrp}(K)\) denote the category of **profinite groups with \(K\)-local data**._ * _The objects of this category are pairs_ \((G,\phi)\) _of a profinite group_ \(G\) _with a family_ \(\phi=(\phi_{p})\) _of continuous homomorphisms_ \(\phi_{p}:G_{K_{p}}\to G\) _for each place_ \(p\) _of_ \(K\)_._ * _A morphism_ \(\pi:(G,\phi)\rightarrow(H,\psi)\) _is a continuous homomorphism_ \(\pi:G\to H\) _such that_ \(\pi\phi_{p}=\psi_{p}\) _for each place_ \(p\) _of_ \(K\)_._ _We will often refer to the objects as just "groups with local data" when \(K\) is clear from context, with the profiniteness being left implicit._ Any Galois extension of \(K\) comes with not just a Galois group, but a Galois group with local data given by \((\mathrm{Gal}(L/K),\phi_{L/K})\) where \(\phi_{L/K}|_{G_{K_{p}}}\) is given by the corresponding local extension \(G_{K_{p}}\rightarrow\mathrm{Gal}(L_{p}/K_{p})\hookrightarrow\mathrm{Gal}(L/K)\). **Remark:** Technically, a Galois group with local data is only well-defined up to conjugation of the image of each \(\phi_{L/K,p}\). We fix throughout a choice of embedding \(G_{K_{p}}\hookrightarrow G_{K}\) for each place \(p\) of \(K\) so that we can specify the Galois group with local data explicitly. This is mostly for convenience - the results of this paper will still hold without making this choice as long as all orderings and local conditions are chosen to be conjugation invariant. However, the work is significantly easier to follow if we do not have an extra conjugation relation floating around. We want to apply the results of [10] to \(\mathrm{proGrp}(K)\); however, this category has uncountably many isomorphism classes. Thus, we consider the pro-objects case in [10, Theorem 1.7 and 1.8], which makes sense as we allowed profinite groups in \(\mathrm{proGrp}(K)\). In order to apply these results, we need to find a category of finite objects for which \(\mathrm{proGrp}(K)\) is the corresponding category of pro-objects. It will not be enough to just restrict to pairs \((G,\phi)\) with \(G\) finite, as this will still have uncountably many objects. We also need to restrict the places at which we have local data. **Definition 2.2**.: _We let \(\mathrm{Grp}(K)\) denote the category of **finite groups with finite \(K\)-local data**._ * _The objects of this category are triples_ \((G,S,\phi)\) _of a finite group_ \(G\)_, a finite set of places_ \(S\) _of_ \(K\)_, and a family_ \(\phi=(\phi_{p})_{p\in S}\) _of continuous homomorphisms_ \(\phi_{p}:G_{K_{p}}\to G\) _for each place_ \(p\in S\)_._ * _A morphism_ \(\pi:(G,S,\phi)\rightarrow(H,S^{\prime},\psi)\) _is a continuous homomorphism_ \(\pi:G\to H\) _such that_ * \(S^{\prime}\subseteq S\)_,_ * _For each place_ \(p\in S\cap S^{\prime}\)_,_ \(\pi\phi_{p}=\psi_{p}\)_, and_ * _For each place_ \(p\in S\backslash S^{\prime}\)_,_ \(\pi\phi_{p}(I_{p})=1\)_._ _We will often refer to the objects as just "finite groups with finite local data" when \(K\) is clear from context._ It is clear that \(\operatorname{Grp}(K)\) has only countably many isomorphism classes, as there are countably many finite groups \(G\), countably many finite sets of places \(S\), and for each \(G\) and \(p\in S\) the set \(\operatorname{Hom}(G_{K_{p}},G)\) is finite.
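To see why \(\operatorname{Hom}(G_{K_{p}},G)\) is finite and easily enumerable in small cases, consider a toy example (ours, purely illustrative): for an odd prime \(p\) and target \(C_{2}\), every continuous homomorphism factors through the tame quotient \(\langle\sigma,\tau\mid\sigma\tau\sigma^{-1}=\tau^{p}\rangle\), so it is determined by the images of Frobenius \(\sigma\) and a tame inertia generator \(\tau\):

```python
from itertools import product

C2 = [0, 1]  # C_2 written additively

def local_homs_to_C2():
    """Continuous homs G_{Q_p} -> C_2 for an odd prime p: the wild inertia is
    pro-p with p odd, so it maps trivially, and the tame relation is automatic
    in an abelian 2-group; each hom is a pair (Frobenius image, inertia image)."""
    return [{"frob": a, "inertia": b} for a, b in product(C2, C2)]

# A finite group with finite local data (G, S, phi), schematically:
G, S = C2, [3, 5]
phi = {p: {"frob": 0, "inertia": 0} for p in S}  # the everywhere-unramified datum
print(len(local_homs_to_C2()))                   # 4, matching |Q_p^x / (Q_p^x)^2|
```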
Our definition of morphism reflects what we want out of this category: morphisms can only pass from local data at more places to local data at fewer places, reflecting that in the inverse limit we want to obtain local data at all places. The fact that we ask \(\pi\phi\) to be unramified at any place \(p\notin S^{\prime}\) is a bit more subtle. There are two reasons for this: * We want ramification data to be preserved so that these finite objects play nicely with discriminants. In particular, we want any epimorphism \((G,S,\phi)\to(G,S^{\prime},\psi)\) restricting to the identity on \(G\) to not forget inertia data. This will imply that whenever such an epimorphism exists, \(\operatorname{disc}(G,S,\phi)=\operatorname{disc}(G,S^{\prime},\psi)\). * Why not require that \(\pi\phi(G_{K_{p}})=1\)? This would be too restrictive. In the inverse limit with this property, only groups with local data that are totally split at all but finitely many places would occur as pro-objects. By the Chebotarev density theorem, this would exclude all Galois groups with local data and so miss the very structure we are attempting to model. We give a brief summary of the notion of level and the topology on these categories as defined in [11]. For the most part we will be able to directly cite results of [11], but there will occasionally be times that we need to delve into the specifics of this topology. The notion of level, in particular, is important for working with pro-objects. A **level** of \(\operatorname{Grp}(K)\) is a subset \(\mathcal{C}\) of the isomorphism classes of \(\operatorname{Grp}(K)\) which is the smallest downward-closed and join-closed subset containing some finite set of isomorphism classes, where * downward closed means that if \((G,S,\phi)\in\mathcal{C}\) and \(\operatorname{Epi}((G,S,\phi),(H,S^{\prime},\varphi))\neq\emptyset\) then \((H,S^{\prime},\varphi)\in\mathcal{C}\), and * join closed means that for any finite object \((G,S,\phi)\), if \((H_{1},S_{1},\phi_{1})\) and \((H_{2},S_{2},\phi_{2})\) are quotients of \((G,S,\phi)\) (i.e. there exists an epimorphism to them) with both belonging to \(\mathcal{C}\), then so does the join \((H_{1},S_{1},\phi_{1})\vee(H_{2},S_{2},\phi_{2})\), taken as the join in the lattice of quotients of \((G,S,\phi)\). The **level topology** on either \(\operatorname{proGrp}(K)\) or \(\operatorname{Grp}(K)\) is defined by taking basic opens \[U_{\mathcal{C},\mathscr{H}}=\{\mathscr{G}:\mathscr{G}^{\mathcal{C}}=\mathscr{H}\},\] where \(\mathcal{C}\) is a level, \(\mathscr{H}\in\mathcal{C}\), and \(\mathscr{G}^{\mathcal{C}}\) is the maximal quotient of \(\mathscr{G}\) belonging to \(\mathcal{C}\), or equivalently the join of every element of \(\mathcal{C}\) below \(\mathscr{G}\) in the lattice of quotients of \(\mathscr{G}\). We now prove that these categories satisfy the precise conditions needed for the tools in [11]. **Proposition 2.3**.: _Let \(K\) be a number field. Then_ 1. \(\operatorname{Grp}(K)\) _is a diamond category_ _[_11_, Definition 1.3]__, and_ 2.
\(\operatorname{proGrp}(K)\) _is (isomorphic to) the subcategory of pro-objects of_ \(\operatorname{Grp}(K)\) _for which every place_ \(p\) _of_ \(K\) _appears in the local data of some finite quotient_ _[_11_, Section 1.2]__.

The category of pro-objects of \(\operatorname{Grp}(K)\) can be shown, by the same proof as below, to be isomorphic to the category of objects \((G,S,\phi)\) for \(G\) a profinite group, \(S\) _any_ set of places of \(K\), and \(\phi=(\phi_{p})_{p\in S}\) a family of continuous homomorphisms \(\phi_{p}:G_{K_{p}}\to G\). The probability measure we define will be supported on the subcategory \(\operatorname{proGrp}(K)\), so it is not necessary to consider the full category of pro-objects.

Proof.: Sawin-Wood provide extremely general tools for the recognition of diamond categories. We will use three of their results here.

Sawin-Wood prove that the category of finite groups \(\operatorname{Grp}\) is a diamond category in [11, Lemma 6.19]. Given the set \(P_{K}\) of places of \(K\), let \(\mathcal{P}_{K}\) be the opposite category of the category whose objects are finite subsets of \(P_{K}\) and whose morphisms are inclusion maps. In this category, \(\operatorname{Hom}(S,S^{\prime})\) is either empty or contains only the single morphism corresponding to the inclusion \(S^{\prime}\subseteq S\). This category trivially satisfies the properties of a diamond category. The product category \(\operatorname{Grp}\times\mathcal{P}_{K}\) is then a diamond category by [11, Lemma 6.16].

The local data \(\phi\) can be seen as some "finite data" in this category. Let \(\mathcal{G}:\operatorname{Grp}\times\mathcal{P}_{K}\to\operatorname{FinSet}\) be the functor sending

\[(G,S)\mapsto\prod_{p\in S}\operatorname{Hom}(G_{K_{p}},G).\]

Then the category \((\operatorname{Grp}\times\mathcal{P}_{K},\mathcal{G})\) of pairs \((G,S,\phi)\) of \((G,S)\in\operatorname{Grp}\times\mathcal{P}_{K}\) together with \(\phi\in\mathcal{G}(G,S)\) is a diamond category by [11, Lemma 6.21]. This is precisely \(\operatorname{Grp}(K)\).

A pro-object of \(\operatorname{Grp}(K)\) is defined in [11, Subsection 1.2] to be a sequence \(X=(X^{\mathcal{C}})\) indexed by levels \(\mathcal{C}\) for which \((X^{\mathcal{C}^{\prime}})^{\mathcal{C}}=X^{\mathcal{C}}\) whenever \(\mathcal{C}\subset\mathcal{C}^{\prime}\). Let \(\mathcal{P}(\operatorname{Grp}(K))\) denote the category of pro-objects of \(\operatorname{Grp}(K)\). There certainly exists a functor \(F:\operatorname{proGrp}(K)\to\mathcal{P}(\operatorname{Grp}(K))\) defined by

\[(G,\phi)\mapsto((G,P_{K},\phi)^{\mathcal{C}})\]

and

\[\pi\mapsto\pi^{\mathcal{C}}.\]

This is essentially a tuple of forgetful functors from \(\operatorname{proGrp}(K)\) to the level \(\mathcal{C}\), one for each level. The image of this functor is precisely those pro-objects that involve local data at all places. Let \(\mathcal{D}\) denote this category. The inverse functor \(F^{-1}:\mathcal{D}\to\operatorname{proGrp}(K)\) is given by the inverse limit. If we write \(X^{\mathcal{C}}\) as \((G_{\mathcal{C}},S_{\mathcal{C}},\phi_{\mathcal{C}})\), then

\[X\mapsto\left(\varprojlim_{\mathcal{C}}G_{\mathcal{C}},\bigcup_{\mathcal{C}}S_{\mathcal{C}},\varprojlim_{\mathcal{C}}\phi_{\mathcal{C}}\right)\]

and

\[(\pi^{\mathcal{C}})\mapsto\varprojlim_{\mathcal{C}}\pi^{\mathcal{C}}.\]

It is clear that \(F^{-1}\circ F\) is the identity functor.
Given that the subcategory \(\mathcal{D}\) of \(\mathcal{P}(\operatorname{Grp}(K))\) consists of precisely those objects for which \(\bigcup S_{\mathcal{C}}\) is the set of all places, we see that \(F\circ F^{-1}\) is also the identity functor. 

### Well-behaved sequences

The results of Sawin-Wood [11] apply to sequences of moments which are "well-behaved", i.e. they do not grow too fast. More explicitly, Sawin-Wood call a sequence of finite moments \(M_{(G,S,\phi)}\) "well-behaved" if, for each level \(\mathcal{C}\) and each \((F,S^{\prime},\psi)\in\mathcal{C}\), the series

\[\sum_{(G,S,\phi)\in\mathcal{C}}\sum_{\pi\in\operatorname{Surj}((G,S,\phi),(F,S^{\prime},\psi))}\frac{|\mu((F,S^{\prime},\psi),(G,S,\phi))|}{|\text{Aut}(G,S,\phi)|}Z(\pi)^{3}M_{(G,S,\phi)}\]

is absolutely convergent, where \(\mu(A,B)\) is the Möbius function on the lattice of quotients, \(Z(\pi)\) is the number of elements between \((G,S,\phi)\) and \((F,S^{\prime},\psi)\) which satisfy the lattice distributive law, and \(M_{(G,S,\phi)}\) are the moments in question.

**Proposition 2.4**.: _Let \(M_{(G,S,\phi)}\) be a sequence of finite moments on the isomorphism classes of \(\operatorname{Grp}(K)\). Suppose there exist real constants \(f(S)\) and \(e(S)\) depending only on \(S\) such that \(M_{(G,S,\phi)}=O(f(S)|G|^{e(S)})\). Then the sequence \(M_{(G,S,\phi)}\) is well-behaved in the sense of [10]._

Proposition 2.4 can be seen as an analog of the corresponding result for groups: Sawin-Wood prove in [10, Corollary 6.13] that if \(M_{G}=O(|G|^{n})\) for some real number \(n\), then \(M_{G}\) is well-behaved in the category of finite groups. Proposition 2.4 is essentially of the same strength, requiring very little control as \(S\) varies and no control as \(\phi\) varies.

Proof.: In practice, checking well-behavedness might be a bit of a chore. Sawin-Wood provide some useful tools for us to shorten this process. Recall that the category \(\operatorname{Grp}(K)\) is given by \((\operatorname{Grp}\times\mathcal{P}_{K},\mathcal{G})\) for the functor of finite data \(\mathcal{G}(G,S)=\prod_{p}\operatorname{Hom}(G_{K_{p}},G)\). The case of well-behavedness in a category with finite data is already studied by Sawin-Wood in [10, Lemma 6.22]. The sequence \(M_{(G,S,\phi)}=O(f(S)|G|^{e(S)})\) is well-behaved if the sequence

\[\sum_{\phi\in\prod_{p}\operatorname{Hom}(G_{K_{p}},G)}M_{(G,S,\phi)}=O\left(f(S)\prod_{p\in S}|\operatorname{Hom}(G_{K_{p}},G)|\,|G|^{e(S)}\right)\]

is well-behaved in \(\operatorname{Grp}\times\mathcal{P}_{K}\).

Sawin-Wood do not address well-behavedness in product categories, but many of the features of the well-behavedness sum factor over the product. Consider that in the product category we necessarily have

\[\operatorname{Aut}(G,G^{\prime})=\operatorname{Aut}(G)\times\operatorname{Aut}(G^{\prime}),\]
\[\mu((F,F^{\prime}),(G,G^{\prime}))=\mu(F,G)\mu(F^{\prime},G^{\prime}),\]
\[Z(\pi_{1},\pi_{2})=Z(\pi_{1})Z(\pi_{2}).\]

Moreover, each level in the product category is contained in a product of levels \(\mathcal{C}_{1}\times\mathcal{C}_{2}\) from the individual categories. One immediately proves the following result:

**Lemma 2.5**.: _Let \(C_{1}\) and \(C_{2}\) be two diamond categories. If the sequences \((M_{G})_{G\in C_{1}}\) and \((M_{G^{\prime}})_{G^{\prime}\in C_{2}}\) are well-behaved in their respective categories, then the sequence \((M_{G}M_{G^{\prime}})_{(G,G^{\prime})\in C_{1}\times C_{2}}\) is well-behaved in the product category._

We leave the details of the proof to the interested reader, as we will not actually be able to use this result.
We make no such assumption that our moment sequence factors over the product, and in fact the upper bound \(f(S)|G|^{e(S)}\) does not factor over the product. Luckily, it turns out that the category \(\mathcal{P}_{K}\) is _particularly_ nice for the well-behavedness property.

**Lemma 2.6**.: _Let \(C\) be a diamond category and \(\mathcal{N}\) be the opposite category of finite subsets of \(\mathbb{N}\) under inclusion. The sequence \(M_{(G,S)}\) is well-behaved in the category \(C\times\mathcal{N}\) if, for each fixed object \(S\in\mathcal{N}\), the sequence \(M_{(G,S)}\) is well-behaved in \(C\)._

Here we remark that \(\mathcal{P}_{K}\) and \(\mathcal{N}\) are isomorphic as categories, regardless of the choice of base field \(K\). This isomorphism comes from a choice of bijection from the countable set \(P_{K}\) to \(\mathbb{N}\).

Proof.: Any level \(\mathcal{C}\) of \(\mathcal{N}\) consists solely of the finitely many subsets of some finite set \(S\subseteq\mathbb{N}\). In the category \(\mathcal{N}\) every morphism is an epimorphism, and for any object \(S\) the epimorphisms out of \(S\) correspond precisely to the finitely many subsets of \(S\). Thus, for any product level \(\mathcal{C}=\mathcal{C}_{1}\times\mathcal{C}_{2}\) we separate the well-behavedness sum as

\[\sum_{(G,S)\in\mathcal{C}}\sum_{\pi\in\operatorname{Surj}((G,S),(F,S^{\prime}))}\frac{|\mu((F,S^{\prime}),(G,S))|}{|\operatorname{Aut}(G,S)|}Z(\pi)^{3}M_{(G,S)}\]
\[=\sum_{S\in\mathcal{C}_{2}}\sum_{\pi_{2}\in\operatorname{Surj}(S,S^{\prime})}\frac{|\mu(S^{\prime},S)|}{|\operatorname{Aut}(S)|}Z(\pi_{2})^{3}\left(\sum_{G\in\mathcal{C}_{1}}\sum_{\pi_{1}\in\operatorname{Surj}(G,F)}\frac{|\mu(F,G)|}{|\operatorname{Aut}(G)|}Z(\pi_{1})^{3}M_{(G,S)}\right).\]

The first two summations are finite, and the inner two summations are absolutely convergent by the well-behavedness of \(M_{(G,S)}\) in \(C\) for each object \(S\). Thus the entire summation is convergent.

By [10, Corollary 6.13], any sequence \(M_{G}=O(|G|^{n})\) for some real number \(n\) is well-behaved over \(\operatorname{Grp}\). It is known that the decomposition groups \(G_{K_{p}}\) have finite rank for each place \(p\), depending on \(K\) and \(p\). Therefore

\[M_{(G,S)}=\sum_{\phi\in\prod_{p}\operatorname{Hom}(G_{K_{p}},G)}M_{(G,S,\phi)}=O\left(f(S)\prod_{p}|\operatorname{Hom}(G_{K_{p}},G)|\,|G|^{e(S)}\right)=O_{S}\left(|G|^{e(S)+O_{K,S}(1)}\right),\]

which is necessarily well-behaved in \(\operatorname{Grp}\) for each fixed \(S\in\mathcal{P}_{K}\) (with \(K\) fixed throughout). Thus by Lemma 2.6, it is well-behaved in \(\operatorname{Grp}\times\mathcal{P}_{K}\), concluding the proof of well-behavedness of the sequence \(M_{(G,S,\phi)}\).

## 3. Constructing the Malle-Bhargava measure

In this section, we prove Theorem 1.1 giving the existence and uniqueness of a probability measure \(\mu_{K}^{\operatorname{MB}}\) modeling the absolute Galois group. We do this by constructing a sequence of finite moments \(M_{G}\) out of the Chebotarev density theorem, then applying [10, Theorem 1.8]. This result states that given any sequence of measures on the isomorphism classes of pro-objects whose finite moments converge to a well-behaved sequence, the measures themselves weakly converge to a unique measure on the isomorphism classes of pro-objects.

### A sequence of measures approximating the absolute Galois group

Write \(P_{\infty}=\{p\mid\infty\}\) for the set of infinite places.
For \(S\) a finite set of places containing all the infinite places, we define the pro-free product

\[F_{K,S}=\mathop{\ast}_{p\in S}G_{K_{p}}.\]

This is _not_ a group with local data, as we do not have any local data at places outside of \(S\). Informed by the Chebotarev density theorem and class field theory, for any tuple \((r_{x})\) of elements \(r_{x}\in F_{K,S}\) for \(x\in\{0,...,|S|-1\}\cup(P_{K}\backslash S)\) we define the quotient

\[\mathscr{F}_{K,S}(r)=F_{K,S}/\langle r_{0},...,r_{|S|-1}\rangle.\]

The \(r_{1},...,r_{|S|-1}\) come from class field theory, given by the rank of the \(S\)-unit group \(\mathcal{O}_{K,S}^{\times}\). These \(|S|-1=|S\backslash P_{\infty}|+(|P_{\infty}|-1)\) relations correspond to the \(n+u\) relations from [10], with \(n\) corresponding to \(|S\backslash P_{\infty}|\), the number of finite places in \(S\), and \(u\) corresponding to the unit rank \(u_{K}=\mathrm{rk}\,\mathcal{O}_{K}^{\times}=|P_{\infty}|-1\). The relation \(r_{0}\) models the relations in class field theory coming from the roots of unity in the base field \(\mu(K)\). This relation does not appear in [10] as they only consider unramified extensions whose order is coprime to \(|\mu(K)|\).

The elements \(r_{p}\) of the tuple for \(p\notin S\) do not contribute relations to the underlying group, but instead are used to specify local data. This gives \(\mathscr{F}_{K,S}(r)\) canonical local data \(\phi=(\phi_{p})\) by

* \(\phi_{p}\) is the composition \(G_{K_{p}}\hookrightarrow F_{K,S}\rightarrow\mathscr{F}_{K,S}(r)\) if \(p\in S\), and
* \(\phi_{p}|_{I_{p}}=1\) and \(\phi_{p}(\mathrm{Fr}_{p})=r_{p}\) if \(p\notin S\).

Informed by Chebotarev density (which states that Frobenius elements vary Haar randomly within the absolute Galois group) and class group heuristics (which predict that the unit group embeds Haar randomly into the ideles), we define the following probability measures.

**Definition 3.1**.: _Let \(S\) be a finite set of places containing all infinite places. Define_

\[\mu_{K,S}^{\mathrm{MB}}(A)=\mathrm{Prob}\left(\mathscr{F}_{K,S}(r)\in A\right)\]

_for any set \(A\) in the Borel \(\sigma\)-algebra of \(\mathrm{proGrp}(K)\), where each of the \(r_{x}\) are taken to vary independently Haar random from the following spaces:_

* \(r_{0}\) _is taken to vary Haar randomly in the preimage of_ \(F_{K,S}^{\mathrm{ab}}[|\mu(K)|]\) _in_ \(F_{K,S}\) _under the abelianization map, and_
* \(r_{1},r_{2},...,r_{|S|-1}\) _and_ \(r_{p}\) _for_ \(p\notin S\) _are taken to be independently Haar random in_ \(F_{K,S}\)_._

Let \(\mathrm{ab}:F_{K,S}\to F_{K,S}^{\mathrm{ab}}\) denote the abelianization map. The distinct behavior of \(r_{0}\) is to ensure that there is a surjective homomorphism \(\mathcal{O}_{K,S}^{\times}\rightarrow\mathrm{ab}(\langle r_{0},r_{1},...,r_{|S|-1}\rangle)\) from the \(S\)-units of \(K\), where a generator of \(\mu(K)\) is sent to \(r_{0}\) and a basis for the free part of \(\mathcal{O}_{K,S}^{\times}\) is sent to \(r_{1},...,r_{|S|-1}\). The ranks agree by Dirichlet's unit theorem, noting that \(P_{\infty}\subset S\). By local class field theory, the abelianization is given by

\[F_{K,S}^{\mathrm{ab}}\cong\prod_{p\in S}K_{p}^{\times}.\]

Varying the relations \(r_{0},...,r_{|S|-1}\) Haar randomly corresponds to choosing a Haar random homomorphism \(\mathcal{O}_{K,S}^{\times}\rightarrow\prod_{p\in S}K_{p}^{\times}\), thus choosing a random image of the \(S\)-units. This construction is built with Malle's conjecture in mind.
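For instance, take \(K=\mathbb{Q}\) and \(S=\{\infty,q\}\) for a single finite prime \(q\), so that

\[F_{\mathbb{Q},S}=G_{\mathbb{R}}\ast G_{\mathbb{Q}_{q}}\qquad\text{and}\qquad|S|-1=1.\]

Here \(\mu(\mathbb{Q})=\{\pm 1\}\) and \(\mathbb{Z}[1/q]^{\times}=\{\pm 1\}\times q^{\mathbb{Z}}\), so Definition 3.1 chooses \(r_{0}\) Haar randomly in the preimage of \(F_{\mathbb{Q},S}^{\mathrm{ab}}[2]\), modeling the image of \(-1\); chooses \(r_{1}\) Haar randomly in \(F_{\mathbb{Q},S}\), modeling the image of \(q\); and chooses \(r_{p}\) for primes \(p\neq q\) Haar randomly to specify Frobenius-type local data at the remaining places.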
For a given real number \(X\), there are only finitely many extensions \(L/K\) with discriminant bounded above by \(X\). Let \(L_{X}\) be the compositum of all such extensions. Then the Galois group with local data \((\mathrm{Gal}(L_{X}/K),\phi_{L_{X}/K})\) is a quotient of \(\mathscr{F}_{K,S}(r)\) for at least one nontrivial tuple \(r\), where \(S=S_{X}\) is chosen large enough to generate \(\mathrm{Gal}(L_{X}\cap K^{ur}/K)\) and \(r\) is some tuple defining relations compatible with \(G_{K}\). The philosophy from class group statistics that the unit group has Haar random image in the group of \(S\)-ideles informs the choice to vary \(r_{0},r_{1},...,r_{|S|-1}\) randomly in the model. The Chebotarev density theorem informs the choice to allow \(r_{p}\) for \(p\notin S\) to vary Haar randomly in the model. Put together, this heuristic reasoning aligns with \(\mu_{K,S}^{\mathrm{MB}}\). In the limit as \(X\rightarrow\infty\), we will need larger and larger sets of places \(S_{X}\), so it makes sense to model \(G_{K}\) with a limit of \(\mu_{K,S}^{\mathrm{MB}}\) as \(S\) tends towards the set of all places, \(P_{K}\).

### The proof of Theorem 1.1

Constructing a measure \(\mu_{K}^{\mathrm{MB}}\) that is the limit of \(\mu_{K,S}^{\mathrm{MB}}\) is precisely the purpose of [10, Theorem 1.8]. It will suffice to compute the finite moments of \(\mu_{K,S}^{\mathrm{MB}}\) in the limit as \(S\) tends towards the set of all places, which we will check are well-behaved using Proposition 2.4 and correspond to a unique measure using [10, Theorem 1.8].

**Proposition 3.2**.: _Let \(K\) be a number field and \((G,S,\phi)\in\mathrm{Grp}(K)\). Then for any finite set of places \(S^{\prime}\supseteq S\) containing all infinite places_

\[\int\#\mathrm{Epi}(\mathscr{G},(G,S,\phi))\ d\mu_{K,S^{\prime}}^{\mathrm{MB}}=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}.\]

This proposition is the source of the finite moments in Theorem 1.1. We remark that this result is stronger than evaluating the limit of finite moments. The containment \(S^{\prime}\supseteq S\) will eventually be true in the limit, so that

\[\lim_{S^{\prime}\to P_{K}}\int\#\mathrm{Epi}(\mathscr{G},(G,S,\phi))\ d\mu_{K,S^{\prime}}^{\mathrm{MB}}=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}\]

because the sequence is eventually constant.

Proof.: For a fixed \((G,S,\phi)\in\mathrm{Grp}(K)\), we consider that if \(S\subseteq S^{\prime}\) then

\[\int\#\mathrm{Epi}(\mathscr{G},(G,S,\phi))\ d\mu_{K,S^{\prime}}^{\mathrm{MB}}=\sum_{\begin{subarray}{c}\varphi\in\mathrm{Hom}(F_{K,S^{\prime}},G)\\ \varphi|_{F_{K,S}}=\phi\\ \varphi(I_{p})=1\text{ if }p\in S^{\prime}\setminus S\end{subarray}}\int\#\mathrm{Epi}(\mathscr{G},(G,S^{\prime},\varphi))\ d\mu_{K,S^{\prime}}^{\mathrm{MB}}.\]

Each \(\phi\) can be understood as a homomorphism \(F_{K,S}\to G\) via the universal property of the pro-free product. The set \(\mathrm{Epi}(\mathscr{F}_{K,S^{\prime}}(r),(G,S^{\prime},\varphi))\) can have at most one element, given by \(\varphi\) if \(\varphi\) factors through the quotient \(F_{K,S^{\prime}}\to\mathscr{F}_{K,S^{\prime}}(r)\). This happens if and only if each relation belongs to the kernel.
The relations vary independently Haar randomly, so it follows that

\[\int\#\mathrm{Epi}(\mathscr{G},(G,S^{\prime},\varphi))\ d\mu_{K,S^{\prime}}^{\mathrm{MB}}=\prod_{i=0}^{|S^{\prime}|-1}\mathrm{Prob}\left(r_{i}\in\ker\varphi\right)=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{-|S^{\prime}|+1}.\]

There are precisely \(|G|\) unramified continuous homomorphisms \(G_{K_{p}}\to G\) for finite places, so the summation includes an extra factor of \(|G|\) for each \(p\in S^{\prime}\backslash(S\cup P_{\infty})\). Thus the integral is given by

\[\int\#\mathrm{Epi}(\mathscr{G},(G,S,\phi))\ d\mu_{K,S^{\prime}}^{\mathrm{MB}}=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{|S^{\prime}\backslash(S\cup P_{\infty})|-|S^{\prime}|+1}=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}.\]

We are now ready to prove Theorem 1.1.

Proof of Theorem 1.1.: The sequence of moments

\[M_{(G,S,\phi)}=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}=O(|G|^{0})\]

satisfies the hypothesis of Proposition 2.4, and so is well-behaved in the sense of [10]. Thus, Proposition 3.2 and [10, Theorem 1.8] together imply the existence of a measure \(\mu_{K}^{\text{MB}}\) on the category of pro-objects of \(\text{Grp}(K)\) for which \(\mu_{K,S}^{\text{MB}}\to\mu_{K}^{\text{MB}}\) weakly as \(S\to P_{K}\) and \(\mu_{K}^{\text{MB}}\) has the prescribed finite moments. As the \(\mu_{K,S}^{\text{MB}}\) have total measure \(1\), so too does \(\mu_{K}^{\text{MB}}\), making it a probability measure.

Thus, it suffices to show that \(\mu_{K}^{\text{MB}}\) is supported on \(\text{proGrp}(K)\). It is the case that the \(\mu_{K,S}^{\text{MB}}\) are supported on \(\text{proGrp}(K)\) by construction, so it is tempting to say that \(\mu_{K}^{\text{MB}}\) is as well because of weak convergence. However, this is not a property held by weakly convergent sequences of measures in general. The fact that \(P_{K}\) is countable saves us here.

For each place \(p\) of \(K\), let \(f_{p}\) be the function on the pro-objects of \(\text{Grp}(K)\) defined by

\[f_{p}(G,S,\phi)=\begin{cases}1&p\notin S\\ 0&p\in S,\end{cases}\]

where we recall that a pro-object can have local data at any set of places \(S\). This function is continuous, as

\[f_{p}^{-1}(0)=\bigcup_{\mathcal{C}}\bigcup_{\begin{subarray}{c}(G,S,\phi)\in\mathcal{C}\\ p\in S\end{subarray}}U_{\mathcal{C},(G,S,\phi)}\]

is a union of basic opens and

\[f_{p}^{-1}(1)=\bigcup_{\mathcal{C},\ p\in S(\mathcal{C})}\bigcup_{\begin{subarray}{c}(G,S,\phi)\in\mathcal{C}\\ p\notin S\end{subarray}}\Bigg{(}U_{\mathcal{C},(G,S,\phi)}\backslash\bigcup_{\begin{subarray}{c}(G,S\cup\{p\},\psi)\in\mathcal{C}\\ \psi|_{S}=\phi\end{subarray}}U_{\mathcal{C},(G,S\cup\{p\},\psi)}\Bigg{)},\]

where \(S(\mathcal{C})\) is the set of primes appearing in at least one isomorphism class in the level \(\mathcal{C}\). The innermost union is a finite union of basic opens, which are in fact clopen in the level topology defined in [11]. Thus, the set difference is open, and this preimage is again a union of open sets. Thus \(f_{p}\) lifts to a continuous function on the pro-objects.

The expected value of a bounded continuous function converges along weakly convergent sequences of measures. The fact that \(\mu_{K,S}^{\text{MB}}\) is supported on \(\text{proGrp}(K)\) implies

\[\int f_{p}\ d\mu_{K}^{\text{MB}}=\lim_{S\to P_{K}}\int f_{p}\ d\mu_{K,S}^{\text{MB}}=0.\]

Thus, the set of pro-objects without local data at \(p\) is a null set.
There are only countably many places \(p\), so by countable additivity we find that the complement of \(\text{proGrp}(K)\) is a null set, i.e. \(\mu_{K}^{\text{MB}}\) is supported on \(\text{proGrp}(K)\).

## 4. Outlining the Proof of Theorem 1.2

Malle's classical counting function for the transitive subgroup \(G\subseteq S_{n}\) is achieved by the ordering \(\text{disc}_{X}^{G}:\text{Grp}(K)\to\mathbb{R}\) defined by

\[\text{disc}_{X}^{G}(H,S,\phi)=\begin{cases}1&H\cong G,\ |\text{disc}(\phi)|\leq X,\text{ and }S=\{|p|\leq X\}\\ 0&\text{else}.\end{cases}\]

The author defines the corresponding counting function on a category in [1] by

\[N(\mathscr{G},\text{disc}_{X}^{G}):=\sum_{(G,S,\phi)\in\text{Grp}(K)}\text{disc}_{X}^{G}(G,S,\phi)\#\text{Epi}(\mathscr{G},(G,S,\phi)).\]

Theorem 1.2 will be proven utilizing the results of [1]. This will take place in (roughly) three important steps:

1. We first prove that Malle's counting function agrees with that of \(N(\mathscr{G},\operatorname{disc}_{X}^{G})\), that is
\[N(\mathscr{G},\operatorname{disc}_{X}^{G})=\#\{\pi\in\operatorname{Surj}(\mathscr{G},G):|\operatorname{disc}(\pi)|\leq X\}.\]
This will be a consequence of Lemma 5.2, proven in Section 5.
2. We then compute the first moment of this ordering with respect to the discrete measure \(M\) induced by the finite moments of \(\mu_{K}^{\operatorname{MB}}\), identifying it with the partial sums of the Malle-Bhargava local series. This is Lemma 5.4, also proven in Section 5.
3. Finally, we bound the mixed moments of the ordering so that the law of large numbers results of [1] apply. This is Lemma 6.4, proven in Section 6. As the most technical part of the paper, Lemma 6.4 is where the magic happens and is the reason we can apply the results of [1]. In an effort to not obscure the ideas behind Lemma 6.4, we restrict to the simplest cases available.

## 5. Multiplicative orderings and the Malle-Bhargava local series

The goal of this section is to complete steps (I) and (II) in full generality. We will define the notions of an admissible invariant and an admissible family of local conditions to work over, and prove Lemma 5.4 giving the explicit correspondence between orderings and the Malle-Bhargava local series.

### Ordering by admissible invariants

We will study more general orderings as well as general local restrictions. This expanded setting has been widely considered in the study of Malle's conjecture.

**Definition 5.1**.: _Fix a finite group \(G\), a finite set of places \(S\), an invariant \(\operatorname{inv}:\prod_{p}\operatorname{Hom}(G_{K_{p}},G)\to I_{K}\), and \(\Sigma=(\Sigma_{p})\) a family of local conditions \(\Sigma_{p}\subseteq\operatorname{Hom}(G_{K_{p}},G)\). Then we define the corresponding ordering \(\operatorname{inv}_{X}^{G,S,\Sigma}:\operatorname{Grp}(K)/\cong\to\{0,1\}\) to be the sequence of characteristic functions for the set of \((H,S^{\prime},\phi)\in\operatorname{Grp}(K)\) for which_

1. \(H\cong G\)_,_
2. \(S^{\prime}=S\,\cup\,\{|p|\leqslant X\}\)_,_
3. \(\phi\in\prod_{p\in S^{\prime}}\Sigma_{p}\)_, and_
4. \(|\operatorname{inv}(\phi)|\leqslant X\)_._

_We may omit \(\Sigma\) from the notation if \(\Sigma_{p}=\operatorname{Hom}(G_{K_{p}},G)\) is trivial for all places \(p\). We may omit \(S\) from the notation if \(S=\emptyset\)._

The discriminant ordering \(\operatorname{disc}_{X}^{G}\) is a special case, with \(S=\emptyset\) and \(\Sigma\) the trivial family. We defined \(\operatorname{inv}_{X}^{G,S,\Sigma}\) in the greatest generality possible, although not every choice of \(G\), \(S\), \(\operatorname{inv}\), and \(\Sigma\) will correspond to a number field counting function. For example, even when considering the counting function \(\#\{\pi\in\operatorname{Surj}(G_{K},G):|\operatorname{inv}(\pi)|\leqslant X\}\) we need a Northcott property for \(\operatorname{inv}\) to guarantee that this is finite.
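For instance, the constant invariant \(\operatorname{inv}(\pi)=(1)\) is multiplicative but has no Northcott property: every \(\pi\in\operatorname{Surj}(G_{K},G)\) satisfies \(|\operatorname{inv}(\pi)|\leqslant X\), so the corresponding counting function is infinite whenever \(K\) admits infinitely many \(G\)-extensions (as is already the case for \(G=\mathbb{Z}/2\mathbb{Z}\)).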
We call \(\operatorname{inv}:\prod\operatorname{Hom}(G_{K_{p}},G)\to I_{K}\) an **admissible invariant** if it satisfies the conditions described in [1], i.e. it satisfies

* \(\operatorname{inv}=\prod_{p}\operatorname{inv}_{p}\) is multiplicative, and
* \(\operatorname{inv}_{p}(\pi)\) is determined by \(\pi|_{I_{p}}\) and equals \(1\) if and only if \(\pi|_{I_{p}}=1\).

**Lemma 5.2**.: _Let \(G\) be a finite group, \(S\) a finite set of places, \(\Sigma\) a family of local conditions, and \(\operatorname{inv}\) an **admissible** invariant. Then_

\[N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})=\#\left\{\pi\in\operatorname{Surj}(\mathscr{G},G):(\pi|_{G_{K_{p}}})\in\prod\Sigma_{p}\text{ and }|\operatorname{inv}(\pi)|\leqslant X\right\}.\]

_In particular, the counting function is independent of \(S\)._

That the counting function is independent of \(S\) will be extremely important for our proofs. The role that \(S\) plays is essentially purely bookkeeping - it is useful to impose local conditions at extra places for formulating the categorical results. As we see here, and will see again in Lemma 5.4, the choice of extra places does not affect the counting functions whatsoever.

Proof.: For a fixed \(S\) and \(X\), consider a surjection \(\pi:\mathscr{G}\to G\). Certainly this specializes to an epimorphism in \(\operatorname{Epi}(\mathscr{G},(G,S\cup\{|p|\leq X\},\pi\phi_{\mathscr{G}}))\), so this implies

\[\operatorname{Surj}(\mathscr{G},G)=\bigcup_{\phi}\operatorname{Epi}(\mathscr{G},(G,S\cup\{|p|\leq X\},\phi)).\]

Moreover, suppose there are two epimorphisms \(\pi_{i}\in\operatorname{Epi}(\mathscr{G},(G,S\cup\{|p|\leq X\},\phi_{i}))\) for \(i=1,2\) whose underlying group homomorphisms are both equal to \(\pi\). Then

\[\pi_{1}\phi_{\mathscr{G}}=\pi\phi_{\mathscr{G}}=\pi_{2}\phi_{\mathscr{G}},\]

which implies \(\phi_{1,p}=\phi_{2,p}\) at every place \(p\in S\cup\{|p|\leq X\}\). In particular, the identity map on the level of groups induces an isomorphism \((G,S\cup\{|p|\leq X\},\phi_{1})\cong(G,S\cup\{|p|\leq X\},\phi_{2})\). Thus, the union is in fact a disjoint union

\[\operatorname{Surj}(\mathscr{G},G)=\coprod_{\phi}\operatorname{Epi}(\mathscr{G},(G,S\cup\{|p|\leq X\},\phi)).\]

The result then follows by noting that admissibility implies \(|\mathrm{inv}(\pi)|=|\mathrm{inv}(\pi\phi_{\mathscr{G}})|\), so the sum over elements with invariant bounded above by \(X\) is the same on both sides. 

The fact that \(\mathrm{inv}_{X}^{G,S,\Sigma}(A),\mathrm{inv}_{X}^{G,S,\Sigma}(B)\neq 0\) implies \(\#\mathrm{Epi}(A,B)=0\) for non-isomorphic \(A\) and \(B\) is important for proving the counting function is independent of the choice of \(S\), and is a rephrasing of part of the proof above.

Our goal is to study these counting functions using the results of [1], which allow us to prove probability \(1\) statements by studying the moments of \(\mathrm{inv}_{X}^{G,S,\Sigma}\) with respect to the discrete measure \(M\) induced by the finite moments of \(\mu_{K}^{\mathrm{MB}}\). The fact that the counting function is independent of \(S\) is what allows us to give \(\mathrm{inv}\) and \(\Sigma\) any behavior at all at finitely many places.

When proving asymptotic results, one often asks that \(\mathrm{inv}\) be **Frobenian** as in [1], which guarantees that a Tauberian theorem on the Malle-Bhargava local series will produce a nice asymptotic growth rate. This includes all the discriminant orderings, and notably includes the product of ramified primes ordering. We will be more stringent than this in Section 6.
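Concretely, the product of ramified primes invariant (write it \(\operatorname{rad}\), say) is given by

\[\operatorname{rad}(\pi)=\prod_{p\text{ finite}}p^{[\pi|_{I_{p}}\neq 1]},\]

where the exponent is \(1\) exactly when \(\pi\) is ramified at \(p\) and \(0\) otherwise. Each local factor is determined by \(\pi|_{I_{p}}\) and is trivial if and only if \(\pi|_{I_{p}}=1\), so \(\operatorname{rad}\) is admissible; its weight function in the sense of Corollary 5.5 below is \(w(g)=1\) for all \(g\neq 1\).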
### Types of restricted local conditions

The type of family of local conditions \(\Sigma=(\Sigma_{p})\) taken can have an effect on the rate of growth. In particular, one could choose \(|\Sigma_{p}|=1\) for every place \(p\). There are uncountably many ways of making such a choice, but only countably many \(G\)-extensions of a number field \(K\), so we expect _no_ \(G\)-extensions to satisfy such a strong choice of local restrictions. This behavior can be boiled down to how many splitting types we are allowed to control with \(\Sigma\). We make the following definition to capture this distinction:

**Definition 5.3**.: _We call \(\Sigma=(\Sigma_{p})\) an **admissible** family of local conditions \(\Sigma_{p}\subseteq\operatorname{Hom}(G_{K_{p}},G)\) if \(\operatorname{Hom}_{ur}(G_{K_{p}},G)\subseteq\Sigma_{p}\) for all but finitely many places._

Lemma 5.4 will be proven at this level of generality. As in the case of the invariants, one also often asks for \(\Sigma\) to be **Frobenian** in the sense defined in [1]. We will be more stringent than this in Section 6.

### The first moment of \(\operatorname{inv}_{X}^{G,S,\Sigma}\)

We prove that the \(L^{1}\)-norm of \(\operatorname{inv}_{X}^{G,S,\Sigma}\) agrees with the coefficients of the Malle-Bhargava local series. The proof is the same if we only assume admissibility, so we state the result with this level of generality:

**Lemma 5.4**.: _Let \(M_{(G,S,\phi)}=|G^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}\) be the finite moments of \(\mu_{K}^{\operatorname{MB}}\). Fix a finite group \(G\), a finite set of places \(S\), an admissible invariant \(\operatorname{inv}\), and a family of local conditions \(\Sigma\). We let \((a_{n})\) denote the Dirichlet coefficients of the Malle-Bhargava local series_

\[\operatorname{MB}_{\operatorname{inv}}(K,\Sigma,s)=\prod_{p}\left(\frac{1}{|G|}\sum_{f_{p}\in\Sigma_{p}}|\operatorname{inv}(f_{p})|^{-s}\right)=\sum_{n=1}^{\infty}a_{n}n^{-s}.\]

_Then_

1. _If_ \(\Sigma\) _is admissible then_
\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}dM=\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{n\leq X}a_{n},\]
2. _If_ \(\Sigma\) _is not admissible, then_ \(a_{n}=0\) _for all_ \(n\) _and there exists a constant_ \(r>1\) _depending only on_ \(K\) _for which_
\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}dM=O(r^{-X/\log X}).\]

One important consequence of Lemma 5.4 is that \(\operatorname{inv}_{X}^{G,S,\Sigma}\) is an \(L^{1}\)-ordering in the sense of [1, Definition 1.2]. Thus, [1, Lemma 3.1] states that \(N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})\) is well-defined and finite almost surely with respect to \(\mu_{K}^{\operatorname{MB}}\).

Proof.: Take \(P_{\infty}\subseteq\{|p|\leq X\}\) by convention and set \(S(X)=S\cup\{|p|\leq X\}\) for simplicity. The moment is equal to a finite sum

\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}dM=\sum_{\begin{subarray}{c}(G,S(X),\phi)\\ |\operatorname{inv}(\phi)|\leq X\\ \phi\in\prod\Sigma_{p}\end{subarray}}|G^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G|^{-|S(X)|+1}=\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{\begin{subarray}{c}(G,S(X),\phi)\\ |\operatorname{inv}(\phi)|\leq X\\ \phi\in\prod\Sigma_{p}\end{subarray}}\prod_{p\in S(X)}|G|^{-1}.\]

For any map \(f:I_{p}\to G\), we set \(\Sigma_{p}(f)=\{\psi\in\Sigma_{p}:\psi|_{I_{p}}=f\}\).
Then it follows that

\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}dM=\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{\begin{subarray}{c}\phi:\ast_{p}I_{p}\to G\\ |\operatorname{inv}(\phi)|\leq X\\ \phi\in\operatorname{res}_{\ast_{p}I_{p}}(\prod\Sigma_{p})\end{subarray}}\prod_{p\in S(X)}\frac{|\Sigma_{p}(\phi_{p}|_{I_{p}})|}{|G|},\]

where \(\ast_{p}I_{p}\) is the pro-free product of the inertia groups.

Suppose first that \(\Sigma\) is admissible to prove part (a). Take \(S\) to be large enough to contain all places for which \(\Sigma_{p}\not\supseteq\operatorname{Hom}_{ur}(G_{K_{p}},G)\), of which there are finitely many by admissibility. Given that \(|\Sigma_{p}(1)|=|G|\) for any \(p\notin S\), we can write

\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM=\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{\begin{subarray}{c}\phi:\ast_{p}I_{p}\to G\\ |\operatorname{inv}(\phi)|\leqslant X\\ \phi\in\operatorname{res}_{\ast_{p}I_{p}}(\prod\Sigma_{p})\end{subarray}}\ \prod_{p}\frac{|\Sigma_{p}(\phi_{p}|_{I_{p}})|}{|G|}.\]

We consider the corresponding Dirichlet series

\[\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{\begin{subarray}{c}\phi:\ast_{p}I_{p}\to G\\ \phi\in\operatorname{res}_{\ast_{p}I_{p}}(\prod\Sigma_{p})\end{subarray}}\prod_{p}\frac{|\Sigma_{p}(\phi_{p}|_{I_{p}})|}{|G|}|\operatorname{inv}(\phi)|^{-s}.\]

This is a sum of multiplicative functions, and so factors as

\[\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\prod_{p}\left(\frac{|\Sigma_{p}(1)|}{|G|}+\frac{1}{|G|}\sum_{\begin{subarray}{c}f_{p}\in\operatorname{Hom}(I_{p},G)\\ f_{p}(I_{p})\neq 1\end{subarray}}|\Sigma_{p}(f_{p})|\,|\operatorname{inv}_{p}(f_{p})|^{-s}\right)=\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\operatorname{MB}_{\operatorname{inv}}(K,\Sigma,s),\]

noting that \(|\Sigma_{p}(1)|/|G|=1\) for all but finitely many \(p\) to ensure convergence of the product in a right half plane. Matching the coefficients concludes the proof of part (a).

Now, suppose \(\Sigma\) is not admissible. That the Malle-Bhargava local series diverges to \(0\) is clear from the fact that infinitely many constant terms are smaller than \(1\) in this case. Let \(A\) be the infinite set of places for which \(\Sigma_{p}\not\supseteq\operatorname{Hom}_{ur}(G_{K_{p}},G)\).
Then

\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM\leq\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{\begin{subarray}{c}\phi:\ast_{p}I_{p}\to G\\ |\operatorname{inv}(\phi)|\leqslant X\end{subarray}}\prod_{\begin{subarray}{c}p\in A\\ \phi|_{I_{p}}=1\\ |p|\leqslant X\end{subarray}}\frac{|G|-1}{|G|}\prod_{\begin{subarray}{c}p\\ p\notin A\text{ or }\phi|_{I_{p}}\neq 1\end{subarray}}\frac{|\Sigma_{p}(\phi|_{I_{p}})|}{|G|}.\]

At most one place with \(|p|\in[X/2,X]\) can be ramified in \(\phi\), so by appealing to a lower bound in the prime number theorem over \(K\) there exists a positive constant \(a\) depending only on \(K\) for which

\[\int\operatorname{inv}_{X}^{G,S,\Sigma}\ dM\leqslant\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\left(\frac{|G|-1}{|G|}\right)^{\frac{aX}{\log X}}\sum_{\begin{subarray}{c}\phi:\ast_{p}I_{p}\to G\\ |\operatorname{inv}(\phi)|\leqslant X\end{subarray}}\prod_{\begin{subarray}{c}p\\ p\notin A\text{ or }\phi|_{I_{p}}\neq 1\end{subarray}}\frac{|\Sigma_{p}(\phi|_{I_{p}})|}{|G|}.\]

The remaining sum is multiplicative with generating Dirichlet series

\[\prod_{p}\left(1+\frac{1}{|G|}\sum_{\begin{subarray}{c}f_{p}\in\operatorname{Hom}(I_{p},G)\\ f_{p}(I_{p})\neq 1\end{subarray}}|\Sigma_{p}(f_{p})|\,|\operatorname{inv}_{p}(f_{p})|^{-s}\right).\]

A Tauberian theorem bounds the size of this sum above by \(O(X^{1+\epsilon})\) (see, for instance, [1, Corollary 2.4]). Thus

\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM\ll X^{1+\epsilon}\left(\frac{|G|-1}{|G|}\right)^{\frac{aX}{\log X}}\ll r^{-\frac{X}{\log X}}\]

for some constant \(r>1\).

### Tauberian theorem

Lemma 5.4 is extremely broad. Given the additional assumption that \(\operatorname{inv}\) and \(\Sigma\) are Frobenian, a Tauberian theorem can be applied to Lemma 5.4 to give the asymptotic growth rate of \(\int\operatorname{inv}_{X}^{G,S,\Sigma}dM\) as \(X\to\infty\). This process is done in general in [1, Section 2]. In order to keep the proof of Lemma 6.4 as accessible as possible, we restrict to the following case:

**Corollary 5.5**.: _Suppose \(\operatorname{inv}:\prod_{p}\operatorname{Hom}(G_{K_{p}},G)\to I_{K}\) is an invariant for which there exists a weight function \(w:G\to\mathbb{Z}_{\geqslant 0}\) such that_

* \(w(1)=0\)_,_
* \(w\) _is constant on_ \(K\)_-conjugacy classes of_ \(G\)_, and_
* _for all but finitely many places_ \(p\)_,_ \(\operatorname{inv}_{p}(\pi)=p^{w(g)}\) _if and only if_ \(\pi(I_{p})=\langle g\rangle\)_,_

_and \(\Sigma=(\Sigma_{p})\) a family of local conditions for which \(\Sigma_{p}=\operatorname{Hom}(G_{K_{p}},G)\) at all but finitely many places of \(K\). If \((a_{n})\) are the Dirichlet coefficients of the Malle-Bhargava local series \(\operatorname{MB}_{\operatorname{inv}}(K,\Sigma,s)\) then_

\[\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\sum_{n\leqslant X}a_{n}\sim c_{\operatorname{inv}}(K,\Sigma)X^{1/a_{\operatorname{inv}}(G)}(\log X)^{b_{\operatorname{inv}}(K,G)-1},\]

_where_

1. _The minimal weight of elements in_ \(G\) _is given by_
\[a_{\operatorname{inv}}(G)=\min_{g\in G\setminus\{1\}}w(g),\]
_generalizing_ \(a(G)\) _in Malle's Conjecture_ 1.3_._
2. _The average number of ways to be ramified with minimal weight in_ \(G\) _is given by_
\[b_{\operatorname{inv}}(K,G)=\#\{K\text{-conjugacy classes }\kappa\subseteq G\text{ of minimal weight }w(\kappa)=a_{\operatorname{inv}}(G)\},\]
_generalizing_ \(b(K,G)\) _in Malle's Conjecture_ 1.3_._
3.
_The leading coefficient is given by a convergent Euler product with factors accounting for units_
\[c_{\operatorname{inv}}(K,\Sigma)=\frac{\left(\operatorname{Res}_{s=1}\zeta_{K}(s)\right)^{b_{\operatorname{inv}}(K,G)-1}}{a_{\operatorname{inv}}(G)^{b_{\operatorname{inv}}(K,G)-1}(b_{\operatorname{inv}}(K,G)-1)!\,|G^{\operatorname{ab}}[|\mu(K)|]|\cdot|G|^{u_{K}}}\prod_{p\mid\infty}|\Sigma_{p}|\cdot\prod_{p\nmid\infty}\left[(1-|p|^{-1})^{b_{\operatorname{inv}}(K,G)}\left(\frac{1}{|G|}\sum_{f\in\Sigma_{p}}|p|^{-\frac{\nu_{p}\operatorname{inv}(f)}{a_{\operatorname{inv}}(G)}}\right)\right],\]
_where_ \(u_{K}=\operatorname{rk}\mathcal{O}_{K}^{\times}=r_{1}(K)+r_{2}(K)-1=|P_{\infty}|-1\)_._

The proof is a consequence of the Tauberian theorem [1, Corollary 2.4]. The discriminant is one such invariant covered by Corollary 5.5, with weight function given by the index function \(\operatorname{ind}(g)=n-\#\{\text{orbits of }g\}\). Another important example is the product of ramified primes ordering, corresponding to the weight function that equals \(1\) on all nonidentity elements. A family of local conditions \(\Sigma\) satisfying the conditions of Corollary 5.5 is one that makes a restriction at only finitely many places.

## 6. Applying the Law of Large Numbers

We prove Theorem 6.1 in this section, which is a slight generalization of Theorem 1.2 to include other orderings, like the product of ramified primes ordering, and restricted local conditions at finitely many places. This will be done via the main results of [1]. We will prove some facts about epi-products in \(\operatorname{Grp}(K)\), which will allow us to bound the mixed moments of \(\operatorname{inv}_{X}^{G,S,\Sigma}\) that appear in [1, Theorem 1.4] in order to complete step (III). These, in conjunction with [1, Theorem 1.3], will be used to prove Theorem 6.1.

### Counting functions ordered by admissible invariants

The proof of Theorem 1.2 extends to the following result with little work. As it is natural to work at this level of generality, we will prove the following:

**Theorem 6.1**.: _Let \(K\) be a number field, \(S\) a finite set of places, \(G\) a finite group, \(\operatorname{inv}\) an admissible invariant for which there exists a weight function \(w:G\to\mathbb{Z}_{\geqslant 0}\) such that_

* \(w(1)=0\)_,_
* \(w\) _is constant on_ \(K\)_-conjugacy classes of_ \(G\)_, and_
* _for all but finitely many places_ \(p\)_,_ \(\operatorname{inv}_{p}(\pi)=p^{w(g)}\) _if and only if_ \(\pi(I_{p})=\langle g\rangle\)_,_

_and \(\Sigma=(\Sigma_{p})\) a family of local conditions for which \(\Sigma_{p}=\operatorname{Hom}(G_{K_{p}},G)\) at all but finitely many places of \(K\). Then_

1. _For any_ \(\epsilon>0\)_,_
\[\frac{N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})}{X^{1/a_{\operatorname{inv}}(G)+\epsilon}}\stackrel{{ a.s.}}{{\longrightarrow}}0\]
_as_ \(X\to\infty\)_, where "a.s." stands for "converges almost surely"._
2. _If_ \(G=\langle g\in G:w(g)=a_{\operatorname{inv}}(G)\rangle\) _is generated by minimal weight elements, then_
\[\frac{N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})}{c_{\operatorname{inv}}(K,\Sigma)X^{1/a_{\operatorname{inv}}(G)}(\log X)^{b_{\operatorname{inv}}(K,G)-1}}\stackrel{{ p.}}{{\longrightarrow}}1\]
_as_ \(X\to\infty\)_, where "p." stands for "converges in probability"._
3. _Suppose every proper normal subgroup_ \(N\trianglelefteq G\) _satisfies one of the following:_
   1. \(N\) _contains no elements of minimal weight, or_
   2.
\(G\backslash N\) _contains at least two_ \(K\)_-conjugacy classes of minimal weight._

_Then_
\[\frac{N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})}{c_{\operatorname{inv}}(K,\Sigma)X^{1/a_{\operatorname{inv}}(G)}(\log X)^{b_{\operatorname{inv}}(K,G)-1}}\stackrel{{ a.s.}}{{\longrightarrow}}1\]
_as_ \(X\to\infty\)_, where "a.s." stands for "converges almost surely"._

_The invariants \(a_{\operatorname{inv}}(G)\), \(b_{\operatorname{inv}}(K,G)\), and \(c_{\operatorname{inv}}(K,\Sigma)\) are defined as in Corollary 5.5._

Certainly the discriminant ordering satisfies these conditions, so it is clear that Theorem 1.2 is a special case of Theorem 6.1. Of particular interest is the product of ramified primes ordering, which also satisfies these conditions. The remainder of this section will be focused on proving Theorem 6.1.

### Epi-products in \(\operatorname{Grp}(K)\)

The categorical results of [1] rely entirely on the existence or nonexistence of epi-products respected by the finite moments \(M\). The epi-product of objects \(G_{1}\) and \(G_{2}\) is defined in [1, Definition 3.1] as an object \(G_{1}\times_{\operatorname{epi}}G_{2}\) satisfying the universal property of the product, where all morphisms in the diagram (including the universal morphism) are required to be epimorphisms. The existence of epi-products is a property of the category, and not dependent on the ordering being considered, so we prove a classification of epi-products in \(\operatorname{Grp}(K)\) separately.

The category \(\operatorname{Grp}(K)\) has finite products between objects with local data at the same places given by

\[(G_{1},S,\phi_{1})\times(G_{2},S,\phi_{2})=(G_{1}\times G_{2},S,\phi_{1}\times\phi_{2}),\]

which is inherited from the product structure on \(\operatorname{Grp}\). If an epi-product exists, it must be isomorphic to the usual product via the universal morphism.

**Lemma 6.2**.: _The objects \((G_{1},S,\phi_{1})\) and \((G_{2},S,\phi_{2})\) have an epi-product in \(\operatorname{Grp}(K)\) if_

\[G_{1}\times G_{2}=\langle(\phi_{1}\times\phi_{2})(G_{K_{p}}):p\in S\rangle.\]

The existence of epi-products is closely tied to the correlation of \(\#\mathrm{Epi}(\mathscr{G},(G_{1},S,\phi_{1}))\) and \(\#\mathrm{Epi}(\mathscr{G},(G_{2},S,\phi_{2}))\). These are constructed so that elements \(\pi_{i}\in\mathrm{Epi}(\mathscr{G},(G_{i},S,\phi_{i}))\) model number fields \(L_{1}/K\) and \(L_{2}/K\) with Galois groups \(G_{1}\) and \(G_{2}\) respectively with prescribed local conditions given by \(\phi_{1}\) and \(\phi_{2}\). The condition that

\[G_{1}\times G_{2}=\langle(\phi_{1}\times\phi_{2})(G_{K_{p}}):p\in S\rangle\]

translates to \(L_{1}\cap L_{2}=K\), i.e. \(L_{1}\) and \(L_{2}\) have trivial intersection. By the Chebotarev density theorem, this is equivalent to the distributions of Frobenius in \(\mathrm{Gal}(L_{1}/K)\) and \(\mathrm{Gal}(L_{2}/K)\) being independent. As \(\mu_{K}^{\mathrm{MB}}\) and \(\mu_{K,S}^{\mathrm{MB}}\) were constructed from the distribution of Frobenius elements given by Chebotarev density, intuitively it stands to reason that independence of these distributions corresponds to independence of \(\pi_{1}\) and \(\pi_{2}\) in some appropriate sense. The author proves that \(\#\mathrm{Epi}(\mathscr{G},(G_{1},S,\phi_{1}))\) and \(\#\mathrm{Epi}(\mathscr{G},(G_{2},S,\phi_{2}))\) are uncorrelated when the epi-product exists (and \(M\) respects the product structure) in greater generality in [1, Lemma 5.2].
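For example, let \(\phi_{1}\) and \(\phi_{2}\) be the local data of \(L_{1}=\mathbb{Q}(\sqrt{2})\) and \(L_{2}=\mathbb{Q}(\sqrt{3})\), each with group \(\mathbb{Z}/2\mathbb{Z}\), and write \(g\) for the nontrivial element. Quadratic reciprocity shows \(13\) is inert in \(L_{1}\) and split in \(L_{2}\), while \(7\) is split in \(L_{1}\) and inert in \(L_{2}\). Hence for any \(S\supseteq\{7,13\}\),

\[(\phi_{1}\times\phi_{2})(\mathrm{Fr}_{13})=(g,1)\qquad\text{and}\qquad(\phi_{1}\times\phi_{2})(\mathrm{Fr}_{7})=(1,g)\]

already generate \(\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\), so the epi-product exists by Lemma 6.2, in accordance with \(L_{1}\cap L_{2}=\mathbb{Q}\).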
This proof is done more directly, but the author's primary inspiration for those results is precisely this correspondence.

Proof.: Suppose we have a commutative diagram of epimorphisms from an object \((H,S^{\prime},\psi)\) to \((G_{1},S,\phi_{1})\) and \((G_{2},S,\phi_{2})\), let \(\pi:(H,S^{\prime},\psi)\to(G_{1}\times G_{2},S,\phi_{1}\times\phi_{2})\) be the universal morphism, and suppose that \(\langle(\phi_{1}\times\phi_{2})(G_{K_{p}}):p\in S\rangle=G_{1}\times G_{2}\). Then

\[G_{1}\times G_{2}=\langle\pi\psi(G_{K_{p}})\rangle\leqslant\mathrm{im}\ \pi\leqslant G_{1}\times G_{2},\]

which implies \(\pi\) is surjective as a group homomorphism and therefore is an epimorphism. This is independent of the choice of \((H,S^{\prime},\psi)\), and thus proves the existence of the epi-product. 

As described in [1, Theorems 1.4 and 1.5], proving the Law of Large Numbers for a counting function depends only on those tuples of elements for which an epi-product does not exist. By showing that these objects have density zero under the ordering, [1, Theorem 1.3] will give the asymptotic growth rate with probability \(1\). We can use Lemma 6.2 to give a description of these objects. As in [1, Theorem 1.4], we define \(E(2,M)\) to be the set of objects \(((G_{1},S_{1},\phi_{1}),(G_{2},S_{2},\phi_{2}))\in\operatorname{Grp}(K)^{2}\) for which

* The epi-product \((G_{1},S_{1},\phi_{1})\times_{\operatorname{epi}}(G_{2},S_{2},\phi_{2})\) exists, and
* \(M_{(G_{1},S_{1},\phi_{1})\times_{\operatorname{epi}}(G_{2},S_{2},\phi_{2})}=M_{(G_{1},S_{1},\phi_{1})}M_{(G_{2},S_{2},\phi_{2})}\).

**Lemma 6.3**.: _Let \(M\) be the discrete measure on \(\operatorname{Grp}(K)/\cong\) given by \(M(G,S,\phi)=M_{(G,S,\phi)}=|G^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}\), and let_

\[((G_{1},S,\phi_{1}),(G_{2},S,\phi_{2}))\in\operatorname{Grp}(K)^{2}\backslash E(2,M)\]

_with the set of places \(S\) being the same at each coordinate. Then the group_

\[D=\langle(\phi_{1,p}\times\phi_{2,p})(G_{K_{p}}):p\in S\rangle\]

_is a proper subgroup of \(G_{1}\times G_{2}\) for which at least one of the following is true:_

1. \(\rho_{i}(D)\neq G_{i}\) _for at least one_ \(i\in\{1,2\}\)_, where_ \(\rho_{i}:G_{1}\times G_{2}\to G_{i}\) _is projection onto the_ \(i^{\operatorname{th}}\) _coordinate, or_
2. \(\iota_{j}(G_{j})\nsubseteq D\) _for all_ \(j\in\{1,2\}\)_, where_ \(\iota_{j}:G_{j}\to G_{1}\times G_{2}\) _is the inclusion morphism into the_ \(j^{\operatorname{th}}\) _coordinate._

Proof.: We prove this result by contrapositive, so we suppose that \(\rho_{m}(D)=G_{m}\) for each \(m\in\{1,2\}\) and that there exists a coordinate \(j\) for which \(\iota_{j}(G_{j})\subseteq D\). Without loss of generality, suppose \(j=1\). Then \(G_{1}\times 1\subseteq D\) and \(\rho_{2}(D)=G_{2}\) imply \(G_{1}\times G_{2}\subseteq D\). By Lemma 6.2, this implies \((G_{1},S,\phi_{1})\) and \((G_{2},S,\phi_{2})\) have an epi-product, which is necessarily isomorphic to the direct product \((G_{1},S,\phi_{1})\times(G_{2},S,\phi_{2})\). Moreover,

\[M_{(G_{1},S,\phi_{1})\times(G_{2},S,\phi_{2})}=|(G_{1}\times G_{2})^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G_{1}\times G_{2}|^{-|S\cup P_{\infty}|+1}\]
\[=|G_{1}^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G_{1}|^{-|S\cup P_{\infty}|+1}|G_{2}^{\operatorname{ab}}[|\mu(K)|]|^{-1}|G_{2}|^{-|S\cup P_{\infty}|+1}\]
\[=M_{(G_{1},S,\phi_{1})}M_{(G_{2},S,\phi_{2})}.\]

By definition, this implies

\[((G_{1},S,\phi_{1}),(G_{2},S,\phi_{2}))\in E(2,M).\]

Thus, we have proven the result by contrapositive.

### Bounding the mixed moments

We will be proving Theorem 6.1 by appealing to [1, Theorems 1.3 and 1.4].
This requires bounds on the mixed moments of \(\operatorname{inv}_{X}^{G,S,\Sigma}\) in order to apply these results. Given a probability measure \(\mu\) on the category of pro-objects of \(C\) with finite moments \(M_{G}\), the mixed moments \(M^{(j)}:C^{j}/\cong\to[0,\infty]\) are defined by

\[M^{(j)}_{(G_{1},G_{2},\ldots,G_{j})}=\int\left(\prod_{i=1}^{j}\#\mathrm{Epi}(\mathscr{G},G_{i})\right)\ d\mu(\mathscr{G}).\]

The mixed moments need not be finite in general, but we will prove that they are in the cases of interest to us.

**Lemma 6.4**.: _Let \(M_{(G,S,\phi)}=|G^{\mathrm{ab}}[|\mu(K)|]|^{-1}|G|^{-|S\cup P_{\infty}|+1}\) be the finite moments of \(\mu_{K}^{\mathrm{MB}}\). Fix \(K\), \(G\), \(S\), \(\mathrm{inv}\), and \(\Sigma\) as in Theorem 6.1. Then_

1. _For each positive integer_ \(k\) _and each_ \(j\in\{1,2,...,2k\}\)_,_
\[\int_{\mathrm{Grp}(K)^{2k}}\mathrm{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}(dM)^{2k-j}\ll X^{\frac{2k}{a_{\mathrm{inv}}(G)}+\epsilon}.\]
2. _For each integer_ \(j\in\{1,2\}\)_,_
\[\int_{\mathrm{Grp}(K)^{2}\setminus E(2,M)}\mathrm{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}(dM)^{2-j}\ll X^{\frac{2}{a_{\mathrm{inv}}(G)}}(\log X)^{2b_{\mathrm{inv}}(K,G)-2\beta-1},\]
_where_
\[\beta=\min\left\{\#\left\{\begin{array}{c}K\text{-conjugacy classes }\kappa\subseteq G\backslash N\\ \text{with }w(\kappa)=a_{\mathrm{inv}}(G)\end{array}\right\}\ \middle|\ \begin{array}{c}N\trianglelefteq G\text{ a proper normal subgroup}\\ \text{containing at least one}\\ \text{minimal weight element}\end{array}\right\}.\]

_Moreover, if \(G=\langle g\in G:w(g)=a_{\mathrm{inv}}(G)\rangle\) is generated by minimal weight elements then \(\beta\geqslant 1\)._

Similar results to Lemma 6.4(b) exist for the \(2k^{\mathrm{th}}\) mixed moments, in line with [1, Theorem 1.5], but they will not be necessary for the proof of Theorem 6.1.

Lemma 6.4 is the most technical result of this paper, but is ultimately a consequence of Lemma 6.3. The primary technique is to use [1, Lemma 5.3] to bound the mixed moments above by a sum of first moments. The content of [1, Lemma 5.3] is as follows: Let \(C\) be a category in which every morphism decomposes uniquely (up to isomorphism) as the composition of an epimorphism with a monomorphism. If \(G_{1},...,G_{j}\) are objects in \(C\) for which \(G_{1}\times\cdots\times G_{j}\) exists and has finitely many subobjects, then

\[M^{(j)}_{(G_{1},G_{2},\ldots,G_{j})}=\sum_{\begin{subarray}{c}\iota:H\hookrightarrow G_{1}\times\cdots\times G_{j}\\ \rho_{i}\iota\text{ is an epimorphism}\end{subarray}}M_{H},\]

where the sum is over all subobjects \(H\) on which each projection map restricts to an epimorphism. In particular, this implies \(M^{(j)}_{(G_{1},G_{2},\ldots,G_{j})}<\infty\).

Many objects in the category \(\mathrm{Grp}(K)\) have these properties.

* The product between elements with local conditions at the same set of places is given by
\[(G,S,\phi)\times(H,S,\varphi)=(G\times H,S,\phi\times\varphi).\]
The product need not exist when the sets of places differ, but this will be sufficient for our purposes.
* A morphism \(\pi:(G,S,\phi)\rightarrow(H,S^{\prime},\varphi)\) decomposes as
\[(G,S,\phi)\twoheadrightarrow(\mathrm{im}\ (\pi),S^{\prime},\varphi)\hookrightarrow(H,S^{\prime},\varphi),\]
in the same way that morphisms of groups decompose.

Using this result, we will bound the quantities in Lemma 6.4 above by the integrals of different admissible invariants. Lemma 5.4 and Corollary 5.5 will then give the asymptotic bounds.
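To make the shape of the sum in [1, Lemma 5.3] concrete in the simplest case: for underlying groups \(G_{1}=G_{2}=\mathbb{Z}/2\mathbb{Z}\), the only subgroups \(H\leq G_{1}\times G_{2}\) with \(\rho_{1}(H)=G_{1}\) and \(\rho_{2}(H)=G_{2}\) are the full group and the diagonal \(\Delta=\{(g,g):g\in\mathbb{Z}/2\mathbb{Z}\}\), so in the category of finite groups the second mixed moment is the two-term sum

\[M^{(2)}_{(G_{1},G_{2})}=M_{G_{1}\times G_{2}}+M_{\Delta}.\]

The same pattern governs \(\operatorname{Grp}(K)\), with the local data carried along each subobject.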
We get the stronger bounds in part (b) by showing that avoiding objects in \(E(2,M)\) can be translated into avoiding certain ramification behaviors.

Proof of Lemma 6.4(a).: Firstly, we remark that

\[\int_{\operatorname{Grp}(K)^{2k}}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}(dM)^{2k-j}=\left(\int_{\operatorname{Grp}(K)^{j}}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}\right)\left(\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM\right)^{2k-j}.\]

Lemma 5.4 and Corollary 5.5 bound

\[\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM\ll X^{\frac{1}{a_{\operatorname{inv}}(G)}+\epsilon},\]

so it suffices to consider the \(M^{(j)}\) factor on its own. Given that \(\operatorname{inv}_{X}^{G,S,\Sigma}\) is supported on objects of the form \((G,S(X),\phi)\) for \(S(X)=S\cup\{|p|\leq X\}\), we can write the integral as

\[\int_{\operatorname{Grp}(K)^{j}}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}=\sum_{(G,S(X),\phi_{i})\in\operatorname{Grp}(K)^{j}}\left(\prod_{i=1}^{j}\operatorname{inv}_{X}^{G,S,\Sigma}(G,S(X),\phi_{i})\right)M^{(j)}_{((G,S(X),\phi_{1}),...,(G,S(X),\phi_{j}))}.\]

Write \(\varphi=\phi_{1}\times\cdots\times\phi_{j}\) so that \(\rho_{i}\varphi=\phi_{i}\). We then use [1, Lemma 5.3] to bound the mixed moment \(M^{(j)}\) by

\[M^{(j)}_{((G,S(X),\rho_{1}\varphi),...,(G,S(X),\rho_{j}\varphi))}=\sum_{\begin{subarray}{c}(H,S(X),\psi)\leq(G^{j},S(X),\varphi)\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}M_{(H,S(X),\psi)}=\sum_{\begin{subarray}{c}H\leq G^{j}\\ \operatorname{im}\,(\varphi)\leq H\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}M_{(H,S(X),\varphi)}.\]

For each \(H\leq G^{j}\) with \(\rho_{i}(H)=G\) for all \(i=1,2,...,j\), the universal property of direct products gives a bijection between the tuples of objects \((G,S(X),\phi_{i})\in\operatorname{Grp}(K)^{j}\) for which \(\operatorname{im}\,\left(\prod_{i=1}^{j}\phi_{i}\right)\leq H\) and the set of objects \((H,S(X),\varphi)\in\operatorname{Grp}(K)\).
Thus, we can pull the sum over \(H\leq G^{j}\) to the outside to produce

\[\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{(H,S(X),\varphi)\in\operatorname{Grp}(K)}\left(\prod_{i=1}^{j}\operatorname{inv}_{X}^{G,S,\Sigma}(G,S(X),\rho_{i}\varphi)\right)M_{(H,S(X),\varphi)}=\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{\begin{subarray}{c}\varphi:F_{K,S(X)}\to H\\ \forall i,\ \rho_{i}\varphi\in\prod\Sigma_{p}\\ \forall i,\ |\mathrm{inv}(\rho_{i}\varphi)|\leq X\end{subarray}}M_{(H,S(X),\varphi)}.\]

Given that there are \(|H|\) possible unramified maps \(\phi_{p}:G_{K_{p}}\to H\) for each finite place \(p\), we can reframe the finite moments for a larger set of places

\[M_{(H,S(X),\varphi)}=|H^{\text{ab}}[|\mu(K)|]|^{-1}|H|^{-|S(X)|+1}=|H|^{|S(X^{j})|-|S(X)|}|H^{\text{ab}}[|\mu(K)|]|^{-1}|H|^{-|S(X^{j})|+1}=\sum_{\begin{subarray}{c}\psi:F_{K,S(X^{j})}\to H\\ \psi\text{ unramified at }p\notin S(X)\\ \psi|_{S(X)}=\varphi\end{subarray}}M_{(H,S(X^{j}),\psi)}.\]

Plugging this into our integral, we can bound

\[\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{\begin{subarray}{c}\varphi:F_{K,S(X)}\to H\\ \forall i,\ \rho_{i}\varphi\in\prod\Sigma_{p}\\ \forall i,\ |\mathrm{inv}(\rho_{i}\varphi)|\leq X\end{subarray}}\sum_{\begin{subarray}{c}\psi:F_{K,S(X^{j})}\to H\\ \psi\text{ unramified at }p\notin S(X)\\ \psi|_{S(X)}=\varphi\end{subarray}}M_{(H,S(X^{j}),\psi)}=\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{\begin{subarray}{c}\psi:F_{K,S(X^{j})}\to H\\ \forall i,\ \rho_{i}\psi\in\prod\Sigma_{p}\\ \forall i,\ |\mathrm{inv}(\rho_{i}\psi)|\leq X\end{subarray}}M_{(H,S(X^{j}),\psi)}\leq\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{\begin{subarray}{c}\psi:F_{K,S(X^{j})}\to H\\ \forall i,\ \rho_{i}\psi\in\prod\Sigma_{p}\\ \forall i,\ |\mathrm{inv}(\rho_{i}\psi)|\leq X^{j}\end{subarray}}M_{(H,S(X^{j}),\psi)}.\]

For each \(H\leq G^{j}\), define an invariant \(\mathrm{inv}_{H}:\prod_{p}\mathrm{Hom}(G_{K_{p}},H)\to I_{K}\) by

\[\mathrm{inv}_{H}(\pi)=\prod_{i=1}^{j}\mathrm{inv}(\rho_{i}(\pi)).\]

This is an admissible invariant with weight function \(w_{H}(g_{1},...,g_{j})=w(g_{1})+\cdots+w(g_{j})\), inheriting the necessary properties from \(\mathrm{inv}\). We additionally define \(\Sigma^{j}=(\Sigma^{j}_{p})\) to be the family of local conditions \(\Sigma^{j}_{p}\subseteq\mathrm{Hom}(G_{K_{p}},H)\) determined by requiring \(\rho_{i}f\in\Sigma_{p}\) for each coordinate \(i=1,...,j\). The sum then simplifies to

\[\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{(H,S(X^{j}),\psi)\in\mathrm{Grp}(K)}(\mathrm{inv}_{H})^{H,S,\Sigma^{j}}_{X^{j}}(H,S(X^{j}),\psi)M_{(H,S(X^{j}),\psi)}=\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\int_{\mathrm{Grp}(K)}(\mathrm{inv}_{H})^{H,S,\Sigma^{j}}_{X^{j}}\ dM.\]

The result then follows from Lemma 5.4 and Corollary 5.5, noting that the minimal weight necessarily satisfies

\[a_{\mathrm{inv}_{H}}(H)\geq a_{\mathrm{inv}}(G)\]

and that the family \(\Sigma^{j}\) satisfies the conditions of Corollary 5.5.
Thus

\[\int_{\mathrm{Grp}(K)^{j}}\mathrm{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}\leq\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\int_{\mathrm{Grp}(K)}(\mathrm{inv}_{H})^{H,S,\Sigma^{j}}_{X^{j}}\ dM\ll\sum_{\begin{subarray}{c}H\leq G^{j}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}(X^{j})^{\frac{1}{a_{\mathrm{inv}}(G)}+\epsilon}\ll X^{\frac{j}{a_{\mathrm{inv}}(G)}+\epsilon},\]

after rescaling \(\epsilon\). 

Lemma 6.4(b) is more stringent, and so we cannot use bounds as loose as those in part (a). Instead, Lemma 6.3 will play a major role in controlling the number of minimal weight elements that can appear in the local data.

Proof of Lemma 6.4(b).: By Lemma 6.3, we know that \(((G_{1},S,\phi_{1}),(G_{2},S,\phi_{2}))\in\operatorname{Grp}(K)^{2}\backslash E(2,M)\) implies that the group

\[D=\langle(\phi_{1}\times\phi_{2})(G_{K_{p}}):p\in S\rangle\]

satisfies either \(\rho_{i}(D)\neq G_{i}\) for some \(i\), or \(\iota_{i}(G_{i})\nsubseteq D\) for each \(i\). We will use this to refine the proof of part (a) to bound the integral.

Suppose first that \(j=2\). Similar to part (a), we compute

\[\int_{\operatorname{Grp}(K)^{2}\backslash E(2,M)}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM^{(2)}\leq\sum_{\begin{subarray}{c}H\leq G^{2}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{D\leq G^{2}}^{*}\sum_{\begin{subarray}{c}\varphi:F_{K,S(X)}\to H\\ \operatorname{im}(\varphi)\leq D\\ \forall i,\ |\operatorname{inv}(\rho_{i}\varphi)|\leq X\end{subarray}}\left(\prod_{i=1}^{2}\operatorname{inv}_{X}^{G,S,\Sigma}(G,S(X),\rho_{i}\varphi)\right)M_{(H,S(X),\varphi)}\]
\[=\sum_{\begin{subarray}{c}H\leq G^{2}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{D\leq G^{2}}^{*}\sum_{\begin{subarray}{c}\varphi:F_{K,S(X)}\to H\\ \operatorname{im}(\varphi)\leq D\\ \forall i,\ \rho_{i}\varphi\in\prod\Sigma_{p}\\ \forall i,\ |\operatorname{inv}(\rho_{i}\varphi)|\leq X\end{subarray}}M_{(H,S(X),\varphi)},\]

where the \(*\) on the middle sum indicates it is over those \(D\) described by Lemma 6.3. We bound this via each of the following facts:

* We know that
\[M_{(H,S(X),\varphi)}M_{(G,S(X),\rho_{2}\varphi)}=\sum_{\begin{subarray}{c}\psi:F_{K,S(X^{2})}\to H\\ \psi\text{ unramified at }p\notin S(X)\\ \psi|_{S(X)}=\varphi\end{subarray}}M_{(H,S(X^{2}),\psi)},\]
similar to the equality used in part (a).
* Let \(\Sigma_{H,D}=(\Sigma_{H,D,p})\) be the family of local conditions \(\Sigma_{H,D,p}\subseteq\operatorname{Hom}(G_{K_{p}},H)\) for which
\[\Sigma_{H,D,p}=\{f\in\Sigma_{p}^{2}\cap\operatorname{Hom}(G_{K_{p}},H):f(G_{K_{p}})\leq D\}.\]
* The invariant \(\operatorname{inv}_{H}:\prod_{p}\operatorname{Hom}(G_{K_{p}},H)\to I_{K}\) defined as in part (a) by
\[\operatorname{inv}_{H}(\pi)=\operatorname{inv}(\rho_{1}\pi)\operatorname{inv}(\rho_{2}\pi)\]
is necessarily admissible, inheriting the property from \(\operatorname{inv}\), with weight function \(w_{H}(g_{1},g_{2})=w(g_{1})+w(g_{2})\).
Putting these together as in part (a) gives the bound \[\int_{\operatorname{Grp}(K)^{2}\backslash E(2,M)}\operatorname{inv}_{X}^{G,S,\Sigma}\ (dM)^{2} \leq\sum_{\begin{subarray}{c}H\leq G^{2}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{D\leq G^{2}}^{*}\sum_{\begin{subarray}{c}\psi:F_{K,S(X^{2})}\to H\\ \psi\in\Sigma_{H,D}\\ |\operatorname{inv}_{H}(\psi)|\leq X^{2}\end{subarray}}M_{(H,S(X^{2}),\psi)}\] \[=\sum_{\begin{subarray}{c}H\leq G^{2}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{D\leq G^{2}}^{*}\sum_{(H,S(X^{2}),\psi)\in\operatorname{Grp}(K)}(\operatorname{inv}_{H})_{X^{2}}^{H,S,\Sigma_{H,D}}(H,S(X^{2}),\psi)M_{(H,S(X^{2}),\psi)}\] \[=\sum_{\begin{subarray}{c}H\leq G^{2}\\ \forall i,\ \rho_{i}(H)=G\end{subarray}}\sum_{D\leq G^{2}}^{*}\int_{\operatorname{Grp}(K)}(\operatorname{inv}_{H})_{X^{2}}^{H,S,\Sigma_{H,D}}\ dM.\] We consider the summands separately. Suppose first that \(H\not\subseteq D\). Then \(\Sigma_{H,D}\) is necessarily nonadmissible, as \(\Sigma_{H,D,p}\subseteq\operatorname{Hom}(G_{K_{p}},D)\) while \(\operatorname{Hom}_{\operatorname{ur}}(G_{K_{p}},H)\not\subseteq\operatorname{Hom}(G_{K_{p}},D)\) at each place \(p\in S(X)\). Thus, Lemma 5.4(b) implies that each such summand is bounded by a decaying function \[\int_{\operatorname{Grp}(K)}(\operatorname{inv}_{H})_{X^{2}}^{H,S,\Sigma_{H,D}}\ dM=O\left(r^{-X/\log X}\right)=o(1).\] Now suppose that \(H\subseteq D\). Then \(\Sigma_{H,D,p}=\operatorname{Hom}(G_{K_{p}},H)\) for all but finitely many places \(p\), so \(\Sigma_{H,D}\) imposes no local conditions at all. To indicate this, we write \((\operatorname{inv}_{H})_{X^{2}}^{H,S,\Sigma_{H,D}}=(\operatorname{inv}_{H})_{X^{2}}^{H,S}\). We now apply Lemma 5.4 and Corollary 5.5. As in part (a), \(a_{\operatorname{inv}_{H}}(H)\geq a_{\operatorname{inv}}(G)\) by construction.
If this inequality is strict, then we are done. If \(a_{\operatorname{inv}_{H}}(H)=a_{\operatorname{inv}}(G)\), we need to control the power of \(\log X\). The only nonidentity elements of \(H\leq G^{2}\) that can have minimal weight agreeing with the minimal weight of \(G\) are those of the form \((g,1)\) or \((1,g)\) for \(w(g)=a_{\operatorname{inv}}(G)\), because the weight function is given by \(w_{H}(g_{1},g_{2})=w(g_{1})+w(g_{2})\). Let \(H_{1}\leq G\) be the subgroup such that \(H_{1}\times 1=\ker\rho_{2}|_{H}=\iota_{1}(G)\cap H\), and similarly for \(H_{2}\leq G\). That these groups are given by kernels implies \(H_{i}\trianglelefteq G\). Lemma 6.3 together with the containment \(H\subseteq D\) implies \(H_{i}\subseteq\iota_{i}^{-1}(D)\neq G\), so they are proper normal subgroups. We then compute \[b_{\operatorname{inv}_{H}}(K,H) =\#\{K\text{-conjugacy classes in $H$ of minimal weight}\}\] \[=\sum_{i=1}^{2}\#\{K\text{-conjugacy classes in $H_{i}$ of minimal weight}\}\] \[\leq 2(b_{\operatorname{inv}}(K,G)-\beta)\] \[=2b_{\operatorname{inv}}(K,G)-2\beta.\] Part (b) for \(j=2\) then follows. The \(j=1\) case proceeds via the same argument, where only the summand \(H=G^{2}\) occurs in the bound. If we suppose that \(G=\langle g\in G:w(g)=a_{\operatorname{inv}}(G)\rangle\), any proper normal subgroup \(N\trianglelefteq G\) cannot contain all the minimal weight elements. Thus, \(G\backslash N\) contains at least one minimal weight element. Additionally, the fact that \(N\) is a normal subgroup implies that both \(N\) and \(G\backslash N\) are closed under conjugation and invertible powers, and so are unions of \(K\)-conjugacy classes. Thus, there exists at least one \(K\)-conjugacy class of minimal weight in \(G\backslash N\). As \(N\) was arbitrary, this implies \(\beta\geq 1\).

### Proving the counting results

With this result in hand, we can now prove Theorem 6.1 (of which Theorem 1.2 is a special case). Proof of Theorem 6.1.: The proof of Theorem 6.1(i) follows from [1, Corollary 1.4] and Lemma 6.4(a). [1, Corollary 1.4] states that \[N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})=o\left(\max_{0\leq j\leq k}X^{\frac{1+\epsilon}{2k}}\left(\int_{\operatorname{Grp}(K)^{2k}}\operatorname{inv}_{X}^{G,S,\Sigma}\ dM^{(j)}(dM)^{2k-j}\right)^{1/2k}\right)\] almost surely. Applying the bound from Lemma 6.4(a), this implies \[N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})=o\left(\max_{0\leq j\leq k}X^{\frac{1+\epsilon}{2k}+\frac{1}{a_{\operatorname{inv}}(G)}+\epsilon}\right)\] almost surely. Replacing \(\epsilon\) with \(\epsilon/2\) and taking \(k\) sufficiently large concludes the proof. We can prove Theorem 6.1(ii,iii) by [1, Theorem 1.3(ii,iii)] respectively in conjunction with [1, Theorem 1.4]. [1, Theorem 1.4] states that \[\int_{\operatorname{proGrp}(K)}\left|N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})-\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right|^{2}d\mu_{K}^{\operatorname{MB}}(\mathscr{G})\ll\max_{j\in\{1,2\}}\int_{\operatorname{Grp}(K)^{2}\setminus E(2,M)}\operatorname{inv}_{X}^{G,S,\Sigma}dM^{(j)}(dM)^{2-j},\] which by Lemma 6.4 is bounded by \[X^{1/a_{\operatorname{inv}}(G)}(\log X)^{2b_{\operatorname{inv}}(K,G)-2\beta-1}.\] If \(G\) is generated by minimal weight elements, we necessarily have \(\beta\geq 1\).
Thus, by Corollary 5.5, \[\int_{\operatorname{proGrp}(K)}\left|N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})-\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right|^{2}d\mu_{K}^{\operatorname{MB}}(\mathscr{G})\ll\frac{1}{(\log X)^{2\beta-1}}\left(\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right)^{2}.\] It follows from \(\beta\geq 1\) that \(\frac{1}{(\log X)^{2\beta-1}}\leq\frac{1}{\log X}\to 0\). Thus [1, Theorem 1.3(ii)] directly proves Theorem 6.1(ii). In the case that all proper normal subgroups \(N\trianglelefteqslant G\) either contain no minimal weight elements or have trivial intersection with at least two \(K\)-conjugacy classes of minimal weight, we necessarily have \(\beta\geq 2\). Thus \[\int_{\operatorname{proGrp}(K)}\left|N(\mathscr{G},\operatorname{inv}_{X}^{G,S,\Sigma})-\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right|^{2}d\mu_{K}^{\operatorname{MB}}(\mathscr{G}) \ll\frac{1}{(\log X)^{3}}\left(\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right)^{2}\] \[\ll\frac{\left(\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right)^{2}}{\left(\log\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\right)^{3}}.\] The last inequality follows from \(\int_{\operatorname{Grp}(K)}\operatorname{inv}_{X}^{G,S,\Sigma}\,dM\ll X^{1/a_{\operatorname{inv}}(G)+\epsilon}\ll X^{2}\), which is a consequence of Lemma 5.4 and Corollary 5.5. The function \(\psi(X)=(\log X)^{3}\) is nondecreasing and satisfies \(\sum\frac{1}{n(\log n)^{3}}<\infty\), so [1, Theorem 1.3(iii)] directly proves Theorem 6.1(iii).

## 7. Making predictions for number field counting

Our results suggest that the Vast Counting Heuristic described by the author in [1, Heuristic 1.7] applies to Malle's conjecture, as long as we expect the absolute Galois group \(G_{K}\) to be "typical" among groups with local data. Of course, it is well known that Malle's conjecture is false as stated: Kluners provided the first counterexample in \(C_{3}\wr C_{2}\subseteq S_{6}\), for which Malle's predicted \(b\)-invariant is too small [13]. Kluners' counterexample witnesses some atypical behavior for \(G_{\mathbb{Q}}\) among groups with local data distributed according to \(\mu_{K}^{\operatorname{MB}}\), specifically the behavior that \(\operatorname{Gal}(\mathbb{Q}(\zeta_{3})/\mathbb{Q})\) is a quotient of \(G_{\mathbb{Q}}\). Lemma 5.4 can be understood as recognizing \(\mu_{K}^{\operatorname{MB}}\) as a random group version of the Malle-Bhargava heuristic [1, 2]. The Malle-Bhargava heuristic states that the growth rate of Malle's counting function can be read off the Malle-Bhargava local series \[\frac{|G|}{|G^{\operatorname{ab}}[|\mu(K)|]|}\prod_{p}\left(\frac{1}{|G|}\sum_{f\in\operatorname{Hom}(G_{K_{p}},G)}p^{-(\nu_{p}\operatorname{disc}(f))s}\right),\] with possibly some constant out front. The convergent Euler product representation for \(c(K,G)\) can immediately be recognized as coming from this series. The Malle-Bhargava principle, like Malle's conjecture, is of course wrong in some cases. By analogy, this means we expect our random group with local data \(\mu_{K}^{\text{MB}}\) to miss certain behaviors and produce incorrect predictions in similar cases to the Malle-Bhargava principle (as with Kluners' counterexample). It would be useful to compile a list of known "atypical" behaviors for the absolute Galois group, that is, behaviors we know can break the Malle-Bhargava principle.
To the author's knowledge, the following list represents the sources of all _currently known_ issues with the Malle-Bhargava principle (including the value of the leading constant):

1. Cases when \(G\) is **not generated by minimal index elements**. Theorem 1.2 does not even cover such cases, and with good reason: atypical behavior tends to be exacerbated by such orderings. One sees this in [10] for \(D_{4}\subseteq S_{4}\) and [14] for \(\text{Heis}_{3}\subseteq S_{9}\), where the leading constant is given explicitly by a convergent infinite sum of Euler products rather than just one. Indeed, even for abelian groups the leading constant may be a finite sum of convergent Euler products [13, 12].

2. **Property E**, as it is called in [15], which states that a central embedding problem for \(G_{K}\) is solvable if and only if the corresponding local embedding problems are solvable. This is not generically true for \(\mu_{K}^{\text{MB}}\), although it is known that this behavior is important for counting number fields. In class group statistics and related statistics of unramified objects, this property is related to the size of certain \(|\mu(K)|\)-torsion in the Schur multiplier of \(G\) and can result in different constants out front. Some authors avoid this issue by considering groups for which the corresponding Schur multiplier is trivial [15, 16], while others have computed what the contribution of the Schur multiplier is expected to be [17, 18, 19] under various guises.

3. **Gaining extra roots of unity**. Kluners' counterexample shows that the \(G\)-extensions \(L/K\) for which \(\mu(L)\neq\mu(K)\) can accumulate beyond Malle's predicted growth rate [14]. Malle's predictions are corrected for this behavior by Turkelli [14, 15], although the leading constant is not explicitly addressed in these corrections.

4. **The Grunwald problem**. Not every local condition can occur in a global extension of number fields. This was first noticed by Wang in correcting the Grunwald-Wang theorem [12]: there are no \(C_{8}\)-extensions of \(\mathbb{Q}\) which are totally inert at \(2\). Missing local conditions should result in missing terms in the leading Euler product, as seen for abelian groups in [12]. Identifying when the Grunwald problem has a positive solution is currently an open problem, see [13, 14, 15, 16].

5. **Fair versus unfair counting functions**. Even when Grunwald-Wang counterexamples can be avoided, the leading constant for counting abelian extensions need not agree with the convergent Euler product in Theorem 1.2. A result of Wood shows that these leading constants agree if the extensions are ordered by a so-called fair counting function [12] (Wood's main results say much more than this, but we restrict our attention only to the leading constant). Wood defines the notion of a fair counting function in general for abelian extensions, but it is not immediately clear how this definition should generalize to nonabelian groups. The author personally considers this the most mysterious obstruction for counting nonabelian extensions. The primary example of a fair counting function on abelian groups is the product of ramified primes ordering. One expects this ordering to be fair for nonabelian groups as well, but the author is not aware of any proposals for a general definition of "fair" in this context.

We do not attempt to solve these issues in this paper.
A complete fix accounting for any one of these behaviors would represent immense progress in the study of number field counting, and will be the subject of continued research of the author. We contend that one of the major difficulties in surpassing these obstructions lies in their intersections. To some extent, each one is affected by the presence of roots of unity. It will often be the case that a finite group \(G\) witnesses several of these behaviors simultaneously, making it difficult to study them as individuals. Instead, we briefly discuss how \(\mu_{K}^{\mathrm{MB}}\) interacts with these behaviors. This verifies longstanding beliefs in the field that the Malle-Bhargava principle for the product of ramified primes ordering should be "essentially correct away from roots of unity."

### Agreement with known cases in the constant

Theorem 6.1 agrees with the counting results proven by Wood [26] for abelian extensions ordered by a fair counting function, as long as the Grunwald problem has a positive solution for all local restrictions for the Galois group. This includes all odd order abelian groups. Theorem 6.1 otherwise disagrees with Wood's results, showing that Wood's counting results witness the "Grunwald problem" and "fair counting function" obstructions. Moreover, Theorem 1.2 agrees with the asymptotic growth rate for \(S_{n}\)-extensions of \(\mathbb{Q}\) for \(n=3,4,5\), and the predicted growth rate for \(n\geq 6\), down to the constant factor of \(1/2\) in front of the Malle-Bhargava local series [1]. This comes from the factor \[\frac{1}{|S_{n}^{\mathrm{ab}}[|\mu(\mathbb{Q})|]|}=\frac{1}{|C_{2}[2]|}=\frac{1}{2}\] appearing in the constant out front.

### \(G\) not generated by minimal index elements

This obstruction is known to have an effect on number field counting. The leading constant for counting \(D_{4}\subseteq S_{4}\) extensions is a convergent sum of Euler products [10], which is significantly different from the leading constant in Theorem 1.2. This type of behavior is common, occurring also for \(C_{2}\wr H\)-extensions [13] and \(\mathrm{Heis}_{3}\subseteq S_{9}\) [12]. The root cause of these issues is that, when \(G\) is not generated by minimal index elements, there exist subfields that occur in a positive proportion of \(G\)-extensions. Despite changing the behavior of the leading constant, having a subfield occur in a positive proportion of \(G\)-extensions makes it _easier_ to determine the asymptotic growth rate. This is the subject of forthcoming work of the author with Lemke Oliver, Wang, and Wood, where the asymptotic growth rate is determined for Malle's counting function for a number of groups which are not generated by minimal index elements (including cases which agree _or disagree_ with Malle's Conjecture). When \(G\) is not generated by minimal index elements, one has \(\beta=0\) in Lemma 6.4(b). As a consequence, our current methods are not able to produce an asymptotic with probability \(1\). We claim that this should be expected for the random group with local data \(\mu_{K}^{\mathrm{MB}}\) just from its structure. The Law of Large Numbers results in [1] rely on proving enough independence (or uncorrelatedness) between the random variables \(\#\mathrm{Epi}(\mathscr{G},(G,S,\phi))\). In a sense, this type of "independence" is a model for independence of the Frobenius distribution between two \(G\)-extensions \(L_{1}/K\) and \(L_{2}/K\).
We know by Chebotarev density that the distributions of Frobenius in \(L_{1}\) and \(L_{2}\) are independent if and only if the extensions are disjoint, i.e. \(L_{1}\cap L_{2}=K\) is trivial. In this setting, [1, Theorem 1.4] can be interpreted as requiring that \(100\%\) of pairs of \(G\)-extensions have trivial intersection \(L_{1}\cap L_{2}=K\). Now, suppose \(N\trianglelefteqslant G\) is a proper normal subgroup of \(G\) containing all the minimal index elements. For a fixed \(G/N\)-extension \(F/K\), the twisted Malle's conjecture [1] uses the Malle-Bhargava principle to predict that a positive proportion of \(G\)-extensions \(L/K\) have fixed field \(L^{N}=F\). This is in direct contradiction to what we need for [1, Theorem 1.4], showing that there is no hope of Theorem 1.2 being true as stated in these cases. This argument shows that, in some sense, \(\mu_{K}^{\mathrm{MB}}\) is already seeing this obstruction. We require different methods to study the counting function in such cases, but one no longer expects a result that looks like Theorem 1.2. ### Property E Property E was so named by Liu-Wood-Zureick-Brown in [13] where the "E" stands for "Extensions". It is equivalent to a certain local-to-global property: given a central short exact sequence \(1\to Z\to E\to G\to 1\), the embedding problem is solvable if and only if the corresponding local embedding problems are solvable for each place \(p\) of \(K\). This forces the existence of certain \(E\)-extensions given the existence of certain \(G\)-extensions, clearly resulting in some effects for number field counting problems. No examples of this property affecting Malle's counting function specifically currently exist in the literature, but the author is aware of an example in forthcoming work of Koymans-Pagano where property E is used to show Malle's predicted power of \(\log X\) for the product of ramified primes ordering is incorrect for a certain nilpotency class \(2\) group. By replacing the absolute Galois group \(G_{K}\) with a group with local data \(\mathscr{G}\in\mathrm{proGrp}(K)\) in the above diagrams, we can consider property E for groups with local data. Relating the Malle-Bhargava local series directly to central embedding problems is much trickier to formulate; by considering the random group with local data \(\mu_{K}^{\mathrm{MB}}\) we make this relationship clear. In particular, we can ask what the probability is that property E holds with respect to \(\mu_{K}^{\mathrm{MB}}\). One can show that this event is measurable, but at this time we are not able to compute the actual probability. Given work on statistics of unramified objects whose leading constants disagree with the Malle-Bhargava local series, such as [1, 13, 14, 15], we expect that property E has probability strictly less than \(1\) with respect to \(\mu_{K}^{\mathrm{MB}}\). We will not, however, leave this subsection empty handed. It is worth noting at this time that \(\mu_{K}^{\mathrm{MB}}\) "essentially" specializes to the random \(\Gamma\)-group defined by Liu-Wood-Zureick-Brown [13]. We consider maximal quotients of \(\mathscr{G}\) of the form \(H\rtimes\Gamma\), in which all ramification lies outside of \(H\) and \(H\) is prime to \(|\mu(K)||\Gamma|\). 
Given any \(\pi:F_{K,S}\to\Gamma\), if \(\pi\) factors through \(\mathscr{F}_{K,S}(r)\) then the maximal unramified extension of \(\pi\) of this form is given by the group \[\mathscr{F}_{K,S}(r)/\langle\ker\pi|_{I_{p}}\rangle =F_{K,S}/\langle r_{i}\in\ker\pi\ (i=1,...,|S|+u),\ker\pi|_{I_{p}}\rangle\] \[=\langle\mathrm{Fr}_{p}:p\in S\rangle_{|\mu(K)||\Gamma|}\rtimes\Gamma/\langle r_{i}\in\ker\pi\ (i=1,...,|S|+u)\rangle\] \[=\big(\langle\mathrm{Fr}_{p}:p\in S\rangle_{|\mu(K)||\Gamma|}/\langle r_{i}\gamma r_{i}^{-1}\gamma^{-1}\ (i=1,...,|S|+u\ \mathrm{and}\ \gamma\in\Gamma)\rangle\big)\rtimes\Gamma.\] If \(\pi\) corresponds to a \(\Gamma\)-extension \(L/K\), then this group is exactly equal to the random group proposed in [10] (with a semidirect product by \(\Gamma\)). This is where we say these random groups are "essentially" the same. The difference is that a \(\pi\) which factors through \(\mathscr{F}_{K,S}(r)\) merely _models_ a \(\Gamma\)-extension rather than being equal to one on the nose. This is due to a difference in setting: our random group is constructed with the goal of counting extensions with restricted local behavior, so entire \(H\rtimes\Gamma\) extensions are being modeled here. Meanwhile, Liu-Wood-Zureick-Brown construct a random group to model just the unramified \(H\) piece, and vary over actual \(\Gamma\)-extensions (not models of \(\Gamma\)-extensions). It is notable that Liu-Wood-Zureick-Brown's special relations of the form \(r\gamma r^{-1}\gamma^{-1}\) fall out of the inertia data we included in the definition of a group with local data. Liu-Wood-Zureick-Brown constructed this expression for their relations by imposing property E on their random group. This suggests that the inclusion of inertia data in \(\mu_{K}^{\mathrm{MB}}\) captures at least some part of property E.

### Root of unity behavior

Kluners gave the example \(C_{3}\wr C_{2}\subseteq S_{6}\) for which Malle's predicted growth rate was incorrect [15]. The issue was not just in the constant term; in fact, Malle's predicted power of \(\log X\) is incorrect for this group. The behavior of the counting functions changes in the presence of additional roots of unity. The \(C_{3}\wr C_{2}\)-extensions \(L/\mathbb{Q}\) for which \(\mathbb{Q}(\zeta_{3})\subseteq L\) are asymptotically more abundant than other \(C_{3}\wr C_{2}\)-extensions. The root cause is that every prime ideal of \(\mathbb{Q}(\zeta_{3})\) either divides \(3\) or is \(1\) mod \(3\), which is not true for other quadratic extensions, allowing for more ways to ramify in the \(C_{3}\)-extension on top. This makes the extensions by roots of unity "exceptional" in some sense, capturing some fundamental behavior of the absolute Galois group. This behavior was not built into the random group with local data \(\mu_{K}^{\mathrm{MB}}\), and we can in fact prove that root of unity behavior occurs with probability \(0\).

**Definition 7.1**.: _Let \(L/K\) be an extension of number fields. The subcategory \(\mathrm{proGrp}_{\neg L}(K)\) of groups with local data away from \(L\) is defined to be the subcategory of \(\mathrm{proGrp}(K)\) whose objects satisfy_ \[\mathrm{Hom}(\mathscr{G},(\mathrm{Gal}(E/K),\phi_{E/K}))=\emptyset\] _for each subextension \(E\leqslant L\) with \(K\neq E\)._

Given an extension \(M/K\), the corresponding Galois group with local data belongs to \(\mathrm{proGrp}_{\neg L}(K)\) if and only if \(M\cap L=K\).
Groups with local data that could be the source of a Kluners-style counterexample would be objects in \(\mathrm{proGrp}(K)\backslash\mathrm{proGrp}_{\neg K(\zeta^{e})}(K)\) for some integer \(e\). As the following theorem shows, these objects occur with probability \(0\) in the distribution:

**Theorem 7.2**.: _Let \(L/K\) be a nontrivial finite extension of number fields and \(\mu_{K}^{\mathrm{MB}}\) the measure given by Theorem 1.1. Then_ \[\mu_{K}^{\mathrm{MB}}\left(\mathrm{proGrp}(K)\backslash\mathrm{proGrp}_{\neg L}(K)\right)=0.\]

This means that taking probabilities conditional on \(\mathrm{proGrp}_{\neg L}(K)\) acts as if we had taken no conditions at all. Thus, we can avoid any finite extension \(L/K\) we want without losing any information from the distribution \(\mu_{K}^{\mathrm{MB}}\). This behavior is not shared by the absolute Galois group; counting all \(G\)-extensions \(L/K\) versus only those \(G\)-extensions for which \(L\cap K(\zeta^{e})=K\) can give different asymptotic growth rates by affecting the power of \(\log X\). This result suggests that the random group with local data \(\mu_{K}^{\mathrm{MB}}\) would be a better fit for counting the latter extensions.

Proof.: Any element \((G,\phi)\in\mathrm{proGrp}(K)\backslash\mathrm{proGrp}_{\neg L}(K)\) has a morphism to \((\mathrm{Gal}(E/K),\phi_{E/K})\) for some \(E\leq L\), \(E\neq K\), and as the conjugates of \(\phi_{E/K}(G_{K_{p}})\) generate \(\mathrm{Gal}(E/K)\) it must be that the morphism is an epimorphism. Thus, it is given by some surjective group homomorphism \(\pi:G\to\mathrm{Gal}(E/K)\) for which \(\pi\phi_{p}=\phi_{E/K,p}\). It follows that \[\int_{\mathrm{proGrp}(K)\backslash\mathrm{proGrp}_{\neg L}(K)}d\mu_{K}^{\mathrm{MB}} \leq\int_{\mathrm{proGrp}(K)\backslash\mathrm{proGrp}_{\neg L}(K)}\max_{\begin{subarray}{c}E\leq L\\ E\neq K\end{subarray}}\#\mathrm{Epi}(\mathscr{G},(\mathrm{Gal}(E/K),\phi_{E/K}))d\mu_{K}^{\mathrm{MB}}\] \[\leq\sum_{\begin{subarray}{c}E\leq L\\ E\neq K\end{subarray}}\int_{\mathrm{proGrp}(K)\backslash\mathrm{proGrp}_{\neg L}(K)}\#\mathrm{Epi}(\mathscr{G},(\mathrm{Gal}(E/K),S,\phi_{E/K,S}))d\mu_{K}^{\mathrm{MB}}\] \[\leq\sum_{\begin{subarray}{c}E\leq L\\ E\neq K\end{subarray}}|\mathrm{Gal}(E/K)^{\mathrm{ab}}[|\mu(K)|]|^{-1}|\mathrm{Gal}(E/K)|^{-|S\cup P_{\infty}|+1}\] for any finite set of places \(S\), where the maximum has been bounded by the (finite) sum over the subextensions \(E\). Taking \(S\to P_{K}\) proves the result, noting that \(1<|\mathrm{Gal}(E/K)|<\infty\).

Turkelli posed a correction to Malle's prediction [10, 11], although this correction is a little ad hoc. Essentially, one partitions the count of \(G\)-extensions \(L/K\) by the isomorphism class of \(\mu(L)\) and applies the Malle-Bhargava principle to each case. There are only finitely many isomorphism classes that \(\mu(L)\) can attain among \(G\)-extensions for a fixed finite group \(G\), so the asymptotic growth rate is given by the maximum growth rate of the finitely many individual cases. It would be interesting to refine the construction of \(\mu_{K}^{\mathrm{MB}}\) in order to construct a new measure incorporating root of unity behavior, in such a way that \(\mathrm{proGrp}_{\neg K(\zeta^{e})}(K)\) is forced to be a null set, and see if this refined measure matches Turkelli's correction.

### The Grunwald problem

The Grunwald problem \((G,K,S)\) is said to have a positive solution if the restriction map \[\mathrm{Hom}(G_{K},G)\to\prod_{p\in S}\mathrm{Hom}(G_{K_{p}},G)\] is surjective. In other words, every local restriction for the places in \(S\) is realizable in a global extension.
The Grunwald problem immediately generalizes to arbitrary groups with local data, simply by replacing \(G_{K}\) with the group with local data in question \(\mathscr{G}\). We can then ask for the probability that the Grunwald problem has a positive solution with respect to \(\mu_{K}^{\mathrm{MB}}\). **Theorem 7.3**.: _The event that \(\mathscr{G}\in\operatorname{proGrp}(K)\) has a positive solution to the Grunwald problem \((G,\mathscr{G},S)\) for every \(G\) and \(S\) has measure \(1\) with respect to \(\mu_{K}^{\operatorname{MB}}\). That is,_ \[\mu_{K}^{\operatorname{MB}}\left(\left\{\mathscr{G}\in\operatorname{proGrp}(K) \middle|\begin{matrix}\text{for any finite group $G$ and finite set of places $S$},\\ \operatorname{Hom}(\mathscr{G},G)\to\prod_{p\in S}\operatorname{Hom}(G_{K_{p}},G)\text{ is surjective}\end{matrix}\right\}\right)=1.\] Theorem 7.3 is not surprising, as \(\mu_{K}^{\operatorname{MB}}\) was constructed from all possible local data. This is, however, distinct from the behavior of the absolute Galois group \(G_{K}\). The Grunwald problem is known to have a negative solution for \((C_{8},\mathbb{Q},\{2\})\), following from the Grunwald-Wang theorem. The source of this failure is again roots of unity (specifically the \(8^{\text{th}}\) roots of unity for the \(C_{8}\) case), and so it stands to reason that refining the measure to incorporate root of unity behavior might fix this issue as well. Alternatively, we know the Grunwald problem has a positive solution for any set of finite places \(S\) when \(G\) is an odd solvable group or when \(G\) has a generic extension. This includes odd order abelian groups and symmetric groups, for which we see \(\mu_{K}^{\operatorname{MB}}\) predicting growth rates that agree with number field counting results (for fair counting functions). This suggests that similar results might be true for other groups for which the Grunwald problem always has a positive solution, detecting when we might reasonably expect Theorem 1.2 to make accurate predictions for number field counting. Proof of Theorem 7.3.: The proof follows from Theorem 6.1. For each \(\phi\in\prod_{p\in S}\operatorname{Hom}(G_{K_{p}},G)\), take \(\Sigma=(\Sigma_{p})\) defined by \[\Sigma_{p}=\begin{cases}\{\phi_{p}\}&p\in S\\ \operatorname{Hom}(G_{K_{p}},G)&p\notin S.\end{cases}\] Additionally, take the invariant inv to be given by a weight function in the following way: * If there exists a \(\mathbb{Q}\)-conjugacy class (that is, a minimal set closed under conjugation and invertible powers) \(\kappa\subseteq G\) for which \(G\backslash\kappa\) is a subgroup of \(G\), take the weight function \[w(g)=\begin{cases}0&g=1\\ 1&g\in\kappa\\ 2&g\in G\backslash(\kappa\cup\{1\}).\end{cases}\] * Otherwise, take the weight function \[w(g)=\begin{cases}0&g=1\\ 1&g\neq 1\end{cases}\] giving the product of ramified primes ordering. We claim that Theorem 6.1(iii) applies to each group under these orderings, proving that the \((G,\mathscr{G},S)\) Grunwald problem has a positive solution with probability \(1\). Applying countable additivity to the countably many finite groups \(G\) and finite sets of places \(S\) then concludes the proof. If there does not exist a \(\mathbb{Q}\)-conjugacy class (that is, a minimal set closed under conjugation and invertible powers) \(\kappa\subseteq G\) for which \(G\backslash\kappa\) is a subgroup of \(G\), then every proper normal subgroup \(N\lhd G\) necessarily has \(G\backslash N\) containing at least two \(K\)-conjugacy classes. 
Thus, the conditions of Theorem 6.1(iii) are automatically satisfied. Alternatively, suppose such a \(\kappa\) exists. We claim that no proper normal subgroup of \(G\) can contain a minimal weight element, so that the conditions of Theorem 6.1(iii) are again automatically satisfied. Consider that \(G\backslash\kappa\) being a proper subgroup necessarily implies \(|G\backslash\kappa|\leqslant\frac{|G|}{2}\). Any normal subgroup \(N\trianglelefteqslant G\) that contains a minimal weight element has nontrivial intersection with \(\kappa\), and therefore contains \(\kappa\) by virtue of being closed under conjugation and invertible powers. \(N\) must also contain \(1\), so we bound \[|N| \geqslant|\kappa|+1\] \[=|G|-|G\backslash\kappa|+1\] \[\geqslant|G|-\frac{|G|}{2}+1\] \[>\frac{|G|}{2}.\] Thus \([G:N]<2\), so \(N=G\), contradicting the assumption that \(N\) is a proper subgroup.

### Fair counting functions

Wood proves that ordering abelian extensions by a fair counting function is sufficient for the leading constant to be given by a convergent Euler product [10]. This leading constant for counting abelian \(G\)-extensions of a number field \(K\) agrees with Theorem 6.1 if \(G\) and \(K\) are such that every Grunwald problem \((G,K,S)\) has a positive solution. Thus, away from the Grunwald problem obstruction, fairness of the ordering is sufficient for the growth rate for abelian groups in Theorem 6.1 to agree with the true growth rate of Malle's counting function for abelian groups. It is natural to ask if the condition of fairness given by Wood can be weakened. Theorem 6.1 gives an asymptotic growth rate for the counting function corresponding to an admissible invariant inv counting surjections into a finite group \(G\) in the following circumstance: \[G\text{ is generated by the minimal weight elements with respect to }\operatorname{inv}.\tag{3}\] It is natural to ask if condition (3) is sufficient for the asymptotic growth rate in Theorem 6.1 to agree with the true growth rate of Malle's counting function for counting \(G\)-extensions over a number field \(K\) ordered by inv. If this were true, it would indicate that the fairness obstruction is subsumed by the "\(G\) not generated by minimal index elements" obstruction.

However, this is known to be false: condition (3) alone is not sufficient for the asymptotic growth rate in Theorem 6.1 to agree with the true growth rate of Malle's counting function. Wood shared a counterexample with the author in a personal correspondence [10]: Order \(\mathbb{Z}/4\mathbb{Z}\)-extensions of \(\mathbb{Q}(\sqrt{2})\) by the admissible invariant inv determined by the weight function \[w(g)=\begin{cases}0&g=0\\ 1&g=1,3\\ 2&g=2.\end{cases}\] It is clear that \(\mathbb{Z}/4\mathbb{Z}\) is generated by minimal weight elements, and so satisfies condition (3). However, explicit computation (which Wood carries out following Wright's original proof of Malle's conjecture for abelian extensions [11, 10]) reveals that \[\#\{K/\mathbb{Q}(\sqrt{2}):\operatorname{Gal}(K/\mathbb{Q}(\sqrt{2}))\cong\mathbb{Z}/4\mathbb{Z},|\operatorname{inv}(K/\mathbb{Q}(\sqrt{2}))|\leqslant X\}\sim cX\] for a positive constant \(c\) given explicitly by the sum of _two distinct_ convergent Euler products. This is certainly different from the growth rate given in Theorem 6.1, which involves only a single convergent Euler product. Wood's counterexample shows that there is some nontrivial condition beyond (3) an ordering must satisfy for Theorem 6.1 to match the true growth rate of Malle's counting function.
Fairness is sufficient in the case of abelian groups avoiding the Grunwald problem obstruction, and it would be an interesting question to determine if fairness is also necessary in such cases. Moving beyond abelian groups, the notion of a fair counting function defined in [10] does not have an immediate generalization to nonabelian groups. To the author's knowledge, there has been no proposed extension of the notion of a fair counting function to nonabelian extensions, other than the widespread recognition that the product of ramified primes ordering should be considered fair. Possibly, the discriminant ordering for \(S_{n}\)-extensions in degree \(n\) should also be considered fair for similar reasons [1]. This makes it difficult to determine for which orderings we should expect the growth rates in Theorem 6.1 to agree with the true growth rate of Malle's counting functions (even assuming all other obstructions can be avoided). Vaguely speaking, the source of the definition of "fair" counting functions for abelian extensions is similar to that of the Grunwald problem. In the generating Dirichlet series computations in [10], there occurs a finite sum of Euler products with rightmost pole of maximal order. When the sum cancels out, it can be proven that the entire generating Dirichlet series cancels out, giving a negative solution to the corresponding Grunwald problem. When the sum does not cancel out, the individual Euler products may produce different leading constants that must be added together. A fair counting function forces this sum to only include those Euler products which produce the same leading constant. Because of this relationship, the author is optimistic that incorporating the Grunwald problem into the random model will shed light on what a fair counting function should be for nonabelian extensions. In particular, a good next step in this direction is to refine the measure to include root of unity behavior and ask which orderings have nice local probabilities in analogy with Wood's main results in [10].
2306.17338
A Survey on Blockchain-Based Federated Learning and Data Privacy
Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computational power and model transmission. This method reduces the costs and privacy concerns associated with centralized machine learning methods while ensuring data privacy by distributing training data across heterogeneous devices. On the other hand, federated learning has the drawback of data leakage due to the lack of privacy-preserving mechanisms employed during storage, transfer, and sharing, thus posing significant risks to data owners and suppliers. Blockchain technology has emerged as a promising technology for offering secure data-sharing platforms in federated learning, especially in Industrial Internet of Things (IIoT) settings. This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures. We conduct a systematic review of existing literature on secure data-sharing platforms for federated learning provided by blockchain technology, providing an in-depth overview of blockchain-based federated learning and its essential components, and discussing its principles and potential applications. The primary contribution of this survey paper is to identify critical research questions and propose potential directions for future research in blockchain-based federated learning.
Bipin Chhetri, Saroj Gopali, Rukayat Olapojoye, Samin Dehbash, Akbar Siami Namin
2023-06-29T23:43:25Z
http://arxiv.org/abs/2306.17338v1
# A Survey on Blockchain-Based Federated Learning and Data Privacy

###### Abstract

Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computational power and model transmission. This method reduces the costs and privacy concerns associated with centralized machine learning methods while ensuring data privacy by distributing training data across heterogeneous devices. On the other hand, federated learning has the drawback of data leakage due to the lack of privacy-preserving mechanisms employed during storage, transfer, and sharing, thus posing significant risks to data owners and suppliers. Blockchain technology has emerged as a promising technology for offering secure data-sharing platforms in federated learning, especially in Industrial Internet of Things (IIoT) settings. This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures. We conduct a systematic review of existing literature on secure data-sharing platforms for federated learning provided by blockchain technology, providing an in-depth overview of blockchain-based federated learning and its essential components, and discussing its principles and potential applications. The primary contribution of this survey paper is to identify critical research questions and propose potential directions for future research in blockchain-based federated learning.

Federated learning, Data privacy, Privacy-preserving, Blockchain, Industrial Internet of Things (IIoT), Data Security, Data-sharing platforms

## I Introduction

The rapid development of the Industrial Internet of Things (IIoT) has resulted in a significant increase in data generated by connected devices [7]. The current privacy and security measures for IIoT are outdated and require significant updates, and some of these measures are still under development and testing with a myriad of vulnerabilities. As a result, new techniques and policies are urgently needed to secure data sharing across wireless networks and address security challenges in IIoT. To address these challenges, Monrat et al. [26] propose the use of blockchain technology as a secure data-sharing architecture, positioning blockchain as a decentralized and secure foundation for the IoT. Rao et al. [31] note that user privacy laws in many regions worldwide mandate that technology companies handle user data with extra care. Conventional machine learning techniques have a significant limitation in that they require all data to be gathered in a single location, typically a data center. This approach poses a potential risk to user privacy and could violate data confidentiality laws that protect sensitive information. As a result, new machine learning techniques that can preserve data privacy and confidentiality are needed to address these concerns. To address the limitations of conventional machine learning techniques, Cloud Service Providers (CSPs) have adopted a strategy of centralizing data storage. This approach helps to ensure data integrity, availability, privacy, and confidentiality, and it enables CSPs to manage and protect data more effectively so that it remains secure and readily available to authorized users. By centralizing data storage and management, CSPs can also integrate advanced security measures and technologies to protect against cyber threats and safeguard sensitive information.
Nevertheless, CSPs do not always deliver trustworthy data services to customers, and there are problems with cloud data storage such as data breaches, data theft, privacy concerns, and cloud data unavailability. Therefore, submitting raw data to a central server raises privacy and communication concerns for data owners, reducing the likelihood of uploading data. Bonawitz et al. [6] state that Federated Learning (FL) is an innovative approach to machine learning that resolves problems associated with traditional methods. FL allows multiple parties to train a shared model without revealing their data: training is distributed across several clients, with no need for data samples to be exchanged. In the traditional FL approach, a centralized server manages the training process, including client management, global model maintenance, and gradient aggregation. Each round, the server sends the current model to the participating nodes, which update it with their local data and send the resulting gradients back. The server then aggregates the gradients and integrates them into the model for the next round. FL preserves data privacy by sharing gradient information instead of raw data. However, as Li et al. [16] note, there is still a risk of sensitive information being exposed to a third party or the central server. Moreover, the conventional FL framework is vulnerable to malicious data and single points of failure, which can undermine its reliability. Blockchain technology is an alternative to centralized methods in IoT and edge computing, overcoming their limitations. Blockchain's decentralization enables Smart Contracts (SC), activated through blockchain transactions, to take over the role of the central server.

This study focuses on the implementation of privacy-preserving machine learning methods based on blockchain and their practical applications. The main objective of this exploratory study is to understand how blockchain-based federated learning can enhance the training stage in machine learning algorithms, with a focus on addressing data privacy requirements while enabling practical applications. A review of existing literature on blockchain-based federated learning is conducted to provide a comprehensive overview of the current state-of-the-art in this field. The research efforts are categorized and analyzed to identify open research questions and challenges that need to be addressed to advance this line of research. The contributions of this study are as follows:

* An in-depth overview of blockchain-based federated learning along with its essential components, underlying principles, and potential applications is presented.
* A comprehensive systematic review of existing literature on privacy preservation in blockchain is conducted.
* A number of critical open research questions and potential directions for future research in blockchain-based federated learning are proposed.

The paper is organized as follows: Section **II** covers the motivation for the study, while Section **III** provides background information on Federated Learning, Blockchain, Blockchain-based Federated Learning, and Data Privacy. Recent studies on implementing Federated Learning using Blockchain are discussed in Section **IV**. The weaknesses of using blockchain technology in federated learning are discussed in Section **V**, and future research directions are explored in Section **VI**. The conclusion is presented in Section **VII**.
## II Motivation

The primary motivation behind this study is the pressing issues surrounding data privacy and sharing. Security concerns regarding federated learning have arisen, stemming from the potential for malicious clients or central servers to attack the global model or access user privacy data. Qu et al. [29] tackled the data privacy issue by implementing federated learning in which only the models, and not the raw data, are shared, maintaining privacy while keeping the data efficient and useful. They propose a fully decentralized federated learning system that employs blockchain technology as the underlying architecture together with the proof-of-work (PoW) consensus process. The decentralized system offers resistance to poisoning attacks, and incentives are provided and accuracy is enhanced through member selection. Data leakage during storage, transmission, and sharing is a significant challenge faced by data owners and providers, particularly in the context of Industrial Internet of Things (IIoT) applications [34]. To address attacks on global models or user privacy data, Li et al. [17] proposed the Blockchain-based Federated Learning framework with Committee consensus (BFLC), a decentralized, blockchain-based federated learning framework. However, the Committee consensus mechanism (CCM) used in the framework may result in a large amount of communication overhead between nodes, which could lead to slow training times and increased energy consumption. Lu et al. [19] proposed a solution to data-sharing challenges by combining federated learning with permissioned blockchain. The permissioned blockchain in this system creates secure connections between end IoT devices through encrypted records maintained by supernodes, such as base stations and roadside units, ensuring data privacy and accessibility. The proposed architecture does not store raw data; it uses the permissioned blockchain to access related data and controls data accessibility, addressing storage constraints and privacy concerns.

To facilitate collaborative learning and data protection, it is crucial to conduct research in blockchain-based federated learning and data privacy. In conventional machine learning scenarios, sharing a large dataset required for training a model is challenging, particularly when the data is distributed across multiple organizations or individuals [13]. Federated learning solves this problem by allowing each party to train a local model on its own data; the local model's parameters are then shared with a central server, which aggregates these updates to produce a global model [23]. To enhance the security and trustworthiness of this process, blockchain technology can provide a tamper-proof and transparent ledger for recording the model updates [4]. This way, participating parties can verify the integrity of the global model and ensure equitable incorporation of their contributions. Furthermore, blockchain can be used to implement privacy-preserving mechanisms such as differential privacy, which enable participating parties to share their model updates without revealing sensitive information.
## III Technical Background

This section provides a discussion of the background of federated learning, blockchain technology, blockchain-based federated learning, and data privacy, along with the fundamental concepts and principles of each area. Understanding the backgrounds of blockchain and federated learning is crucial, as it allows us to appreciate the potential benefits and limitations of applying blockchain-based federated learning to address data privacy concerns in collaborative learning scenarios.

### _Federated Learning_

Federated Learning (FL) is a decentralized approach to machine learning that allows clients to participate in training a machine learning model without uploading their data samples to a centralized data warehouse, thereby preserving the privacy of each data sample, since the samples remain with the clients. In each iteration of model training, clients train a model locally on their private data samples, starting from an initial global model provided by the aggregation server. After training, the resulting model gradients or weights are uploaded to a server for aggregation. The core workflow involved in traditional FL thus includes several steps: the participating clients are selected, each client trains its own local model, the local model weights are uploaded to the centralized server, and the server aggregates the local model weights to obtain a global model [40].

### _Blockchain_

Blockchain is a decentralized and tamper-proof system used for permanently recording transactions. It consists of a network of nodes, transactions, a chain of blocks, and a shared ledger. The transactions are recorded and maintained by all the nodes in the network, providing benefits such as decentralization, immutability, transparency, and anonymity [21]. Based on the level of access control, blockchain can be categorized as public, private, or consortium. In a public blockchain, any node can participate in the network without requiring permission. In contrast, nodes on a private blockchain require authorization to join the network and access the shared ledger. In a consortium blockchain, control is typically restricted to selected nodes, which have the right to generate new blocks, making consortium blockchain a partially decentralized system [40].

### _Blockchain-based Federated Learning_

The traditional approach to Federated Learning (FL) depends on a central server for model aggregation, which can be considered a weak point in the system. To improve reliability, blockchain technology, which is decentralized, has been proposed as a solution [24]. Blockchain-based Federated Learning (BCFL) integrates the decentralized property of blockchain with the distributed nature of FL to eliminate the threat of a single point of failure in the FL system's aggregation server [12, 17]. Various BCFL architectures have been proposed, which can be categorized into three groups: (1) fully coupled BCFL, (2) flexibly coupled BCFL, and (3) loosely coupled BCFL [40]. Smart Contracts (SC) have been used to implement the functions of FL aggregation in more recent approaches such as BLOCKFL [12] and BAFFLE [30]. These SCs are activated through blockchain transactions to facilitate the aggregation of local model updates from participating clients.
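To make the aggregation step concrete, the following is a minimal sketch of the FedAvg-style weighted averaging of client model weights that such contracts coordinate. It is illustrative only: the function and variable names (`fedavg`, `local_weights`, `num_samples`) are our own assumptions and do not correspond to the API of any surveyed system.

```python
# Minimal sketch of FedAvg-style aggregation of local model weights.
# All names here are hypothetical; real BCFL systems implement this
# step inside a smart contract triggered by blockchain transactions.
from typing import List
import numpy as np

def fedavg(local_weights: List[List[np.ndarray]],
           num_samples: List[int]) -> List[np.ndarray]:
    """Aggregate per-client weights, weighted by local dataset size."""
    total = sum(num_samples)
    n_layers = len(local_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Each client's contribution is proportional to its data share.
        agg = sum((n / total) * w[layer]
                  for w, n in zip(local_weights, num_samples))
        global_weights.append(agg)
    return global_weights

# Example: three clients with a one-layer "model" of two parameters each.
clients = [[np.array([1.0, 2.0])],
           [np.array([3.0, 4.0])],
           [np.array([5.0, 6.0])]]
print(fedavg(clients, num_samples=[10, 30, 60]))  # pulled toward client 3
```

In a BCFL deployment, the inputs to such a routine would be read from, and its output written back to, the shared ledger, so that every participant can verify the aggregation.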
In BLOCKFL, a two-phase consensus protocol is used to ensure the integrity of the aggregated model, while BAFFLE employs a modified Byzantine fault-tolerant consensus algorithm to address the security and scalability issues faced in traditional FL systems [30].

### _Data Privacy_

In traditional FL, the data samples of each participating client are not exposed to each other or to the aggregation server; however, the local model updates sent over for aggregation are exposed to the server. Typically in a BCFL, the model updates from the participating clients are also uploaded to the blockchain network as raw data. These scenarios lead to data leakage and pose a threat to the system, as malicious clients or attackers could exploit this vulnerability [19, 41, 20]. Li et al. [15] proposed BLADE-FL, a blockchain-based collaborative system designed to share data across multiple distributed parties while reducing the risk of data leakage; data privacy is ensured by incorporating differential privacy into federated learning. Other techniques, such as Homomorphic Encryption [11] and Secure Multiparty Computation [9], have been integrated to preserve privacy from end to end in a BCFL. These data privacy techniques are briefly discussed as follows:

#### III-D1 Differential Privacy (DP)

DP is a mathematical definition of privacy that protects users' privacy in published data by adding randomly generated noise. Formally, let \(A\) be a random algorithm and \(Q\) the set of all its possible outputs. If, for any pair of neighboring datasets \(D\) and \(D^{\prime}\) and any subset \(S\) of \(Q\), algorithm \(A\) satisfies the inequality below, then \(A\) satisfies \(\varepsilon\)-differential privacy, where \(\varepsilon\) is the privacy protection budget.

\[\Pr\left[A(D)\in S\right]\leq e^{\varepsilon}\Pr\left[A\left(D^{\prime}\right)\in S\right] \tag{1}\]

The above inequality states that the probability of algorithm \(A\) producing a result in set \(S\) on dataset \(D\) is at most \(e^{\varepsilon}\) times the probability of obtaining a result in \(S\) on the neighboring dataset \(D^{\prime}\). In simpler terms, switching between neighboring datasets changes the likelihood of any outcome by a factor of at most \(e^{\varepsilon}\).

#### III-D2 Homomorphic Encryption (HE)

HE is a cryptographic method that allows computation on encrypted data and provides encrypted results to the user without having to decrypt the underlying data. Aside from being able to process encrypted data, HE also ensures that the privacy of data is preserved. Given encryptions of messages \((m_{1},m_{2},m_{3},....m_{n})\) as \(E(m_{1}),E(m_{2}),E(m_{3})....,E(m_{n})\), a ciphertext that efficiently encrypts \(f(m_{1},m_{2},m_{3}....m_{n})\) can be computed for any computable function \(f\).

#### III-D3 Secure Multiparty Computation (MPC)

MPC is a cryptographic technique that enables different parties to carry out distributed computations securely. In MPC, a given number of parties \(P_{1},P_{2},P_{3},....,P_{N}\) each hold private data \(D_{1},D_{2},D_{3},....,D_{N}\), respectively. The participants jointly compute the value of a public function on their private data, \(F(D_{1},D_{2},D_{3},....,D_{N})\), while keeping their individual inputs secret.
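As a small illustration of MPC, the sketch below shows additive secret sharing, one of the simplest MPC building blocks, for the case where the public function \(F\) is the sum of the parties' inputs. It is a toy example under our own naming assumptions; production MPC protocols add authentication and security against malicious parties, which are omitted here.

```python
# Toy additive secret sharing over Z_Q: each party splits its private
# input into random shares, and only the sum of all inputs is ever
# reconstructed. Names and the modulus Q are illustrative assumptions.
import secrets

Q = 2**61 - 1  # public modulus, assumed larger than any possible sum

def share(secret: int, n_parties: int) -> list:
    """Split `secret` into n_parties additive shares summing to it mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % Q

# Three parties jointly compute the sum of their private inputs: each
# party distributes one share to every other party, each party adds up
# the shares it received, and only these partial sums are published.
inputs = [42, 7, 100]
all_shares = [share(x, 3) for x in inputs]
partial_sums = [sum(s[j] for s in all_shares) % Q for j in range(3)]
print(reconstruct(partial_sums))  # 149, with no individual input revealed
```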
## IV Blockchain-based Federated Learning

This section reviews existing research studies related to the application of blockchain technology to federated learning.

### _Homomorphic Encryption-based Approaches_

Wang et al. [38] proposed a blockchain-based privacy-preserving federated learning (BPFL) model for the Internet of Vehicles (IoV). The main goal is to mitigate the privacy risks of poisoning attacks by participants and of sensitive data theft by aggregation servers. The BPFL model consists of the following components: the client user, the federated learning node (FL node), the model aggregation node (MA node), the virtual verification node (VV node), and the certificate authority (CA). Homomorphic encryption, combined with an extended Multi-Krum scheme [5], allows local model updates to be verified and filtered. Multi-Krum uses the Krum function to calculate scores for each proposed vector, which helps identify reliable participants while excluding outliers in distributed machine learning. As a result, the system decreases runtime overhead [38]. In another study, Miao et al. [25] provided privacy by designing a blockchain-based privacy-preserving Byzantine-robust federated learning (PBFL) model. They build a trusted global model by checking cosine similarities to distinguish malicious gradient vectors from honest ones. They also applied the Cheon-Kim-Kim-Song (CKKS) scheme, a fully homomorphic encryption (FHE) method, to encrypt local gradients and provide privacy protection. However, their work is suitable only for a balanced distribution of client data and not for cases where the client data is non-independent and identically distributed (non-IID).

Sun et al. [35] approach blockchain-based federated learning by encrypting gradients using the BCP (Bresson-Catalano-Pointcheval) mechanism, which adds noise to each encrypted gradient. All the updated gradients are then collected on a second blockchain, which can identify a malicious client that provides low-quality gradients. In terms of overhead, the algorithm adds no extra encryption cost compared to previous works while reducing the overall processing time. The accuracy of the audit algorithm is over 92% in the baseline setting and decreases to 90% when faced with a poisoning attack (i.e., low-quality gradients), showing that the proposed model can recognize malicious owners. However, as the number of data owners increases, the behavior and audit chains may become overwhelmed, leading to increased processing times and delays, which could limit the practical use of the proposed approach in large-scale federated learning scenarios.

A novel technique was proposed by Alzubi et al. [1] using deep learning and blockchain paradigms to preserve the privacy of electronic health records. First, the health records are classified using a CNN into normal and abnormal users. Next, a federated learning mechanism based on cryptography is integrated into the blockchain system, which keeps track of encrypted local models from the FL clients and ensures that clients' contributions to the global model are verified before aggregation. An efficient and secure blockchain-based FL system paradigm (ESB-FL) was also developed by Chen et al. [8]. In the introduced scheme, a new lightweight cryptographic tool based on a non-interactive designated-decryptor function is used to encrypt each participant's local model updates. ESB-FL ensures the privacy protection of FL participants while efficiently preserving the global model's accuracy at considerably low communication costs.
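The additive property that such HE-based schemes exploit can be illustrated with the Paillier cryptosystem: ciphertexts can be summed so that the aggregator never sees an individual update in the clear. The sketch below uses the third-party `phe` (python-paillier) package, assumed installed, and is a simplified, scalar-valued illustration rather than the construction of any of the schemes above.

```python
# Minimal sketch of additively homomorphic aggregation with Paillier.
# Requires the third-party `phe` package (python-paillier); the scalar
# "updates" and variable names are illustrative assumptions.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (scalar, for simplicity) model update.
client_updates = [0.12, -0.05, 0.30]
encrypted = [public_key.encrypt(u) for u in client_updates]

# The aggregator adds ciphertexts without decrypting any single update.
encrypted_sum = sum(encrypted[1:], encrypted[0])

# Only the holder of the private key can recover the aggregate.
average = private_key.decrypt(encrypted_sum) / len(client_updates)
print(round(average, 4))  # ~0.1233
```

In a full system, the decryption key would be held by a trusted party or shared among participants via threshold decryption, so that no single aggregator can decrypt individual contributions.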
ESB-FL also efficiently preserves the global model's accuracy, and it achieves this with considerably low communication costs.

### _Differential Privacy-based Approaches_

Zhao et al. [42] designed a blockchain-based federated learning model for home appliance manufacturers to improve their services and products. First, customers train a model using a collection of home appliance data. Then, they send the trained model to the blockchain so that clients' or manufacturers' activities can be traced and cyber threats deterred. Finally, one of the clients, acting as a miner, uploads the model to the blockchain. The authors recommended protecting clients' privacy by applying differential privacy to the features, i.e., adding \(\varepsilon\)-DP noise to them. In another study, Qi et al. [27] proposed a federated learning-based Traffic Flow Prediction (TFP) system that integrates GRU neural networks with a blockchain- and FL-based TFP scheme. Rather than directly sending individual data, the participating vehicles use their data to perform local model training and share only local model updates, thus protecting privacy. Blockchain removes the security risks of a central server by replacing it with a group of trusted nodes that manage all the local model updates. In addition, they apply differential privacy by adding Gaussian noise to local model updates, thereby protecting the clients' data. The proposed model was compared with LSTM, stacked autoencoder (SAE), and SVM models, and it predicted traffic flow more accurately than the others. Moreover, the proposed model effectively mitigates poisoning attacks, since its accuracy does not degrade even as the number of malicious clients increases. One remaining challenge is that the SAE model converges faster, because its centralized learning paradigm does not require a model aggregation step. Wan et al. [37] proposed a novel blockchain-based federated learning framework to avoid data falsification in edge-computing-enabled beyond-5G (B5G) networks. They also added a differential privacy identifier to a Wasserstein Generative Adversarial Network (WGAN) [3] to determine whether synthetic data complies with differential privacy. Lastly, a time-delay analysis was conducted on a single epoch of the proposed model, which was then used to determine the optimal rate for generating blockchain blocks. During the FL training process, the trained local parameters of the edge devices are regenerated by the WGAN generator and then assessed by the DP identifier and the WGAN discriminator. With better data utility, this technique ensures that the resulting synthetic model parameters satisfy DP. The convergence latency of blockchain-enabled FL was observed to be quadratic in the block production rate; as a result, the experimental findings yield an optimum block generation rate. Shayan et al. [33] introduced Biscotti, a blockchain-based system for federated learning that uses cryptography and blockchain technology to enable secure and private federated learning across multiple organizations. The system allows organizations to store and process data locally while machine learning models are trained across all participating organizations.
Biscotti comprises four main components: a blockchain ledger, a consensus protocol, smart contracts, and off-chain storage. The system provides a variety of security measures, such as data privacy and access control, to ensure that data is secure and only accessible to authorized parties. Additionally, the system utilizes various techniques, such as differential privacy and distributed data aggregation, to facilitate efficient and secure data exchange. Salim et al. [32] proposed a differential privacy blockchain-based explainable FL (DP-BFL) architecture for Social Media 3.0 networks. This architecture allows internet-enabled devices to participate in training global models while preserving data privacy. After local training, participants upload their differentially private local model updates to the blockchain system, where these local updates are evaluated and verified by the miners. DP-BFL ensures both the privacy of the participants and good performance of the global model by mitigating the impact of malicious participants' local updates. Qu et al. [28] proposed a novel approach to blockchain-enabled adaptive asynchronous federated learning (FedTwin) that enables adaptive and asynchronous training in digital twin networks, addressing the challenges of centralized processing, data falsification, privacy leakage, and the lack of incentive mechanisms in such networks. FedTwin uses a proof-of-federalism consensus algorithm for efficient and secure synchronization of digital twin networks (DTN), enabling a personalized incentive mechanism. The approach also uses privacy-preserving local digital twin (DT) training with falsification filtering, together with adaptive asynchronous global aggregation of the DTN with a roll-back mechanism. The authors evaluate the performance of FedTwin on a real-world dataset, which shows its superior performance for DTN.

### _Secure Multi-party Computation-based Approaches_

Lu et al. [19] proposed a collaborative architecture enabled by blockchain to share data among multiple parties. The architecture also minimizes the risk of data leakage and grants data owners greater control over access to shared data. By using federated learning to construct and share data models instead of raw data, the authors transformed data sharing into a machine learning problem, thereby enhancing the usage of computing resources and the effectiveness of the data-sharing system. To safeguard data privacy, the authors integrated differential privacy into federated learning. The effectiveness of the proposed model was evaluated for data categorization using benchmark, open real-world datasets. However, three potential threats were identified: data quality, data security, and data authority management. To address these threats, the authors integrated federated learning with differential privacy, employed a permissioned blockchain to eliminate centralized trust, ensured the quality of shared data to prevent invalid sharing, and facilitated secure data management by allowing data providers to upload data only through the permissioned blockchain. Li et al. [17] proposed BFLC, a Blockchain-based Federated Learning framework with Committee consensus. To address storage optimization, the analysis of hostile-node threats, and community node administration, FL is performed by the participating nodes using blockchain.
The blockchain maintains global models and local updates without the use of a centralized server. The authors employ a novel delegated consensus mechanism, which handles gradient selection and block generation while accounting for the communication cost of FL. In the experiments, BFLC demonstrated higher accuracy than basic FL and a stand-alone framework [22] under various malicious-proportion settings. The authors incorporated real-world datasets into the BFLC framework, obtaining global models that closely resemble those of the centralized training approach in federated learning. However, the need for a trusted blockchain system raises unexplored aspects of ensuring trustworthiness, which may pose challenges and require further investigation to address potential vulnerabilities and maintain the integrity of the BFLC framework. Qu et al. [29] proposed a decentralized paradigm for big data-driven cognitive computing (D2C) that combines federated learning and blockchain to address issues such as unreliable performance, inefficiency, privacy leakage, and poisoning attacks. Their novel architecture uses the federated learning paradigm for massive D2C, significantly improving the manufacturing performance of Industry 4.0 [14] while overcoming the privacy and performance problems associated with cognitive computing. To enhance performance, accuracy, and incentive mechanisms for Industry 4.0 automation, the authors integrate blockchain into federated learning, creating a D2C paradigm for the Industry 4.0 model. They develop an optimization model with a modified Markovian decision process to simulate a conflict with adversaries; the model increases accuracy and robustness against poisoning attacks. Ur Rehman et al. [36] presented a novel approach for secure and privacy-preserving federated learning. The proposed approach is based on blockchain technology, providing a distributed consensus mechanism for reputation-aware federated learning. This system allows the federated learning participants to exchange information securely and privately while ensuring data privacy and integrity. The system uses a blockchain-based distributed ledger to provide a trustless and decentralized environment for federated learning. Arachchige et al. [2] proposed PriModChain, a new framework for trustworthy and privacy-preserving machine learning in Industrial IoT systems. To ensure privacy and trustworthiness, the framework combines smart contracts, blockchain, Federated Machine Learning (FedML), Differential Privacy (DP), and the InterPlanetary File System (IPFS). The proposed framework was tested for feasibility, and the results for privacy, security, reliability, safety, and resilience were all positive. The authors suggested further research to reduce latency in order to improve efficiency. The PriModChain framework is presented as a viable solution for reliable privacy-preserving machine learning in Industrial IoT systems.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
**Paper** & **Privacy Approach** & **Challenges** \\ \hline
Wang et al. [38] & BPFL - A combination of Multi-Krum and homomorphic encryption & Efficient model combination and complex homomorphic encryption management. \\ \hline
Zhao et al. [42] & DP - Laplace noise & Optimizing the noise level and selecting privacy parameters. \\ \hline
Miao et al. [25] & PBFL - Fully homomorphic encryption and cosine similarity & Scheme works on a balanced distribution of client data, not on cases where the client data is non-IID. \\ \hline
Sun et al. [35] & Homomorphic encryption (BCP-based) for gradients & Blockchain-based audit approach for encrypted gradients may have limited scalability due to increased processing times and delays. \\ \hline
Li et al. [17] & BFLC - Blockchain-based Federated Learning framework with Committee consensus & The committee consensus mechanism (CCM) approach has increased energy consumption due to the large communication overhead involved during model updates between nodes. \\ \hline
Lu et al. [19] & PBFL - Privacy-preserving data-sharing mechanism for distributed multiple parties & Improving the utility of data models mapped from raw data is necessary. \\ \hline
Qu et al. [29] & BFL - Decentralized paradigm for big data-driven cognitive computing (D2C) & Improves on the Markov decision process (MDP) rather than addressing privacy issues with a blockchain that is assumed tamper-proof. \\ \hline
Wan et al. [37] & BFL - Wasserstein generative adversarial network (WGAN) & Need for efficient communication and computation methods, privacy and security concerns in federated learning, and non-IID data distribution in edge computing environments. \\ \hline
Shayan et al. [33] & Biscotti: a fully decentralized peer-to-peer (P2P) approach to multi-party ML & Requires large honest samples for Multi-Krum, limited scalability for large deep learning models, and vulnerability to privacy attacks. \\ \hline
Qi et al. [27] & BFL - Traffic Flow Prediction (TFP) & Slower convergence rate due to its decentralized learning paradigm and model aggregation step, as opposed to the SAE model's centralized learning. \\ \hline
Ur Rehman et al. [36] & BFL - Reputation-aware fine-grained FL & A reputation-aware federated learning system that exchanges information securely and privately while maintaining data privacy and integrity. \\ \hline
Arachchige et al. [2] & PriModChain - Differential privacy, Federated ML, Ethereum blockchain, and smart contracts & Latency must be reduced to improve efficiency. \\ \hline
Wang et al. [39] & BEMA - Multiparty multiclass margin system initialization, off-chain sample mining and on-chain mining & Lack of guaranteed robustness against Byzantine attacks. \\ \hline
Alzubi et al. [1] & Deep learning and blockchain techniques for electronic health record privacy preservation & User classification, integration using cryptography, and client contribution verification prior to model aggregation. \\ \hline
Salim et al. [32] & DP-BFL - Differential Privacy blockchain-based explainable FL & Ensuring participant privacy, maintaining global model performance, and mitigating the impact of malicious local updates. \\ \hline
Chen et al. [8] & ESB-FL - Blockchain-based FL system paradigm using cryptography & Protecting FL participants' privacy while maintaining global model accuracy with low communication costs. \\ \hline
Liu et al. [18] & Privacy-preserving permissioned blockchain-enabled FL with Multi-Party Computation and Fully Homomorphic Encryption & Privacy protection of participants, anonymity, and secure model updates using multi-party computation and fully homomorphic encryption. \\ \hline
Qu et al. [28] & BFL - Digital twin networks (DTN) & Centralized processing, data falsification, privacy leakage, and lack of incentive mechanisms in digital twin networks. \\ \hline
\end{tabular}
\end{table}
TABLE I: Overview of recent studies on Data Privacy in BCFL.

Wang et al. [39] introduced a blockchain-empowered decentralized, secure multiparty learning system called BEMA, where
learning parties hold diverse local models. Their work suggests "on-chain" and "off-chain" mining strategies for defense against attacks. The proposed approach involves two steps: the first identifies data samples suitable for model calibration, and the second calibrates particular local models based on the discovered samples. These models are then entered into new blocks. Mainly, BEMA includes system initialization, off-chain sample mining, and on-chain mining. During system startup, the operators (OPs) register their IDs and model details on the chain, after which each participating party can register its ID and on-chain model information. Once a miner uses a valid sample to update models on the chain, each party can broadcast it to the blockchain and earn certain system rewards. Lu et al. [18] proposed a blockchain-enabled secure federated learning system for distributed banks. This system combines multi-party computation (MPC) with a multi-key fully homomorphic encryption (FHE) scheme. The local model updates from the participants are encrypted using the multi-key FHE scheme and then signed with a pseudo-ID before being shared with the other MPC participants. With this approach, the participants' privacy protection and anonymity are conveniently achieved.

## V Data Privacy Challenges

Several studies have proposed different solutions to implement federated learning using blockchain technology. Table I lists these studies, which aim to address privacy challenges in federated learning. However, the approaches employed by these studies encounter challenges such as efficient model combination, selecting privacy parameters, non-IID data distribution, scalability, and privacy attacks. Some proposed solutions include homomorphic encryption, differential privacy, reputation-aware federated learning, and digital twin networks. These approaches aim to maintain global model accuracy, protect participant privacy, and reduce the impact of malicious local updates. However, they also have limitations, such as increased energy consumption, communication overhead, and security vulnerabilities. Despite these efforts, data privacy and security concerns still arise: although blockchain technology has effectively decentralized federated learning, it has some drawbacks.

### _Data Leakage_

The privacy of BCFL can be compromised by inference attacks even though the model updates are encrypted. Malicious users can still analyze the updates to deduce information. To address this issue, future works may explore the use of other FL structures or combine different techniques.

### _Model Accuracy and Latency_

BCFL models require maintaining and improving the accuracy and efficiency of the model. Mini-batching at each client during training epochs and increased multi-client parallelism can be adopted to reach a target test-set accuracy. The learning performance of federated learning has not been discussed in detail, and the multi-key encryption protocol needs to be verified because of the way it secures the federated ML model data. The accuracy and latency of the PriModChain [2] system still need optimization.

### _Unexplored Complexity_

The work presented by Wang et al. [39] provides only theoretical analysis, and BEMA's robustness against Byzantine attacks cannot be guaranteed.
A Byzantine attack occurs when an attacker adheres to the system protocol but disseminates malicious information to innocent system participants, with the goal of diminishing system performance and manipulating or influencing the system's output. More evaluation is still needed, as the learning strategies and security concerns have not been fully investigated. Existing research on multiparty learning has mainly focused on homogeneous local models; multiparty learning over heterogeneous local models remains largely unexplored, even though such a scenario may be more practical and useful in real-world applications.

### _Incentive Mechanism Scheme_

The blockchain incentive mechanism plays a crucial role in motivating users to participate in consensus. If the protocol's incentive system is insufficient, or the token incentive is abandoned, users' motivation to participate can decrease significantly. Rewards are typically granted when a new block is generated or obtained by charging transaction fees, and blockchain technology relies on honest mining for blocks to be completed successfully. The significant role of incentive mechanism schemes in blockchain federated learning is often overlooked in many studies.

### _Scalability_

The blockchain-based audit approach for encrypted gradients in federated learning provides privacy while assessing gradient quality. However, scalability can be a challenge due to the consensus requirement in blockchain technology. Adding new blocks necessitates agreement among all network nodes, resulting in time-consuming and expensive audits, particularly for large federated learning systems. Two approaches to mitigate scalability limitations are (1) off-chain computation [10], which performs audits on a subset of nodes, and (2) compression, which reduces the size of encrypted gradients before blockchain storage. These techniques aim to improve efficiency without compromising quality. Continued research seeks to enhance scalability and make this approach more practical.

## VI Future Lines of Research

The field of blockchain-based federated learning presents several promising avenues for future research. One such area is investigating incentive systems that can encourage data providers to participate in the federated learning process within blockchain networks. Incentive systems can help address challenges such as the lack of motivation for data providers to contribute to federated learning or the potential for free-riding on the contributions of others. The design and implementation of incentive mechanisms that provide adequate rewards to data providers without compromising the security and privacy of the system is a crucial research question. Another promising direction for future research in blockchain-based federated learning is the development of privacy-preserving techniques. Although current systems use secure aggregation to encrypt model updates before transmitting them to the server, this approach may not be sufficient to protect sensitive data. Therefore, exploring novel techniques for data privacy protection in federated learning is necessary. Such techniques could include homomorphic encryption, differential privacy, or secure multi-party computation. These techniques ensure that the data remains private, even from the server or other nodes, while allowing for effective aggregation and model updates.
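As an illustration of the differential-privacy route just mentioned, the sketch below clips a client's local model update and adds calibrated Gaussian noise before it is shared, in the spirit of the DP-based schemes surveyed above; the clipping norm and noise multiplier are illustrative assumptions, not values taken from any cited system.

```python
# Hedged sketch: clip a local model update and add Gaussian noise before
# sharing it, as in DP-style FL schemes. All constants are illustrative.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    # Bound each client's influence by clipping the update's L2 norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian mechanism: noise scale is proportional to the clipping norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

noisy_update = privatize_update(np.array([0.3, -2.0, 0.7]))
```

The clipping step bounds the sensitivity of the aggregate to any single client, which is what lets the added noise be translated into a formal \(\varepsilon\)-style privacy guarantee.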
Smart contracts, programs that run on the blockchain, can also play a significant role in blockchain-based federated learning. Smart contracts can automate and enforce the rules governing the training process, including data privacy and security protocols and incentives for participating nodes. Using smart contracts could enhance the transparency and fairness of the system, allowing for more efficient and secure federated learning. Moreover, integrating machine learning techniques such as transfer learning and meta-learning can enhance the efficiency and effectiveness of the federated learning system. Transfer learning is a machine learning technique that allows the transfer of knowledge from one task to another. Meta-learning, on the other hand, is a learning approach that utilizes prior knowledge to achieve faster and more accurate learning. Incorporating these techniques into the federated learning system can reduce the required training data and improve the models' accuracy.

## VII Conclusion

Blockchain-based Federated Learning (FL) is an emerging approach that has garnered significant interest for improving the privacy and security of machine learning models. The decentralized and immutable nature of blockchain technology has the potential to replace traditional centralized methods, enhancing the privacy and efficiency of FL. Blockchain technology can protect against data breaches and malicious actors while enabling multiple parties to train models collaboratively without sharing their data. Moreover, distributing data among multiple parties without a centralized server can further reduce the risks of data breaches and enhance data privacy. The use of blockchain technology also ensures that data is securely stored and accurately tracked, enabling efficient and trustworthy collaborative learning. The integration of blockchain and FL holds immense potential for advancing the field of machine learning and improving its practical applications in various industries.
2306.08218
Sequential Deep Operator Networks (S-DeepONet) for Predicting Full-field Solutions Under Time-dependent Loads
Deep Operator Network (DeepONet), a recently introduced deep learning operator network, approximates linear and nonlinear solution operators by taking parametric functions (infinite-dimensional objects) as inputs and mapping them to solution functions in contrast to classical neural networks that need re-training for every new set of parametric inputs. In this work, we have extended the classical formulation of DeepONets by introducing sequential learning models like the gated recurrent unit (GRU) and long short-term memory (LSTM) in the branch network to allow for accurate predictions of the solution contour plots under parametric and time-dependent loading histories. Two example problems, one on transient heat transfer and the other on path-dependent plastic loading, were shown to demonstrate the capabilities of the new architectures compared to the benchmark DeepONet model with a feed-forward neural network (FNN) in the branch. Despite being more computationally expensive, the GRU- and LSTM-DeepONets lowered the prediction error by half (0.06\% vs. 0.12\%) compared to FNN-DeepONet in the heat transfer problem, and by 2.5 times (0.85\% vs. 3\%) in the plasticity problem. In all cases, the proposed DeepONets achieved a prediction $R^2$ value of above 0.995, indicating superior accuracy. Results show that once trained, the proposed DeepONets can accurately predict the final full-field solution over the entire domain and are at least two orders of magnitude faster than direct finite element simulations, rendering it an accurate and robust surrogate model for rapid preliminary evaluations.
Junyan He, Shashank Kushwaha, Jaewan Park, Seid Koric, Diab Abueidda, Iwona Jasiuk
2023-06-14T03:16:00Z
http://arxiv.org/abs/2306.08218v2
Sequential Deep Operator Networks (S-DeepONet) for Predicting Full-field Solutions Under Time-dependent Loads ###### Abstract Deep Operator Network (DeepONet), a recently introduced deep learning operator network, approximates linear and nonlinear solution operators by taking parametric functions (infinite-dimensional objects) as inputs and mapping them to solution functions, in contrast to classical neural networks that need re-training for every new set of parametric inputs. In this work, we have extended the classical formulation of DeepONets by introducing sequential learning models like the gated recurrent unit (GRU) and long short-term memory (LSTM) in the branch network to allow for accurate predictions of the solution contour plots under parametric and time-dependent loading histories. Two example problems, one on transient heat transfer and the other on path-dependent plastic loading, were shown to demonstrate the capabilities of the new architectures compared to the benchmark DeepONet model with a feed-forward neural network (FNN) in the branch. Despite being more computationally expensive, the GRU- and LSTM-DeepONets lowered the prediction error by half (0.06% vs. 0.12%) compared to FNN-DeepONet in the heat transfer problem, and by 2.5 times (0.85% vs. 3%) in the plasticity problem. In all cases, the proposed DeepONets achieved a prediction \(R^{2}\) value of above 0.995, indicating superior accuracy. Results show that once trained, the proposed DeepONets can accurately predict the final full-field solution over the entire domain and are at least two orders of magnitude faster than direct finite element simulations, rendering them an accurate and robust surrogate model for rapid preliminary evaluations. keywords: Machine/Deep Learning, Deep Operator Network (DeepONet), Gated recurrent unit (GRU), Long short-term memory (LSTM), Transient Heat Transfer, Plastic Deformation + Footnote †: journal: Engineering Applications of Artificial Intelligence

## 1 Introduction

Recent technological advances in high-performance computing hardware and machine learning (ML) methods have given rise to a wide range of applications in fields like autonomous driving, image and speech recognition, bioinformatics, medical diagnosis, document categorization, and others. The physics-based modeling community has shown much interest in applications of Deep Learning with Artificial Neural Networks (ANN), a branch of machine learning motivated by the brain's biological structure and operation. Without the need for expensive computing power or modeling software, a well-trained surrogate deep learning model can almost immediately produce (infer) outcomes that are comparable with traditional modeling techniques. Many data-driven surrogate deep learning models have been devised and trained to quickly solve problems in additive manufacturing [1], topologically optimized materials and structures [2; 3], automatic damage detection in civil structures [4; 5], bio-inspired structures [6; 7], composite materials [8], nonlinear material responses [9; 10; 11], as well as a variety of other applications. Besides data-driven models, collocation point-based physics-informed neural networks (PINNs) were created by Raissi et al. [12] and Abueidda et al. [13], capable of solving partial differential equations governing deformation and stress generation in solids or other physics without the aid of finite elements or other conventional numerical techniques, other than for validation. Similarly, Nguyen-Thanh et al. [14], Samaniego et al.
[15], Abueidda et al. [16; 17], Fuhg et al. [18] and He et al. [19; 20] devised a deep energy method (DEM), which makes use of potential energy to resolve nonlinear material responses. Besides forward problems, Haghighat et al. [21] used PINNs for inverse problems in solid mechanics. Cai et al. [22] even used a combination of measured data and a physics-informed deep-learning method to obtain a solution for an ill-posed thermal fluid flow that was previously thought to be unsolvable. Nevertheless, most of these methods require retraining or transfer learning if input parameters like loads, boundary conditions, material properties, or geometry change. The same is true of traditional numerical methods such as finite elements (FE), in that each new input parameter value calls for a new independent simulation. To address this problem, the universal approximation theorem for operators [23] inspired Lu et al.'s Deep Operator Network [24], often known as DeepONet, an innovative operator learning architecture. It contains two sub-networks: a branch network to encode the input functions and a trunk network to encode the input domain geometry. In its original form, both networks are feed-forward neural networks (FNNs). In that landmark work, DeepONet successfully mapped between unknown parametric functions and solution spaces for a few linear and nonlinear PDEs, in addition to learning explicit operators like integrals. This gave rise to a powerful new method for solving stochastic and parametric PDEs. In the so-called physics-informed DeepONet, Wang et al. [25] improved the DeepONet formulation by including information from the governing PDE. They found improved prediction accuracy and data handling efficiency, but at the expense of a higher computing cost for training. Recently, DeepONet has been used in heat conduction with a spatially variable heat source by Koric and Abueidda [26], in fracture mechanics by Goswami et al. [27], in multiscale modeling of elastic and hyperelastic materials by Yin et al. [28], and in elastic-plastic stress field prediction on topologically optimized geometries [29]. Although the DeepONet is capable of predicting the full-field solution, the above-mentioned works do not cover time-dependent loads. In path-dependent phenomena such as plasticity and material damage, causal relationships in time-dependent input signals are pivotal. However, an FNN in the branch network does not retain the causality of input data. Applied loads encountered in the real world are hardly static and stationary. Often, they are time-dependent, such as wind loads, vibrations, and impact. When dealing with time-dependent signals, recurrent neural networks such as the gated recurrent unit (GRU) [30] and long short-term memory (LSTM) [31] are commonly employed; they are two solutions to the vanishing-gradient issues [32] of a simple recurrent neural network. In LSTM and GRU, hidden state (memory) cells are intended to dynamically "forget" outdated and irrelevant information via gated units that regulate the information flow inside a memory cell, preventing the multiplication of a lengthy series of numbers during temporal backpropagation. Many previous studies employed these architectures to accurately predict time-dependent phenomena such as adsorption [33; 34], landslide [35], plastic deformation [11], damage [36], and active noise cancellation [37; 38].
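For reference, the gating just described can be made explicit with the standard GRU update equations [30] (bias terms omitted for brevity):

\[\begin{aligned}\mathbf{z}_{t}&=\sigma(\mathbf{W}_{z}\mathbf{x}_{t}+\mathbf{U}_{z}\mathbf{h}_{t-1}),\\ \mathbf{r}_{t}&=\sigma(\mathbf{W}_{r}\mathbf{x}_{t}+\mathbf{U}_{r}\mathbf{h}_{t-1}),\\ \tilde{\mathbf{h}}_{t}&=\tanh(\mathbf{W}_{h}\mathbf{x}_{t}+\mathbf{U}_{h}(\mathbf{r}_{t}\odot\mathbf{h}_{t-1})),\\ \mathbf{h}_{t}&=(1-\mathbf{z}_{t})\odot\mathbf{h}_{t-1}+\mathbf{z}_{t}\odot\tilde{\mathbf{h}}_{t},\end{aligned}\]

where the update gate \(\mathbf{z}_{t}\) interpolates between carrying the old hidden state \(\mathbf{h}_{t-1}\) forward and overwriting it with the candidate state \(\tilde{\mathbf{h}}_{t}\), and the reset gate \(\mathbf{r}_{t}\) controls how much past information enters the candidate. This gated, additive state update is what avoids multiplying long chains of gradients during temporal backpropagation.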
Therefore, it is reasonable to consider the GRU and LSTM networks as capable candidates for encoding time-dependent input load histories. However, these recurrent neural network architectures, in their original form, are meant for predicting sequences. That is, they learn from a series of input signals and predict the corresponding output signals. Those output signals are typically 1D and only contain temporal information. While these recurrent neural networks have been extensively used in sequence-to-sequence "translation", the application of these architectures to predict full-field, spatial contour distributions is relatively under-explored. Shi et al. [39] proposed a variant of LSTM known as ConvLSTM, which combines the temporal encoding capability of LSTM with the spatial encoding of convolutional neural networks. This architecture was subsequently used by Frankel et al. [40] to predict the stress field evolution in polycrystals. However, the ConvLSTM is limited to a structured grid due to its convolutional nature, which is not a limitation of the DeepONet model. There are also no previous studies investigating the effects of combining a recurrent neural network with the DeepONet structure. Many nonlinear thermal, mechanical, and multiphysics analyses have time-dependent loads, often coupled with highly nonlinear thermo-mechanical properties, including phase transformation and/or materially nonlinear path-dependent constitutive models, such as in plastic deformation. Therefore, capturing the time-dependent loading history is vital to accurately solving these kinds of problems. Frequently, only the final stress or temperature field solution, obtained through incremental nonlinear finite element analysis, is of interest for analyses, designs, and optimizations. Such problems can be particularly computationally challenging and time-consuming for sensitivity analysis, uncertainty quantification, topology optimization, and similar iterative procedures, where thousands or even millions of forward evaluations need to be performed by classical nonlinear analysis to achieve statistical convergence. Therefore, there is a scientific need to develop accurate data-driven surrogate models for time-dependent nonlinear problems to reduce the number of FE simulations and to generate full-field contours for quantities of interest. However, as discussed above, the classical DeepONets, although capable of predicting a full-field contour, lack the capability of capturing causal relationships in input data with their FNN branch network. On the other hand, classical recurrent neural networks like GRUs and LSTMs can accurately capture time-dependent signals but are not designed to predict a full-field output with spatial information. Therefore, the objective of this work is to combine sequence learning models such as the gated recurrent unit (GRU) and long short-term memory (LSTM) in the branch network of the DeepONet. Since temporal loading histories are essentially long time series, it is reasonable to apply recurrent neural network architectures to capture the temporal information embedded in the input history. To the best of the authors' knowledge, such a combination of sequence learning models and the DeepONet branch-trunk architecture is a first in the literature and is the most significant and novel scientific contribution of this work.
We have tested our approach with two example problems: (1) a transient heat conduction problem with phase transformation (solidification) and (2) a path-dependent plasticity problem, both involving significant nonlinearity and representing real-world engineering use cases. We used the proposed DeepONets to predict full-field solutions and compared their performance with the classical DeepONet formulation. This paper is organized as follows: Section 2 introduces the three neural network architectures and provides detail on the data generation method. Section 3 presents and discusses the performance of the three models. Section 4 summarizes the outcomes and limitations, and highlights future works.

## 2 Methods

### Neural network models

This work explored three different NN architectures to predict the final temperature or von Mises stress distribution. All three NNs were implemented in the DeepXDE framework [41] with a TensorFlow backend [42].

#### 2.1.1 FNN-DeepONet

A DeepONet model with FNNs in both the branch and trunk networks is used as the performance baseline. In an infinite-dimensional functional input space \(M\), \(m\in M\) represents an input function with history-dependent load magnitudes defined on \(n\) control points (or input sensors), and \(s\in S\) refers to an unknown temperature or stress field solution in the functional solution space \(S\). We consider that for every input \(m\in M\) there is a solution \(s=s(m)\in S\) for the temperature or stress field distribution from Section 2.2.1 and Section 2.2.2, which are also subject to the respective boundary conditions (BCs). Consequently, the mapping solution operator of a DeepONet \(G:M\to S\) can be defined as:

\[G(m)=s(m). \tag{1}\]

For a collection of \(N\) points \(\mathbf{X}\) on a domain, each denoted by its coordinates \((x_{i},y_{i})\), the DeepONet considers both the load magnitudes \(m\) in its branch and positions \(\mathbf{X}\) in its trunk, and predicts the solution operator \(\hat{G}(m)(\mathbf{X})\) by combining intermediate encoded outputs \(b_{i}\) (from the branch) and \(t_{i}\) (from the trunk) in a dot product enhanced by a bias \(\beta\), as shown in the schematic of the FNN-based DeepONet model in Fig. 1.

Figure 1: Schematic of the FNN-based DeepONet used in this work. \(m\), \(x\), \(y\), \(b_{i}\), \(t_{i}\), \(HD\), \(\beta\) and \(\hat{G}\) denote the load magnitude, X coordinate, Y coordinate, branch output, trunk output, hidden dimensions, the bias vector and the approximated solution operator.

In a larger sense, it is possible to think of \(\hat{G}(m)(\mathbf{X})\) as a function of \(\mathbf{X}\) conditioned on the input \(m\), and DeepONet is more general and capable than other neural networks. Specifically, in this work, we seek to use an FNN-DeepONet to predict the final temperature (in Section 3.1) and von Mises stress (in Section 3.2) contours given a time-dependent input load. In both cases, the time-dependent input load function is evaluated at \(n=101\) time steps to form a \(101\times 1\) input load vector \(\mathbf{m}\), which is fed to the branch network of the FNN-DeepONet. The 2D problem geometry is described by \(N\) nodes within the domain, assembled into an \(N\times 2\) matrix, and fed to the trunk network of the DeepONet. The DeepONet is used as a regressor that predicts an output field of shape \(N\times 1\), with the field value (e.g., temperature or Mises stress) at the end of the load step defined at each node. The outputs can be obtained via a simple forward evaluation of the DeepONet model given the above inputs.
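To make the branch-trunk combination concrete, the following minimal numpy sketch reproduces the dot-product readout of Fig. 1; `branch_net` and `trunk_net` are random stand-ins for the trained sub-networks and are assumptions of this sketch, not the implementation used in this work.

```python
# Minimal sketch of the DeepONet readout: G_hat(m)(X) = sum_i b_i t_i + beta.
import numpy as np

HD = 100                        # hidden (latent) dimension
rng = np.random.default_rng(0)

def branch_net(m):              # (101,) load history -> (HD,) coefficients
    return rng.standard_normal(HD)                 # stand-in for the branch FNN

def trunk_net(X):               # (N, 2) coordinates -> (N, HD) spatial basis
    return rng.standard_normal((X.shape[0], HD))   # stand-in for the trunk FNN

def deeponet_forward(m, X, beta=0.0):
    b = branch_net(m)           # coefficients conditioned on the load history
    t = trunk_net(X)            # basis functions evaluated at the N nodes
    return t @ b + beta         # (N,) predicted nodal field values

m = rng.standard_normal(101)    # one sampled 101-step load vector
X = rng.standard_normal((50, 2))
s_hat = deeponet_forward(m, X)  # one predicted field value per node
```

Viewed this way, the trunk learns a set of spatial basis functions and the branch supplies load-dependent coefficients, which is why swapping the branch architecture (as done next) leaves the rest of the model unchanged.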
The FNN-DeepONet used in this work has seven layers in its branch and trunk networks. The numbers of neurons in the branch and trunk networks are \([101,100,100,100,100,100,HD]\) and \([2,100,100,100,100,100,HD]\), respectively. Here, \(HD\) denotes the hidden dimension of the branch and trunk networks and was set to 100 in this work. The ReLU activation function was applied to the outputs of the branch and trunk networks of the DeepONet. This network contains 111500 trainable parameters and was used as the benchmark model to solve both problems introduced in Section 2.2. A larger network with a similar number of parameters as in Section 2.1.2 was tested but showed similar performance as this smaller network, so the smaller network was used in the result sections. The model was trained for 350000 epochs with a batch size of 64. The Adam optimizer [43] was used and the scaled mean squared error (MSE) was used as the loss function, which is defined as:

\[\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}(f_{FE}-f_{Pred})^{2}, \tag{2}\]

where \(N\), \(f_{FE}\), and \(f_{Pred}\) denote the number of data points, the FE-simulated field value, and the NN-predicted field value, respectively.

#### 2.1.2 Sequential DeepONets

The FNN-based DeepONet introduced in Section 2.1.1 uses an FNN to encode the time series signal of the input magnitude. In this work, two different recurrent neural network architectures, the GRU and LSTM, are considered in the branch network of the DeepONet; the resulting sequential DeepONets are called GRU- and LSTM-DeepONets in subsequent discussions. The new branch networks are designed with identical input and output signatures as the FNN branch network described in Section 2.1.1 so they can be dropped directly into the DeepONet architecture. The two sequential branch networks are described in detail below. The first sequential branch network architecture considered in this work is the GRU architecture. We utilized an encoder-decoder structure inside the branch network, which was shown in many previous sequence-to-sequence prediction tasks to significantly improve the prediction accuracy [44; 45]. The schematic of the GRU-based branch network is shown in Fig. 2.

Figure 2: Schematic of the GRU branch network. The black GRU blocks return a sequence (2D outputs), while the green GRU block compresses the output into 1D. The hidden dimension for this branch network is 101, identical to the number of time steps in the input load vector.

The developed network consists of four GRU layers, the first two being the encoder and the last two being the decoder. The first layer of the encoder encodes the information into 256 latent features, which are compressed to a 128 vector by the second GRU layer of the encoder (green block in Fig. 2). A repeat vector layer is added to match the shape of the decoder portion of the network. The decoder portion consists of two GRU layers with 128 and 256 units, respectively, to decode the encoded upstream information. All GRU layers use a tanh activation function. Finally, a time-distributed dense layer with linear activation is used to output the results to the larger DeepONet architecture with a hidden dimension of 101. To accommodate the hidden dimension of the branch network, the trunk network of the GRU-DeepONet is an FNN with the following seven layers of neurons: \([2,101,101,101,101,101,101]\). The resulting GRU-DeepONet has 792422 trainable parameters.
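One plausible Keras realization of the GRU branch in Fig. 2 is sketched below; the layer widths follow the text, while the final flatten into a 101-dimensional branch vector is an assumption of this sketch rather than a confirmed detail of the authors' implementation.

```python
# Hedged sketch of the encoder-decoder GRU branch network of Fig. 2.
import tensorflow as tf
from tensorflow.keras import layers

n_steps = 101                                       # time steps in the load

inputs = tf.keras.Input(shape=(n_steps, 1))         # load history sequence
x = layers.GRU(256, return_sequences=True)(inputs)  # encoder (2D output)
x = layers.GRU(128)(x)                              # encoder, compress to 1D
x = layers.RepeatVector(n_steps)(x)                 # match decoder time axis
x = layers.GRU(128, return_sequences=True)(x)       # decoder
x = layers.GRU(256, return_sequences=True)(x)       # decoder
x = layers.TimeDistributed(layers.Dense(1))(x)      # linear readout per step
outputs = layers.Flatten()(x)                       # (101,) branch output b_i
branch = tf.keras.Model(inputs, outputs)
```

The LSTM branch of Fig. 3 would follow the same pattern with `layers.LSTM` substituted for `layers.GRU`; Keras GRU and LSTM layers use the tanh activation by default, consistent with the text.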
The LSTM network is also considered in the branch network of a DeepONet structure, and its schematic is shown in Fig. 3.

Figure 3: Schematic of the LSTM branch network. The black LSTM blocks return a sequence (2D outputs), while the green LSTM block compresses the output into 1D. The hidden dimension for this branch network is 101, identical to the number of time steps in the input load vector.

Similar to the GRU branch network, the LSTM branch network contains 4 LSTM layers: the first two form an encoder with 256 and 128 units, and the last two form a decoder with 128 and 256 units, respectively. The second LSTM block in the encoder (green block in Fig. 3) is intended to compress the information into 1D as a 128 vector. The tanh activation function was used in all LSTM layers, and the trunk network of the LSTM-DeepONet is identical to that of the GRU-DeepONet. The proposed network has 1039206 trainable parameters. Identical to the FNN-DeepONet introduced in Section 2.1.1, the GRU- and LSTM-DeepONets are intended to be used as regression models that predict the temperature or Mises stress profiles given a time-dependent input load and a set of nodal coordinates defining the geometry. The inputs, outputs, optimizer, loss function, and training epochs are the same as those of the FNN-DeepONet.

### Data generation

In this work, we compare the performance of the two proposed sequential DeepONets with the classical FNN-DeepONet in transient heat transfer and structural deformation problems.

#### 2.2.1 Transient heat transfer

In the first example, a nonlinear, transient heat transfer problem is studied, which is representative of a solidifying shell of low-carbon steel in a continuous caster [11]. Fig. 4 provides a schematic to illustrate the process. To simplify the simulation, a 2D slice cross-section of the caster is modeled, which has a length of 30 mm and a width of 0.1 mm. As the domain moves down the mold in a Lagrangian frame of reference, it is subject to a prescribed time-dependent heat flux extraction on the mold-side boundary \(\partial\Omega_{q}\), while all other surfaces are insulated. A schematic of the 2D problem domain is shown in Fig. 5. The governing equation and initial and boundary conditions of the transient heat transfer are given by:

\[\begin{split}\rho H(T)\frac{\partial T}{\partial t}=\nabla\cdot\left[k(T)\nabla T\right],\\ -k(T)\nabla T=\mathbf{q}(t),&\forall\mathbf{x}\in\partial\Omega_{q},\\ T(x,0)=T_{0},\end{split} \tag{3}\]

where \(t\) is time, \(T\) is temperature, \(\rho\) is mass density, and \(\mathbf{q}(t)\) is the time-dependent heat flux. \(T_{0}\) = 1540 \({}^{o}\)C is the uniform initial temperature. \(H(T)\) and \(k(T)\) denote the temperature-dependent specific enthalpy and isotropic thermal conductivity, respectively. It is highlighted that the specific enthalpy used in this work includes the latent heat effect during phase transformations, such as in solidification and the transition from \(\delta\)-ferrite to austenite, and brings significant nonlinearity to the system [46]. The material is a low-carbon steel with a density of 7400 kg/m\({}^{3}\); other relevant material properties are shown in Table 1 and Table 2. The heat transfer is simulated over the time that the slice spends in the caster traveling down the mold by using implicit time integration in Abaqus/Standard [46]. The temperature distribution at the end of the load step was extracted as the ground truth for NN training. To define a complex, time-dependent boundary heat flux, the sampling approach by Abueidda et al.
[11] was used, where the time-dependent boundary heat flux is defined by six control points. The first and last control points correspond to \(t=0\) and \(t=17s\), respectively. The time values \(t_{cp}\) for the four remaining control points were randomly sampled from a uniform distribution in the range \((0,17)s\). Based on experimental measurements, the flux value \(q_{cp}\) generally has a decaying profile and can be approximated as:

\[q_{cp}=A(t_{cp}+1)^{-B}+C, \tag{4}\]

where \(A\in[3,8]\), \(B\in[0.3,0.7]\), and \(C\in[-0.5,0.5]\) were randomly chosen variables from their respective ranges. \(C\) can be considered as a random noise added to the flux magnitude to emulate additional fluctuations and nonlinearities observed in practice in the actual flux profile due to changes in contact and interfacial heat transfer between mold and steel [47; 48]. After all control points and the flux values are defined, a radial basis interpolation with a Gaussian function is used to interpolate the flux. A total of 4000 FE simulations were generated with distinct heat flux histories, and a typical example of the time-dependent heat flux is shown in Fig. 6(a).

Figure 6: Typical time-dependent load magnitudes: (a) Boundary heat flux in the transient heat transfer problem. (b) Applied displacement in the plastic deformation problem.

#### 2.2.2 History-dependent plastic deformation

In the second example, plastic deformation of a dog bone specimen under a time-dependent loading history is studied, which follows from the recent work by Koric et al. [49]. The dog bone specimen has a length of 110 mm and a width (at the grip section) of 30 mm, with a gauge region width of 20 mm. A schematic of the domain along with the mesh used in FE simulations is shown in Fig. 7. A total of 4756 linear plane stress elements (four-node quadrilaterals and three-node triangles) were used to mesh the specimen with a plane-stress thickness of 1 mm. In the absence of any body and inertial forces, the equilibrium equations and boundary conditions can be stated in terms of the Cauchy stress \(\mathbf{\sigma}\) as:

\[\begin{gathered}\nabla\cdot\mathbf{\sigma}=\mathbf{0},\ \ \forall\mathbf{X}\in\Omega,\\ \mathbf{u}=\bar{\mathbf{u}},\ \ \forall\mathbf{X}\in\partial\Omega_{u},\\ \mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}},\ \ \forall\mathbf{X}\in\partial\Omega_{t},\end{gathered} \tag{5}\]

where \(\mathbf{n}\), \(\bar{\mathbf{u}}\), and \(\bar{\mathbf{t}}\) denote the outward boundary normal, prescribed displacement, and prescribed traction, respectively. Under the small deformation assumption, the total strain is given by:

\[\mathbf{\epsilon}=\frac{1}{2}(\nabla\mathbf{u}+\nabla\mathbf{u}^{T}). \tag{6}\]

The small-strain formulation of plasticity was used, so the total strain is decomposed additively into its elastic and plastic parts:

\[\mathbf{\epsilon}=\mathbf{\epsilon}^{e}+\mathbf{\epsilon}^{p}. \tag{7}\]

For a linear elastic and isotropic material under the plane stress condition, the constitutive equation is:

\[\begin{bmatrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{12}\end{bmatrix}=\begin{bmatrix}\frac{E}{1-\nu^{2}}&\frac{\nu E}{1-\nu^{2}}&0\\ \frac{\nu E}{1-\nu^{2}}&\frac{E}{1-\nu^{2}}&0\\ 0&0&\frac{E}{2(1+\nu)}\end{bmatrix}\begin{bmatrix}\epsilon_{11}\\ \epsilon_{22}\\ \epsilon_{12}\end{bmatrix}, \tag{8}\]

where \(E\) and \(\nu\) are the Young's modulus and Poisson's ratio.
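As a numerical illustration of Eq. (8), the sketch below assembles the plane-stress stiffness matrix with the material constants of Table 3 and evaluates the stress for an elastic strain state; the strain values are illustrative assumptions, and the shear row follows Eq. (8) exactly as written.

```python
# Hedged sketch: plane-stress linear-elastic update of Eq. (8), mapping
# strain [eps11, eps22, eps12] to stress [sig11, sig22, sig12] (MPa).
import numpy as np

E, nu = 2.09e5, 0.3              # Young's modulus and Poisson's ratio, Table 3

C = np.array([
    [E / (1 - nu**2),      nu * E / (1 - nu**2), 0.0],
    [nu * E / (1 - nu**2), E / (1 - nu**2),      0.0],
    [0.0,                  0.0,                  E / (2 * (1 + nu))],
])

eps = np.array([1e-3, -3e-4, 2e-4])   # illustrative elastic strain state
sig = C @ eps                         # Cauchy stress components
# Plane-stress von Mises stress, as used for the ground-truth labels:
mises = np.sqrt(sig[0]**2 + sig[1]**2 - sig[0] * sig[1] + 3 * sig[2]**2)
```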
In this work, \(J_{2}\) plasticity with linear isotropic hardening was used:

\[\sigma_{y}(\bar{\epsilon}_{p})=\sigma_{y0}+H\bar{\epsilon}_{p}, \tag{9}\]

where \(\sigma_{y}\), \(\bar{\epsilon}_{p}\), \(\sigma_{y0}\), and \(H\) denote the flow stress, equivalent plastic strain, initial yield stress, and the hardening modulus, respectively. The material properties of the elastic-plastic material response are presented in Table 3. The material model was integrated implicitly in Abaqus/Standard [46]. The specimen was fixed on the left side and a prescribed, time-dependent displacement was applied on the right edge. Six control points were used to define the loading path. Besides the two end points at \(t=0\) and \(t=1s\), four other control points were randomly sampled from the range \([0.1,0.9]s\). The applied displacement is 0 at \(t=0\). The displacement magnitude at each control point was randomly selected such that the nominal axial strain magnitude is below 5%. Radial basis interpolation was used to interpolate the applied displacement at arbitrary time points. A typical example of the applied displacement is shown in Fig. 6(b). A total of 15000 FE simulations were generated, and the von Mises stress was stored as the ground truth labels in the NN training and is defined as:

\[\bar{\sigma}=\sqrt{\sigma_{11}^{2}+\sigma_{22}^{2}-\sigma_{11}\sigma_{22}+3\sigma_{12}^{2}}. \tag{10}\]

\begin{table}
\begin{tabular}{c|c c c c}
Property & \(E\) [MPa] & \(\nu\) [/] & \(\sigma_{y0}\) [MPa] & \(H\) [MPa] \\ \hline
Value & 2.09\(\times 10^{5}\) & 0.3 & 235 & 800 \\
\end{tabular}
\end{table}
Table 3: Material properties of the elastic-plastic material model.

Figure 7: Schematic of the dogbone specimen and the mesh used in FE simulations. The applied displacement is along the global X axis.

## 3 Results and discussion

FE simulations of the transient heat transfer and plastic deformation were conducted with eight high-end AMD EPYC 7763 Milan CPU cores. All NN training and inference were conducted using a single Nvidia A100 GPU card on Delta, an HPC cluster hosted at the National Center for Supercomputing Applications (NCSA). To evaluate the model performance on the test set, two quantitative metrics were used. The first one is the relative \(L_{2}\) error, which is given by:

\[\mathrm{Relative~{}L_{2}~{}error}=\frac{|f_{FE}-f_{Pred}|_{2}}{|f_{FE}|_{2}}\times 100\%, \tag{11}\]

where \(f_{FE}\) and \(f_{Pred}\) denote the FE-simulated field value and the NN-predicted field value, respectively. The second one is the commonly used \(R^{2}\) value, which is defined as:

\[R^{2}=1-\frac{\sum_{i=1}^{N_{T}}\left(f_{FE}-f_{Pred}\right)^{2}}{\sum_{i=1}^{N_{T}}\left(f_{FE}-\bar{f}_{FE}\right)^{2}}, \tag{12}\]

where \(N_{T}\) and \(\bar{f}_{FE}\) are the total number of test cases and the mean value of the FE-simulated field values.

### Transient heat transfer

First, the three DeepONet models were trained using different fractions of the available data to investigate the sensitivity of their prediction accuracy to the amount of training data. A total of four fractions were studied: 50%, 60%, 70%, and 80%. The results are summarized in Fig. 8. The classical 80-20 split was adopted in subsequent discussions, meaning that 80% of the data was used in training the models. 5-fold cross-validation was conducted on the best-performing model in Fig. 8 (i.e., the LSTM-based DeepONet) to test the repeatability of the model performance. The results are summarized in Table 4.
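For reference, the two test-set metrics of Eqs. (11) and (12) amount to a few lines of numpy; the function names below are ours:

```python
# Minimal implementation of the test-set metrics, Eqs. (11)-(12).
import numpy as np

def relative_l2_error(f_fe, f_pred):
    """Relative L2 error in percent between FE truth and NN prediction."""
    return np.linalg.norm(f_fe - f_pred) / np.linalg.norm(f_fe) * 100.0

def r_squared(f_fe, f_pred):
    """Coefficient of determination between FE truth and NN prediction."""
    ss_res = np.sum((f_fe - f_pred) ** 2)
    ss_tot = np.sum((f_fe - f_fe.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```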
Figure 8: Performance metrics for the three models trained with a different number of training data points.

We compared the LSTM model with the median performance from the 5-fold cross-validation with the FNN- and GRU-DeepONets trained using an 80-20 data split (shown in Fig. 8). For the heat transfer problem, training of the FNN-, GRU- and LSTM-based DeepONets took 7210s, 18086s, and 19722s, respectively. The inference time for the three NN models compared to FE simulation time is shown in Table 5. Key performance statistics on the test set for the three models are shown in Table 6. To provide a better perspective of the error distribution over the test cases, histograms of the error distribution are depicted in Fig. 9. To show the statistical distribution of prediction error among the test cases, final temperature contours that correspond to the 0\({}^{th}\) (best case), 90\({}^{th}\) and 100\({}^{th}\) (worst case) percentile prediction error are displayed in Fig. 10 for all three DeepONet models.

\begin{table}
\begin{tabular}{c|c c}
Model & Relative \(L_{2}\) error [\%] & \(R^{2}\) value \\ \hline
LSTM & 6.323\(\times 10^{-2}\) (1.383\(\times 10^{-2}\)) & 1.000 (1.860\(\times 10^{-5}\)) \\
\end{tabular}
\end{table}
Table 4: 5-fold cross-validation, transient heat transfer.

\begin{table}
\begin{tabular}{c|c c c}
 & Relative \(L_{2}\) error [\%] & Max error [\%] & \(R^{2}\) value \\ \hline
FNN & 0.119 (0.069) & 1.277 & 0.99981 \\
GRU & 0.062 (0.024) & 0.265 & 0.99996 \\
LSTM & 0.061 (0.028) & 0.496 & 0.99995 \\
\end{tabular}
\end{table}
Table 6: Error statistics of three DeepONet models on the solidification heat transfer problem.

Figure 9: Histograms showing the relative \(L_{2}\) error distributions over all test cases for the transient heat transfer problem.

Results from Fig. 8 indicate that for this simple heat transfer problem, as few as 2000 data points (50% of all available data) were sufficient to achieve a prediction error of below 0.2% and an \(R^{2}\) value of above 0.999. The performance for the models fluctuates with an increasing number of training data points but remains below 0.2% for all cases, indicating that overfitting has not occurred. It is also clear from the data that the FNN-based DeepONet consistently under-performs compared to the GRU- and LSTM-based DeepONets, with the LSTM model giving the highest prediction accuracy in this case. However, it is worth noting that the performance difference between the GRU- and LSTM-DeepONets is minimal. The results of the 5-fold cross-validation show that the performance of the model is very consistent, with minimal fold-to-fold variation. The GRU- and LSTM-DeepONet models are more computationally intensive to train than their FNN counterparts, requiring 1.5 and 1.7 times longer training times, respectively. However, once trained, all three networks can efficiently predict the final field solution with more than 1500 times speedup compared to direct FE simulations. The added training time for the GRU and LSTM models translates to reduced prediction errors, as seen in Table 6. In an average sense, across all testing samples, both LSTM- and GRU-based DeepONets were able to lower the \(L_{2}\) error of the FNN-DeepONet by half. However, from Fig. 9, we see that all three models suffer from outliers with significantly higher prediction errors, which can be as high as 1.3% for the FNN-based DeepONet. The plots in Fig. 10 provide more insight into how the prediction errors are distributed across all testing data points.
The GRU- and LSTM-based models can accurately predict the temperature profile even up to the worst-case scenario. However, in the worst-case scenario, the FNN-DeepONet's prediction leads to significant error at the solidification front and the mushy zone, i.e., between the solidus and liquidus temperatures.

Figure 10: Contour plots for the temperature distribution for different DeepONet models at different percentiles of prediction accuracy.

Considering that the thickness of the solidifying shell at the mold exit is calculated from the position of the solidifying front, in this case, the result inferred by the classical DeepONet (with an FNN in the branch) will be of significantly less value for the design, optimization, and online control of this critical steel-making process. From this example, we see that the performances of the GRU- and LSTM-based DeepONets are highly similar. However, since the GRU-DeepONet has fewer trainable parameters, it trains about 9% faster than the LSTM model. Therefore, from a computational efficiency and accuracy perspective, it appears that the GRU-DeepONet is the most effective model of the three studied in this work.

### History-dependent plastic deformation

A similar training data fraction study was performed, and the results are summarized in Fig. 11.

Figure 11: Performance metrics for the three models trained with a different number of training data points.

A 5-fold cross-validation was conducted on the GRU-based DeepONet (the best-performing model in Fig. 11), and the results are summarized in Table 7.

\begin{table}
\begin{tabular}{c|c c}
Model & Mean relative \(L_{2}\) error [\%] & \(R^{2}\) value \\ \hline
GRU & 0.902 (0.056) & 0.998 (8.301\(\times 10^{-4}\)) \\
\end{tabular}
\end{table}
Table 7: 5-fold cross-validation, plastic deformation.

We compared the GRU model with the median performance from the 5-fold cross-validation with the FNN- and LSTM-DeepONets. For the plastic deformation problem, training of the FNN-, GRU- and LSTM-based DeepONets took 7441s, 18145s, and 20730s, respectively. The inference time for the three NN models compared to the FE simulation time is shown in Table 8. Key performance metrics for the three models are shown in Table 9. Histograms of the error distribution are depicted in Fig. 12. To remove the effect of outliers on the X-axis scaling, the cases with relative \(L_{2}\) error greater than 5% were not shown in the histograms and were instead counted and reported in the figure legends. Contour plots of the von Mises stress are displayed in Fig. 13 to show the spatial distribution of prediction errors. Cases correspond to the 0\({}^{th}\) (best case), 90\({}^{th}\), and 99\({}^{th}\) percentile prediction errors for all three DeepONet models. For this more complex case, the worst-case predictions deserve special attention. To this end, the normalized load magnitudes and the von Mises stress contours for the worst predictions of each model are shown in Fig. 14. To further elucidate the relationship between prediction error and the mean stress magnitude, scatter plots are provided in Fig. 15. With this more challenging problem, increasing the number of data points generally improves the performance of the models, with the best prediction accuracy achieved using 80% of the data in training for all three models. Again, the FNN-DeepONet shows the worst performance, with the GRU-DeepONet providing the highest accuracy.
Similar to the observations made in Section 3.1, the GRU- and LSTM-based models share comparable prediction accuracy, and the GRU-DeepONet demonstrated consistently accurate predictions in the 5-fold cross-validation, indicating that the results are repeatable. Both studies also show that the GRU- and LSTM-based models do not suffer from overfitting. The GRU model trains 14.2% faster than the LSTM model. Once trained, all three models can infer the full-field solution at a speed at least 200 times faster than FE simulations. Table 9 shows that the GRU- and LSTM-DeepONets are about 2.5 times more accurate than the FNN-DeepONet, at the expense of a 1.5 times longer training time. However, unlike the simple case in Section 3.1, we see that the model predictions for the plasticity case have significant outliers, with worst-case errors as high as 615%. The presence of outliers is also evident in Fig. 12, where at least 3.4% of cases have prediction errors exceeding 5%, despite an overall \(R^{2}\) value of over 0.995.

\begin{table} \begin{tabular}{c|c c c} & FE simulation time [s] & Inference time [s] & Speedup compared to FE (\(\times\)) \\ \hline FE simulation & 21 & / & / \\ FNN & / & 3.29\(\times 10^{-2}\) & 6.4\(\times 10^{2}\) \\ GRU & / & 8.00\(\times 10^{-2}\) & 2.6\(\times 10^{2}\) \\ LSTM & / & 9.70\(\times 10^{-2}\) & 2.2\(\times 10^{2}\) \\ \end{tabular} \end{table} Table 8: Computational cost of the plastic deformation problem

Figure 12: Histograms showing the relative \(L_{2}\) error distributions over all test cases for the plastic deformation of the dog bone specimen. Cases with relative \(L_{2}\) error greater than 5% were not shown in the histogram and were instead reported on the legend of each plot.

\begin{table} \begin{tabular}{c|c c c} & Relative \(L_{2}\) error [\%] & Max error [\%] & \(R^{2}\) value \\ \hline FNN & 2.995 (14.551) & 615.369 & 0.99542 \\ GRU & 0.847 (2.853) & 89.044 & 0.99721 \\ LSTM & 0.919 (2.723) & 70.366 & 0.99930 \\ \end{tabular} \end{table} Table 9: Error statistics of three DeepONet models on the plastic deformation problem

We purposefully did not include the worst-case contours in Fig. 13 and defer them to Fig. 14. Excluding the outliers, we see from Fig. 13 that all three DeepONet models can accurately predict the stress contours and stress concentration points in the dog-bone specimen up to the 90\({}^{th}\) percentile of prediction error. As we get closer to the outliers (99\({}^{th}\) percentile), we see that the FNN-DeepONet predictions completely miss the two stress concentration points in the resulting contour, while the GRU- and LSTM-DeepONets are able to capture the location of the stress concentration points and predict the stress magnitude relatively accurately. These findings demonstrate that the proposed DeepONet architectures with GRU and LSTM branches can effectively encode complex, time-dependent loading histories and perform better than the traditional FNN-based DeepONet. The spatial distribution of the predicted stress field closely resembles the FE ground truth, which indicates that the proposed DeepONets combine the temporal encoding capability of the GRU and LSTM structures with the spatial encoding capability of the DeepONet architecture to improve prediction accuracy.
From the worst-case prediction contours shown in Fig. 14, we see that, in the worst case, none of the three models were able to capture the location of the stress concentration points and the stress magnitude. With the exception of the worst case for the GRU model, the other two worst cases can be characterized by low stress magnitudes over the entire domain (as calculated from FE simulations).

Figure 13: Contour plots for the von Mises stress distribution for different DeepONet models at different percentiles of prediction accuracy.

Additionally, from Fig. 13, it is worth noting that the mean stress magnitudes for the higher-error cases (99\({}^{th}\) percentile) appear to be lower than those of the better-performing cases as well. These observations prompted us to investigate the relation between DeepONet prediction errors and the ground-truth mean stress magnitude, which is reported in Fig. 15. From the results, we noticed a generally decreasing trend in the prediction error as the mean stress magnitude increases, with most of the poor predictions concentrated in cases with low ground-truth stress magnitudes.

Figure 14: Normalized load magnitudes and von Mises stress contours for the worst-case prediction by the three DeepONet models.

Figure 15: Scatter plots of model prediction error versus the mean von Mises stress over the domain as computed from the FE simulation. A linear curve fit was included in each subplot and the expression of each line is included in the legend.

This observation is reasonable since the mean squared error is used as the loss function during training; regions and cases with small stress magnitudes therefore contribute less to the overall loss function and are not improved efficiently as the model trains. This could be mitigated by introducing a relative loss function definition for training and will be investigated in future work. Similar to the transient heat transfer example in Section 3.1, the GRU and LSTM models delivered similar prediction accuracy in this example. With its 14.2% faster training time, the GRU model again emerges as the more computationally efficient choice of the two DeepONet architectures proposed in this work.

## 4 Conclusions, limitations, and future work

The classical data-driven DeepONet framework with a feed-forward neural network (FNN) in the branch and trunk is effective but ignores causal relations in the input data if the inputs to the branch network are time-dependent. Recognizing this limitation, we introduced two sequential DeepONet architectures with advanced, recurrent neural networks of LSTM and GRU types in the branch, and this is the most significant and novel scientific contribution of this work. By introducing sequential learning models in the branch network of the DeepONet structure, we combined the powerful temporal encoding capability of the GRU and LSTM structures in the branch with the spatial encoding capability of the DeepONet architecture, which allows the accurate prediction of full-field solution contours given a time-dependent input load. We then focused on learning full-field temperature and stress solutions in two highly nonlinear practical applications of thermal and mechanical types with complex random loading histories. In both cases, the GRU- and LSTM-DeepONets provided significantly more accurate predictions than the classical DeepONet, demonstrating that sequential learning methods in the branch of a DeepONet can universally and effectively encode loading histories regardless of the underlying physics of the problem.
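To make the sequential branch concrete, the following is a minimal sketch of the forward pass of a GRU-DeepONet, written here in PyTorch; the class name, layer widths and latent dimension are illustrative assumptions, not the exact configuration trained in this work.

```python
import torch
import torch.nn as nn

class SequentialDeepONet(nn.Module):
    """Sketch: a GRU branch encodes the load history; an FNN trunk
    encodes spatial coordinates; their inner product gives the field."""

    def __init__(self, hidden=128, p=64):
        super().__init__()
        self.branch = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.branch_proj = nn.Linear(hidden, p)   # p basis coefficients
        self.trunk = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, p), nn.Tanh(),      # p basis functions
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, load_history, coords):
        # load_history: (batch, n_steps, 1); coords: (n_points, 2)
        _, h_last = self.branch(load_history)     # final hidden state
        b = self.branch_proj(h_last.squeeze(0))   # (batch, p)
        t = self.trunk(coords)                    # (n_points, p)
        return b @ t.T + self.bias                # (batch, n_points)

# Example: 8 load histories of 101 steps, queried at 500 spatial points.
model = SequentialDeepONet()
out = model(torch.randn(8, 101, 1), torch.rand(500, 2))  # shape (8, 500)
```

The only change relative to the classical DeepONet is the branch: the feed-forward encoder is replaced by a recurrent one whose final hidden state summarizes the entire load history before being projected onto the \(p\) basis coefficients.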
DeepONets with a sequential branch network were able to halve the average error among all testing samples compared to the FNN-DeepONet. The difference was even more profound for the plasticity example, where the GRU- and LSTM-DeepONets reduced the prediction errors by a factor of 2.5 and accurately predicted the Mises stress contours up to the 90\({}^{th}\) percentile of prediction error. Through the two examples, we have shown that the proposed DeepONets can be adequately trained on current high-end GPUs within a few hours. Moreover, once the DeepONets are properly trained, they can infer accurate full-field results at least two orders of magnitude faster than classical nonlinear numerical methods. Between the two proposed DeepONet architectures, the GRU-DeepONet has fewer trainable parameters than the LSTM-DeepONet, while achieving similar prediction accuracy and training faster. Therefore, it is recommended to use the GRU-DeepONet over the classical FNN-DeepONet and the LSTM-DeepONet for predicting the full-field solution given a time-dependent load.

Although our current methodology yielded high accuracy, the proposed methods have certain limitations. First, only the end state of the time-dependent loading is predicted instead of the full evolution history. Second, much of the data in real-world applications is inherently imbalanced. For example, for real-world engineering structures, the majority of the load histories are centered around a certain design load, with only rare loading cases (such as sudden wind loads and earthquakes) that significantly exceed the design load level. This data imbalance was not accounted for in the data generation procedure: for both example problems, we uniformly sampled the inputs from a wide range of load magnitudes. Further re-sampling treatments [50; 51; 52] should be employed to account for the imbalance of the training dataset.

With the improved accuracy afforded by the GRU and LSTM branch networks, the sequential DeepONet models can be used as accurate and efficient surrogates for FE simulations in high-fidelity multi-scale modeling, optimization, and design scenarios whenever a large number of forward evaluations with parametric histories is needed in complex nonlinear and non-equilibrium applications and processes in engineering and science. The improved prediction accuracy compared to classical FNN-DeepONets also offers more confidence when deploying the model in practical engineering applications. Furthermore, the weights and biases of the trained sequential DeepONets can be transported to laptops and even edge computing devices and used for inference without GPUs or even modeling tools, enabling instant predictions in many online control scenarios. As the capabilities and memory of GPU hardware rapidly increase, in future work we will modify the current DeepONet architectures to learn full solution fields on three-dimensional modeling domains as well as to predict the complete time history of the solution contours.

## Replication of results

The data and source code that support the findings of this study can be found at [https://github.com/Jasiuk-Research-Group](https://github.com/Jasiuk-Research-Group). Note to editor and reviewers: the link above will be made public upon the publication of this manuscript. During the review period, the data and source code can be made available upon request to the corresponding author.
## Conflict of interest The authors declare that they have no conflict of interest. ## Acknowledgements The authors would like to thank the National Center for Supercomputing Applications (NCSA) at the University of Illinois, and particularly its Research Consulting Directorate, the Industry Program, and the Center for Artificial Intelligence Innovation (CAII) for their support and hardware resources. This research is a part of the Delta research computing project, which is supported by the National Science Foundation (award OCI 2005572) and the State of Illinois, as well as the Illinois Computes program supported by the University of Illinois Urbana-Champaign and the University of Illinois System. Finally, the authors would like to thank Professors George Karniadakis, Lu Lu, and the Crunch team at Brown, whose original work with DeepONets inspired this research. ## CRediT author contributions **Junyan He**: Methodology, Formal analysis, Investigation, Writing - Original Draft. **Shashank Kushwaha**: Methodology, Formal analysis, Investigation, Writing - Original Draft. **Jaewan Park**: Methodology, Investigation, Writing - Original Draft. **Seid Koric**: Conceptualization, Methodology, Supervision, Resources, Writing - Original Draft, Funding Acquisition. **Diab Abueidda**: Supervision, Writing - Review & Editing. **Iwona Jasiuk**: Supervision, Writing - Review & Editing.
2305.11219
MUSE-ALMA Halos XI: Gas flows in the circumgalactic medium
The flow of gas into and out of galaxies leaves traces in the circumgalactic medium which can then be studied using absorption lines towards background quasars. We analyse 27 log(N_HI) > 18.0 HI absorbers at z = 0.2 to 1.4 from the MUSE-ALMA Halos survey with at least one galaxy counterpart within a line of sight velocity of +/-500 km s^{-1}. We perform 3D kinematic forward modelling of these associated galaxies to examine the flow of dense, neutral gas in the circumgalactic medium. From the VLT/MUSE, HST broadband imaging and VLT/UVES and Keck/HIRES high-resolution UV quasar spectroscopy observations, we compare the impact parameters, star-formation rates and stellar masses of the associated galaxies with the absorber properties. We find marginal evidence for a bimodal distribution in azimuthal angles for strong HI absorbers, similar to previous studies of the MgII and OVI absorption lines. There is no clear metallicity dependence on azimuthal angle and we suggest a larger sample of absorbers are required to fully test the relationship predicted by cosmological hydrodynamical simulations. A case-by-case study of the absorbers reveals that ten per cent of absorbers are consistent with gas accretion, up to 30 per cent trace outflows while the remainder trace gas in the galaxy disk, the intragroup medium and low-mass galaxies below the MUSE detection limit. Our results highlight that the baryon cycle directly affects the dense neutral gas required for star-formation and plays a critical role in galaxy evolution.
Simon Weng, Céline Péroux, Arjun Karki, Ramona Augustin, Varsha P. Kulkarni, Aleksandra Hamanowicz, Martin Zwaan, Elaine M. Sadler, Dylan Nelson, Matthew J. Hayes, Glenn G. Kacprzak, Andrew J. Fox, Victoria Bollo, Benedetta Casavecchia, Roland Szakacs
2023-05-18T18:00:01Z
http://arxiv.org/abs/2305.11219v1
# MUSE-ALMA Halos XI: Gas flows in the circumgalactic medium

###### Abstract

The flow of gas into and out of galaxies leaves traces in the circumgalactic medium which can then be studied using absorption lines towards background quasars. We analyse 27 \(\log\left[N(\mathrm{H\,{\textsc{i}}})/\mathrm{cm}^{-2}\right]>18.0\) H i absorbers at \(z=0.2\) to 1.4 from the MUSE-ALMA Halos survey with at least one galaxy counterpart within a line of sight velocity of \(\pm 500\) km s\({}^{-1}\). We perform 3D kinematic forward modelling of these associated galaxies to examine the flow of dense, neutral gas in the circumgalactic medium. From the VLT/MUSE, HST broadband imaging and VLT/UVES and Keck/HIRES high-resolution UV quasar spectroscopy observations, we compare the impact parameters, star-formation rates and stellar masses of the associated galaxies with the absorber properties. We find marginal evidence for a bimodal distribution in azimuthal angles for strong H i absorbers, similar to previous studies of the Mg ii and O vi absorption lines. There is no clear metallicity dependence on azimuthal angle and we suggest a larger sample of absorbers is required to fully test the relationship predicted by cosmological hydrodynamical simulations. A case-by-case study of the absorbers reveals that ten per cent of absorbers are consistent with gas accretion, up to 30 per cent trace outflows while the remainder trace gas in the galaxy disk, the intragroup medium and low-mass galaxies below the MUSE detection limit. Our results highlight that the baryon cycle directly affects the dense neutral gas required for star-formation and plays a critical role in galaxy evolution.

keywords: galaxies: evolution - galaxies: formation - galaxies: kinematics and dynamics - galaxies: haloes - quasars: absorption lines

## 1 Introduction

Once known as 'island universes', galaxies are no longer considered isolated systems, especially when observing the cycle of baryons within these systems (Peroux and Howk, 2020). The formation of stars is mediated by the delicate balance between outflowing and inflowing gas. Galactic winds driven by active galactic nuclei (AGN) or intense star-formation remove gas from galaxies (Veilleux et al., 2005). Some of the outflowing material condenses back onto the galaxy in the form of fountains (Fraternali and Binney, 2008; Marinacci et al., 2010; Fraternali, 2017). At the same time, the accretion of cold gas from large-scale filaments and the cooling of hot halo gas replenishes the gas reservoirs of galaxies (Keres et al., 2005; Nelson et al., 2013; Hafen et al., 2022). Satellites embedded within the halo of galaxies are also an important source of cool gas (Wang, 1993; Hafen et al., 2019). These phenomena leave their traces in the circumgalactic medium (CGM), the region that extends from the galactic disk to the intergalactic medium (IGM; Tumlinson et al., 2017; Faucher-Giguere and Oh, 2023). While the diffuse gas in the CGM can be studied in emission, it is typically detected in systems containing quasars, extreme starbursts and overdensities not representative of typical galaxies (e.g. Hayes et al., 2016; Epinat et al., 2018; Johnson et al., 2018; Chen et al., 2019; Helton et al., 2021; Burchett et al., 2021; Cameron et al., 2021). Stacking enables detections of the diffuse gas in emission, but it becomes difficult to link the detections to galaxy properties and the wider environment (Steidel et al., 2011; Momose et al., 2014; Chen et al., 2020; Dutta et al., 2023).
Typical galaxies on the main sequence require exceptional exposure times to obtain a detection in emission (Zabl et al., 2021; Leclercq et al., 2022; Bacon et al., 2023). Alternatively, analysing absorption lines in the spectra of bright background sources such as quasi-stellar objects (QSOs) allows us to probe the gas in the CGM to lower column densities. While these sightlines only sample the CGM on parsec scales, there has been an abundance of recent surveys examining the galaxies around H i (Lofthouse et al., 2020; Muzahid et al., 2020; Chen et al., 2020; Berg et al., 2023; Karki et al., 2023), Mg ii (Bouche et al., 2016; Nielsen et al., 2020; Lundgren et al., 2021) and other gas phases.

Early studies of strong H i Ly-\(\alpha\) absorbers associated them with the rotating disks of galaxies (Wolfe et al., 1986, 2005). Damped Ly-\(\alpha\) absorbers (DLAs) with column density \(\log[N({\rm H\,{\sc i}})/{\rm cm}^{-2}]\geq 20.3\) are expected to originate from gas within \(\sim\)20 kpc of the galaxy centre (Peroux et al., 2005; Zwaan et al., 2005; Stern et al., 2021). Since then, the increasing number of galaxy-absorber systems brought by the advent of integral field spectroscopy suggests these absorbers can originate from beyond the disk. In particular, H i absorbers with lower column density found at larger impact parameters suggest different phenomena such as gas accretion and outflows are being traced.

The time galaxies take to deplete their molecular gas reservoirs is found to be shorter than the Hubble time at all redshifts, meaning that galaxies must have a way to maintain a gas supply through accretion (Daddi et al., 2010; Genzel et al., 2010; Peroux and Howk, 2020; Tacconi et al., 2020; Walter et al., 2020). Some theoretical studies have attempted to separate gas accretion into cold-mode and hot-mode accretion. The former refers to the accretion of cold gas from filaments in the intergalactic medium and is expected to dominate at redshift \(z\geq 3\) (Keres et al., 2005; Nelson et al., 2013). Simulations predict that the accretion produces flows that co-rotate with the galaxy disk out to \(\sim\)100 kpc (Stewart et al., 2011, 2013; van de Voort et al., 2011) and can be traced by dense H i absorbers (van de Voort et al., 2012; Theuns, 2021). In contrast, the cooling of hot halo gas is also expected to grow gas reservoirs and dominates at lower redshifts and/or higher masses (Dekel et al., 2009a,b). The accretion of hot gas transitions from a spherical geometry to a disk when it cools because of angular momentum conservation (Mo et al., 1998; Hafen et al., 2022). While gas accretion at \(z\leq 1\) has been probed using Mg ii absorbers (Ho et al., 2017; Martin et al., 2019; Zabl et al., 2019), studies of accretion using Ly-\(\alpha\) absorbers remain limited.

In contrast to accretion, neutral gas in outflows has been ubiquitously observed in galaxies when the SFR surface density is sufficiently high (e.g., Heckman and Thompson, 2017; Hayes, 2023) and in galaxies containing an active galactic nucleus (e.g., Cicone et al., 2014, 2015). While there remains uncertainty regarding how the cool gas survives in the hot and turbulent wind environment, recent idealised simulations of the cold phase in a turbulent medium find that cold gas clouds can survive and even grow depending on their size (Gronke et al., 2022).
Traditionally, outflowing gas has been identified in down-the-barrel observations as blueshifted absorption against the galaxy stellar continuum (Martin, 2005; Veilleux et al., 2005; Rubin et al., 2014; Heckman and Thompson, 2017). Identifying outflows using absorbers towards background sources is a complementary method where the location of the absorbing gas is precisely measured (Bouche et al., 2012; Kacprzak et al., 2014; Schroetter et al., 2019). However, the velocity degeneracy between inflows and outflows increases the difficulty in identifying gas flows for these transverse absorption-line studies. Observations and simulations of gas outflows that emerge from the galaxy centre suggest they form an expanding biconical shape perpendicular to the galaxy disk, as this is the path of least resistance (Bordoloi et al., 2011; Lan et al., 2014; Nelson et al., 2019). Hence, quasar absorbers have been interpreted to be outflowing when they are aligned with the projected minor axis of galaxies (Rahmani et al., 2018; Schroetter et al., 2019). Transverse absorption-line studies of outflowing gas at \(z>0.2\) typically use the Mg ii ion, but there have been limited studies of Ly-\(\alpha\) absorbers that directly trace the neutral hydrogen required for star-formation.

On the other hand, Ly-\(\alpha\) absorbers at \(z\lesssim 1\) have a bimodal distribution in the gas-phase metallicity (Lehner et al., 2013, 2022). Here, metal-poor systems are expected to trace the accretion of cold gas, while metal-rich absorbers trace outflows, recycled or tidally stripped gas (Peroux et al., 2020; Lehner et al., 2022). Indeed, studies of high velocity clouds (HVCs) in the Milky Way supplement velocity measurements of the HVC with metallicity to distinguish between inflows and outflows (Fox et al., 2019; Ramesh et al., 2023). For extragalactic sources, the imprint of inflows and outflows in the CGM might also be seen in the bimodal distribution of azimuthal angles (\(\Phi\)) between a galaxy's major axis and the absorber sightline (where values close to 90\({}^{\circ}\) indicate the gas is located near the projected galaxy minor axis; Bouche et al., 2012; Kacprzak et al., 2012; Bordoloi et al., 2014; Schroetter et al., 2019). Naturally, it follows that there should be a metallicity dependence on azimuthal angle, where gas located towards the minor axis is metal-enriched as outflows eject metals produced in stars (Nelson et al., 2019; Peroux et al., 2020; Truong et al., 2021; van de Voort et al., 2021). However, in observations, this phenomenon remains unclear as studies have found no such trend (Peroux et al., 2016; Kacprzak et al., 2019; Pointon et al., 2019), perhaps due to uncertainties in the ionization correction and dust modelling of absorbers or poor metal-mixing in the CGM (Zahedy et al., 2019, 2021; Bordoloi et al., 2022; Sameer et al., 2022). Simulations of Mg ii absorption around galaxies find that the equivalent width (EW) of the line decreases as the azimuthal angle of the sightline increases for a fixed impact parameter (DeFelippis et al., 2021). Interestingly, Wendt et al. (2021) find gas located near the minor axis is more dust-depleted and perhaps metal-enriched than absorbers near the major axis. More carefully selected observational samples are required to reach a consensus on the metallicity-azimuthal angle dependence, especially because the relationship depends on the galaxy stellar mass, impact parameter, redshift and H i column density (Peroux et al., 2020).
The MUSE-ALMA Halos survey targets 32 Ly-\(\alpha\) absorbers with column densities \(\log[N({\rm H\,{\sc i}})/{\rm cm}^{-2}]>18.0\) at redshift \(0.2\lesssim z\lesssim 1.4\) (see Peroux et al., 2022, for an overview). In total, 79 galaxies are detected within \(\pm\)500 km s\({}^{-1}\) of the absorbers at impact parameters ranging from 5 to 250 kpc (Weng et al., 2022). The stellar masses of these associated galaxies have been measured from broadband imaging in Augustin et al. (in prep) and their morphologies studied in Karki et al. (submitted). Gas flows for a subsample of galaxies have already been studied in earlier works (Peroux et al., 2017; Klitsch et al., 2018; Rahmani et al., 2018a,b; Hamanowicz et al., 2020; Szakacs et al., 2021) and our work is a continuation of these studies for the full MUSE-ALMA Halos survey. With the complete sample of absorbers and their galaxy counterparts, we examine the azimuthal dependence of metallicity in the circumgalactic medium and identify the gas flows being probed by QSO sightlines. We adopt the following \(\Lambda\)CDM cosmology: H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.3\) and \(\Omega_{\Lambda}=0.7\).

## 2 Ionized gas maps

To determine the origin of the gas, that is, whether it traces inflows, outflows or other phenomena, we need to compare the ionized gas kinematics with the neutral-phase absorber kinematics. We create continuum-subtracted cubes centred on the [O ii] \(\lambda\lambda\)3727, 3729, H\(\beta\) \(\lambda\)4861, [O iii] \(\lambda\)5007 and H\(\alpha\) \(\lambda\)6563 emission lines for each galaxy when available within the MUSE wavelength range (4700 to 9300 Å). We use the galpak algorithm v1.32 to extract intrinsic galaxy parameters such as the inclination (\(i\)) and kinematic position angle (PA\({}_{\rm kin}\)) and velocity maps from data cubes. This forward modelling approach assumes a disk model and we refer readers to Bouche et al. (2015) for a complete description of the method. For the code to provide stable results, the signal-to-noise ratio (S/N) of the nebular line at the brightest spaxel needs to meet the requirement S/N \(>\) 3. Additionally, the algorithm is robust provided the galaxy is not too compact; the prerequisite \(R_{e}/R_{\rm PSF}>1.50\) should be met, where \(R_{\rm PSF}\) is half the full width at half maximum (FWHM) of the PSF and \(R_{e}\) is the effective radius. From the initial sample of 79 associated galaxies, 48 satisfy the signal-to-noise ratio and projected galaxy size criteria. We run the algorithm for all the nebular lines ([O ii], H\(\beta\), [O iii] and H\(\alpha\)). For galaxies where multiple emission lines are available, we find the measured parameters agree within 10 per cent for any given galaxy. We preferentially adopt the values returned by the H\(\beta\), [O iii] and H\(\alpha\) lines to avoid complications associated with fitting the [O ii] doublet. The maximum rotational velocity, inclination and kinematic position angle are tabulated in Table 1. We also include photometric position angles (PA\({}_{\rm phot}\)) from galfit fits of the galaxy using the HST broadband imaging in the online version of the table (Karki et al. submitted). The smoothed rotational velocity maps for three example galaxies are shown in Figure 1 and the remainder are found in Appendix A (Figures A1 to A9). Each map is accompanied by dashed lines marking projected angles of \(\pm 30^{\circ}\) away from the minor axis.
The arrow points in the direction of the QSO sightline and consists of two halves coloured by the impact parameter and line of sight velocity difference between galaxy and absorber. All galaxies located at the smallest impact parameter to the absorber have their velocity maps emboldened. We provide the emission line used to fit each galaxy, the identification and the absorber redshift below the velocity map. For each fitted galaxy, we also display the observed flux and flux residual maps from the galpak fit. Regions of the flux map that have a white background are pixels that were masked during fitting or are outside the MUSE field of view. Pixels were masked to remove the flux contribution from the nearby QSO. The arrow below the residual maps demarcates the physical size of the cube used to fit the emission.

Ten galaxy fits have significant flux residuals, which can be attributed to several causes. The residuals for four galaxies in the Q1130\(-\)1449 field associated with the absorber at \(z_{\rm abs}\) = 0.3127 arise because of an extended ionized gas nebula permeating the large galaxy group (Kacprzak et al., 2010; Chen et al., 2019; Peroux et al., 2019). Galaxies Q1130\(-\)1449\(\_\)6 and Q1130\(-\)1449\(\_\)8 additionally have significant residuals due to a possible merger and outflows respectively. There are also significant flux residuals for galaxy Q0454\(-\)220\(\_\)4 due to the flux saturation of IFUs in the MUSE data from a nearby bright star. Another two galaxies (Q1211+1030\(\_\)7 and Q1229\(-\)021\(\_\)6) likely host active galactic nuclei, given their positions on the [O iii] \(\lambda\)5007/H\(\beta\) versus [O ii] \(\lambda\lambda\)3727, 29/H\(\beta\) classification diagram (Lamareille, 2010; Weng et al., 2022). As galpak compares a disk model to the data using forward modelling, the flux excess is likely caused by the AGN. Finally, the significant residuals at the centre of source Q0454\(-\)220\(\_\)69 are a result of flux contamination from the QSO less than two arcseconds away. We exclude these objects with unreliable fits from further analysis, but include and note them in the figures for completeness.

## 3 Metallicity dependence on galaxy orientation

Out of the initial 79 galaxies associated with 32 Ly-\(\alpha\) absorbers, 48 have kinematic position angle measurements from galpak. These measurements are complemented by an additional 19 photometric PA measurements using galfit on the HST broadband imaging (Karki et al. submitted). For galaxies with both measurements, the position angles agree within their 1\(\sigma\) uncertainties. To calculate the azimuthal angle (\(\Phi\)), we preferentially use the kinematic PA when available (a minimal geometric sketch of this calculation is given below). We then set a restriction on the inclination \(i>30^{\circ}\) to ensure face-on galaxies with significant errors on the PA are removed from our sample. This leaves a total of 56 galaxies with robustly measured azimuthal angles associated with 22 Ly-\(\alpha\) absorbers. The remaining 23 galaxies were not considered in this study because they were too faint to be modelled (12) or failed to meet the inclination requirement (11). A summary of the sample is shown in Table 2.

### Distribution of azimuthal angles

We show the distribution of H i absorbers around galaxies in Figure 2. In the left histogram, galaxies with measured kinematic position angles are shown in purple, while blue represents photometric PAs. Galaxies found at closest impact parameter to the absorber are given an opaque bar.
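As referenced above, the azimuthal angle can be computed from a galaxy's position angle and the on-sky galaxy-QSO offset. The following minimal sketch assumes position angles measured east of north and is illustrative rather than the survey pipeline:

```python
import numpy as np

def azimuthal_angle(pa_gal_deg, ra_gal, dec_gal, ra_qso, dec_qso):
    """Angle between the galaxy major axis and the galaxy-to-QSO
    direction, folded into [0, 90] deg (90 deg = projected minor axis)."""
    # On-sky offsets; the RA offset is scaled by cos(dec).
    dra = (ra_qso - ra_gal) * np.cos(np.radians(dec_gal))
    ddec = dec_qso - dec_gal
    pa_qso = np.degrees(np.arctan2(dra, ddec)) % 360.0        # east of north
    phi = abs((pa_qso - pa_gal_deg + 180.0) % 360.0 - 180.0)  # fold to [0, 180]
    return min(phi, 180.0 - phi)                              # fold to [0, 90]
```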
We impose a restriction on the inclination (\(i>30^{\circ}\)) for the sample to remove face-on galaxies with large errors in their position angle and, by extension, the azimuthal angle. This leaves a total of 56 galaxies. We test for bimodality by performing Hartigan's dip test (Hartigan & Hartigan, 1985). To account for errors in the azimuthal angle, we repeat the dip test on 5,000 realisations with randomly sampled errors. The combined frequency distribution of the 5,000 iterations is depicted in the middle panel of Figure 2 and we find that the initial peak in the number count near \(\Phi=75^{\circ}\) diminishes in significance after accounting for errors in the measurement of \(\Phi\). The corresponding distribution of \(p\)-values is displayed in the right panel, where the vertical green bar shows the median \(p\)-value of the iterations is \(\lesssim 0.1\). This suggests there is marginal evidence for a bimodality in the azimuthal angle distribution of H i absorbers around galaxies. A larger sample of galaxy-absorber pairs is required to confirm this signal with more certainty.

We show polar plots in Figure 3 to visualise the distribution of H i absorbers more clearly. In the left panel, we see the anti-correlation between H i column density and impact parameter (see Weng et al., 2022, for more analysis of this effect). There is also little correlation between the line of sight velocity difference between absorber and galaxy, and the polar position, as the median \(|\Delta v_{\rm LOS}|\) values in the two bins \(0^{\circ}<\Phi<30^{\circ}\) and \(60^{\circ}<\Phi<90^{\circ}\) are respectively 75 and 82 km s\({}^{-1}\).

Figure 1: The modelled velocity, observed flux and residual maps of three randomly chosen galaxies associated with QSO absorbers that were fitted using galpak. Each galaxy is represented by three maps. The modelled velocity maps in column one have been smoothed using repeated linear interpolation. A black dot indicates the modelled galaxy centre and two dashed lines represent 2D projected angles of \(\pm 30^{\circ}\) from the minor axis. The arrow points in the direction of the H i absorber and consists of two coloured halves. The halves are coloured by the impact parameter and line of sight velocity difference between galaxy and absorber. When pointing up, the left half corresponds to the impact parameter and the reverse is true when pointing down. We include text with the azimuthal angle (\(\Phi\)) and inclination (\(i\)) along with the impact parameter (\(b\)) and line of sight velocity difference (\(\Delta v\)) for each velocity map. Below each velocity map is the galaxy identification and emission line fitted. The observed flux maps are shown in column two. White pixels (not shown here) correspond to regions that are outside the MUSE field-of-view or were masked during fitting. The extent of the spatial co-ordinates of the cube used to model the emission and column density of the absorber are given below each residual map. Residual flux maps are found in the third column, where each pixel is coloured by the residual (data – model) normalized by the noise. The absorber and galaxy redshift along with the number of associated galaxies within \(\pm 500\) km s\({}^{-1}\) of the absorber are written below the residual map. Associated galaxies that are at the smallest impact parameter to the absorber have a bold border around the plots.
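The error-resampled dip test described above can be sketched in a few lines; here we assume the third-party `diptest` package as one available implementation of Hartigan's test (an assumption; any dip-test routine with the same interface would do).

```python
import numpy as np
import diptest  # assumed implementation of Hartigan's dip test

rng = np.random.default_rng(0)

def dip_pvalues(phi, phi_err, n_iter=5000):
    """Propagate azimuthal-angle uncertainties through the dip test by
    perturbing each angle and folding the result back into [0, 90] deg."""
    pvals = np.empty(n_iter)
    for i in range(n_iter):
        sample = phi + rng.normal(0.0, phi_err)
        sample = np.abs((sample + 180.0) % 360.0 - 180.0)  # fold to [0, 180]
        sample = np.minimum(sample, 180.0 - sample)        # fold to [0, 90]
        _, pvals[i] = diptest.diptest(sample)
    return pvals

# Marginal evidence for bimodality corresponds to a median p-value ~ 0.1.
```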
### Metallicity dependence on azimuthal angle

In Figure 4, we show the absorber metallicity against azimuthal angle using data from previous works (Peroux et al., 2011, 2012; Bouche et al., 2013, 2016) and the new results presented here from the MUSE-ALMA Halos survey. These dust-free metallicities are determined from zinc abundances of dense neutral gas. We colour each point by the stellar mass, using one-dex bins from \(\log(M_{*}/M_{\odot})=8.0\) to 11.0. Results from the TNG50 simulation (Nelson et al., 2019; Pillepich et al., 2019) are plotted for an impact parameter of \(b=100\) kpc, redshift \(z=0.5\) and stellar masses \(\log(M_{*}/M_{\odot})=8.5\), 9.5 and 10.5 (Peroux et al., 2020), and the shaded bands represent deviations of 1\(\sigma\). The stellar mass significantly affects the normalisation of the trend, but the metallicity difference between gas near the major and minor axes is approximately 0.3 dex for all stellar mass bins. We note that Peroux et al. (2020) find that the relationship diminishes for lower impact parameters (\(b<50\) kpc) and higher redshifts (\(z>1.5\)). Additionally, limiting the column density to \(\log[N(\mathrm{H\,{\sc i}})/\mathrm{cm}^{-2}]>17.0\) washes out the signal in the simulations because of the limited statistics. Hence, it is unsurprising that we find little evidence for a correlation between absorber metallicity and azimuthal angle given we have limited the sample to strong H i (\(\log[N(\mathrm{H\,{\sc i}})/\mathrm{cm}^{-2}]>19.0\)) absorbers with dust-free metallicity measurements. Simulations suggest that a sample of \(\sim\)100 \(\log[N(\mathrm{H\,{\sc i}})/\mathrm{cm}^{-2}]>19.0\) absorbers (where the fraction of ionised gas is expected to be small) with reliable metallicities is required to confirm gas near the minor axis is more metal-enriched than gas near the major axis.

\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline ID & Line & \(b\) & \(\Delta v\) & SFR & \(\log M_{*}/M_{\odot}\) & \(V_{\mathrm{max}}\) & \(\sigma\) & \(i\) & PA & \(\Phi\) & Flow \\ & & (kpc) & (km s\({}^{-1}\)) & (M\({}_{\odot}\) yr\({}^{-1}\)) & (dex) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (deg) & (deg) & (deg) & \\ \hline \multicolumn{12}{c}{**Q0138\(-\)0005, \(z_{\rm QSO}=1.34\), \(z_{\rm abs}=0.7821\), \(\log[N({\rm H\,i})/{\rm cm}^{-2}]=19.81\pm 0.08\)**} \\ \hline Q0138m0005\(\_\)14 & [O ii] & 82 & \(-\)3.4 & 6.9 \(\pm\) 2.4 & 9.8 \(\pm\) 0.2 & 100 \(\pm\) 22 & 141 \(\pm\) 6 & 67 \(\pm\) 9 & 100 \(\pm\) 6 & 80 \(\pm\) 6 & A \\ \hline \multicolumn{12}{c}{**Q0152\(-\)2001, \(z_{\rm QSO}=2.06\), \(z_{\rm abs}=0.383\), \(\log[N({\rm H\,i})/{\rm cm}^{-2}]=18.78\)**} \\ \hline Q0152m2001\(\_\)5 & H\(\alpha\) & 60 & 84 & 0.73 \(\pm\) 0.4 & 11.25 \(\pm\) 0.14 & 116 \(\pm\) 1 & 14 \(\pm\) 2 & 79 \(\pm\) 1 & 142.1 \(\pm\) 0.3 & 9.3 \(\pm\) 0.3 & In \\ Q0152m2001\(\_\)7 & H\(\alpha\) & 150 & 350 & 0.18 \(\pm\) 0.4 & 11.00 \(\pm\) 0.13 & 162 \(\pm\) 3 & 4 \(\pm\) 4 & 79 \(\pm\) 2 & 129 \(\pm\) 2 & 58 \(\pm\) 2 & \\ Q0152m2001\(\_\)13 & H\(\alpha\) & 84 & 330 & 0.10 \(\pm\) 0.4 & 10.5 \(\pm\) 0.2 & 75 \(\pm\) 6 & 5 \(\pm\) 4 & 54 \(\pm\) 6 & 153 \(\pm\) 4 & 6 \(\pm\) 4 & \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ \hline \hline \end{tabular} \end{table} Table 1: **Summary of kinematic properties for the modelled galaxies.** The kinematic and stellar properties of galaxies within \(\pm 500\) km s\({}^{-1}\) of H i absorbers (absorber header rows highlighted in bold). For each modelled galaxy, the identification and emission line used for modelling is given.
Velocity differences (\(\Delta v\)) are calculated with respect to the absorber redshift. Star-formation rates (SFRs) and limits are based on Kennicutt (1998), and we use the H\(\alpha\) empirical relation when available. At \(z\gtrsim 0.4\), we use the [O ii] \(\lambda\lambda\)3726, 3729 luminosity to estimate the SFR. Dust corrections are performed for galaxies with measured H\(\beta\) and H\(\alpha\) emission-line fluxes. Stellar mass estimates are derived from SED fitting with lephare using HST broadband imaging (Arnouts & Ilbert, 2011, Augustin et al. in prep.). The maximum rotational velocity (\(V_{\mathrm{max}}\)), velocity dispersion (\(\sigma\)) and inclination (\(i\)) measurements are derived from galpak. We adopt the photometric position angle measured using galfit (Peng et al., 2002) on the HST photometry (Karki et al. submitted) when the kinematic PA is not available from galpak. The azimuthal angle, \(\Phi\), is preferentially calculated using PA\({}_{\rm kin}\) when available.

Figure 4: The absorber metallicity as a function of azimuthal angle. Diamonds represent results from this survey and previous works (Peroux et al., 2011, 2012; Bouche et al., 2013, 2016) that measured metal abundances directly. The coloured lines are the trends predicted by the TNG50 simulation (Peroux et al., 2020) for galaxies of different stellar masses at \(z=0.5\) and \(b=100\) kpc. The shaded regions represent a \(1\sigma\) deviation. Each galaxy is coloured by their stellar mass corresponding to the coloured lines, while the unfilled black diamonds are galaxies without \(M_{*}\) measurements. At present, there are too few H i absorbers with robust metallicities and galaxy counterparts to confirm gas near the minor axis is more metal-enriched than gas near \(\Phi=0^{\circ}\).

Figure 3: Polar plots illustrating the distribution of the H i absorbers as a function of azimuthal angle and impact parameter. Diamonds represent galaxies that are found at closest impact parameter to a given absorber and each point is coloured by the H i column density (left) or the absolute value of the line of sight velocity difference between galaxy and absorber (right). Crosses show the location of associated galaxies at larger impact parameters when multiple galaxies are associated with a single absorber. Points located near \(90^{\circ}\) are found near the galaxy minor axis. The black dashed line marks an impact parameter of \(b=20\) kpc.

## 4 Characterising Gas Flow Origins

Identifying broader relationships between absorber properties and the galaxy orientation is useful to characterise the global properties of the circumgalactic medium, but identifying gas flows for individual systems enables us to connect galaxy properties with the galactic baryon cycle. In this section, we examine the rotation maps of galaxies associated with individual absorbers to infer whether they originate from the galaxy disk, are inflowing or outflowing, or arise from other phenomena. Previous works have already performed a kinematic analysis of several galaxy-absorber systems in the MUSE-ALMA Halos survey and we summarise it here. The two absorbers at \(z_{\rm abs}=0.3830\) and \(z_{\rm abs}=0.7802\) towards quasar Q0152\(-\)0005 were found to respectively trace inflowing and outflowing gas (Rahmani et al., 2018a,b).
In a study of the neutral, ionized and molecular gas phases, the \(z_{\rm abs}=0.633\) absorber towards Q0420\(-\)0127 is found to show signatures of outflows or intragroup gas (Klitsch et al., 2018). A rich galaxy group is found at \(z_{\rm abs}=0.313\) towards Q1130\(-\)1449 and the Ly-\(\alpha\) absorber appears to trace intragroup gas (Peroux et al., 2019). Finally, the absorber towards Q2131\(-\)1207 is consistent with co-rotating gas accreting onto the galaxy at lowest impact parameter to the absorber (Peroux et al., 2017; Szakacs et al., 2021). The remaining 22 absorbers with galaxy counterparts are analysed in this work and the final designation for each absorber is noted in Table 1. An individual discussion of each absorber is contained in Appendix B.

### Galaxy disks

We find one case where the absorber evidently intersects the galaxy disk. Such absorbers are characterised by a strong H i absorber (\(\log[N({\rm H\,i})/{\rm cm}^{-2}]>20.0\)) found within \(\sim\)20 kpc of a galaxy (Wolfe et al., 1986; Zwaan et al., 2005; Peroux et al., 2005). The clearest example of disk gas being probed is the absorber towards Q1110\(+\)0048 at \(z_{\rm abs}=0.5604\), where the host galaxy is found at an impact parameter of 6 kpc. We find the \(\log[N({\rm H\,i})/{\rm cm}^{-2}]=20.2\) absorber at the velocity sign and magnitude expected from the modelled rotation map (see top left panel of Figure 5). For the remaining three cases, no kinematic modelling is possible for the galaxies found at impact parameters \(b<20\) kpc. Nevertheless, we also attribute these absorbers to the galaxy disk given their column densities (\(\log[N({\rm H\,i})/{\rm cm}^{-2}]>20.0\)) and low impact parameters. The \(|\Delta v_{\rm LOS}|\) values for these absorbers range from 5 to 150 km s\({}^{-1}\) and are consistent with typical rotation speeds. We calculate the azimuthal angle between the QSO sightline and galaxy using the HST photometry to range from 20\({}^{\circ}\) to 40\({}^{\circ}\). Thus, we do not find the absorbers near the minor axes where outflowing gas may be expected. While we lack direct evidence that the absorber is rotating with the galaxy disk without velocity maps, damped Lyman-\(\alpha\) absorbers have long been associated with the rotating disks of galaxies (Wolfe et al., 1986, 2005). More recent simulations predict a covering fraction of 50 per cent for DLAs within 0.1 \(R_{\rm vir}\) (15 kpc) of galaxies with halo mass 10\({}^{12}\) M\({}_{\odot}\) at \(z\sim 0\) (Stern et al., 2021). These findings support the idea that the strong H i absorbers at \(z<1\) in this study arise from galaxy disks, whereas DLAs at \(z\gtrsim 2\) increasingly probe the inner circumgalactic medium.

### Accretion

Similar to absorbers that probe the galaxy disk, gas that is co-rotating with a galaxy can be detected by comparing the velocity of the absorber with the ionized gas velocity field. The difference is that gas accreting onto the galaxy has lower column density and can be found at larger impact parameters. We find that the \(z_{\rm abs}=0.6057\) absorber towards Q1345\(-\)0035, where the host galaxy is found at an impact parameter of 56 kpc, is consistent with infalling gas. The \(\log[N({\rm H\,i})/{\rm cm}^{-2}]=18.85\) absorber is found with the same velocity sign and magnitude as the modelled rotation map (bottom right panel of Figure 5) and is likely tracing gas in the circumgalactic medium co-rotating with the galaxy disk.
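In practice, the disk and co-rotation checks above amount to comparing the absorber's \(\Delta v_{\rm LOS}\) with the projected disk rotation at the impact parameter. A minimal sketch, assuming the hyperbolic-tangent rotation-curve model used for Figure 5, a sightline near the projected major axis, and an illustrative turnover radius:

```python
import numpy as np

def projected_rotation_velocity(b_kpc, v_max, incl_deg, r_turn_kpc=5.0):
    """Line-of-sight disk rotation velocity at radius ~ b for a tanh
    rotation curve, evaluated along the projected major axis
    (r_turn_kpc is an illustrative turnover radius)."""
    v_rot = v_max * np.tanh(b_kpc / r_turn_kpc)     # km/s
    return v_rot * np.sin(np.radians(incl_deg))     # km/s

# Co-rotation requires the absorber Delta v_LOS to share this sign
# and have a comparable magnitude.
```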
Less certain cases include the \(\log[N({\rm H\,i})/{\rm cm}^{-2}]<19.1\) absorber towards Q1130\(-\)1449 at \(z_{\rm abs}=0.1906\). The impact parameter of 18 kpc suggests the gas originates from the galaxy disk, but the upper limit on the H i column density signifies the gas is perhaps accreting. We also find dense gas consistent with co-rotation for the absorber towards Q0152\(+\)0023 at \(z_{\rm abs}=0.4818\) (top right panel of Figure 5). However, it is unclear whether such cool, dense gas (\(\log[N({\rm H\,i})/{\rm cm}^{-2}]=19.78\)) can be found co-rotating at impact parameters \(>100\) kpc. Importantly, the stellar mass of the galaxy Q0152\(+\)0023\(\_\)20 associated with the absorber is \(\log(M_{*}/M_{\odot})=8.1\pm 0.1\) and the estimated virial radius, \(R_{\rm vir}\), is \(\sim\)70 kpc (Rodriguez-Puebla et al., 2017). Despite the alignment in velocity, the absorber is beyond the halo of the associated galaxy and it is possible that a passive, low-mass galaxy (SFR \(<0.19\) M\({}_{\odot}\) yr\({}^{-1}\), \(\log(M_{*}/M_{\odot})<8.5\)) below the MUSE detection threshold near the QSO sightline hosts the dense absorber.

Within the full MUSE-ALMA Halos survey sample of H i absorbers, we find only three cases that are consistent with gas accretion. Cold streams in the circumgalactic medium are expected to be observed in absorption because of their high densities and temperatures \(\sim\)10\({}^{4}\) K (Fumagalli et al., 2011; Faucher-Giguere et al., 2015; Hafen et al., 2017). Simulations predict that dense H i absorbers at \(z\gtrsim 3\) trace cold-mode accretion (van de Voort et al., 2012) and detections of low-metallicity Lyman-limit systems (LLSs) have been suggested to trace these inflows (Ribaudo et al., 2011; Fumagalli et al., 2011; Lehner et al., 2013). However, at lower redshifts, direct detections of gas accretion are sparse for both down-the-barrel and transverse absorption-line studies (Kacprzak et al., 2010; Martin et al., 2012; Rubin et al., 2012; Ho et al., 2017; Zabl et al., 2019). Our results echo the findings of studies without pre-selection of targets and find only \(\sim\)10 per cent of H i absorbers consistent with accretion.

\begin{table} \begin{tabular}{l c c c} \hline \hline Criterion & Tool & Number & **Total** \\ \hline Kinematic modelling & galpak & 48 & \\ Photometric modelling & galfit & 19 & **79** \\ Not modelled & & 12 & \\ \hline Inclination \(>30^{\circ}\) & & 56 & **67** \\ Inclination \(<30^{\circ}\) & & 11 & \\ \hline \hline \end{tabular} \end{table} Table 2: **Sample summary of modelled galaxies.** We consider only the sample of modelled galaxies with inclinations \(i>30^{\circ}\) to limit potential errors in the position angle measurement.

### Outflows

In total, we find seven absorbers that are within 30\({}^{\circ}\) of the projected minor axis of nearby galaxies. However, only three out of these seven absorbers can be confidently attributed to gas that is outflowing. One clear case of outflowing gas is the \(z_{\rm abs}=0.7869\) absorber towards Q1554\(-\)203. The absorber is found at an impact parameter of 23 kpc and azimuthal angle of 62\({}^{\circ}\) (see row 2 in Figure 9). The gas is unlikely to be inflowing or associated with the galaxy disk as the velocity of the absorber is opposite in sign to the ionized gas velocity field. A line of sight velocity difference between the galaxy and absorber of \(\sim\)200 km s\({}^{-1}\) is also typical of outflows.
The absorber towards Q0454+039 at \(z_{\rm abs}=1.1532\) with column density \(\log[N({\rm H\,}\rm i)/cm^{-2}]=18.59\) aligns with the minor axis (\(\Phi=62^{\circ}\)) of the nearest galaxy (see second row of Figure 11). At an impact parameter of 60 kpc and line of sight velocity difference of \(-290\) km s\({}^{-1}\), the gas is likely tracing neutral gas entrained in an outflow. While the absorber and ionized gas are both blueshifted with respect to the galaxy systemic redshift, there is a \(>250\) km s\({}^{-1}\) discrepancy between the maximum rotational velocity of the galaxy and the absorber velocity. Finally, we find the DLA at \(z_{\rm abs}=0.3950\) towards Q1229\(-\)021 is consistent with cold gas entrained in an outflow. At an impact parameter of 6 kpc and azimuthal angle of \(81^{\circ}\), the neutral gas velocity is inconsistent with the ionized gas disk. We measure a line of sight velocity difference of 70 km s\({}^{-1}\), which suggests the gas is still accelerating at this small distance from the galaxy.

The remaining four absorbers that align with the minor axis possess unclear gas flow origins. The uncertainty arises from two issues: the absorber velocity is inconsistent with outflowing gas, or there are multiple galaxies at similar impact parameters to the absorber. Addressing the former concern first, we find that the absorber towards Q0138\(-\)0005 at \(z_{\rm abs}=0.7821\) has only one galaxy counterpart, at an impact parameter of 80 kpc (first row of Figure 1). The absorber is aligned with the minor axis of the associated galaxy and has a zinc abundance of \([{\rm Zn/H}]=0.28\pm 0.16\). While both these properties are consistent with metal-enriched gas expelled by the single galaxy counterpart, the absorber velocity is \(<40\) km s\({}^{-1}\) from the galaxy systemic redshift. Another similar example is the absorber towards Q1229\(-\)021 at redshift \(z_{\rm abs}=0.7572\) (last row in Figure 12), which is separated by only 15 km s\({}^{-1}\) from the redshift of its galaxy counterpart. A possible explanation for these low velocities is that there is a large velocity component orthogonal to the sightline, as these galaxies have inclinations \(i>65^{\circ}\). The line of sight velocity is not necessarily well-correlated with the radial velocity of the gas. Alternatively, the neutral gas may be static and not co-rotating with the halo.

Figure 5: Comparisons between the galaxy rotation curve and absorber line of sight velocities for cases where the disk or gas accretion is being traced. The blue line represents the galaxy rotation curve modelled by a hyperbolic tangent and the shaded regions represent the \(1\sigma\) error. We plot the velocity of the absorber relative to the systemic redshift of the galaxy as a purple diamond. A vertical hatched region is used to illustrate that the impact parameter of the absorber is beyond the limits of the \(x\)-axis. We provide the gas flow origin in text at the top-left, while the galaxy ID and absorber impact parameter are found at the bottom of each plot. In three cases, we find the absorber velocity is consistent with the rotating disk or co-rotating halo of the galaxy at nearest impact parameter. For the bottom-right case, the absorber is beyond the virial radius of the nearest galaxy and likely arises from a faint, quiescent galaxy.

For the two absorbers at \(z_{\rm abs}=0.7691\) and 0.8311 towards Q1229\(-\)021, there are at least three galaxies found at the absorber redshift and at similar distances from the QSO sightline.
It becomes difficult to determine the gas flow origin of the absorber as gravitational interactions between the galaxies can cause gas to be stripped. The individual cases are discussed in Appendix B and a more detailed exploration of intragroup gas follows in the next subsection. We choose to label these four unclear cases as possible outflows.

#### 4.3.1 Does the gas escape?

We can determine whether the neutral gas escapes the galaxy halo by comparing the absorber velocity with the escape velocity (\(V_{\rm esc}\)). The escape velocity at a given radius, \(r\), is calculated assuming a singular isothermal sphere (Veilleux et al., 2005):

\[V_{\rm esc}=V_{\rm vir}\times\sqrt{2\left(1+\ln\frac{R_{\rm vir}}{r}\right)}, \tag{1}\]

where \(V_{\rm vir}\) and \(R_{\rm vir}\) are the virial velocity and radius respectively. Here, we assume the radius to be the impact parameter (\(r\approx b\)). We estimate \(V_{\rm vir}\) using the prescription in Schroetter et al. (2019) where \(V_{\rm vir}\approx 1.2\times S_{0.5}\). Here, \(S_{0.5}=\sqrt{0.5\times V_{\rm max}^{2}+\sigma^{2}}\) is the kinematic estimator and is a function of the rotational velocity and velocity dispersion, \(\sigma\) (Weiner et al., 2006). Using \(V_{\rm vir}\), we can then approximate the virial radius to be \(R_{\rm vir}\approx V_{\rm vir}/[10H(z)]\) where \(H(z)\) is the Hubble parameter at redshift \(z\). The estimated escape velocity values are tabulated in Table 3 (a short numerical sketch of this calculation is given below). While the absorber velocity relative to the galaxy is less than the escape velocity for all three likely cases of outflows, we do not take into account the velocity component orthogonal to the line of sight. Hence, it is possible that the radial velocity of the outflowing neutral gas exceeds the escape velocity and the gas will be ejected from the galaxy halo.

### Alternative phenomena

Beyond the absorber origins discussed earlier in this section, there are other phenomena that may produce H i absorption around galaxies. In particular, recent studies using MUSE reveal that roughly 50 per cent of Ly-\(\alpha\) absorbers have multiple galaxies within a velocity window of \(\pm 500\) km s\({}^{-1}\) (Chen et al., 2020; Hamanowicz et al., 2020; Weng et al., 2022; Berg et al., 2023). This suggests the absorption may arise from the intragroup medium between galaxies (Gauthier, 2013; Nielsen et al., 2018; Dutta et al., 2023). It is difficult to identify intragroup gas because we require kinematic modelling for all galaxies in the overdensity to exclude gas flows. The two galaxies associated with the \(z_{\rm abs}=0.3283\) absorber towards Q1130\(-\)1449 are roughly equidistant in projected distance from the absorber (75 and 90 kpc). There is no hint of inflows or outflows from the relative velocity and geometry of the absorber. Instead, we find that the velocity of the absorber lies between the two galaxy systemic redshifts, which suggests we may be tracing intragroup gas with \(\log[N({\rm H\,i})/{\rm cm}^{-2}]<18.9\). There are no other clear examples of absorbers tracing gas between galaxies, but there are several cases where multiple explanations are viable, such as the two absorbers towards Q1229\(-\)021 discussed in the previous section. Another important consideration is that low-mass, quiescent galaxies or satellites hosting the absorber may not be detected in our observations. Indeed, there is a possible example of such a case with the four galaxies at impact parameters of 122 to 190 kpc associated with the absorber towards Q0152+0023.
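Returning to the escape-velocity estimate of Section 4.3.1, the chain of approximations (\(S_{0.5}\), \(V_{\rm vir}\), \(R_{\rm vir}\) and Eq. 1) can be evaluated directly. A minimal sketch using astropy for \(H(z)\), with the cosmology adopted in Section 1 (the function name and example inputs, taken from Table 1, are illustrative):

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in Section 1

def escape_velocity(v_max, sigma, z, b_kpc):
    """V_esc at r ~ b for a singular isothermal sphere (Eq. 1),
    with V_vir ~ 1.2 * S_0.5 and R_vir ~ V_vir / [10 H(z)]."""
    s05 = np.sqrt(0.5 * v_max**2 + sigma**2)       # kinematic estimator, km/s
    v_vir = 1.2 * s05                              # km/s
    hz = cosmo.H(z).value                          # km/s/Mpc
    r_vir_kpc = v_vir / (10.0 * hz) * 1000.0       # Mpc -> kpc
    return v_vir * np.sqrt(2.0 * (1.0 + np.log(r_vir_kpc / b_kpc)))

# e.g. for the galaxy Q0152m2001_5 from Table 1:
v_esc = escape_velocity(v_max=116.0, sigma=14.0, z=0.383, b_kpc=60.0)
```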
\begin{table} \begin{tabular}{c c c c c c} \hline \hline QSO & \(z_{\rm abs}\) & \(\log N({\rm H\,i})\) & \(b\) & \(|\Delta v_{\rm LOS}|\) & \(V_{\rm esc}\) \\ & & \(\log({\rm cm}^{-2})\) & kpc & km s\({}^{-1}\) & km s\({}^{-1}\) \\ \hline Q0454+039 & 1.1532 & 18.59 & 60 & 290 & 300 \\ Q1229\(-\)021 & 0.3950 & 20.75 & 6 & 70 & 80 \\ Q1554\(-\)203 & 0.7869 & \(<\)19.0 & 23 & 210 & 270 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of line of sight absorber velocity with the escape velocity for galaxy-absorber pairs consistent with outflows. The line of sight velocity of the absorber relative to the galaxy is roughly equal to the escape velocity assuming a singular isothermal sphere. Given that \(\Delta v_{\rm LOS}\) does not account for velocities orthogonal to the line of sight, it is possible that the H i absorbers trace neutral gas escaping the potential of the galaxy. While \(|\Delta v_{\rm LOS}|<V_{\rm esc}\), the radial velocity of the gas may be larger than the escape velocity.

We previously discussed in this section whether the gas may arise from accretion due to an alignment between the rotational velocity and absorber velocity (top right panel of Figure 5), but we noted the absorber is beyond the virial radius of the nearest galaxy. In fact, the \(\log[N({\rm H\,i})/{\rm cm}^{-2}]=19.78\) absorber is beyond the virial radii of all four galaxies. This points to the hypothesis that there is a galaxy with stellar mass \(\log(M_{*}/M_{\odot})<8.6\) and SFR \(<0.24\) M\({}_{\odot}\) yr\({}^{-1}\) below the detection threshold in the MUSE data near the strong H i absorber (Weng et al., 2022). While recent works suggest that the CGM extends beyond the virial radius (Wilde et al., 2021, 2023), it is unlikely to find such dense absorbers with column densities \(\log[N({\rm H\,i})/{\rm cm}^{-2}]\approx 20\) at such large distances from galaxies.

A final point is that the line of sight velocity is not necessarily a good predictor of the physical line of sight distance between an absorber-galaxy pair. In fact, simulations have shown that absorbers can be found at \(>1\) pMpc when applying a velocity cut of \(|\Delta v_{\rm LOS}|<500\) km s\({}^{-1}\) (Rahmati et al., 2015; Ho et al., 2020, 2021). The velocity difference between absorber and galaxy is influenced by peculiar motions of the gas and the Hubble flow at larger separations. This suggests that there is always the possibility of chance associations between galaxies and absorbers. Quantifying this probability is the goal of an upcoming work using the TNG50 simulations.

## 5 Discussion

In this section, we discuss the incidence rate of the various gas flows in the MUSE-ALMA Halos survey. We also examine the relationships between measured galaxy properties and the origin of the gas. Additionally, we discuss the limitations of using simple geometric arguments to distinguish outflows and inflows, and future improvements to this analysis by studying individual gas components.

### The origins of gas in the CGM

The H i Ly-\(\alpha\) absorbers in the MUSE-ALMA Halos survey appear to trace various phenomena. In summary, out of the 32 absorbers in the survey, 27 are found to have at least one galaxy within \(\pm 500\) km s\({}^{-1}\). The 27 absorbers comprise four absorbers tracing the galaxy disk, three tracing accretion, four tracing outflows, two tracing gas in the intragroup medium and one likely tracing an undetected, low-mass galaxy. Six absorbers may arise from multiple phenomena, while the remaining five (two) have associated galaxies without kinematic modelling (with inclinations \(i<30^{\circ}\)). While this sample of absorbers only provides limited statistics, we do find that the accretion of gas onto galaxies is difficult to trace. Whether this is caused by the criteria used to identify accretion or an intrinsic
property of the accretion itself (Faucher-Giguere & Keres, 2011) is unclear, but the percentage of accreting absorbers in this work is similar to down-the-barrel studies (Martin et al., 2012; Rubin et al., 2012) and transverse absorption-line works using Mg ii (Zabl et al., 2019). We find four convincing cases of H i outflows, which is proportionally far fewer than in other surveys (e.g. Schroetter et al., 2019). However, we note that there are six other cases where the absorber kinematics and geometry are consistent with outflows, but the interpretation is unclear because of possible interactions with other galaxies in the field or the modelled inclination not meeting our threshold (\(i<30^{\circ}\)). Ultimately, we have found that strong H i Ly-\(\alpha\) absorbers probe gas in a variety of environments and of many origins in the complex circumgalactic medium. In a future work using the TNG50 simulation, we will address whether the various gas origins discussed here can be distinguished from each other using the available observables of impact parameter, metallicity, line of sight velocity difference and azimuthal angle. ### Gas flows and galaxy properties With our physical interpretations of the absorbers, we examine the properties of the galaxies associated with them. In Figure 6, we show the stellar masses and star-formation rates of galaxies associated with the absorbers of various origin. Diamonds represent absorbers that likely intersect the galaxy disk, stars and squares represent outflows and inflows respectively, while crosses signify there are multiple origins for the gas. For both the stellar mass and SFR, a \(\sim\)3 dex range of values is observed. Inflows are found to be associated with galaxies with large stellar masses (\(10^{10}\) to \(10^{11}\)\(M_{\odot}\)), but more photometry is required to estimate \(M_{*}\) for the other galaxies. While the star-formation rates of galaxies associated with the various gas flows span two dex, a more important indicator is the star-formation rate per unit area (\(\Sigma_{\rm SFR}\)). Two of the three galaxies where the absorber likely traces outflows have \(\Sigma_{\rm SFR}>0.1\)\(\rm M_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) (Heckman, 2002, 2003). The third galaxy does not meet the threshold in \(\Sigma_{\rm SFR}\), but we note that the SFR is not corrected for dust and the starburst responsible for the galactic wind might be more localised. Currently, a larger sample is required to connect the gas flows traced by absorbers to the galaxies that host the neutral gas. A recent study of galaxies associated with Mg ii absorbers finds that galaxies with large inflow rates are located above the SFR-\(M_{*}\) main sequence (Langan et al., 2023). Using the three galaxies associated with inflows that have both SFR and \(M_{*}\) measurements, we find no such signal and most of the associated galaxies lie on the SFR-\(M_{*}\) main sequence (Karki et al. submitted). ### The azimuthal distribution of H i and metals in the CGM Outflows of gas driven by supernovae or AGN are expected to be preferentially aligned with the minor axis and to be collimated into a biconical shape (Veilleux et al., 2005).
Both hot- and cold-mode accretion are expected to channel cool gas onto galaxies via flows that align with the disk (e.g. Hafen et al., 2022; Keres et al., 2005). In absorption-line studies of ions such as Mg ii and O vi in the CGM, the inflow-outflow dichotomy manifests in the bimodal distribution of azimuthal angles between the quasar line of sight and nearby associated galaxy (e.g. Bouche et al., 2012; Kacprzak et al., 2015). In this work, we also find marginal evidence for a bimodality in the azimuthal-angle distribution of H i absorbers. In principle, this bimodality in \(\Phi\) should extend to metallicity measurements, as outflowing gas is typically more metal-enriched than gas accreting onto a galaxy. However, the dependence of metallicity on azimuthal angle is far less clear from observations (Peroux et al., 2016; Kacprzak et al., 2019; Pointon et al., 2019; Wendt et al., 2021). Here, we find very little evidence for a correlation between absorber metallicity and azimuthal angle for three different stellar mass bins, and a larger and more homogeneous sample is required to determine whether a relationship exists. The reasons for the absence of a distinct signal are manifold. The trend of gas-phase metallicity versus azimuthal angle depends on properties such as the impact parameter, H i column density and, most significantly, the stellar mass (Peroux et al., 2020). These parameters not only affect the normalisation of the signal, but also the magnitude of the discrepancy between low and high azimuthal angles. Fully testing the azimuthal dependence of metallicity in the CGM requires a larger sample of absorbers, reaching column densities of \(\log\left[N({\rm H\,{\textsc{i}}})/{\rm cm}^{-2}\right]=13.0\), found at impact parameters larger than 50 kpc from host galaxies with measured stellar masses. Such a sample will be difficult to construct because of the challenge in measuring gas-phase metallicity due to the uncertainties surrounding photoionization and dust modelling. Furthermore, the intrinsic inhomogeneity of gas properties at small spatial scales in the CGM will require large samples to take into account. Estimates of line of sight cloud sizes range from sub-parsec to \(\sim\)100 parsec from ionization modelling (e.g. Churchill et al., 2003; Werk et al., 2014) and studies find significant ranges in metallicity for different components along a single line of sight (e.g. Zahedy et al., 2019; Nielsen et al., 2022), suggesting metals may be poorly mixed in the CGM (Peroux et al., 2018; Tejos et al., 2021). This is captured by the TNG50 predictions depicted in Figure 4, where the 1\(\sigma\) errors span \(\sim\)1 dex in metallicity. Moreover, recent work from Berg et al. (2023) suggests there is a population of low-metallicity absorbers residing in overdense regions away from galaxy halos. Indeed, the baryon cycle is more complex than a linear combination of gas being expelled out via the minor axis and accreting along the major axis; these processes interact to form the complex, multi-phase circumgalactic medium. ### The fidelity of geometric and kinematic arguments In this work, we adopt an opening angle of \(60^{\circ}\) (corresponding to \(\pm 30^{\circ}\) from the minor axis) to identify gas that is being expelled (Chen et al., 2010; Lan et al., 2014), but a diverse range of opening angles has been observed in the literature (e.g. Veilleux et al., 2001) and these angles differ depending on the observed wavelength (e.g.
see the opening angles for the Circinus galaxy ranging from \(15^{\circ}\) to \(100^{\circ}\); Harnett et al., 1990; Elmouttie et al., 1995; Veilleux & Bland-Hawthorn, 1997; Curran et al., 1999). It is unclear whether several of the absorbers with azimuthal angles just below the threshold should be considered outflowing. Similarly, while inflowing gas from hot- and cold-mode accretion is expected to align in angular momentum with the disk, there has been growing evidence for the condensation of ambient gas in the halo caused by interactions between gas ejected by stellar winds and hot coronal gas (Marinacci et al., 2010; Fraternali, 2017). Known as the galactic fountain, the phenomenon is expected to cause neutral gas to 'rain' down onto the galaxy disk rather than align with the major axis. There is mounting evidence for galactic fountains in the Milky Way, with the kinematics of high- and intermediate-velocity clouds consistent with a mixture of outflowing and diffuse inflowing gas (Lehner et al., 2022; Marasco et al., 2022). Beyond the local Universe, signatures of fountain flows have been found to persist at impact parameters \(b>5\) kpc (Rubin et al., 2022). Hence, the absorbers in this sample may also trace this process at \(z\sim 0.5\). Another consideration is that the strong H i Ly-\(\alpha\) absorbers in this sample comprise multiple components at varying velocities relative to the DLA redshift. Recently, photoionization modelling of individual components within H i systems has been performed to determine cloud properties such as temperature and size (Cooper et al., 2021; Zahedy et al., 2021; Nielsen et al., 2022). The individual clouds and their properties have then been related to outflows, inflows or intragroup gas. We leave the modelling of the various metal-line components and estimations of the typical cloud properties embedded in outflowing and inflowing gas to future works. ## 6 Summary and Conclusion The MUSE-ALMA Halos survey combines multi-wavelength observations of galaxies associated with 32 \(\log[N({\rm H\,{\sc i}})/{\rm cm}^{-2}]>18.0\) absorbers. In this work, we have modelled the ionized gas kinematics of 48 galaxies associated with these absorbers using the forward-modelling algorithm galpak to extract properties such as the rotational velocity, velocity dispersion, inclination and position angle. By establishing the position and geometry of the absorber with respect to the modelled galaxies, we seek to determine the distribution of gas in the circumgalactic medium and identify the possible origins of these strong H i absorbers. To summarise, we find: 1. An excess of absorption sightlines passing near the major and minor axes of galaxies. There is marginal evidence for a bimodal distribution in azimuthal angles between galaxy and absorber after performing Hartigan's dip test on 5,000 iterations of the data by randomly sampling the errors (\(p\approx 0.1\); a minimal sketch of this resampling test is given after this summary). This is similar to previous studies of the Mg ii and O vi ions, suggesting inflows and outflows of gas in the CGM can also be traced by neutral hydrogen. 2. That there is little evidence for the dependence of metallicity on azimuthal angle predicted by simulations for the absorbers in the MUSE-ALMA Halos survey. This suggests that gas in the circumgalactic medium is not merely a linear combination of metal-poor inflows and metal-enriched outflows and that other phenomena such as gas recycling and poor metal-mixing are significant.
The results from simulations also show that the scatter in metallicities at any given azimuthal angle is comparable to the actual metallicity discrepancy at the minor and major axes. At this stage, simulation results (Peroux et al., 2020) suggest that a larger sample of \(\sim\)100 strong H i absorbers with dust-free metallicity measurements is still required to recover any predicted signal. 3. That H i absorbers have a variety of origins in the CGM. Absorbers with column densities \(\log[N({\rm H\,{\sc i}})/{\rm cm}^{-2}]>20.0\) at impact parameters of \(b<20\) kpc from the nearest galaxy are considered associated with the galaxy disk. Only 15 per cent of absorbers are found to trace the disk, suggesting other processes must account for the remaining absorbers. We find that roughly 10 per cent of absorbers are co-rotating with the halo out to distances up to 60 kpc and these are suspected to trace gas accretion. The rarity of such cases is in line with previous works. Up to \(\approx\)30 per cent of absorbers are found within \(\pm\)30\({}^{\circ}\) of the minor axis of galaxies and are consistent with outflows, but we only identify four clear cases in the sample. The remaining absorbers trace gas in the intragroup medium, low-mass galaxies below the detection limit of the MUSE data or do not have sufficient data to uncover a physical origin. Figure 6: The stellar properties of the galaxies associated with an absorber. The left plot shows the stellar mass of galaxies associated with absorbers at a given azimuthal angle. Stellar masses are derived from spectral energy distribution (SED) fitting of the HST broadband imaging (Augustin et al. in prep). Different symbols represent galaxy-absorber pairs that are found to trace inflowing gas, outflowing gas or gas in the disk, while faint crosses represent pairs with ambiguous origins. Symbols with a black border are cases where the gas origin has been confidently identified, while those with lower transparency are possible cases. Each galaxy is coloured by the impact parameter from the absorber. We find absorbers are associated with galaxies that span four dex in stellar mass. On the right, we show the dust-uncorrected star-formation rate of galaxies measured using the H\(\alpha\) or [O ii] emission lines (Weng et al., 2022). We find that the galaxies associated with inflows and outflows do not differ significantly in their SFRs. The median errors in the plotted properties are shown as a cross in the bottom left of both plots. In the future, larger surveys such as the ByCycle survey (PI: Peroux) using the 4MOST instrument will enable us to compare the kinematics of galaxies with absorbers on a much larger scale (de Jong et al., 2012, 2019). The combination of high-resolution (\(R\sim 20,000\)) background QSO spectroscopy with deep and complete foreground galaxy surveys (Driver et al., 2019) will enable better constraints on how the absorber metallicity varies with azimuthal angle (Szakacs et al., submitted). While we have identified the various gas flows traced by dense H i absorbers, the future modelling of individual gas components will provide information on the gas properties of inflows and outflows (e.g. temperature, density and cloud size). The proliferation of absorber follow-up surveys that use integral field spectroscopy has led to a re-characterisation of the physical processes that absorbers trace in the circumgalactic medium and how gas is distributed with respect to galaxies.
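Returning to the dip test of summary point 1: a minimal sketch of the error-resampling procedure, assuming the third-party diptest package and hypothetical angles and uncertainties (not the survey measurements):

```python
import numpy as np
import diptest  # third-party package (pip install diptest): Hartigan's dip test

rng = np.random.default_rng(0)

def dip_pvalues(phi, phi_err, n_iter=5000):
    """Monte Carlo over measurement errors: perturb each azimuthal angle by its
    Gaussian uncertainty and test each perturbed sample for unimodality."""
    pvals = np.empty(n_iter)
    for i in range(n_iter):
        sample = rng.normal(phi, phi_err)
        # fold back into the physical range [0, 90] degrees
        sample = np.abs(sample) % 180.0
        sample = np.where(sample > 90.0, 180.0 - sample, sample)
        _, pvals[i] = diptest.diptest(sample)
    return pvals

# hypothetical angles and uncertainties (degrees)
phi = rng.uniform(0.0, 90.0, size=30)
pvals = dip_pvalues(phi, phi_err=np.full(30, 5.0))
print(np.median(pvals))
```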
In this work, we find and emphasise that even strong Ly-\(\alpha\) absorbers trace gas flows in the circumgalactic medium. The fact that dense, neutral gas required for star-formation is seen accreting and being expelled highlights that these processes have significant impacts on galaxy evolution and that the CGM plays an important role in regulating the baryon cycle. ## Acknowledgements This research is supported by an Australian Government Research Training Program (RTP) Scholarship. EMS, GK and SW acknowledge the financial support of the Australian Research Council through grant CE17010010013 (ASTRO3D). VPK and AK acknowledge partial support for GO program 15939 (PI: Peroux) provided through a grant from the STScI under NASA contract NAS5-26555 and NASA grant 80NSSC20K0887 (PI: Kulkarni). VPK also gratefully acknowledges additional support from the National Science Foundation grants AST/2007538 and AST/2009811 (PI: Kulkarni). DN acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #564 (The Cosmic Baryon Cycle from Space). This research also made use of several python packages: astropy (Astropy Collaboration et al., 2013, 2018), matplotlib (Hunter, 2007) and numpy (Harris et al., 2020). ## Data Availability Data directly related to this publication and its figures are available upon request. The catalogues for the MUSE-ALMA Halos survey have been made public with the publication of Peroux et al. (2022). The raw data can be downloaded from the public archives with the respective project codes.
2306.10457
Chiral spin liquids with projected Gaussian fermionic entangled pair states
We study the parton construction of chiral spin liquids (CSLs) using projected Gaussian fermionic entangled pair states (GfPEPSs). First, we show that GfPEPSs can represent generic spinless Chern insulators faithfully with finite bond dimensions. Then, by applying the Gutzwiller projection to a bi-layer GfPEPS, spin-1/2 Abelian and non-Abelian CSLs are obtained for Chern number $C=1$ and $C=2$, respectively. As a consequence of the topological obstruction for GfPEPSs, very weak Gossamer tails are observed in the correlation functions of the fermionic projected entangled pair state (PEPS) ansatze, suggesting that the no-go theorem for chiral PEPS is universal but does not bring any practical limitation. Remarkably, without fine tuning, all topological sectors can be constructed showing the expected number of chiral branches in the respective entanglement spectra, providing a sharp improvement with respect to the known bosonic PEPS approach.
Sen Niu, Jheng-Wei Li, Ji-Yao Chen, Didier Poilblanc
2023-06-18T02:47:16Z
http://arxiv.org/abs/2306.10457v1
# Chiral spin liquids with projected Gaussian fermionic entangled pair states ###### Abstract We study the parton construction of chiral spin liquids (CSLs) using projected Gaussian fermionic entangled pair states (GfPEPSs). First, we show that GfPEPSs can represent generic spinless Chern insulators faithfully with finite bond dimensions. Then, by applying the Gutzwiller projection to a bi-layer GfPEPS, spin-1/2 Abelian and non-Abelian CSLs are obtained for Chern number \(C=1\) and \(C=2\), respectively. As a consequence of the topological obstruction for GfPEPSs, very weak Gossamer tails are observed in the correlation functions of the fermionic projected entangled pair state (PEPS) ansatze, suggesting that the no-go theorem for chiral PEPS is universal but does not bring any practical limitation. Remarkably, without fine tuning, all topological sectors can be constructed showing the expected number of chiral branches in the respective entanglement spectra, providing a sharp improvement with respect to the known bosonic PEPS approach. _Introduction.--_The notion of topological phase has revolutionized our understanding of phases of matter beyond the Landau paradigm. In two-dimensional systems without time-reversal symmetry, if there exist chiral edge modes moving only in one direction, the states are dubbed chiral topological states. The best-known chiral state in lattice free fermion systems is the Chern insulator [1; 2], where the topology is completely characterized by the bulk Chern number \(C\) indicating the number of chiral edge modes [3; 4]. Through Gutzwiller projection on copies of Chern insulators (labeled by a spin index), a chiral spin liquid (CSL) state in the parton representation [5] can be obtained. Interestingly, in contrast to their parent chiral Chern insulators, CSLs acquire long-range topological order through the Gutzwiller projection. Hence, CSLs are bosonic variants of the fractional quantum Hall states, and can be classified by the chiral gapless modes on the edge, or, equivalently, the entanglement spectrum (ES) [6; 7] described by (1 + 1)-dimensional Wess-Zumino-Witten (WZW) conformal field theories (CFT) [8]. For example, for two copies of half-filled Chern insulators with \(C=1\) in each copy, the projected spin state becomes the topological \(\mathrm{SU}(2)_{1}\) CSL [9; 10], which is equivalent to the bosonic \(\nu=1/2\) Laughlin wavefunction [11]. For more general cases, the topological nature of parton wave functions built from Chern insulators with higher Chern number is not clear; thus, numerical methods for characterizing parton wavefunctions are desired. The projected entangled pair states (PEPS) [12] have been successfully used for investigating two-dimensional topological states, where non-chiral topological orders can be encoded by gauge symmetry exactly [13; 14; 15; 16]. However, there seems to exist a topological obstruction for PEPS to represent chiral topological states. For the case of free fermions, where the corresponding PEPS representation is the Gaussian fermionic PEPS (GfPEPS), the obstruction has been proven exactly [17; 18; 19]: namely, if a GfPEPS is chiral then its bulk should be gapless. For the non-Gaussian case such as that of spin systems, a series of numerical studies show that the numerically optimized chiral bosonic PEPS also have artificial (gossamer) long-range correlations in the bulk [20; 21; 22; 23].
Since interacting chiral PEPS are also likely to be subject to a topological obstruction, it becomes important to scrutinize the possible artifacts of the PEPS descriptions of a true CSL. In particular, do we have some sort of universality in its description in terms of bosonic and fermionic interacting PEPS? On the other hand, there exist several subtle issues in the bosonic chiral PEPS, e.g., the existence of redundant chiral branches in the ES [24; 25; 26; 22] and the challenge in accessing the complete set of topological sectors [21]. It is interesting to see whether fermionic PEPS describes the edge theory of CSLs faithfully, which could also shed light on resolving the problems in bosonic PEPS. For that purpose, we study generic chiral spin liquids using optimized GfPEPS parton wavefunctions constructed from a parent Chern insulator Hamiltonian [27]. _GfPEPS for Chern insulator._ --As a preliminary step before constructing Gutzwiller-projected parton wavefunctions, we investigate the GfPEPS representations for free fermion Chern insulators. One representative lattice model is the two-band Hofstadter model [28; 29] \[H= -\sum_{m,n}(t_{1}c^{\dagger}_{m+1,n}c_{m,n}+t_{1}e^{im\pi}c^{\dagger}_{m,n+1}c_{m,n})\] \[-\sum_{m,n}(t_{2}e^{i(m\pi\pm\pi/2)}c^{\dagger}_{m\pm 1,n+1}c_{m,n})+\mathrm{H.c.}. \tag{1}\] Here \((m,n)\) denotes the coordinates of the fermionic creation and annihilation operators, and the phases of the hopping terms \(t_{1},t_{2}\) can be read from Fig. 1 (a), providing a homogeneous \(\pi/2\) flux in all triangular units. The sites can be relabeled with \(A,B\) sublattice indices as \(c^{\dagger}_{2x-1,y}=c^{\dagger}_{x,y,A}\) and \(c^{\dagger}_{2x,y}=c^{\dagger}_{x,y,B}\). At half-filling, the exact ground state is a gapped insulator with Chern number \(C=1\) for \(t_{1},t_{2}>0\). To simulate the free fermion ground state, we adopt the translation invariant, particle number conserving \(U(1)\) symmetric GfPEPS ansatz parametrized by a single tensor in Fig. 1 (b) and perform variational optimization [30; 31]. The translation invariant GfPEPS at half-filling can be written as a product state in the Brillouin zone, where all \(k\) modes are determined by the single real space tensor and cannot vary independently. The cost function is chosen as the expectation value of Eq. (1) [31]. Here a single tensor contains the \(A,B\) physical sites of the unit-cell with a physical Hilbert space dimension \(d=2^{2}\) and virtual bond dimension \(D=2^{M}\), where \(M\) is the number of virtual fermionic modes. Thus the one-site translation (projective) symmetry in the \(x\)-direction is only approximately realized but can be improved with increasing \(M\). In the Hofstadter model, the optimized GfPEPS shows topological features once the number of virtual modes satisfies \(M\geq M_{\rm min}\), and these features become sharper with increasing \(M\), as depicted in Fig. 2. We set \(t_{1}=1\) and focus on the parameter \(t_{2}=0.5\) with the largest band gap. Starting from \(M=1\), the energy error decreases systematically, see Fig. 2 (a).
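The topological diagnostics used here are standard and easy to reproduce. Since the precise hopping phases of Eq. (1) depend on gauge conventions read off Fig. 1(a), the minimal sketch below instead computes the Chern number of the Qi-Wu-Zhang model (discussed further below) with the Fukui-Hatsugai-Suzuki lattice method; the routine applies unchanged to any two-band Bloch Hamiltonian:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_qwz(kx, ky, u=-1.0):
    """Bloch Hamiltonian of the Qi-Wu-Zhang model; topological for 0 < |u| < 2."""
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (u + np.cos(kx) + np.cos(ky)) * sz)

def chern_number(h, nk=60):
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki lattice method."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    psi = np.empty((nk, nk, 2), dtype=complex)   # lower-band eigenvectors on the BZ grid
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h(kx, ky))
            psi[i, j] = vecs[:, 0]
    # U(1) link variables between neighbouring k-points
    ux = np.einsum("ijk,ijk->ij", psi.conj(), np.roll(psi, -1, axis=0))
    uy = np.einsum("ijk,ijk->ij", psi.conj(), np.roll(psi, -1, axis=1))
    # lattice field strength on each plaquette; its total is 2*pi*C
    f = np.angle(ux * np.roll(uy, -1, axis=0) / (np.roll(ux, -1, axis=1) * uy))
    return int(np.rint(f.sum() / (2.0 * np.pi)))

print(chern_number(h_qwz))   # |C| = 1 for the default u = -1 (sign set by conventions)
```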
The topology of the optimized free fermion states can be deduced from the number of chiral branches in the single-particle ES \(\lambda_{\alpha}\)[32; 33], or, equivalently, from the edge spectrum \(\epsilon_{\alpha}=(e^{\lambda_{\alpha}}+1)^{-1}\) of the subsystem correlation matrix \(C^{\rm cut}\), defined as \[C^{\rm cut}_{i,j}=\!\begin{cases}\text{Tr}[|\psi\rangle\langle\psi|c^{\dagger}_{i}c_{j}],&i,j\in\text{ subsystem},\\ 0,&\text{otherwise}.\end{cases} \tag{2}\] Here \(|\psi\rangle\) is the free fermion many-body state on the whole lattice. Along the \(y\) direction, we cut out a cylinder from the torus as a subsystem, and plot the correlation matrix spectrum \(\epsilon_{\alpha}\) in Fig. 2(b), where bulk states have been removed according to a numerical criterion \(|\epsilon_{\alpha}-0.5|>0.499\). The \(M=1\) state is non-chiral since both left-moving and right-moving modes exist, while for \(M\geq 2\) the dispersion of the edge mode becomes chiral and shows quantitative agreement with the exact results. We then examine the real space bulk correlation functions between the \(A,B\) sublattices at distance \(x\), defined as \(\langle c^{\dagger}_{A}c_{B}\rangle=\langle c^{\dagger}_{1,y}c_{2x,y}\rangle\) (the exact correlation functions within the same sublattice always vanish). The corresponding GfPEPS results are shown in Fig. 2 (c) where, for \(M\geq 2\), the optimized states with correct topology exhibit a crossover behaviour: at short distance the correlations decay exponentially as expected until approaching a small magnitude around \(10^{-5}\), and then show a weak long-distance gossamer tail with algebraic decay (which we have confirmed by fitting on much larger clusters). The existence of the long-distance tail can be understood from a sharp momentum space singular point as shown in the inset of Fig. 2 (a) and is consistent with the topological obstruction for GfPEPS. It is expected that the correlation functions improve as \(M\) increases although, in practice, the precision of our numerical optimization sets some limit. In Fig. 2 (d) we show the results for \(t_{2}=0.125\) exhibiting a much longer bulk correlation length and slower decay of correlations, from which one can roughly observe that the weight of the artificial gossamer tail decreases with \(M\). We find a different scenario for the optimized GfPEPS in another Chern insulator model -- the Qi-Wu-Zhang model [34]. The minimal bond dimension to observe the chiral edge is \(M=1\) but, in that case, there is no sharp singularity in momentum space and no crossover behaviour in correlation functions, akin to the family of states investigated in Refs. [17; 18; 35]. For larger bond dimensions \(M>1\) the same momentum and real space behaviours as those in the Hofstadter model are observed. The corresponding numerical results are shown in the Supplemental Materials (SM) [36]. _Gutzwiller projected spin state with \(C=1\).--_We now move to the Gutzwiller projected state, which is expected to be the \(\mathrm{SU}(2)_{1}\) CSL when \(C=1\).
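The diagnostic of Eq. (2) can be sketched for any quadratic Hamiltonian; in the minimal example below a toy tight-binding chain stands in for the cylinder geometry used in the paper, and all names are illustrative:

```python
import numpy as np

def ground_state_correlations(h, n_filled):
    """C_{ij} = <c_i^dag c_j> for the Slater determinant filling the n_filled
    lowest levels of a quadratic Hamiltonian h (Hermitian matrix)."""
    _, vecs = np.linalg.eigh(h)
    occ = vecs[:, :n_filled]                 # columns = occupied orbitals
    return occ.conj() @ occ.T                # sum_n phi_n(i)* phi_n(j)

def edge_spectrum(C, subsystem):
    """Spectrum of the subsystem block of C (Eq. (2)); eigenvalues eps in [0, 1]
    map to entanglement energies via eps = 1/(e^lambda + 1)."""
    Csub = C[np.ix_(subsystem, subsystem)]
    eps = np.linalg.eigvalsh(Csub)
    eps = np.clip(eps, 1e-12, 1.0 - 1e-12)   # guard the logarithm
    lam = np.log((1.0 - eps) / eps)
    return eps, lam

# toy check: 20-site tight-binding chain at half filling, left half as subsystem
L = 20
h = -np.eye(L, k=1) - np.eye(L, k=-1)
C = ground_state_correlations(h, L // 2)
eps, lam = edge_spectrum(C, list(range(L // 2)))
```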
By construction [31; 35], we build the \(\mathrm{SU}(2)\) invariant fermionic state via stacking two copies of GfPEPSs labeled by spin \(\uparrow\) and \(\downarrow\) components, hence the tensors with virtual bond dimension \(4^{M}\) satisfy \(U(1)\times\mathrm{SU}(2)\) symmetry and each virtual state is labeled by both charge and spin quantum numbers. Figure 1: Schematic diagrams of (a) the Hofstadter Chern insulator model with a two-site \(A\), \(B\) unit-cell along the \(x\) direction marked by the dashed line, (b) the translation invariant GfPEPS ansatz with \(A,B\) physical sites included in one tensor and (c) the spin state constructed from Gutzwiller projected GfPEPSs. Figure 2: Observables of the GfPEPS for the Hofstadter model optimized on an \(80\times 80\) torus. (a) Energy error per site versus \(M\). Inset shows the energy error along the \(k_{x}\) direction path across the sharp singular point in \(k\) space with \(M=2\). (b) Edge spectrum of the correlation matrix localized at one boundary of the \(40\times 80\) cylinder cut from the torus. (c)-(d) Correlation functions for different \(t_{2}\) values. The open black circles correspond to trivial states without chirality. The tensor for the spin state is obtained by applying the Gutzwiller projector \(P_{G}=\prod_{i}(\hat{n}_{i,\uparrow}+\hat{n}_{i,\downarrow})(2-\hat{n}_{i,\uparrow}-\hat{n}_{i,\downarrow})\) as shown in Fig. 1 (c). We choose the \(M=2\) GfPEPS optimized at \(t_{2}=0.5\), and construct the PEPS representation of the projected state using the fermionic PEPS approach [37; 38]. To inspect the real space correlation functions on the infinite lattice, we use the corner transfer matrix renormalization group (CTMRG) [39; 40] method, where the approximate contraction is controlled by the environment bond dimension \(\chi\), and becomes exact in the \(\chi\to\infty\) limit. The numerical results for spin-spin correlations \(\langle\mathbf{S}_{A}\cdot\mathbf{S}_{B}\rangle\) between \(A,B\) sublattices at distance \(x\) are shown in Fig. 3 (a). The correlations of the projected state also decay exponentially at short distance, up to a length scale \(x\approx 5\), and then decay much more slowly. As the absolute value of the slope at long distance decreases with \(\chi\), we expect the exact correlation function (which corresponds to the limit \(\chi\to\infty\)) of this \(M=2\) state to decay more slowly than any exponential, similar to the correlations in CSLs represented by bosonic PEPS [20; 21; 22; 23]. The topological order of the CSL state is characterized by the bipartite ES, which can be computed on an infinitely long cylinder [41]. The topologically degenerate spin states can be constructed from projected free fermion states with different boundary conditions. The flux inserted by the anti-periodic boundary condition (APBC) is realized by applying a non-contractible loop of the gauge symmetry operator \(Z=\prod_{i}Z_{i}\) on the virtual space [13], where the gauge symmetry operator \(Z_{i}\) takes the form \(Z_{i}=(-1)^{n_{i}}\), as illustrated in Fig. 3(b). Here the gauge symmetry can be interpreted as the fermion parity of virtual spin-\(1/2\) particles or the number parity of singlet pairs crossing the \(i\)th virtual bond [14; 42]. The \(\mathrm{SU}(2)_{1}\) CSLs on the cylinder have two topological sectors. On a finite cylinder, the minimally entangled states (MES) [43; 44] are determined by explicitly controlling populations of edge modes in the unprojected states.
Correspondingly, on the infinite cylinder we determine the MES according to the single-particle ES in the unprojected states as well as virtual space quantum numbers, based on the equivalence between edge spectrum and entanglement spectrum as implied by Eq. (2) [32; 33]. In order to control the filling of the free fermion edge modes, we plot in Fig. 3(c) the spectrum of the subsystem correlation matrix representing a (single) physical edge. The Fermi level \(\epsilon_{F}=0.5\) (\(\lambda_{F}=0\), marked by a dashed line) defines the Fermi sea state \(|\psi_{\mathrm{FS}}\rangle=\prod_{\sigma=\uparrow,\downarrow}\prod_{1-\epsilon_{\alpha}<\epsilon_{F}}d^{\dagger}_{\alpha,\sigma}|\mathrm{Vac}\rangle\), where \(d^{\dagger}_{\alpha,\sigma}\) denote the bulk and edge modes with eigenvalue \(\epsilon_{\alpha}\) and spin polarization \(\sigma\). The \(\mathrm{SU}(2)_{1}\) ground states in the identity (\(I\)) and semion (\(S\)) sectors can be constructed on the PBC/APBC cylinder as [45] \[|\psi_{I}\rangle=P_{G}|\psi_{\mathrm{FS}}\rangle_{\mathrm{PBC}},\] \[|\psi_{S}\rangle_{\sigma,\bar{\sigma}}=P_{G}\zeta^{\dagger}_{L,\sigma}\zeta^{\dagger}_{R,\bar{\sigma}}|\psi_{\mathrm{FS}}\rangle_{\mathrm{APBC}}, \tag{3}\] respectively. Here \(\zeta^{\dagger}_{L/R,\sigma}\) creates the lowest particle excitation at the left/right boundary as marked by the black circle, and the superposition \(|\psi_{S}\rangle_{\uparrow,\downarrow}-|\psi_{S}\rangle_{\downarrow,\uparrow}\) forms a singlet. The two topological sectors have a total spin difference \(\Delta S=1/2\) in the horizontal virtual space, and can be distinguished by the \(y\)-direction loop operator \(P_{\mathrm{even/odd}}=(1\pm Z)/2\) that projects to the subspaces of integer spin (even charge parity) and half-integer spin (odd charge parity) in the ES, respectively. Fig. 3(d)-(e) shows numerical results of the Gutzwiller projected state on the \(N_{y}=6\) cylinder for the \(I\) and \(S\) sectors obtained with PBC and APBC respectively, where the entanglement Hamiltonians [46] are built from CTMRG boundary tensors [21; 25; 47]. Numerically the integer (half-integer) sector corresponds to the fixed point of the transfer matrix with PBC (APBC), in agreement with the fact that for both cases the unprojected edge modes are half-filled (Fig. 3 (c)). The low energy levels match the prediction of the \(\mathrm{SU}(2)_{1}\) WZW CFT (see SM), including the ones marked by red rectangles. We notice that compared to the previous bosonic PEPS method [24; 25], our fermionic construction yields the correct number of chiral branches in the ES. We also remark that we expect both sectors can be obtained within a fixed boundary condition as long as \(N_{y}\) is large enough, with a sufficient number of linear edge states as in Fig. 2(b). _Gutzwiller projected spin states with \(C=2\).--_ The above approach can be naturally generalized to parton states with arbitrary Chern number. Here we consider a \(C=2\) model [48], which turns out to be nontrivial as it shows that the topological order of a generic projected parton state depends not only on the Chern number before projection, but also on details of the wavefunctions. Figure 3: Features of projected and unprojected \(C=1\) states obtained from optimizing the Hofstadter model at \(t_{2}=0.5\) and \(M=2\). (a) Correlation functions after Gutzwiller projection, computed with various \(\chi\). (b) Gauge symmetry in the local tensor of the Gutzwiller projected state. (c) Edge spectrum of the free fermion correlation matrix on a width \(N_{y}=6\) cylinder with PBC and APBC. The dashed line denotes the Fermi level of the entanglement Hamiltonian.
(d)-(e) Entanglement spectra of the identity (\(I\)) and semion (\(S\)) sectors in the Gutzwiller projected states on the width \(N_{y}=6\) cylinder with PBC and APBC, respectively. CTMRG boundary tensors with \(\chi=110\) are used. The family of free fermion \(C=2\) Hamiltonians \(H_{\Theta}\) has \(A,B\) sublattices in the unit-cell. At \(\Theta=0\) it takes the form \[H_{\Theta=0} = \sum_{\langle i,j\rangle_{x}}t_{1}(c^{\dagger}_{j,A}c_{i,B}+c^{\dagger}_{j,B}c_{i,A}) \tag{4}\] \[+ \sum_{\langle i,j\rangle_{y}}t_{1}(c^{\dagger}_{j,A}c_{i,A}-c^{\dagger}_{j,B}c_{i,B})\] \[+ \sum_{\langle\langle i,k\rangle\rangle}t_{2}e^{2i\theta_{ik}}(c^{\dagger}_{k,B}c_{i,A}-c^{\dagger}_{k,A}c_{i,B})+\mathrm{H.c.},\] where \(i,j,k\) denote the sites on the \(x-y\) plane and \(\theta_{ik}\) denotes the angle between the next nearest neighbour sites \(i,k\). The model at \(\Theta=0\) can be viewed as two independent layers of Eq. (1) that differ by a one-site translation \(T_{x}\) along the \(x\) direction: by taking \(A\) sites for even \(x\) coordinate and taking \(B\) sites for odd \(x\) coordinate one obtains the first copy, and vice versa for the second copy. Due to the \(T_{x}\) translation, the entanglement spectra along a \(y\)-direction cut contributed by the two layers are identical but have a \(\pi\) momentum difference, as shown in Fig. 4 (a) for the optimized \(M=2\) GfPEPS with PBC and APBC (with a string along the \(x\) direction inserted), respectively. Applying the local unitary \(U(\Theta)=\exp[\sum_{i}\Theta(c^{\dagger}_{i,A}c_{i,B}-c^{\dagger}_{i,B}c_{i,A})/2]\) that acts inside each unit-cell, the family of Hamiltonians \(H_{\Theta}=U^{-1}(\Theta)H_{0}U(\Theta)\) is obtained. At \(\Theta=0\) the two \(C=1\) layers are independent and at \(\Theta=\pi/4\) the two layers are maximally mixed. The total Chern number and free fermion entanglement spectrum do not depend on \(\Theta\) since \(U(\Theta)\) is local. After Gutzwiller projection, a topological transition emerges along the path \(\Theta\in[0,\pi/4]\) (see SM). At \(\Theta<\Theta_{c}\) the projected state is the Abelian \(\mathrm{SU}(2)_{1}\times\mathrm{SU}(2)_{1}\) CSL since the \(\Theta=0\) gapped topological phase of the two decoupled layers should have a finite extension in parameter space. We focus on the \(\mathrm{SO}(5)_{1}\) CSL realized around the maximally mixed limit \(\Theta=\pi/4\), which was predicted by effective field theory and verified recently by a matrix product state calculation [48; 49]. Before investigating the topological properties of this non-Abelian CSL, we emphasize that correlations of the projected GfPEPS (Fig. 4(b)) show a crossover behavior similar to that in Fig. 3(a), pointing towards the universality of such artifacts in chiral PEPS. In the free fermion edge spectrum on the left boundary of the \(N_{y}=6\) cylinder, we denote \(\zeta^{\dagger}_{L,1,\sigma},\zeta^{\dagger}_{L,2,\sigma}\) as the first excited states with momentum difference \(\pi\) marked by the black circles in Fig. 4(a) for both PBC and APBC.
For the \(\mathrm{SO}(5)_{1}\) CSL, three MESs [49] can be constructed as \[|\psi_{\mathrm{Identity}}\rangle = P_{G}|\psi_{\mathrm{FS}}\rangle_{\mathrm{PBC}},\] \[|\psi_{\mathrm{Twist}}\rangle_{\sigma,\bar{\sigma}} = P_{G}\zeta^{\dagger}_{L,a,\sigma}\zeta_{R,a,\sigma}|\psi_{\mathrm{FS}}\rangle_{\mathrm{PBC}},\] \[|\psi_{\mathrm{Fermion}}\rangle = P_{G}\zeta^{\dagger}_{L,a,\uparrow}\zeta^{\dagger}_{L,a,\downarrow}\zeta^{\dagger}_{R,b,\uparrow}\zeta^{\dagger}_{R,b,\downarrow}|\psi_{\mathrm{FS}}\rangle_{\mathrm{APBC}}. \tag{5}\] Here \(a,b\in\{1,2\}\). For the twist sector, due to the annihilation operator \(\zeta_{R,a,\sigma}\), a single spin \(\bar{\sigma}\) edge mode is left at the right boundary. The numerical results for the projected states at \(\Theta=\pi/4\) are shown in Fig. 4 (c)-(e). With PBC and APBC, the dominant integer spin sectors which have half-filled edge modes are shown to be the identity and fermion sectors, respectively. The half-integer (odd charge parity) sector with PBC gives the twist sector. The \(\Delta k_{y}=\pi\) momentum splitting of the chiral branches originates from the existence of two edge branches shifted by \(\pi\) before projection. A key advantage of the fermionic approach is that all topological sectors can be explicitly constructed, which is not obvious to achieve within the bosonic PEPS framework [21]. The level counting of the numerically obtained ES shows a remarkable agreement with the CFT prediction, which is given in the SM. _Conclusion._--We have investigated the Gutzwiller projected Chern insulators in the GfPEPS representation. The topological obstruction for chiral GfPEPS only leads to very weak gossamer tails in the correlation functions of projected GfPEPSs and thus poses no practical obstruction for numerical simulations. Within our framework, topological sectors can be tuned conveniently by flux insertion without explicit control of edge mode populations, and the projected GfPEPSs provide faithful descriptions of the edge spectra in both Abelian and non-Abelian CSLs, which would be interesting to further analyze via a recently proposed generalized Gibbs ensemble approach [50]. In the future, it would also be interesting to use such families of projected GfPEPS as variational manifolds to attack quantum spin models. Note that the non-Abelian \(\mathrm{SO}(5)_{1}\) CSL does not seem to have a simple description with bosonic PEPS. One natural question emerges: for spin systems, is there a difference in the representative power of bosonic PEPS and fermionic PEPS? We leave this question to future research. Figure 4: Features of the unprojected and projected \(C=2\) states with \(M=2\) at \(\Theta=\pi/4\). (a) Edge spectrum of the free fermion correlation matrix on a width \(N_{y}=6\) cylinder with PBC and APBC. (b) Correlation functions after Gutzwiller projection computed at different \(\chi\). (c)-(e) The ES of the \(\mathrm{SO}(5)_{1}\) CSL at \(\Theta=\pi/4\). In (c) and (e) PBC is used while in (d) APBC is used. For (c)-(e) CTMRG boundary tensors with \(\chi=110\) are used. _Acknowledgement.--_We thank Hong-Hao Tu for insightful discussions. We implement non-Abelian symmetries using the TensorKit.jl package [51]. This work was granted access to the HPC resources of CALMIP center under the allocation 2017-P1231. J.-Y.C. acknowledges support by Open Research Fund Program of the State Key Laboratory of Low-Dimensional Quantum Physics (project No.
KF202207), Fundamental Research Funds for the Central Universities, Sun Yat-sen University (project No. 23qnpy60), a startup fund from Sun Yat-sen University (No. 74130-12230034), and the Innovation Program for Quantum Science and Technology 2021ZD0302100. This work was also supported by the TNTOP ANR-18-CE30-0026-01 grant awarded by the French Research Council.
2310.13860
Kubo-Anderson theory of polariton lineshape
We apply the Kubo-Anderson stochastic theory of molecular spectral lineshape to the case of polaritons formed in the collective strong coupling regime. We investigate both the fast and slow limits of the random frequency modulation of the emitter as well as the intermediate regime and show how the interplay between the characteristic timescales of the cavity and the molecular disorder is expressed in the observed polariton lineshapes. The analytical solution obtained for the slow limit is valid for any ratio between the inhomogeneous broadening of the molecules and the Rabi splitting, especially relevant for molecular polaritons where these two quantities can be of the same order of magnitude.
Clàudia Climent, Joseph E. Subotnik, Abraham Nitzan
2023-10-20T23:52:05Z
http://arxiv.org/abs/2310.13860v1
# Kubo-Anderson theory of polariton lineshape ###### Abstract We apply the Kubo-Anderson stochastic theory of molecular spectral lineshape to the case of polaritons formed in the collective strong coupling regime. We investigate both the fast and slow limits of the random frequency modulation of the emitter as well as the intermediate regime and show how the interplay between the characteristic timescales of the cavity and the molecular disorder is expressed in the observed polariton lineshapes. The analytical solution obtained for the slow limit is valid for any ratio between the inhomogeneous broadening of the molecules and the Rabi splitting, especially relevant for molecular polaritons where these two quantities can be of the same order of magnitude. _Introduction.--_ When the interaction between a photon and an electronic/vibrational transition is strong enough that their rate of energy exchange exceeds that of their respective losses, new hybrid light-matter states known as polaritons are formed [1]. One of the most interesting features of this strong light-matter coupling regime is the collective interaction of an ensemble of emitters with the electromagnetic field in optical cavities. Spectroscopically, this translates into an energetic (Rabi) splitting between the two polariton modes that scales with the square root of the number of emitters [2]. This collective response and the concept of a polariton as a coherent superposition of states with many different excited molecules naturally raise a question about the possible role of disorder. An interesting spectroscopic as well as numerical observation is that, in the presence of static disorder, and for a sufficiently large Rabi splitting, the polariton linewidth does not inherit the inhomogeneous broadening of the cavity-free emitters [3; 4; 5; 6]. Instead, the polariton broadening is exclusively due to the homogeneous linewidth of both of its constituents, the cavity and emitter resonances. Several works have investigated this subject and closely related matters in the past, mostly within the context of semiconductor microcavities [4; 5; 6; 7; 8; 9; 10]. Typically, numerical simulations were carried out to investigate the effect of static disorder on the polariton linewidth, while the effect of homogeneous broadening is usually treated phenomenologically. Despite recent interest in the role of disorder in polaritonic phenomena [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45], especially within the context of molecular polaritons (where molecular transition bands are quite broad in comparison to atomic systems or semiconductors), an analytic theory capable of describing the effect of both static and dynamic disorder on the polariton lineshape is missing. In this Letter, we address this point by extending the Kubo-Anderson theory of a stochastic molecular lineshape [46; 47; 48; 49; 50; 51] to the case of many molecules that respond collectively to an optical excitation and, via the same collective response, form polaritons when interacting with resonance cavity modes. Our theory yields simple analytical results in the slow and fast limits of the disorder dynamics, and can be evaluated numerically for the intermediate case.
The lineshape expression we obtain for the slow limit is valid for any ratio between the inhomogeneous broadening of the molecules and the Rabi splitting, especially relevant for molecular polaritons where the broadening due to static disorder can be a significant fraction of the Rabi splitting. _Kubo-Anderson theory of stochastic molecular lineshape.--_ The starting point of the Kubo-Anderson theory of stochastic lineshape is to model a molecular transition as a classical harmonic oscillator whose frequency randomly fluctuates about a central frequency \(\omega_{0}\) due to the interaction with a thermal environment [51]. The dynamics of such an oscillator is described by the following equation of motion: \[\dot{a}=-i(\omega_{0}+\delta\omega(t))a \tag{1}\] The main assumption of the model is that the stochastic time-dependent frequency fluctuation \(\delta\omega(t)\) caused by environmental motions is a random stationary Gaussian process characterized by an average \(\langle\delta\omega(t)\rangle=0\) and an autocorrelation function for which a common model is \(\langle\delta\omega(t)\delta\omega(t+\tau)\rangle=\Omega^{2}e^{-\tau/\tau_{c}}\), with a correlation time \(\tau_{c}=\frac{1}{\Omega^{2}}\int_{0}^{\infty}d\tau\,\langle\delta\omega(t)\delta\omega(t+\tau)\rangle\), where \(\Omega=\sqrt{\langle\delta\omega^{2}\rangle}\) is the amplitude of the random frequency modulations. The lineshape may then be obtained by calculating the Fourier transform of the autocorrelation function of the amplitude \(a\), \(I(\omega)=\int_{-\infty}^{\infty}dt\,e^{-i\omega t}\langle a^{*}(0)a(t)\rangle\), and different physical behaviors are encountered depending on the relative magnitude of the correlation time \(\tau_{c}\) and the amplitude of the frequency modulations \(\Omega\). Analytical solutions can be obtained in two extreme limits characterized by the magnitude of the dimensionless parameter \(\alpha\equiv\tau_{c}\Omega\): (i) \(\alpha\ll 1\) represents the fast limit, that is, the situation where the dynamics of the environment is fast relative to that of the oscillator. A Lorentzian lineshape \(I(\omega)=\frac{2\Gamma}{(\omega-\omega_{0})^{2}+\Gamma^{2}}\) is obtained in this limit with a half-width half-maximum \(\Gamma=\tau_{c}\Omega^{2}=\alpha\Omega\) that can be much narrower than the amplitude of the frequency modulation, a phenomenon known as motional narrowing that has been extensively investigated in NMR spectroscopy [52; 53]. [54] Note that the fast limit of the Kubo-Anderson theory is equivalent to the Markovian Bloch-Redfield theory, where it becomes clear that the intrinsic relaxation of the system (with contributions from both population relaxation and pure dephasing) is responsible for the so-called homogeneous broadening. (ii) \(\alpha\gg 1\) corresponds to the slow limit where the dynamics of the bath is slow compared to the inverse of the amplitude of the random frequency modulations. A Gaussian lineshape \(I(\omega)=\sqrt{\frac{2\pi}{\Omega^{2}}}e^{-\frac{(\omega-\omega_{0})^{2}}{2\Omega^{2}}}\) is obtained in this case, characterized by a width \(\Omega\) whose inhomogeneous character stems from the fact that each oscillator in an ensemble will experience different frequency shifts because of the slow dynamics of the environment, i.e., every oscillator will experience a "different" environment. _Model for polariton lineshape.--_ The Kubo-Anderson solution of the lineshape of randomly modulated molecules treats a single molecule interacting with the radiation field and takes an average over an ensemble of such molecules.
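Both limiting lineshapes, and the crossover between them, are simple to reproduce numerically for the cavity-free molecule. A minimal sketch, assuming the standard interpolation formula for the relaxation function \(\phi(t)\) (quoted later in the text as Eq. (25)) and computing \(I(\omega)\) as a one-sided cosine transform:

```python
import numpy as np

def kubo_phi(t, tau_c, Omega):
    """Relaxation function for Gaussian noise with exponential correlation
    (the interpolation formula of Eq. (25)); alpha = tau_c * Omega."""
    a2 = (tau_c * Omega) ** 2
    return np.exp(-a2 * (t / tau_c - 1.0 + np.exp(-t / tau_c)))

def lineshape(omega, omega0, tau_c, Omega, t_max=4000.0, nt=80_000):
    """I(omega) from <a*(0)a(t)> = e^{-i omega0 t} phi(t); phi is real, so the
    Fourier transform reduces to a one-sided cosine transform."""
    t, dt = np.linspace(0.0, t_max, nt, retstep=True)
    phi = kubo_phi(t, tau_c, Omega)
    return np.array([2.0 * np.sum(np.cos((w - omega0) * t) * phi) * dt
                     for w in omega])

w = np.linspace(-0.5, 0.5, 801)
I_fast = lineshape(w, 0.0, tau_c=0.2, Omega=0.1)    # alpha = 0.02: narrowed Lorentzian
I_slow = lineshape(w, 0.0, tau_c=500.0, Omega=0.1)  # alpha = 50: Gaussian of width Omega
```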
A naive extension to the molecule-in-cavity problem would be to consider an ensemble of systems, each comprising a single molecule and a cavity mode. However, such an extension of the original model would not be physical because a single molecule is not sufficient to reach the strong coupling regime and thus such a model would not correspond to any realizable experimental situation. By contrast, a physically sound extension of the Kubo-Anderson model consists of a cavity photon (\(a_{c}\)) coupled to \(N\) molecules (\(a_{j}\)), essentially a Tavis-Cummings model [55] with modulated molecular transition frequencies. Note that the (static) disordered Tavis-Cummings model has recently been investigated via the Green's function approach [30; 36; 37; 38]. Here we follow Anderson and Kubo and investigate both the static and dynamic disorder cases and the transition between them. We represent the molecules by classical harmonic oscillators which, under driving by an incident radiation field \(Fe^{-i\omega t}\) [56], evolve according to [57] \[\dot{a}_{c}(t) =-i\omega_{c}a_{c}-iu\sum_{j}^{N}a_{j}-\kappa a_{c}+iFe^{-i\omega t} \tag{2}\] \[\dot{a}_{j}(t) =-i(\omega_{j}+\delta\omega_{j}(t))a_{j}-iua_{c}-\gamma a_{j}\] where \(\omega_{c}\) is the photon frequency, \(\omega_{j}\) is the time-independent molecular transition frequency, \(\delta\omega_{j}(t)\) is the random frequency modulation of the molecular transition, \(u\) is the single-molecule coupling strength, and \(\kappa\) and \(\gamma\) are the dampings of the photon and molecules, respectively. [58] In the following we focus on the on-resonance situation with \(\omega_{0}\equiv\omega_{c}=\omega_{j}\), where the cavity response in the strong-coupling regime is characterized by two polariton peaks separated by the (collective) Rabi splitting \(\Omega_{R}=2\sqrt{N}u\). In steady state, the solutions oscillate with the driving frequency, i.e., \(a_{c}(t)=\bar{a}_{c}(t)e^{-i\omega t}\), \(a_{j}(t)=\bar{a}_{j}(t)e^{-i\omega t}\), so the equations of motion become \[\dot{\bar{a}}_{c}(t) =-i\bar{\omega}_{0}\bar{a}_{c}-iu\sum_{j}^{N}\bar{a}_{j}-\kappa\bar{a}_{c}+iF \tag{3a}\] \[\dot{\bar{a}}_{j}(t) =-i(\bar{\omega}_{0}+\delta\omega_{j}(t))\bar{a}_{j}-iu\bar{a}_{c}-\gamma\bar{a}_{j} \tag{3b}\] where we have defined \(\bar{\omega}_{0}\equiv\omega_{0}-\omega\). The average total energy of the system \(\langle E(t)\rangle=\omega_{0}\langle a_{c}^{*}a_{c}\rangle+\sum_{j}^{N}\omega_{0}\langle a_{j}^{*}a_{j}\rangle\) [59] satisfies in steady state \[\Big{\langle}\frac{dE}{dt}\Big{\rangle}=\Big{\langle}\frac{dE}{dt}\Big{\rangle}_{in}+\Big{\langle}\frac{dE}{dt}\Big{\rangle}_{out}=0 \tag{4}\] allowing us to identify the pumping and damping contributions as \[\Big{\langle}\frac{dE}{dt}\Big{\rangle}_{in} =i(F\langle\bar{a}_{c}^{*}\rangle-F^{*}\langle\bar{a}_{c}\rangle) \tag{5a}\] \[\Big{\langle}\frac{dE}{dt}\Big{\rangle}_{out} =-2\kappa\langle|\bar{a}_{c}|^{2}\rangle-2\gamma\sum_{j}^{N}\langle|\bar{a}_{j}|^{2}\rangle \tag{5b}\] The absorption lineshape may be obtained by evaluating, for instance, Eq. (5a), as a function of the incident frequency \(\omega\). To this end we only need to find \(\langle\bar{a}_{c}\rangle\). For a single molecule outside the cavity, this approach leads to the familiar Kubo-Anderson result. [60] We proceed by integrating Eq.
(3b) \[\bar{a}_{j}(t)=\bar{a}_{j}(t_{0})e^{-i\bar{\omega}_{0}(t-t_{0})-\gamma(t-t_{0})-i\int_{t_{0}}^{t}\delta\omega_{j}(t^{\prime\prime})\,dt^{\prime\prime}} \tag{6}\] \[-iu\int_{t_{0}}^{t}dt^{\prime}\,e^{-i\bar{\omega}_{0}(t-t^{\prime})-\gamma(t-t^{\prime})-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})\,dt^{\prime\prime}}\bar{a}_{c}(t^{\prime})\] where the first term corresponds to the transient and only the second contributes to the steady-state solution. Using it in Eq. (3a) we find \[\dot{\bar{a}}_{c}(t)=-(i\bar{\omega}_{0}+\kappa)\bar{a}_{c}+iF\] \[-u^{2}\sum_{j}^{N}\int_{-\infty}^{t}dt^{\prime}\,e^{-i\bar{\omega}_{0}(t-t^{\prime})-\gamma(t-t^{\prime})}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}}\bar{a}_{c}(t^{\prime}) \tag{7}\] Also, at steady state (i.e., \(t\rightarrow\infty\)), \(\bar{a}_{c}(t^{\prime})\) can be taken outside the integral by the following argument: when \(t^{\prime}\) is large (i.e., \(t^{\prime}\to t\)), \(\bar{a}_{c}(t^{\prime})\) is a constant, while when \(t^{\prime}\) is small (i.e., \(t^{\prime}\rightarrow-\infty\)), the term vanishes. This leads to \[\dot{\bar{a}}_{c}(t)=-(i\bar{\omega}_{0}+\kappa)\bar{a}_{c}+iF\] \[-u^{2}\bar{a}_{c}\int_{-\infty}^{t}dt^{\prime}\,e^{-i\bar{\omega}_{0}(t-t^{\prime})-\gamma(t-t^{\prime})}\sum_{j}^{N}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}} \tag{8}\] Irrespective of the timescale of the frequency modulation, \[\sum_{j}^{N}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}}\approx N\big{\langle}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}}\big{\rangle} \tag{9}\] is a reasonable approximation for large \(N\), leading to \[\dot{\bar{a}}_{c}(t)=-(i\bar{\omega}_{0}+\kappa)\bar{a}_{c}+iF\] \[-Nu^{2}\bar{a}_{c}\int_{-\infty}^{t}dt^{\prime}\,e^{-i\bar{\omega}_{0}(t-t^{\prime})-\gamma(t-t^{\prime})}\Big{\langle}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}}\Big{\rangle}, \tag{10}\] and because at steady state \(\langle\dot{\bar{a}}_{c}\rangle=0\), it follows that \[\langle\bar{a}_{c}\rangle=\frac{iF}{i\bar{\omega}_{0}+\kappa+Nu^{2}\int_{-\infty}^{t}dt^{\prime}\,e^{-i\bar{\omega}_{0}(t-t^{\prime})-\gamma(t-t^{\prime})}\Big{\langle}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}}\Big{\rangle}} \tag{11}\] Knowing that \(\Big{\langle}e^{-i\int_{t^{\prime}}^{t}\delta\omega_{j}(t^{\prime\prime})dt^{\prime\prime}}\Big{\rangle}\) is a function of \(t-t^{\prime}\) in the present model [60], we have \[\langle\bar{a}_{c}\rangle=\frac{iF}{i\bar{\omega}_{0}+\kappa+Nu^{2}\int_{0}^{\infty}dt\,e^{-i\bar{\omega}_{0}t-\gamma t}\phi(t)} \tag{12}\] with \[\phi(t)=\Big{\langle}e^{i\int_{0}^{t}\delta\omega_{j}(t^{\prime})dt^{\prime}}\Big{\rangle} \tag{13}\] which can be evaluated as in the Kubo-Anderson work [47; 48]. Eq. (12) will be our starting point to investigate the two limiting cases where the molecular transition is either homogeneously or inhomogeneously broadened, as well as the intermediate regime. As a final remark, note that in order to obtain a tractable general expression like Eq. (12) leading to analytical results for both the fast and slow limits (vide infra), it was crucial to use Eq. (9) before taking ensemble averages. If, instead, we had set \(\dot{\bar{a}}_{c}=0\) in Eq. (8) for the slow case and taken the average over realizations before using Eq.
(9), we would have obtained a far more complex expression for \(\langle\bar{a}_{c}\rangle\) in the slow limit that would have required some approximation in order to be solved. _Fast limit._-- In the fast modulation limit \(\phi(t)=e^{-\Gamma t}\) [47; 48]. Eq. (12) then leads to \[\langle\bar{a}_{c}\rangle=\frac{iF}{i\bar{\omega}_{0}+\kappa+\frac{Nu^{2}}{i\bar{\omega}_{0}+\gamma_{m}}} \tag{14}\] where \(\gamma_{m}=\gamma+\Gamma\) is the total relaxation rate of the molecule, with pure dephasing rate \(\Gamma\) and lifetime broadening \(\gamma\). From the driving term in Eq. (5a) we find the spectrum to have a Lorentzian profile \[I(\omega)=|F|^{2}\frac{2\kappa|i\bar{\omega}_{0}+\gamma_{m}|^{2}+2\gamma_{m}Nu^{2}}{|(i\bar{\omega}_{0}+\kappa)(i\bar{\omega}_{0}+\gamma_{m})+Nu^{2}|^{2}} \tag{15}\] with the poles located at \[\omega=\omega_{0}-\frac{i}{2}(\gamma_{m}+\kappa)\pm\sqrt{Nu^{2}-\Big{(}\frac{\gamma_{m}-\kappa}{2}\Big{)}^{2}} \tag{16}\] By assuming \(\sqrt{N}u\gg(\gamma_{m}-\kappa)/2\), which is reasonable since we are interested in the collective strong coupling regime, we find the two polariton peaks at \(\omega_{0}\pm\sqrt{N}u-\frac{i}{2}(\gamma_{m}+\kappa)\), where they are split by the collective Rabi splitting \(\Omega_{R}\) and each peak inherits half of the original broadening of the cavity and molecular resonances. In particular when \(\gamma_{m}=\kappa\), Eq. (15) becomes \[I(\omega)=|F|^{2}\Bigg{(}\frac{\gamma_{m}}{(\bar{\omega}_{0}-\sqrt{N}u)^{2}+\gamma_{m}^{2}}+\frac{\gamma_{m}}{(\bar{\omega}_{0}+\sqrt{N}u)^{2}+\gamma_{m}^{2}}\Bigg{)} \tag{17}\] _Slow limit._-- A shortcut to explore the effect of static disorder on polariton broadening is to use the fact that in the slow modulation limit, the \(\delta\omega_{j}\) are time-independent. Hence, \(\dot{\bar{a}}_{c}=0\) and \(\dot{\bar{a}}_{j}=0\), so from Eq. (3) we have \[\bar{a}_{c}=\frac{iF}{i\bar{\omega}_{0}+\kappa+\sum_{j}^{N}\frac{u^{2}}{i(\bar{\omega}_{0}+\delta\omega_{j})+\gamma}} \tag{18}\] and the lineshape for a given realization (using Eq. (5a) without the ensemble average) is \[I(\omega)=|F|^{2}\frac{2\kappa}{\Big{(}\bar{\omega}_{0}-\sum_{j}^{N}\frac{u^{2}}{\bar{\omega}_{0}+\delta\omega_{j}}\Big{)}^{2}+\kappa^{2}} \tag{19}\] where for the sake of simplicity we have neglected homogeneous broadening (\(\gamma=0\)). To make progress we expand the denominator for \(\delta\omega_{j}/\bar{\omega}_{0}\ll 1\) (which is satisfied in the vicinity of the polariton frequencies where \(\bar{\omega}_{0}\sim\pm\sqrt{N}u\) for strong enough coupling) and find \[I(\omega)=|F|^{2}\frac{2\kappa}{\Big{(}\bar{\omega}_{0}-\frac{Nu^{2}}{\bar{\omega}_{0}}+\frac{u^{2}}{\bar{\omega}_{0}^{2}}W_{N}\Big{)}^{2}+\kappa^{2}} \tag{20}\] where we have defined \(W_{N}\equiv\sum_{j}^{N}\delta\omega_{j}\). This random number is characterized by the average \(\langle W_{N}\rangle=0\) and variance \(\langle\delta W_{N}^{2}\rangle=N\langle\delta\omega_{j}^{2}\rangle\). To understand the effect of static disorder on the position and broadening of the peaks we must analyze the zeros of the following term: \[\bar{\omega}_{0}-\frac{Nu^{2}}{\bar{\omega}_{0}}+\frac{u^{2}}{\bar{\omega}_{0}^{2}}W_{N}=0 \tag{21}\] We proceed to solve the above expression for \(\bar{\omega}_{0}=\bar{\omega}_{0}^{0}+\Delta\bar{\omega}_{0}\) where \(\bar{\omega}_{0}^{0}=\pm\sqrt{N}u\) and the effect of static disorder is contained in \(\Delta\bar{\omega}_{0}\).
To lowest order in \(\Delta\bar{\omega}_{0}\) we find \[\Delta\bar{\omega}_{0}=-\frac{u^{2}}{2\bar{\omega}_{0}^{0}}W_{N} \tag{22}\] The variance of this term represents the effect that static disorder has on the broadening and is given by \[\langle\delta\Delta\bar{\omega}_{0}^{2}\rangle=\Big{(}\frac{u^{2}}{2\bar{\omega}_{0}^{0}}\Big{)}^{2}\langle\delta W_{N}^{2}\rangle\sim\frac{\langle\delta\omega_{j}^{2}\rangle}{N} \tag{23}\] We see that \(\langle\delta\Delta\bar{\omega}_{0}^{2}\rangle^{1/2}\) scales like \(1/\sqrt{N}\), confirming that in the collective regime, polaritons are immune to broadening due to static disorder for sufficiently large Rabi splitting. Note that the \(1/\sqrt{N}\) scaling result was recently obtained with a more involved treatment [36]. A general expression for the lineshape can be obtained using \(\phi(t)=e^{-\frac{1}{2}\Omega^{2}t^{2}}\) [47; 48] in Eq. (12) [60], which leads to \[I(\omega)=|F|^{2}\frac{2(\kappa+\tilde{\gamma})}{(\bar{\omega}_{0}+\Delta)^{2}+(\kappa+\tilde{\gamma})^{2}} \tag{24}\] where \(\tilde{\gamma}(\omega)\equiv Nu^{2}\sqrt{\frac{\pi}{2\Omega^{2}}}e^{-\frac{\bar{\omega}_{0}^{2}}{2\Omega^{2}}}\) and \(\Delta(\omega)\equiv-\tilde{\gamma}\text{Erfi}\big{[}\frac{\bar{\omega}_{0}}{\sqrt{2\Omega^{2}}}\big{]}\), with Erfi denoting the imaginary error function, where for simplicity we have disregarded the intrinsic homogeneous broadening \(\gamma\). This lineshape is Lorentzian, and in addition to the cavity broadening \(\kappa\) there is one of molecular origin, \(\tilde{\gamma}\). Note that Eq. (24) is valid for any ratio of the Rabi splitting and inhomogeneous broadening, \(\Omega_{R}/\Omega\), and therefore can describe the lineshape of molecular polaritons for the important and common case that the inhomogeneous broadening of the molecular species is a considerable fraction of the Rabi splitting. In the limit when \(\Omega_{R}/\Omega\gg 1\), \(\tilde{\gamma}\) vanishes at the polariton frequencies and only the cavity broadening remains [60], in agreement with Eq. (20) and Eq. (23). In Fig. 1 we plot analytical results for the spectrum in the slow modulation limit with a varying ratio of the Rabi splitting to the amplitude of the random frequency modulation, \(\Omega_{R}/\Omega\). We see that while outside the cavity the spectrum has a Gaussian profile, the polariton lineshape in the strong coupling regime is much narrower, with a Lorentzian lineshape whose width is reduced as the Rabi splitting increases relative to the inhomogeneous broadening. Note that for \(\Omega_{R}/\Omega\approx 2\) the two polariton peaks are already visible despite the Rabi splitting being only twice the inhomogeneous broadening. This is a common situation in molecular polaritons; for instance, in [61] the molecular band had a Gaussian-like profile with a FWHM\(\sim 530\) meV, which corresponds to \(\Omega\sim 300\) meV, and the Rabi splitting was \(\sim 600\) meV. Also, note that for \(\Omega_{R}/\Omega\approx 4\) (pink line) the narrowing in the lineshape is already quite noticeable. We mention in passing that for small \(\Omega_{R}/\Omega\), the two polariton bands arise from many eigenstates with a small cavity photon contribution, and not two clean polaritons, as already extensively discussed in the literature [21; 23; 44; 61].

_Intermediate regime.--_ We here explore the intermediate regime where the correlation time of the random frequency modulations is comparable to the inverse of their amplitude, i.e., \(\alpha\equiv\tau_{c}\Omega\approx 1\).
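Before turning to the intermediate regime, the two limiting lineshapes can be made concrete with a minimal numerical sketch. The snippet below is our illustration rather than part of the original analysis: the values of \(N\), \(u\), \(\gamma_{m}\), and \(F\) are assumptions chosen for display, while \(\kappa\) and \(\Omega\) follow the parameters of Fig. 1. It evaluates the fast-limit profile of Eq. (15) and the slow-limit profile of Eq. (24), and confirms that both peak near the polariton frequencies \(\pm\sqrt{N}u\).

```python
import numpy as np
from scipy.special import erfi

# Illustrative parameters (assumed, except kappa and Omega from Fig. 1)
N, u = 100, 0.05              # collective coupling: sqrt(N)*u = 0.5
kappa = 0.02                  # cavity linewidth
Omega = 0.1                   # amplitude of the random frequency modulation
gamma_m = 0.02                # total molecular relaxation rate (fast limit)
F = 1.0                       # driving amplitude
w = np.linspace(-1.0, 1.0, 4001)   # detuning from omega_0

# Fast limit, Eq. (15): Lorentzian polariton doublet
num = 2*kappa*np.abs(1j*w + gamma_m)**2 + 2*gamma_m*N*u**2
den = np.abs((1j*w + kappa)*(1j*w + gamma_m) + N*u**2)**2
I_fast = np.abs(F)**2 * num / den

# Slow limit, Eq. (24): static disorder, with gamma = 0
gam = N*u**2*np.sqrt(np.pi/(2*Omega**2))*np.exp(-w**2/(2*Omega**2))
Delta = -gam*erfi(w/np.sqrt(2*Omega**2))
I_slow = np.abs(F)**2 * 2*(kappa + gam)/((w + Delta)**2 + (kappa + gam)**2)

for I, label in [(I_fast, "fast limit"), (I_slow, "slow limit")]:
    print(f"{label}: peak at |detuning| = {abs(w[np.argmax(I)]):.3f}")
```

With these numbers both profiles peak at \(|\bar{\omega}_{0}|\approx 0.5=\sqrt{N}u\), and the slow-limit peak sits slightly outside that value because of \(\Delta\), as discussed below.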
To investigate the transition between the slow and fast limits we numerically calculate the spectrum with Eq. (5a) and Eq. (12). In this general case the ensemble averaged quantity is given by [48; 62] \[\phi(t)=\exp\Big{[}-\alpha^{2}\Big{(}\frac{t}{\tau_{c}}-1+e^{-t/\tau_{c}}\Big{)}\Big{]} \tag{25}\] with \(\alpha\equiv\tau_{c}\Omega\) determining the transition between the fast (\(\alpha\ll 1\)) and slow (\(\alpha\gg 1\)) limits. In Fig. 2 we plot numerical results for varying parameter \(\alpha\), which controls the timescale of the frequency modulations relative to their amplitude. In all cases the spectrum smoothly transitions from the dynamic to the static disorder limit. For the two limits, \(\alpha=0.02\) and \(\alpha=50.0\), we also plot (in black dashed lines) the analytical spectrum, which overlaps with the numerical results. While outside the cavity the lineshape is very different in the fast and slow limits (Lorentzian vs Gaussian), such a difference is reduced inside the cavity as \(\Omega_{R}/\Omega\) increases. For instance, in Fig. 2(c), the lineshape is quite narrow regardless of the \(\alpha\) parameter. Also, in these plots we can see that the frequency of the polariton peaks in the slow limit is slightly larger than that in the fast limit (\(\pm\sqrt{N}u\)), reflecting the effect of \(\Delta\) in Eq. (24). Only when \(\Omega_{R}\gg\Omega\) do the polariton frequencies for the slow limit coincide with those of the fast limit.

Figure 1: (a) Analytical spectrum for the slow modulation limit in the non-cavity case [60] and for the cavity case (Eq. (24)) with increasing ratio between the Rabi splitting and the amplitude of the random frequency modulation, \(\Omega_{R}/\Omega\), where \(\Omega_{R}=2\sqrt{N}u\). Parameters: \(\omega_{0}=0\), \(\kappa=0.02\), \(\gamma=0\), \(\Omega=0.1\), \(\alpha=50\).

_Conclusions.--_ In this Letter we have applied the Kubo-Anderson theory of the stochastic lineshape to a model problem of coupled, driven and damped (classical) harmonic oscillators describing polaritons formed in the strong coupling regime. We have derived analytic expressions for the polariton lineshape in the limits of fast and slow disorder of the molecular transition frequency and numerically explored the intermediate regime as well. Our theory predicts that polaritons inherit half the original homogeneous broadening of the cavity and molecular resonance, while static disorder does not contribute to their broadening for large enough Rabi splitting, in agreement with experimental observations and previous numerical calculations. Our results also provide an analytical expression for the polariton lineshape valid for any degree of static disorder relative to the Rabi splitting, which is especially relevant within the context of molecular polaritons, where the inhomogeneous broadening of the molecular transition can be a significant fraction of the Rabi splitting.

C.C. thanks the Vagelos Institute for Energy, Science, and Technology (VIEST) for a postdoctoral fellowship that initially supported this work. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101029374. This work has been supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award No. DE-SC0019397 (J.E.S.). The research of A.N. was supported by the U.S. National Science Foundation (grant no. CHE1953701).
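As a closing numerical note on Eq. (25), the following short sketch (ours; the grid and parameter choices are arbitrary) verifies the two limits of the Kubo-Anderson correlation function used above: \(\phi(t)\to e^{-\Gamma t}\) with \(\Gamma=\Omega^{2}\tau_{c}\) for \(\alpha\ll 1\), and \(\phi(t)\to e^{-\Omega^{2}t^{2}/2}\) for \(\alpha\gg 1\).

```python
import numpy as np

def phi(t, tau_c, Omega):
    """Kubo-Anderson correlation function, Eq. (25)."""
    a = tau_c * Omega                  # alpha sets fast vs slow behaviour
    return np.exp(-a**2 * (t/tau_c - 1.0 + np.exp(-t/tau_c)))

Omega = 0.1
t = np.linspace(0.0, 400.0, 200001)

# Fast limit (alpha = 0.02): phi approaches exp(-Gamma*t), Gamma = Omega^2*tau_c
print(np.max(np.abs(phi(t, 0.2, Omega) - np.exp(-Omega**2*0.2*t))))       # ~4e-4

# Slow limit (alpha = 50): phi approaches the Gaussian exp(-Omega^2*t^2/2)
print(np.max(np.abs(phi(t, 500.0, Omega) - np.exp(-0.5*Omega**2*t**2))))  # ~4e-3
```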
2305.10107
Euclid preparation. XXIX. Water ice in spacecraft part I: The physics of ice formation and contamination
Molecular contamination is a well-known problem in space flight. Water is the most common contaminant and alters numerous properties of a cryogenic optical system. Too much ice means that Euclid's calibration requirements and science goals cannot be met. Euclid must then be thermally decontaminated, a long and risky process. We need to understand how iced optics affect the data and when a decontamination is required. This is essential to build adequate calibration and survey plans, yet a comprehensive analysis in the context of an astrophysical space survey has not been done before. In this paper we look at other spacecraft with well-documented outgassing records, and we review the formation of thin ice films. A mix of amorphous and crystalline ices is expected for Euclid. Their surface topography depends on the competing energetic needs of the substrate-water and the water-water interfaces, and is hard to predict with current theories. We illustrate that with scanning-tunnelling and atomic-force microscope images. Industrial tools exist to estimate contamination, and we must understand their uncertainties. We find considerable knowledge errors on the diffusion and sublimation coefficients, limiting the accuracy of these tools. We developed a water transport model to compute contamination rates in Euclid, and find general agreement with industry estimates. Tests of the Euclid flight hardware in space simulators did not pick up contamination signals; our in-flight calibrations observations will be much more sensitive. We must understand the link between the amount of ice on the optics and its effect on Euclid's data. Little research is available about this link, possibly because other spacecraft can decontaminate easily, quenching the need for a deeper understanding. In our second paper we quantify the various effects of iced optics on spectrophotometric data.
Euclid Collaboration, M. Schirmer, K. Thürmer, B. Bras, M. Cropper, J. Martin-Fleitas, Y. Goueffon, R. Kohley, A. Mora, M. Portaluppi, G. D. Racca, A. D. Short, S. Szmolka, L. M. Gaspar Venancio, M. Altmann, Z. Balog, U. Bastian, M. Biermann, D. Busonero, C. Fabricius, F. Grupp, C. Jordi, W. Löffler, A. Sagristà Sellés, N. Aghanim, A. Amara, L. Amendola, M. Baldi, C. Bodendorf, D. Bonino, E. Branchini, M. Brescia, J. Brinchmann, S. Camera, G. P. Candini, V. Capobianco, C. Carbone, J. Carretero, M. Castellano, S. Cavuoti, A. Cimatti, R. Cledassou, G. Congedo, C. J. Conselice, L. Conversi, Y. Copin, L. Corcione, F. Courbin, A. Da Silva, H. Degaudenzi, A. M. Di Giorgio, J. Dinis, F. Dubath, X. Dupac, S. Dusini, S. Farrens, S. Ferriol, M. Frailis, E. Franceschi, M. Fumana, S. Galeotta, B. Garilli, W. Gillard, B. Gillis, C. Giocoli, S. V. H. Haugan, H. Hoekstra, W. Holmes, F. Hormuth, A. Hornstrup, K. Jahnke, S. Kermiche, A. Kiessling, M. Kilbinger, T. Kitching, M. Kunz, H. Kurki-Suonio, S. Ligori, P. B. Lilje, I. Lloro, E. Maiorano, O. Mansutti, O. Marggraf, K. Markovic, F. Marulli, R. Massey, E. Medinaceli, S. Mei, Y. Mellier, M. Meneghetti, E. Merlin, G. Meylan, M. Moresco, L. Moscardini, E. Munari, R. Nakajima, S. -M. Niemi, J. W. Nightingale, T. Nutma, C. Padilla, S. Paltani, F. Pasian, V. Pettorino, S. Pires, G. Polenta, M. Poncet, L. A. Popa, F. Raison, A. Renzi, J. Rhodes, G. Riccio, E. Romelli, M. Roncarelli, E. Rossetti, R. Saglia, D. Sapone, B. Sartoris, P. Schneider, A. Secroun, G. Seidel, S. Serrano, C. Sirignano, G. Sirri, J. Skottfelt, L. Stanco, P. Tallada-Crespí, A. N. Taylor, I. Tereno, R. Toledo-Moreo, I. Tutusaus, E. A. Valentijn, L. Valenziano, T. Vassallo, Y. Wang, J. Weller, A. Zacchei, J. Zoubian, S. Andreon, S. Bardelli, P. Battaglia, E. Bozzo, C. Colodro-Conde, M. Farina, J. Graciá-Carpio, E. Keihänen, V. Lindholm, D. Maino, N. Mauri, N. Morisset, V. Scottez, M. Tenti, E. Zucca, Y. Akrami, C. Baccigalupi, M. Ballardini, A. Biviano, A. Blanchard, A. S. Borlaff, C. Burigana, R. Cabanac, A. Cappi, C. S. Carvalho, S. Casas, G. Castignani, T. Castro, K. C. Chambers, A. R. Cooray, J. Coupon, H. M. Courtois, J. -G. Cuby, S. Davini, G. De Lucia, G. Desprez, S. Di Domizio, H. Dole, J. A. Escartin, S. Escoffier, I. Ferrero, L. Gabarra, K. Ganga, J. Garcia-Bellido, K. George, F. Giacomini, G. Gozaliasl, H. Hildebrandt, J. J. E. Kajava, V. Kansal, C. C. Kirkpatrick, L. Legrand, P. Liebing, A. Loureiro, G. Maggio, M. Magliocchetti, G. Mainetti, R. Maoli, S. Marcin, M. Martinelli, N. Martinet, C. J. A. P. Martins, S. Matthew, M. Maturi, L. Maurin, R. B. Metcalf, P. Monaco, G. Morgante, S. Nadathur, A. A. Nucita, L. Patrizii, J. E. Pollack, V. Popa, D. Potter, M. Pöntinen, A. G. Sánchez, Z. Sakr, A. Schneider, M. Sereno, A. Shulevski, P. Simon, J. Steinwagner, R. Teyssier, J. Valiviita
2023-05-17T10:15:37Z
http://arxiv.org/abs/2305.10107v2
# Euclid preparation. XXIX. Water ice in spacecraft part I: The physics of ice formation and contamination

###### Abstract

Context: Molecular contamination is a well-known problem in space flight. Water is the most common contaminant and alters numerous properties of a cryogenic optical system such as _Euclid_'s. Too much ice means that _Euclid_'s calibration requirements and science goals cannot be met; the spacecraft must then be thermally decontaminated, a long and risky process. We need to understand how iced optics affect the data and when a decontamination is required. This is essential to build adequate calibration and survey plans, yet a comprehensive analysis in the context of an astrophysical space survey has not been done before. To derive a calibration and decontamination strategy, we need to understand the link between the amount of ice in the optics and its effect on the data. There is little research about this, possibly because other spacecraft can decontaminate more easily, quenching the need for a deeper understanding. In our second paper, we quantify the impact of iced optics on _Euclid_'s data.

Space vehicles, Space vehicles: instruments, Telescopes, Molecular processes, Solid state: volatile

## 1 Introduction

_Euclid_ will survey \(15\,000\,\mathrm{deg}^{2}\) of extragalactic sky (Euclid Collaboration: Scaramella et al., 2022) during its nominal six-year mission (Laureijs et al., 2011; Racca et al., 2016). To achieve its cosmology science goals with measurements of weak lensing and galaxy clustering, _Euclid_ must maintain pristine image quality, and a relative spectrophotometric flux accuracy of about 1% in its optical and near-infrared (NIR) channels. _Euclid_ will observe from the Sun-Earth Lagrange point L2, which offers exceptional thermal stability and a well-known space environment. Yet, even at L2 _Euclid_ will degrade over time due to space weathering. Radiation damage, dust, and meteoroid1 impacts will degrade protective thermal blankets and their efficacy (Engelhart et al., 2017; Plis et al., 2019), which can change the electronic and optical performance. Detectors directly suffer from radiation that increases the charge transfer inefficiency of charge-coupled devices (CCDs; Massey et al., 2014), and it may decrease the quantum efficiency of some photo-diode architectures (Sun et al., 2020; Crouzet et al., 2020). Dust and meteoroids increase the scattering and transmission loss of optical elements through surface pitting (Rodmann et al., 2019; McElwain et al., 2023); ionising radiation has a similar effect on optical surfaces, although on smaller physical scales (Roussel et al., 2016; Simonetto et al., 2020). These environmental factors are well known at L2. _Euclid_'s calibration program is well suited to account for and correct them, yielding accurate and consistent survey data. Atomic oxygen, the prime cause for spacecraft degradation in low-Earth orbits (Banks et al., 2003; Palusinski et al., 2009; Samwell, 2014), is fortunately not a problem at L2.

Footnote 1: The IAU discouraged the use of the term ‘micrometeoroid’ beginning in 2017. Dust particles are smaller than \(30\,\mu\mathrm{m}\), and meteoroids are larger.

However, space weathering is not the only adversary. Ongoing contamination also degrades the performance of optics, solar panels, thermal control, and other sub-systems (e.g. Green, 2001; Zhao et al., 2009; Smith et al., 2012; Hui et al., 2022). We distinguish between particulate and molecular contamination, with the latter being composed of volatile (for example H\({}_{2}\)O and CO) and non-volatile substances, such as polymers. In this paper, we mostly focus on molecular contamination by water ice.
In terms of prevention and minimisation of contamination, _Euclid_ is the best-designed spacecraft by the European Space Agency (ESA) to date. Water from material outgassing is expected to be the only relevant source of contamination, possibly forming thin ice films on optical surfaces throughout the mission. On ground, contamination is an inherent part of construction and launch, and subject to contamination control plans (Kimoto, 2017; Luey et al., 2018; Patel et al., 2019; Abeel et al., 2022). In the vacuum of space, contamination is driven by material outgassing (Chiggiato, 2020). Quartz crystal microbalances (QCMs) can be used to detect surface contamination down to a few \(10^{-9}\,\mathrm{g}\,\mathrm{cm}^{-2}\)(Dirri, 2016). In the case of water, this corresponds to a molecular monolayer with a 10% filling factor. Solar System exploration missions require additional decontamination prior to launch to preserve the pristine states of the bodies they visit, and those of any samples returned to Earth (Willson et al., 2018; Chan et al., 2020). Even though spacecraft materials can be degassed (baked out) to reduce outgassing, water and other trace materials are recaptured until launch, on timescales of days (Scialdone, 1993) and down to seconds (Postberg et al., 2009). The outgassing rate depends, among other factors, on the material's molecular structure, chemical composition, surface finish, coatings, temperature, and mass and mobility of the dissolved contaminants. Outgassing rates for spacecraft materials are usually determined at room temperature, and must be extrapolated to cryogenic conditions. This extrapolation is highly uncertain due to the considerable dependence of diffusion and sublimation coefficients on temperature. Nano-scale restructuring processes in the materials during cool-down also play a role. Accurate contamination forecasts are therefore hard and require considerable effort well beyond the scope of this paper (e.g. Brieda et al., 2022, for the _James Webb_ Space Telescope; JWST). To counter contamination, temperatures in many spacecraft can be increased locally - for example for a single lens - or globally to sublime volatile contaminants. A global decontamination, however, implies a major thermal shock to the spacecraft; it may alter electronic and optical properties, and may even lead to additional contamination (e.g. Haemmerle and Gerhard, 2006; Liebing et al., 2018). In the case of _Euclid_, on-board heating power is insufficient for a full decontamination; partial Sun exposure of the external telescope baffle is required, implying further risks. A full decontamination cycle for _Euclid_ lasts about one month, including warm-up, cool-down, and recalibration, and only \(1-2\) days are spent at maximum temperature to allow the sublimates to find their way out of the spacecraft. Given a mission duration of six years, this is a very costly procedure. Volatile and non-volatile molecular contamination has caused throughput losses of 20% and more in some Earth-observation satellites, posing a substantial challenge for accurate and consistent long-term environmental and climate monitoring. To this end, the Global Space-Based Intercalibration System (GSICS; Goldberg et al., 2011) has established terrestrial and bright celestial targets as a reference, used by numerous Earth-observation satellites for cross-calibration and correction. Yet surprisingly, little is known about the effect of iced optics on astrophysical observations. 
Perhaps this is because local decontamination comes as an easy fix in many spacecraft, readily and frequently applied whenever necessary, or because their calibration requirements are more relaxed; we give examples in Sect. 2 and links to other works in Appendix A. _Euclid_, however, cannot heat individual optical elements alone, nor does it carry internal QCMs to monitor contamination directly. To maintain a spectrophotometric accuracy of 1% throughout its lifetime, _Euclid_ has to rely on its own survey and self-calibration data. In this way, we can detect and correct for water ice until a decontamination is required. In this context we need to understand (1) the physical properties of thin ice films, (2) their surface topography, (3) their formation on optical substrates, and (4) their temporal evolution in space conditions. We also need to investigate the outgassing and sublimation fluxes in _Euclid_, and how accurately they can be known in advance. These points are addressed in the present paper. We present a comprehensive analysis of ice contamination in spacecraft from the bottom-up perspective. This has allowed us to capture, understand, and counter _Euclid_'s response to ice contamination. In Sect. 2 we summarise the molecular contamination experienced by other spacecraft, building a picture of what _Euclid_ might encounter. In Sect. 3 we review the types of water ice that exist in a vacuum at cryogenic temperatures, how they transform into each other, and how their structure depends on the wetting properties of the substrates. In Sect. 4 we review the physics of diffusion, sublimation, and adsorption of water molecules. We also built a simple transport model to estimate the water exchange between surfaces, and thus the contamination rate in _Euclid_'s payload module (PLM). In Sect. 5 we present results about outgassing from _Euclid_'s thermal vacuum tests, and we conclude in Sect. 6. In our second paper we investigate the optical properties of thin ice films and their impact on the spectrophotometric data taken by _Euclid_. Specifically, we look at absorption, interference, scattering, polarisation, apodisation, and phase shifts, with each uniquely influencing _Euclid_'s spectrophotometric data. We have developed strategies to detect, monitor, and - if possible - correct for these effects. Only then are we in a position to determine how much ice _Euclid_ can tolerate on its optics to achieve its cosmological science, and when a decontamination is in order.

## 2 Lessons learnt from other spacecraft

Material outgassing (Chiggiato 2020) has troubled spacecraft already in the _Mercury_, _Gemini_ and _Apollo_ programs (Leger & Bricker 1972). Numerous experiments were dedicated to it, such as on the _Mir_ space station (Wilkes & Zwiener 1999), the Midcourse Space Experiment (MSX, Uy et al. 1998), and the International Space Station (Palusinski et al. 2009). Astrophysical spacecraft have added further insight into contamination and its impressively broad spectrum of effects. Solar System exploration missions are particularly useful, often carrying pressure sensors and mass spectrometers to analyse the interplanetary gas and dust, and thus also the spacecraft's own outgassing constituents. In this section, we summarise the lessons learnt from some missions with a well-documented outgassing record. These are of great importance for our preparation of suitable calibration and decontamination plans.
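As a back-of-the-envelope check of the QCM sensitivity quoted in Sect. 1 (a few \(10^{-9}\) g cm\({}^{-2}\), corresponding to a molecular monolayer with a 10% filling factor), one can combine the \(\sim\)0.28 nm size of a water molecule (Sect. 3) with its molecular mass. A minimal sketch:

```python
# One water molecule per (0.28 nm)^2 defines a complete monolayer.
m_h2o = 18.015 * 1.661e-24        # molecular mass of water [g]
d = 0.28e-7                       # approximate molecule size [cm]
monolayer = m_h2o / d**2          # full-monolayer surface density [g cm^-2]
print(f"full monolayer: {monolayer:.1e} g cm^-2")      # ~3.8e-8
print(f"10% filling   : {0.1*monolayer:.1e} g cm^-2")  # a few 1e-9, the QCM limit
```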
Appendix A has a list of references and short summaries for a larger number of astrophysical and Earth-observation satellites.

### Multi-layer insulation (MLI) thermal blankets

Spacecraft have both hot and cold sides, in particular in the inner Solar System, and are wrapped in MLI (Cepeda-Rizo et al. 2021) blankets to ensure stable operating temperatures. Further thermal shielding may be needed internally to accommodate instrument needs, for example in _Euclid_ the Near Infrared Spectrometer and Photometer (NISP, Maciaszek et al. 2016) has its own blanket. MLI consists of multiple - often ten or more - thin sheets of a high-performance polymer such as Kapton - a polyimide - coated with aluminium or gold (Fig. 1). Outer layers may be carbon charged to suppress optical straylight ('black Kapton'; used for NISP). The individual MLI sheets are physically separated by a thin netting to minimise contact and thus thermal conductivity. To avoid rupture due to the rapid depressurisation during launch, the MLI may have venting holes or is perforated. Similar to other polymers, Kapton - in particular its amorphous versions - can trap large amounts of water due to its high gas solubility (e.g. Yang et al. 1985; Sharma et al. 2018); the dissolved water then also has great mobility (Chiggiato 2020). After degassing at 125\({}^{\circ}\) C for 24 h in a vacuum, Kapton quickly recaptures 0.6-0.7% of its initial total mass in terms of water, during 24 h at 20\({}^{\circ}\) C and in 55% relative humidity (see the National Aeronautics and Space Administration's outgassing database2). Further water intake appears to stop after this period (Scialdone 1993). Due to its common application in spacecraft, MLI is arguably the most important source of water contamination; it may also release other contaminants due to space weathering (Chen et al. 2016). Venting perforations - if present - facilitate contamination further, and the MLI may not deplete even after a decade in space (see below).

Footnote 2: [https://outgassing.nasa.gov/](https://outgassing.nasa.gov/)

Figure 1: Structure of a MLI thermal blanket, the main source of water contamination in spacecraft. Figure credit: John Rosise of Aerospace Educational Development Program (AEDP), CC BY-SA 2.5 license.

For completeness, we note that MLI is not the only possible carrier of water and other contaminants in spacecraft. Noteworthy are honeycomb structures (Epstein & Ruth 1993), often comprising an aluminium core with carbon-fibre reinforced polymers (CFRP) that - depending on their design - might trap a considerable volume of water. While water is the most frequent contaminant, other substances such as carbonates may be more troublesome for specific instruments. For _Euclid_, water is expected to cause 90-95% of the overall transmission loss due to molecular and particulate contamination. We concentrate on water from Sect. 3 onwards, with a short excursion in Sect. 4.6.4 where we address contamination from _Euclid_'s hydrazine thrusters.

### Hubble Space Telescope

In its early years, the _Hubble_ Space Telescope (HST) carried the Wide Field and Planetary Camera WFPC1, and its successor WFPC2. Both cameras suffered greatly from contamination in the UV (MacKenty et al. 1993; Holtzman et al. 1995). Photopolymerisation was the cause for the heavy non-volatile contamination of WFPC1 (Tveekrem et al., 1996; Lallo, 2012), and the reservoirs of these contaminants depleted within 3 years.
WFPC2 was contaminated mostly by water, resulting in typical flux losses of 1% day\({}^{-1}\) at wavelengths 170-215 nm. It was thermally decontaminated on average every 28 days from 1993 until at least 2001 (Baggett et al., 2001). The contamination rate slowly decreased by a factor of 2 during this time, and later on WFPC2 was decontaminated every 49 days (Gonzaga et al., 2010). WFPC2 contamination estimates at wavelengths \(\lambda>600\) nm have poor signal-to-noise ratio (S/N), since WFPC2's UV science cases required decontamination before flux losses became evident at longer wavelengths. In the F555W filter - corresponding to the blue end of _Euclid_'s \(I_{\rm E}\) passband - the mean flux loss between 1993 and 1998 was 1.2 \(\pm\) 0.3% month\({}^{-1}\) (Baggett & Gonzaga, 1998). More complex wavelength dependencies were found at longer wavelengths, partially attributed to different contaminants and their intrinsic diffusion-sublimation timescales. The Wide-Field Camera 3 (WFC3), installed in 2009, has a throughput loss of up to 0.3% year\({}^{-1}\) in the ultraviolet and visible (UVIS) channel, not attributed to contamination (Shanahan et al., 2017). The infrared (IR) channel loses about 0.1% year\({}^{-1}\), likely due to photopolymerisation of contaminants (Bohlin & Deustua, 2019). More details about HST contamination and control can be found in Clampin (1992), Baggett et al. (1996), and Baggett & Gonzaga (1998).

### ACIS / Chandra

The _Chandra_ X-ray observatory has been operated since 1999. Since the detectors in the Advanced CCD Imaging Spectrometer (ACIS) are sensitive to optical wavelengths, an optical blocking filter (OBF) is used. Plucinsky et al. (2018) show that the optical thickness at X-ray wavelengths has been monotonically increasing - and slowly stabilising - during the first seven to eight years of the mission, due to contamination of the OBF. In 2010, a phase of increasingly rapid contamination began that is still ongoing at different speeds for different atomic species (Plucinsky et al., 2020). Plucinsky et al. (2018) argue that the initial stabilisation could be due to depletion of the contaminants' reservoirs, while the observed acceleration beginning a decade later came as a surprise. Plausibly, radiation damage (Engelhart et al., 2017; Plis et al., 2019) or mechanical dust and meteoroid breakdown of the MLI led to an increase in internal temperatures, activating outgassing sources that were dormant previously. The steep temperature dependence of sublimation and diffusion (Sect. 4) supports this scenario. The slow-down in contamination since 2017 can be explained by the near depletion of the contaminants, by an increased sublimation from the OBF due to higher temperatures, or both. The atomic composition of the contaminants is available from their X-ray absorption edges. The dominant species is carbon, followed by oxygen and fluorine. Their deposition rate and spatial distribution have changed over time, indicating that multiple contamination sources are at play. Contamination has been active in _Chandra_ for almost two decades. Similar contamination effects have been observed in the X-ray Multi-Mirror Mission's (XMM-_Newton_) European Photon Imaging Camera Metal Oxide Semiconductor cameras (EPIC-MOS), and also in the Reflection Grating Spectrometer (RGS; Plucinsky et al., 2012). More details are given in the official calibration release documents3.
Footnote 3: [https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0390-2-2.pdf](https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0390-2-2.pdf) [https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0305-1-0.pdf](https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0305-1-0.pdf) [https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0314-1-0.pdf](https://xmmweb.esac.esa.int/docs/documents/CAL-SRN-0314-1-0.pdf) ### Cassini #### 2.4.1 Cosmic Dust Analyzer (CDA) _Cassini_ launched in 1997 and reached Saturn in 2004. It carried the Cosmic Dust Analyzer (CDA), which measured mass, speed, direction, and chemical composition of cations. The latter were extracted from the gas and plasma cloud created by the impact of particles on a rhodium target plate, liberating contaminants as well (Postberg et al., 2009). During _Cassini_'s cruise phase, the CDA was contaminated by rocket exhaust fumes, outgassing, Solar wind, and the interplanetary medium. Beginning in 2000, after _Cassini_'s last inner Solar System fly-bys, the CDA was decontaminated by heating the target plate to 370 K for 8 h every few months. This removed volatile contaminants such as hydrocarbons and water ice. Postberg et al. (2009) identified H\({}^{+}\) and C\({}^{+}\) as the dominant contaminants with O\({}^{+}\) at lower levels, but they could not unambiguously locate their origin. Direct hydrocarbon contamination of the target plate has been considered unlikely. Plausibly, hydrocarbons elsewhere in the spacecraft were photolysed by the UV background, and the broken-down constituents formed an amorphous, non-volatile carbon-rich layer on the target plate. This particular contaminant likely formed prior to 2000 while _Cassini_ was still in the inner Solar System. Any halogen contaminants, such as Cl\({}^{-}\) and F\({}^{-}\), remained undetected since they were propelled away from the detector. Contamination in the CDA mass spectra was taken into account until the end of the mission in 2017 (e.g. Altobelli et al., 2016). #### 2.4.2 Narrow Angle Camera (NAC) _Cassini_'s NAC was decontaminated at 30\({}^{\circ}\) C every six months for 14 h during the cruise phase. Until the Jupiter fly-by in late 2000, the NAC detector was kept warm at 0\({}^{\circ}\) C to minimise radiation damage by means of continuous annealing (Dale & Marshall, 1991; Holmes-Siedle et al., 1991; Bassler, 2010). Contamination was absent in the Jupiter images taken with a detector temperature of \(-\)90\({}^{\circ}\) C, but in 2001 a considerable haze appeared (Fig. 2). It contained about 30% of the total flux at 827 nm and 80% of the flux at 316 nm. This was surprising because the haze manifested within a few days after a decontamination. The main difference to the earlier 12 decontamination cycles was that the latest one heated the NAC from \(-\)90\({}^{\circ}\) C to \(+\)30\({}^{\circ}\) C, whereas all prior cycles went from 0\({}^{\circ}\) C to \(+\)30\({}^{\circ}\) C. We know from Earth-orbiting satellites that shadow passages can release considerable amounts of water and particulates (see also Sect. 2.7), due to rapidly changing temperatures and related mechanical stresses; this is known as 'orbital thermo-cycling'. Haemmerle & Gerhard (2006) argue that a similar effect caused the NAC contamination, concluding that a decontamination can cause contamination if executed too quickly. 
To recover NAC it had to be decontaminated twice: once during a careful slow heating for seven days to \(-\)7\({}^{\circ}\) C, which removed most of the haze that was likely due to water vapour. A remaining haze in the UV images was cleared by another seven-day decontamination to +4\({}^{\circ}\) C, probably due to very small particulates or molecular contamination other than water. Similarly, recurring contamination events were observed with the optical navigation camera onboard STARDUST (Bhaskaran et al. 2004).

### XMM-Newton Optical Monitor

Similar to _Chandra_, the X-ray Multi-mirror Mission (XMM-_Newton_) has experienced considerable molecular contamination of its X-ray imaging and spectroscopy cameras (Schartel et al. 2022). The most likely contaminants are hydrocarbons, and other contaminants are suspected as well. Their origin is not well understood, and contamination has continuously increased over 22 years since launch. Of particular interest to us is the Optical Monitor (OM), observing in the 180-700 nm range (Mason et al. 2001). The in-orbit commissioning of the OM showed a chromatic throughput loss of 16% to 56% compared to pre-launch expectations, with the largest losses occurring in the UV (Kirsch et al. 2005; Schartel et al. 2022). This contamination is attributed to non-volatile hydrocarbons, as the OM detector is kept at 300 K, and the entire optics at 290 K (Stramaccioni et al. 2000); surface contamination by water does not persist in a vacuum at these temperatures (Sect. 4.3). Contamination of the OM has increased since, in parallel to an expected degradation of the detector's photocathode, which causes additional throughput losses of up to 2.8% year\({}^{-1}\) (Kirsch et al. 2005). Notably, the OM's point-spread function (PSF) appears unaffected1 by the increasing contamination, and a chromatic aureole from scattering as in the contaminated _Cassini_ NAC images (Fig. 2) seems absent. Therefore, absorption by organic non-volatile contaminants is the most likely explanation for the observed throughput loss. The UV/Optical telescope (UVOT) onboard the _Swift_ Gamma-ray observatory inherited the OM design with improved contamination control (Roming et al. 2005), and has shown little impact from contamination since (Poole et al. 2008; Breeveld et al. 2010; Kuin et al. 2015).

Footnote 1: [https://xmmweb.esac.esa.int/docs/documents/CAL-TN-0019.pdf](https://xmmweb.esac.esa.int/docs/documents/CAL-TN-0019.pdf)

### Genesis

Genesis was a sample return mission probing the Solar wind, exposing ultra-clean sample containers for 850 days at Lagrange point L1. The containers were purged with dry nitrogen from assembly until launch to minimise on-ground contamination. Upon their recovery, the containers showed pervasive stains from material outgassing, composed of H, C, N, O, Si, and F. The root contaminants were not identified, but plausibly contained hydrocarbon, siloxane, and fluorocarbon components that were either vacuum pyrolysed, or polymerised by the UV background, or both (Burnett et al. 2005; Calaway et al. 2006); for the effect of UV-photofixation of contaminants, see also Roussel et al. (2016). As for the possible sources, Burnett et al. (2005) list among others seals and locking elements, the electroplated gold concentrator, sealants and greases, residual films from pre-flight storage containers or processing, and residue from anodisation processing.
### Midcourse Space Experiment (MSX) MSX was launched in 1996 into a Sun-synchronous orbit at 903 km altitude and inclination of 99\({}^{\circ}\), carrying a total of ten contamination monitoring instruments; among others a neutral mass spectrometer (NMS) and a total pressure sensor (TPS) to analyse its gaseous surroundings, and five QCMs to investigate film depositions on external and internal surfaces (Uy et al. 1998). MSX was operated for 12 years. The NMS data show a pressure decrease with time \(t\) as \(t^{-1}\) for the first few days, then slowing down to approximately \(t^{-0.5}\) over the next six months (Uy et al. 2003), which corresponds to a 1/e decay time of \(t_{\rm e}=45\) days. The TPS data shows a \(t^{-0.6}\) dependence (\(t_{\rm e}=30\) days) over the first six months. This pressure evolution is attributed to the sublimation of superficial water ice, followed by diffusion and sublimation of absorbed water. After the end of its initial ten month cryogenic phase, the MSX was inclined on a yearly basis by 30\({}^{\circ}\) towards the Sun to heat its baffle and primary mirror (Uy et al. 1998). Even after six years, these Sun exposure tests were always accompanied by a 100-fold increase in TPS water vapour pressure from the sudden illumination of MLI that otherwise remained in the shadow; the pressure peaks even increased with every repetition of this test. Uy et al. (2003) conclude that the MLI acts as a deep water reservoir and continuous source of contamination over many years, and that it is difficult to deplete. The MLI was also found to react very quickly to even small changes in the solar illumination angle. Uy et al. (2003) and Wood et al. (2003) also report numerous high-pressure transients unrelated to changes in solar exposure. These could be caused by rupturing, meteoroid impacts, and stress-release events due to orbital thermo-cycling, and are evidenced by an increasing particle density in the spacecraft's local environment (orbital degradation). The QCM results are described by Wood et al. (2003). During the initial, ten month long cryogenic phase, the contamination layers grew up to 16 nm thick, depending on which parts of the spacecraft were in the QCMs' field of view. The internal QCM showed the highest contamination, mostly from Ar - used as a cryogen - and O, whereas H\({}_{2}\)O and CO\({}_{2}\) were not detected. During the baffle's Sun-exposure tests following the cryogenic phase, up to 20 nm of water were deposited on the internal QCM, indicating that the cold baffles had trapped considerable amounts of water. The water began to evaporate noticeably once the QCM was heated to 150 K, and was gone once 165 K were exceeded. Of the external QCMs, the ones facing the solar panels showed the highest rate of contamination during the first two years of the mission, followed by incomplete sublimation over the next three years, indicating the presence of non-volatile contaminants. Figure 2: Effect of ice on the point-spread function (PSF). _Left panel_: _Cassini_ / NAC image of the star Maia (Pleiades) taken in the broadband CL1/GRN filter (\(\lambda_{\rm eff}=568\) nm) before the contamination event. _Middle panel_: Bright star \(\alpha\) PSA in the same filter, after the contamination event. _Right panel_: Colour image of Maia in filter combinations UV2/UV3 (\(\lambda_{\rm eff}=316\) nm; blue), CL1/GRN (green), and IR2/IR1 (\(\lambda_{\rm eff}=827\) nm; red), showing the chromaticity of molecular scattering. 
Figure credit: Adapted from Haemmerle & Gerhard (2006).

### ROSINA / Rosetta

Rosetta carried the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA), consisting of two mass spectrometers (RTOF and DFMS), and the Comet Pressure Sensor (COPS). Rosetta was launched in 2004 and arrived at comet 67P in 2014. ROSINA was active during extended periods of the cruise phase and two asteroid fly-bys, in particular also to understand contamination by outgassing (Schlappi et al., 2010). The initial desorption of water from Rosetta's surfaces had a \(1/e\) decay time of 30 days and could be detected for the first 200 days of the mission (Fig. 3). Once this source depleted, diffusion-sublimation became the dominant source in both DFMS and COPS data. After three years, the pressure around Rosetta had stabilised at \(3\times 10^{-11}\) mbar. For comparison, the typical pressure in interplanetary space at these heliocentric distances is considered to be a few \(10^{-12}\) mbar (Postberg et al., 2009) and below. The mass spectrometers did not have any direct line of sight to structural parts of the spacecraft, whereas the pressure sensor had a nearly full solid-angle field of view. Schlappi et al. (2010) show that the pressure sensors and mass spectrometers reacted mostly to return flux from self-scattering. In other words, Rosetta travelled in its own gas cloud, dense enough that backscattering caused contamination elsewhere on the spacecraft. Similar to MSX, ROSINA found the gas pressure to be highly dependent on the spacecraft's Sun attitude (Schlappi et al., 2010). This was noticed during the asteroid fly-bys, when Rosetta was reoriented to keep the target in sight and to protect some instruments from direct Sun exposure. The sudden illumination of structural parts that had been in the shadow for years resulted in the pressure exceeding \(10^{-8}\) mbar within a few tens of seconds after a reorientation. Likewise, the chemical composition of the gas phase changed, an effect that was also observed after the switch-on of previously dormant instruments. Outgassing from suddenly exposed, previously unilluminated components can cause a considerable acceleration of the spacecraft. For example, when OSIRIS-REx exposed its sample-return capsule to the Sun on its outbound journey, the acceleration exceeded that by Solar radiation pressure by one order of magnitude (Sandford et al., 2020). Schlappi et al. (2010) report 146 different chemical constituents in the ROSINA outgassing data, from hydrocarbons, PAHs, and C-O and N-O compounds to S, F, and Cl. The dominant species detected by DFMS are H\({}_{2}\)O, followed by CO, N and CO\({}_{2}\). Hydrocarbon compounds may originate from polycarbonates (structural parts) and solvents, nitrogen-bearing compounds from adhesives, epoxies, coatings, and structural parts. Halogens point at brazing and lubricants, structure and tapes. Curiously, the RTOF spectra were dominated by F followed by H\({}_{2}\)O. The high fluorine detection has been explained by a F-bearing lubricant used in the antenna, which is Sun-lit and closer to RTOF than to DFMS - neither of which has direct lines of sight to the spacecraft. Again, this shows that contaminants evaporated into space can re-contaminate the spacecraft elsewhere through backscattering (see also Bieler et al., 2016). Schlappi et al. (2010) estimate that several hundred grams of nonmetallic material and water outgas every year from Rosetta.
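The desorption and diffusion timescales quoted in this section lend themselves to a simple toy model of a spacecraft's pressure history. The sketch below is purely illustrative, not the ROSINA analysis: the functional forms and amplitudes are our assumptions, tuned only to the numbers quoted here (a 30-day 1/e desorption decay, a slowly fading diffusion-sublimation term, and stabilisation near \(3\times 10^{-11}\) mbar after roughly three years).

```python
import numpy as np

t = np.logspace(0, 3, 400)                  # days after launch
p_desorption = 5e-9 * np.exp(-t / 30.0)     # surface desorption, 30-day decay
p_diffusion = 1e-10 * (t / 100.0)**-0.5     # diffusion-limited outgassing
p_pedestal = 5e-12 * np.ones_like(t)        # UV/particle decomposition floor
p_total = p_desorption + p_diffusion + p_pedestal   # [mbar]

for day in (1, 30, 200, 1000):
    i = np.argmin(np.abs(t - day))
    print(f"day {day:4d}: p ~ {p_total[i]:.1e} mbar")
```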
### Gaia

Gaia is similar to _Euclid_, in the sense that it is a wide-field astrophysical survey mission. Its mirrors and telescope structure are made of silicon carbide (SiC; Bougoin & Lavenac, 2011), as are _Euclid_'s (Bougoin et al., 2019). SiC is known for its high strength, hardness, thermal conductivity, and low thermal expansion. Gaia had an industry forecast of very low water contamination. However, water heavily contaminated the optical system, leading to early and rapid transmission loss that required prompt decontamination (Fig. 4). A total of six decontaminations were needed over 2.6 years to reach a quasi-stable state (see also Gaia Collaboration: Prusti et al., 2016). As of now, no clear consensus has been achieved about the nature and origin of the contamination. Possibly, there is a contamination path from the service module (SVM) to the PLM, even though the two are separated by a single-layer insulation (SLI, as is the case for _Euclid_, see Figs. 14 and 15 in the Appendix). Contamination is spatially and temporally variable across Gaia's focal planes, and it appears to have switched between Gaia's two mirror systems (Riello et al., 2021). We note that the Gaia PLM is fully covered in MLI, very close to the optical surfaces5.

Footnote 5: Photo of the MLI wrapping the Gaia optics and structure.

Gaia carries internal laser interferometers to monitor its optical alignment. One of the most important lessons for _Euclid_ is that Gaia's SiC structure did not exactly resume its previous alignment after a decontamination. Moreover, slow and continuous focus drifts are seen over years after the last decontamination (Mora et al., 2016). This implies that a decontamination of _Euclid_ requires a careful check of the PSF calibration. Since Gaia is 10-20 K warmer than _Euclid_, water in Gaia's MLI is more mobile and the outgassing rate considerably higher (see Sect. 4), but it is not at all clear whether this can explain Gaia's initial high transmission loss. Higher temperatures also mean higher sublimation fluxes, beneficial if the ice is located already on optical surfaces, but detrimental if located on - or still in - other surfaces from where it can contaminate optics. Given Gaia's completely different design, we cannot conclude whether _Euclid_'s lower temperature puts it at an advantage or disadvantage compared to Gaia, and on what timescales. _Euclid_'s design benefited considerably from the Gaia experience.

Figure 4: Throughput loss for Gaia’s telescopes since the beginning of operations. Initially, a rapid loss of \(0.06\,\mathrm{mag\,day^{-1}}\) was observed. A total of six decontaminations (indicated by vertical lines) were required over 2.6 years to reach a nearly stable state. Minor discontinuities in the data are artefacts due to an incremental calibration strategy.

Figure 3: Pressure around Rosetta due to water outgassing, showing that spacecraft travel for many years in their own gas cloud. Initially, desorption from surfaces is the dominant source, and in this case detected for up to 200 days after launch. Diffusion-sublimation then supports the cloud for years after, with a pedestal from decomposition due to UV- and particle radiation. The outgassing rate appears fairly independent of heliocentric distance. Typical interplanetary pressure is a few \(10^{-12}\) mbar and below. Figure reproduced from Schlappi et al. (2010).
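For reference, Gaia's initial loss rate of 0.06 mag day\({}^{-1}\) (Fig. 4) translates into a fractional throughput loss via the standard magnitude relation; a one-line check:

```python
# flux_ratio = 10**(-0.4 * delta_mag), the standard magnitude-flux relation
delta_mag = 0.06                        # Gaia's initial loss [mag per day]
flux_ratio = 10**(-0.4 * delta_mag)     # remaining throughput after one day
print(f"throughput loss: {(1 - flux_ratio)*100:.1f}% per day")   # ~5.4%
```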
### Take-home points The main contamination lessons are: (i) water is the most common contaminant for spacecraft operating in or near cryogenic conditions; it is found on - and in - numerous materials, with MLI being the most important reservoir due to its large area and high solubility of water in it; (ii) contamination reservoirs deplete very slowly, and in the worst case will be active during _Euclid_'s entire life; (iii) contamination rates, chemical composition, and location are time variable, given the depletion of some reservoirs and the activation of others, for example due to temperature changes; (iv) spacecraft travel in their own gas cloud with sufficient gas pressure for backscattering, that is molecules evaporating into outer space can recontaminate the spacecraft elsewhere; (v) the chemical composition of the gas cloud is spatially variable, with water being dominant on the shadow side, and decomposed substances at the spacecraft's Sun-illuminated side; (vi) the pressure and chemical composition of the gas cloud respond within seconds to small changes in the spacecraft's Sun attitude and to instrument operations such as a switch-on; (vii) water re-absorption on ground is both hard to avoid despite cleaning and degassing efforts, and hard to track for estimates of the absolute amount of water re-absorbed; (viii) hydrocarbons and non-volatile organic compounds can considerably reduce the optical throughput by means of absorption. ### Pertinent technical details about Euclid For better understanding of the remainder of this paper, we provide here some technical details of _Euclid_'s PLM. A schematic layout of the optical configuration - a three-mirror anastigmat Korsch design (Korsch, 1977) - is shown in Fig. 5; more details are given in Venancio et al. (2014). Mirrors M1, M2, and M3 are powered mirrors, whereas FoM1 to FoM3 are flat. The dichroic plate separates the near-infrared from the optical wavelengths for simultaneous observations with VIS and NISP. The silver coatings on mirrors M1, M2, M3 and FoM3 have additional layers for chemical and physical protection. The designs of these protective layers were not disclosed to us. Usually, they are complex, see for example Sheikh et al. (2008) for the _Kepler_ Space Telescope, and also Garoli et al. (2020). The topmost layer is of great importance for the formation and structure of ice films, as we discuss next in Sect. 3. The entire layer stack is relevant for the optical properties of contaminating ice films, which we will show in our second paper. The folding mirrors FoM1 and FoM2 have a high-performance dielectric coating stack including layers of gold, to provide a wavelength cut-off below 0.42 \(\mu\)m. More details about the stacks were not disclosed by industry. The dichroic element and the NISP filters have alternating layers of Nb\({}_{2}\)O\({}_{5}\) and SiO\({}_{2}\). The coatings on the fused silica NISP lenses might include TiO\({}_{2}\). Jointly, the mirrors and the dichroic plate provide a complex chromatic selection function that defines the passbands - and out-of-band blocking - for the VIS and NISP instruments (for details, see Euclid Collaboration: Schirmer et al., 2022). Relevant for ice formation are also the in-flight temperatures of the optical and structural elements in the PLM. An estimate of the expected temperatures is given in Table 1. Exact values are difficult to predict from thermal modelling, and the actual temperatures might deviate by a few kelvin. 
Small changes in temperature may have a large impact on contamination, as we show in Sect. 4. To this end, we use a 'warm' case for comparison. The warm case is not realistic; it is a part of the thermal analysis, showing that _Euclid_'s temperature control systems can keep the spacecraft within operational limits even in unusual conditions.

Figure 5: Schematic view of optical surfaces in _Euclid_. The telescope cavity contains M1, M2 and the baffle, and the instrument cavity (box) the remainder of the PLM. Figure credit: D. Filleul, Airbus Defence and Space. A high-resolution 3D rendering of the instrument cavity and a photograph of the real flight hardware are shown in Figs. 1 and 2.

## 3 Water ice types in spacecraft conditions

The rest of this paper focuses on the effects of water, the most common - and for _Euclid_ - most important contaminant. Water shows complex behaviour in its solid and liquid phases. This is attributed to the four hydrogen bonds available to a water molecule to connect to its neighbours, and the two lone electron pairs of oxygen forcing the molecule into its bent shape. A water molecule is 0.28 nm in size. Depending on temperature and pressure, water can form at least 20 different types of ice (Gasser et al., 2021; Rosu-Finsen et al., 2023). The formation and structure of thin water ice films on nanometer and micrometer scales have been very actively researched (see Salzmann, 2019, for a review). However, to the best of our knowledge, this has never been studied in the context of contamination of astrophysical observatories. Given _Euclid_'s extraordinary calibration requirements, we need to understand ice evolution at a molecular level and how the numerous related physical processes lead to measurable effects in _Euclid_ data. In Sects. 3.1 to 3.4, we introduce the various types of ice forming in spacecraft, that is in a high vacuum and for very low deposition rates. In laboratory experiments, thin ice films are usually deposited with 0.01-100 nm min\({}^{-1}\). Even the lowest rate of 0.01 nm min\({}^{-1}\) is 2-4 orders of magnitude (or more) higher than what _Euclid_ might experience in flight (Sect. 4.6). Yet a rate of, for example, 0.1 nm min\({}^{-1}\) is still applicable, since the latent heat released by adsorption of water molecules is rapidly dissipated in bulk ice (Brown et al., 1996), and eventually in the substrate, before the next molecules are deposited. The thickness of laboratory ice films ranges from a few Å - that is, incomplete monolayers - to several \(\mu\)m. In Sects. 3.5 and 3.6 we review how the surface topography of the ice depends on the substrate. Studies of molecular contamination in the material sciences and by industry usually parameterise thin-film deposits in units of surface density; likewise for deposition, condensation, and sublimation fluxes. For our purposes, we parameterise ice films in terms of their thickness, which is more directly linked to their optical properties, which we study in our second paper. For practical purposes we approximate that 1 nm of ice corresponds to \(1\times 10^{-7}\) g cm\({}^{-2}\). The scanning tunnelling microscope (STM) and atomic force microscope (AFM) data of ice surfaces shown in this section are available upon informal request. The surface-height profiles are encoded in ASCII x,y,z format.

### Amorphous ice (\(T\lesssim 120\) K)

Amorphous or non-crystalline ice, also called amorphous solid water or vitreous ice, is characterised by the absence of coherent crystal structures down to scales of individual water molecules.
In a vacuum, it exists in three states, with the two coldest ones being highly porous at a molecular level (Fig. 6). For reviews about amorphous ices see for example Limmer and Chandler (2014), He et al. (2019), and Cao (2021).

#### 3.1.1 High-density amorphous ice I\({}_{\rm ah}\) (\(T<30\) K)

High-density amorphous ice I\({}_{\rm ah}\)6 forms when water vapour is deposited at temperatures below \(T=30\) K (Jenniskens and Blake, 1994). It has a typical density of 1.15 g cm\({}^{-3}\) (Cao, 2021) and is the least structured of all ice types. Between 30-70 K, one of the hydrogen bonds in ice I\({}_{\rm ah}\) breaks, irreversibly transforming ice I\({}_{\rm ah}\) into low-density amorphous ice I\({}_{\rm al}\) on timescales of a day (Schriver-Mazzuoli et al., 2000).

Footnote 6: The 19 known types of ice are labelled with Roman numerals I to XIX. Ice types in spacecraft are all variants of type I.

Ice I\({}_{\rm ah}\) will not be found in _Euclid_ because temperatures are above 80 K (see Table 1). It may be present in other spacecraft such as the _James Webb_ Space Telescope, where temperatures reach below 40 K (Lightsey et al., 2012; Wright et al., 2015).

#### 3.1.2 Low-density amorphous ice I\({}_{\rm al}\) (\(30\,K\lesssim T\lesssim 120\) K)

Low-density amorphous ice I\({}_{\rm al}\) is created by vapour deposition between 30 K and 110 to 120 K, the upper limit depending on the deposition rate (Sect. 3.3). The density of ice I\({}_{\rm al}\) is 0.94 g cm\({}^{-3}\), neglecting variations in porosity. Porosity itself is parameterised by the internal surface area per mass, and for I\({}_{\rm al}\) is typically 150-500 m\({}^{2}\) g\({}^{-1}\) (Mitlin & Leung, 2002). Ice I\({}_{\rm al}\) can be thought of as an open network of water molecules, where all pores are directly connected to the top surface (He et al., 2019), independent of the thickness of the ice. The top surface of ice I\({}_{\rm al}\) is very rough at the nanometer scale when compared to crystalline ice (Fig. 7). The large surface area of amorphous ice facilitates astrochemical processes (Watanabe & Kouchi, 2008; Gudipati & Castillo-Rogez, 2013).

\begin{table} \begin{tabular}{l c c c} \hline Common path & \(T_{\rm nominal}\) & \(T_{\rm warm}\) & \(T_{\rm decont}\) \\ \hline M1 & 117 K & 123 K & 220 K \\ M2 & 104 K & 111 K & 289 K \\ FoM1 & 123 K & 128 K & 220 K \\ FoM2 & 122 K & 126 K & 221 K \\ M3 & 122 K & 129 K & 220 K \\ Dichroic & 122 K & 126 K & 221 K \\ \hline \hline NISP path & & & \\ \hline Corrector lens & 130 K & 131 K & 218 K \\ Filter / Grism & 133 K & 133 K & 206 K \\ Camera lenses & 132 K & 132 K & 204 K \\ Detector & 95 K & 95 K & 200 K \\ \hline VIS path & & & \\ \hline FoM3 & 118 K & 123 K & 220 K \\ Detector & 152 K & 156 K & 270 K \\ \hline Structural & & & \\ \hline Baffle & 100 K & 108 K & 205 K \\ PLM baseplate & 120 K & 125 K & 207 K \\ \hline \end{tabular} \end{table}

Table 1: Temperatures of PLM elements for nominal operating conditions, a ‘warm’ comparison case, and for decontamination mode.

Figure 6: Typical structure of high-density (_left panel_) and low-density (_right panel_) amorphous ice. Red dots represent the oxygen atoms, and small white circles the hydrogen atoms. Hydrogen bonds are indicated by dashed lines. Coherent structures are absent. Figure adapted from Belosludov et al. (2008); see also He et al. (2019).
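To put the quoted porosity into perspective, the short sketch below (our illustration; the 100 nm film thickness and 1 cm\({}^{2}\) geometric area are assumptions) converts the 150-500 m\({}^{2}\) g\({}^{-1}\) range into the internal surface area of a thin ice I\({}_{\rm al}\) film:

```python
rho = 0.94           # density of ice I_al [g cm^-3]
thickness = 100e-7   # assumed film thickness [cm] (100 nm)
area = 1.0           # assumed geometric surface area [cm^2]
mass = rho * thickness * area             # film mass [g]
for porosity in (150.0, 500.0):           # internal area per mass [m^2 g^-1]
    internal = mass * porosity * 1e4      # internal surface area [cm^2]
    print(f"{porosity:.0f} m^2/g -> {internal:.0f} cm^2 per cm^2 of geometric area")
```

Even a 100 nm film would thus expose tens of cm\({}^{2}\) of internal surface per cm\({}^{2}\) of optics, which is one reason amorphous films sublime so differently from compact crystalline ice.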
Amorphous ice is distinguished from crystalline ice by its large surface area and by the high internal vapour pressure at highly curved surface elements (Nachbar et al., 2018a,b). This enhances the sublimation flux by factors 2-100 compared to crystalline ice at the same temperature (Sect. 4.3.3). Yet the absolute sublimation flux at temperatures where ice I\({}_{\rm al}\) can form is very low (Sect. 4.3).

In _Euclid_, ice I\({}_{\rm al}\) can occur on the NISP detectors (95 K), the external baffle (100 K), and the secondary mirror (M2; 104 K). It will remain amorphous during the mission (Fig. 8). On the NISP detector, ice I\({}_{\rm al}\) would modulate the quantum efficiency through interference effects (Holmes et al., 2016) and possibly severely affect the pixel response non-uniformity (PRNU); we address these effects in our second paper. Elsewhere in the PLM at \(T\gtrsim 120\) K (Table 1), ice I\({}_{\rm al}\) would crystallise within a few days or weeks. However, these parts of the PLM are usually not cold enough to form amorphous ice in the first place.

#### 3.1.3 Restrained amorphous ice I\({}_{\rm ar}\) and onset of crystallisation (120 K \(\lesssim T\lesssim\) 160 K)

When amorphous ice is heated to 120-140 K, or water vapour is deposited at these temperatures at a high rate, surface reorganisation starts to collapse the internal pores (Hessinger et al., 1996) and reduces the number of 'dangling bonds', that is unsatisfied OH bonds. The resulting modified state is called restrained amorphous ice I\({}_{\rm ar}\); the transformation cannot be reversed by means of cooling. At these temperatures, nanocrystals begin to form in the amorphous phase through nucleation, and grow into crystalline clusters (Kouchi et al., 1994; Nachbar et al., 2018a). For 3D simulations of the transition process from amorphous ice to crystalline ice see He et al. (2019).

Amorphous ice is meta-stable with respect to crystallisation (Fig. 8). Even at temperatures as low as 80 K it will eventually anneal into stacking disordered ice (Sect. 3.2.1), albeit on geological timescales. Depending on the heating rate and deposition speed, crystallisation in laboratory experiments is observed mostly between 120-160 K (La Spisa et al., 2001; Mitlin & Leung, 2002; Mastrapa et al., 2013; He et al., 2022). Amorphous constituents in the crystalline phase are uncommon above 160 K (Kuhs et al., 2012), and do not survive 175-180 K for more than a few hours. Crystallisation cannot be reversed by cooling. Annealing of amorphous ice does not necessarily result in the same crystalline structures as depositing water at higher temperatures, when crystalline ice forms directly (Hessinger & Pohl, 1996).

In _Euclid_, ice I\({}_{\rm ar}\) will occur only intermittently, when heating cold surfaces covered with ice I\({}_{\rm al}\) to their decontamination temperature (Table 1). Otherwise, it would crystallise on timescales of days to a few months.

### Crystalline ice

In crystalline water ice, the oxygen atoms of six water molecules connect via hydrogen bonds to form corrugated hexagons. These hexagons merge into extended, 2-dimensional corrugated bilayers, which can be stacked in two ways: without rotation, forming cubic ice I\({}_{\rm c}\), and by rotating every other bilayer by 180\({}^{\circ}\), forming hexagonal ice I\({}_{\rm h}\) (Fig. 9). The hexagonal stacking order is energetically preferred over the cubic stacking order.
#### 3.2.1 Stacking disordered ice I\({}_{\rm sd}\) (120 K \(\lesssim T\lesssim\) 160 K)

Cubic ice I\({}_{\rm c}\) was first described by König (1943) and wrongly thought to exist at a macroscopic scale at 120-160 K. It is now known that at these temperatures the ice consists of cubic and hexagonal layers, interlaced in a complex non-random fashion described as 'stacking disordered ice' I\({}_{\rm sd}\) (Kuhs et al., 2012). Pure cubic ice exists essentially only in nanocrystals and in ice films a few nanometer thick (Kuhs et al., 2012; Thürmer & Nie, 2013; Malkin et al., 2015; Nachbar et al., 2018a). At a macroscopic level, pure cubic ice was created only recently by del Rosso et al. (2020).

Figure 7: Comparison of crystalline and amorphous ice topography. _Left panel_: STM image of a polycrystalline ice film, average thickness 6 nm, grown at 145 K on Pt(111). Surface steps of bilayer height (0.37 nm) are easily resolved. _Right panel_: Same, for a 6 nm thick amorphous ice film grown at 100 K on Pt(111), revealing high surface roughness at nanometer scales. Two surface steps are visible in the otherwise atomically flat Pt(111) substrate, replicated by the amorphous ice film. Data originally taken by Thürmer & Bartelt (2008).

Figure 8: Annealing time for amorphous ice I\({}_{\rm al}\) to reach different fractions of crystallisation, using the Kouchi et al. (1994) formalism that is based on the kinetic theory of crystallisation. The shaded box shows the relevant time and temperature ranges for _Euclid_. The crystallisation speed can be greatly accelerated in the case of epitaxial growth on suitable substrates (Dohnálek et al., 2000).

Stacking disordered ice I\({}_{\rm sd}\) is meta-stable and forms via vapour deposition between 120-185 K. There is a large number of crystal defects and stacking faults in ice I\({}_{\rm sd}\), requiring specific energies to be healed (Hondoh, 2015): at \(T=130\) K, the least stable defects heal in about one week, whereas the timescale of most other defects exceeds one year. At 140 K, simple defects heal in one day, and within 1 h at 150 K. The transformation from ice I\({}_{\rm sd}\) to ice I\({}_{\rm h}\) speeds up considerably at 175 K and above (Kuhs et al., 2012; Hondoh, 2015; del Rosso et al., 2020). Cubic sequences disappear within 1 h when ice I\({}_{\rm sd}\) is heated to 210 K, and they are essentially absent above 240 K.

In _Euclid_, all mirrors are at or below \(T=120\) K (Table 1). The transformation of any ice I\({}_{\rm sd}\) deposits to ice I\({}_{\rm h}\) is therefore negligible on mission timescales.

#### 3.2.2 Hexagonal ice I\({}_{\rm h}\) (\(T\gtrsim 120\)-185 K)

Hexagonal ice I\({}_{\rm h}\) forms from ice I\({}_{\rm sd}\) upon heating (Sect. 3.2.1), or via vapour deposition at high rates (\(\gtrsim 1\) nm s\({}^{-1}\)) at \(T>185\) K. It can also form by slow (0.1 nm min\({}^{-1}\)) vapour deposition at temperatures as low as 120 K in ultra-high vacuum (\(p\sim 3\times 10^{-11}\) mbar; see Thürmer & Nie, 2013). Once formed, ice I\({}_{\rm h}\) is stable against cooling at least down to \(T=5\) K. Rosu-Finsen et al. (2023) show that ice I\({}_{\rm h}\) can be mechanically transformed into a previously unknown, medium-density amorphous ice; we do not consider this further as this process does not happen in _Euclid_. On Earth, all naturally occurring ice is hexagonal, apart from very cold high-altitude cirrus clouds, where ice I\({}_{\rm c}\) may be found.
The physical properties of ices I\({}_{\rm h}\), I\({}_{\rm sd}\), and I\({}_{\rm c}\) are similar (Bertie et al., 1969; Kuhs et al., 2012; Mastrapa et al., 2013) for the purposes of the current paper, so we do not distinguish between them. However, the optical properties do show small differences in the refractive index (He et al., 2022) that could be relevant for modelling effects in the data (see our second paper).

### Deposition rate and crystallinity

Whether vapour deposition initially leads to amorphous or crystalline ice depends on temperature, film thickness, and deposition rate. The latent heat released upon adsorption facilitates surface diffusion of water molecules and thus their settlement into energetically preferred configurations. With very high deposition rates, ice I\({}_{\rm sd}\) is formed initially, but dissipation of the latent heat is impeded by the low thermal conductivity of I\({}_{\rm sd}\) (Cuppen et al., 2022), and crystallisation occurs. See also He et al. (2022), Cao (2021), Watanabe and Kouchi (2008), La Spisa et al. (2001), and Kouchi et al. (1994).

For low deposition rates and \(T\gtrsim 120\) K, water molecules can settle into crystalline structures before being disturbed by other incoming molecules (Kouchi et al., 1994; Thürmer & Bartelt, 2008). At 105-120 K, ice films may be amorphous, crystalline, or a mixture of both. At 100 K and below, they are always amorphous even when grown very slowly (0.1 nm min\({}^{-1}\); La Spisa et al., 2001; Thürmer & Bartelt, 2008).

In _Euclid_, deposition rates are anticipated to be very low. We expect crystalline ice at \(T\gtrsim 120\) K, amorphous ice at \(T\lesssim 110\) K, and a mixture for the range \(T\sim 110\)-120 K (see the sketch after this section).

### Amorphisation through irradiation

Crystalline ice can be amorphised by proton, heavy ion, and UV irradiation, which dissociate (photolyse) water molecules (Raut et al., 2008; Famá et al., 2010; Rothard et al., 2017). The freed hydrogen atoms diffuse through the crystal and recombine with the fragments of other dissociated molecules, thus breaking down the crystalline structure. Irradiation experiments have shown that amorphisation processes become effective only at 70 K and below (Kouchi and Kuroda, 1990; Mastrapa and Brown, 2006). Typical timescales range between one year and several \(10^{5}\) years, depending on environment and ice thickness (see also Dartois et al., 2013, 2015). Temperatures in the _Euclid_ PLM are above 80 K. At L2, irradiation-induced compaction and amorphisation of crystalline ice is negligible.
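As a compact summary of Sects. 3.1-3.3, the expected phase of slowly deposited ice can be written as a simple temperature lookup. This is our own illustrative sketch, not part of the original analysis; the boundaries are approximate and, as noted above, also depend on deposition rate and substrate:

```python
# Sketch: expected phase of slowly vapour-deposited water ice (in-flight rates),
# following the approximate temperature thresholds of Sects. 3.1 and 3.3.
def expected_ice_phase(T):
    """Return the ice phase expected for slow deposition at temperature T [K]."""
    if T < 30.0:
        return "high-density amorphous ice I_ah"
    if T <= 110.0:
        return "low-density amorphous ice I_al"
    if T <= 120.0:
        return "mixture of amorphous and crystalline ice"
    return "crystalline ice I_sd / I_h"

# Nominal temperatures from Table 1:
for name, T in [("NISP detector", 95), ("M2", 104),
                ("M1", 117), ("NISP corrector lens", 130)]:
    print(f"{name} ({T} K): {expected_ice_phase(T)}")
```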
### Wetting of surfaces and growth of ice films

So far we have reviewed ice types alone. We now shift our focus to the substrate-water interface and its important influence on ice films growing on a substrate.

#### 3.5.1 Energetic needs of the substrate-water interface

In general, surface atoms of a clean solid do not have all their bonding requirements fulfilled. Eventually, molecules in the surrounding gas phase are adsorbed due to van der Waals forces, covalent binding, or electrostatic attraction, releasing latent heat in the process. When water molecules adsorb on a substrate ('wetting'), they settle into energetically preferred locations determined by the surface's topography and electronic configuration. Above 40 K, water molecules have enough energy for surface diffusion and form hydrogen bonds with neighbouring water molecules. The topography of these superficial water structures depends on the energetic needs of the substrate material; it can vary widely between 1D filaments, isolated clusters surrounded by 'dry' substrate, and 2D contiguous films (wetting monolayers). At higher temperatures, water molecules may partially dissociate, forming a mix of H, OH, and H\({}_{2}\)O. For a comprehensive introduction and review see Hodgson and Haq (2009) and Björneholm et al. (2016).

Once enough water is deposited for more than a monolayer (a wetting layer with a thickness of one molecule), the energetic constraints of the substrate-water interface need to be balanced with those of the water-water interface (Thürmer et al., 2014; Lin et al., 2018; Maier, 2018). This results in a complex restructuring of the water-substrate interface that depends on the substrate's lattice constant, structure, electronic needs, and how water molecules in direct contact with the substrate orient themselves. The effects may reach just a few layers into the ice, or well beyond 100 layers (25-40 nm thickness). Density functional theory can predict these structures for a given substrate, yet the case of water remains difficult (Tamijani et al., 2020).

Figure 9: Side views of four stacked corrugated bilayers of ice. The dots are the oxygen atoms, connected by hydrogen bonds. Thicker dots are higher up in the stack. The _left panel_ shows cubic ice I\({}_{\rm c}\). The _right panel_ shows hexagonal ice I\({}_{\rm h}\), where every second bilayer is rotated by 180\({}^{\circ}\) around its surface normal axis; \(h_{\rm ice}\) refers to the bilayer height. Figure credit: Thürmer and Nie (2013).

#### 3.5.2 Influence of the substrate on ice film topography

We now compare wetting layers on two atomically flat, close-packed, and monocrystalline surfaces. We choose Pt(111) and Ni(111), two well-studied surfaces that illustrate the strong influence of the substrate on the growing ice films; the (111) tuple is the Miller index, describing the orientation of the atomic lattice exposed at the surface.

On Pt(111), a contiguous monolayer is formed at first. Further deposition of water results in 50-150 nm wide crystallites surrounded by the monolayer. The crystallites have flat-top surfaces and heights of 2-3 nm (7-10 layers). Further deposition makes the crystallites grow mostly laterally and coalesce with their neighbours, maintaining an intact wetting layer in between. Eventually, all crystallites have merged, forming a contiguous polycrystalline film (Fig. 10). Therein, crystallites over-grow each other, leading to the preferential formation of hexagonal ice I\({}_{\rm h}\) at temperatures as low as 115-140 K (see Fig. 10, and Thürmer & Nie, 2013).

On Ni(111), instead of a monolayer, the wetting layer is two molecules thick (a bilayer). The emerging crystallites are much taller than those on Pt(111) and have smaller diameters of 30-60 nm. At a mean film thickness of 2.5 nm - when on Pt(111) a continuous film has formed - the crystallites on Ni(111) are still well isolated, covering just 15% of the surface (Fig. 11). This is attributed to a larger driving force for dewetting, presumably due to a lower surface energy of the wetting bilayer, or due to an increased energy of the interface between the crystallites and Ni(111). There are no high-resolution microscopy data for thicker films of ice on Ni(111) available at this point.
However, based on comparison with yet another close-packed metal surface, Ru(0001), we predict with some confidence that the trend of ice films on Ni(111) being much rougher than those of equal mean thickness on Pt(111) will persist up to at least 100 molecular layers, if not indefinitely. The gas adsorption experiments by Haq & Hodgson (2007) for Ru(0001) have revealed that although the crystallites already cover 50% of the surface at a mean film thickness of 2.5 nm, it takes about 90 layers for the ice to fully coalesce. We thus infer that ice on Ni(111) will not coalesce for thicknesses up to at least 100 layers and will remain much rougher than on Pt(111).

Quoting Maier (2018): 'On metal surfaces, the adsorption energy of water is comparable to the hydrogen bond strength among water molecules. Therefore, the delicate balance between competing water-water and water-metal interactions leads to a rich variety of structures that form at the interface between water and seemingly simple, flat metal surfaces.' Thürmer et al. (2014) conclude similarly: 'Even for simple atomically flat close-packed metal substrates, the question of how water wets is surprisingly difficult. The delicate balance between optimising water-water bonding and water-metal interaction, the effect of the metal lattice constant, and [...] the possibility of water dissociation, all contribute to a complexity that renders predictions of water layer structure unfeasible. Density functional theory [...] is not yet able to find the lowest-energy configuration of a water layer on a metal substrate reliably.'

### Impossibility to predict ice topography for Euclid's optical surfaces

For _Euclid_, the situation is exacerbated, as most coating materials have not been disclosed to us by industry (see Sect. 2.11). Wetting experiments were conducted for crystalline metal oxides such as Al\({}_{2}\)O\({}_{3}\) (Tamijani et al., 2020) and TiO\({}_{2}\) (He et al., 2009), common optical coating materials. However, this does not help us, even if these materials were actually used in _Euclid_.

Figure 10: Growth of crystalline ice on atomically flat Pt(111). The temperature and the mean film thickness are indicated. Shown is the relative height of each film, as the absolute height is difficult to assess for the thickest film, which does not expose the substrate anymore. _Left panel_: 2-3 nm (7-10 layers) high, flat-top crystallites appear in the wetting monolayer (dark blue), imaged with an STM. _Middle panel_: Further deposition causes the crystallites to grow laterally and overlap each other (STM). _Right panel_: A thick ice film that would not conduct sufficient electricity anymore from the substrate to the tip of an STM; an AFM was used instead. More details can be found in Thürmer & Bartelt (2008), Thürmer & Nie (2013), and Thürmer et al. (2014), who also took these data. For comparison, _Euclid_'s SiC mirrors have a typical surface roughness of 0.9-1.1 nm.

First, the wetting process is highly dependent on the crystal planes (Miller indices) exposed at the surface, which we do not know in general. Second, vapour deposition of metal and semiconductor oxides generally results in amorphous and polycrystalline films (Kazmerski, 2012) that are also not atomically flat. Third, while the topography of a substrate is often replicated in dense optical coating layers (Trost, 2015), this does not hold for contaminating ice films.
There, long-range forces from crystallisation and the substrate-water interface control the topography on nano- and micrometer scales, together with growth spirals over substrate-surface steps (Thürmer & Bartelt, 2008) and shadowing effects during deposition (Labello, 2011).

### Conclusions for ice in Euclid

NISP detectors, M2, and external baffle: These are the only places where low-density amorphous ice may form. Only if deposition occurs already during cool-down at \(T\gtrsim 120\) K is crystalline ice expected, with a top amorphous layer from further contamination.

All other optical surfaces: Polycrystalline ices I\({}_{\rm sd}\) and I\({}_{\rm h}\) are expected. Their exact nano-scale crystalline composition is not relevant for _Euclid_ data. However, long-range forces in polycrystalline ice films determine the surface topography on scales of 100 nm and above, and may thus have a noticeable impact on optical scattering and wavefront errors. These are difficult to model and predict, and it is not a priori clear how amorphous and crystalline ices manifest in the data. Crystalline ices have a narrow absorption line at 1.65 \(\mu\)m that would be detectable in heavier contamination scenarios (see our second paper).

Internal and external processes: Annealing and irradiation can break down the nanoscopic structure of ice films. They are highly inefficient at 90-120 K and can be ignored for _Euclid_.

The structure of the ices is mostly stable: Ice films are predominantly modified by sublimation and further deposition. Mechanical surface restructuring will occur from dust and meteoroid pitting on M1 (Grün et al., 1985; Evans, 2000).

The topography of ice films cannot be predicted: The energetic needs of the substrate-water interface and the water-water interface are very complex. Also, we do not know the composition of the top-most coating layer on most surfaces.

## 4 Contamination and decontamination modelling

A single thermal decontamination cycle for _Euclid_ takes about 18 days, not counting subsequent recalibrations. Estimates of the contamination rate are thus of great interest for mission planning. Outgassing is driven by bulk diffusion of dissolved molecules in a substrate, followed by their sublimation. Our uncertain knowledge of these processes limits the accuracy of contamination models; an estimate of a single decontamination per year could quickly become several per year, or none at all.

Sophisticated codes exist to compute outgassing and contamination rates (e.g. Brieda & Laugharn, 2020; Zitouni & von Germersheim, 2020). They were applied, for example, to compute the contamination of the JWST during its initial 180 days in flight, accounting for JWST's complex unfolding sequence (Brieda et al., 2022). In this section we aim much lower, developing an understanding of the dynamics of molecular contamination to inform our calibration strategy. We break down the contamination process into the underlying basic physics and geometry, and develop a transport model for the water exchange between surfaces in _Euclid_.

Figure 11: Effect of the substrate on ice topography, for an average amount of 2.5 nm of ice. _Top panel_: On Ni(111), a wetting bilayer (dark blue) is formed, in which isolated crystallites grow in height that cover 15% of the surface area. _Middle panel_: On Pt(111), crystallites quickly over-grow each other, forming a contiguous film. The wetting monolayer (dark spots) is still exposed in a few places. _Bottom panel_: Height profiles measured along the horizontal lines shown in the upper panels.
The standard deviation of the height distribution for Ni(111) is ten times that of Pt(111). The profile of the Ni(111) crystallites is convolved with the width of the STM's scanning tip; in reality, the walls of the crystallites are more vertical. To directly compare the height profiles, we plot the absolute height above a substrate mean reference, whereas in Fig. 10 we show the relative heights. The data for these plots were taken by Thürmer et al. (2014), who also inspired this figure.

The main result is shown in Table 2, listing estimated contamination rates for the optical surfaces in _Euclid_. These are indicative only and highly uncertain. To arrive at these values, we need sublimation and condensation rates (Sects. 4.2 and 4.3), the vapour pressure from sublimed ice in _Euclid_'s cavities (Sect. 4.4), the effect of geometry on the sublimation and condensation rates between two surfaces facing each other (Sect. 4.5), and lastly geometrical models of the PLM to compute the water exchange flux between surfaces (Sect. 4.6). Finally, in Sect. 4.7 we provide an overview of the thermal decontamination procedure. Of equal interest is the impact of contamination on the data, which ultimately drives how often we have to decontaminate _Euclid_. This will be addressed in the second paper.

### Methodologies

Here, as in Sect. 3, we make extensive use of literature in the material sciences, outside the astronomical context. For better understanding we summarise basic measurement principles.

The water uptake of a material can be determined using dynamic gravimetric vapour-sorption7 experiments, where a material is exposed to various degrees of relative humidity (e.g. Sharma et al., 2018). Fourier-transform infrared (FTIR) spectroscopy is another method to measure the absorption or emission of water (e.g. Scherillo et al., 2014). These experiments are typically conducted at room temperature or higher.

Footnote 7: The term 'sorption' refers to the uptake of a substance by some material at (i) the material's surface (adsorption), and (ii) by integration into the material's atomic structure (absorption), without distinguishing between these processes.

The surface- and bulk-diffusion coefficients can be determined from transport models that describe the dynamic mass balance measured in sorption experiments. An alternative is laser-induced thermal desorption (LITD) coupled with mass spectrometry, possibly using different isotopologues such as H\({}_{2}^{16}\)O and H\({}_{2}^{18}\)O ice, as in Brown and George (1996).

Different methodologies are available to measure sublimation and condensation rates, that is the change of ice-film thickness. The change in mass can be tracked by depositing ice films directly on cryogenic quartz crystal microbalances (QCMs; e.g. Sack and Baragiola, 1993). Alternatively, the film thickness is determined directly using interference fringe counts in a reflected laser beam, or using FTIR reflection-absorption spectroscopy, exploiting the very strong absorption line of water ice at 3 \(\mu\)m (as e.g. in Ghesquière et al., 2015); details about water-ice absorption are presented in our second paper.

### Diffusion

The first Fick law relates the diffusion flux, \(j_{\rm d}\), of absorbed particles to the spatial gradient of their concentration, \(c\) (see also Chiggiato, 2020). In one dimension,

\[j_{\rm d}(l,t)=-D\,\frac{\partial c(l,t)}{\partial l}\,, \tag{1}\]

where \(l\) and \(t\) are space and time, respectively.
The diffusion coefficient \(D\) is described by an Arrhenius-type law,

\[D=D_{0}\exp\left(-\frac{E_{\rm d}}{k_{\rm B}\,T}\right). \tag{2}\]

Here, \(D_{0}\) is the pre-exponential factor, \(k_{\rm B}\) Boltzmann's constant, and \(E_{\rm d}\) the diffusion activation energy. The constants \(D_{0}\) and \(E_{\rm d}\) depend on the mass of the absorbed molecules, their size, and on the nanoscopic structure of the substrate. Using Kapton and ice as examples, we show that \(D\) is highly sensitive to these parameters: in amorphous Kapton, \(E_{\rm d}=0.2\) eV (Yang et al., 1985), and \(D\) may vary by a factor of 3 depending on the orientation of the polymers, the thickness of the Kapton, and the presence of aggregates (Yang et al., 1986). This, and the absorption of water by Kapton, was further studied by Sharma et al. (2018), who find \(E_{\rm d}=0.3\)-0.4 eV, and that \(D\) can change by a factor of 10, depending on the addition of aggregates. We note that lowering \(E_{\rm d}\) from 0.40 eV to 0.39 eV at 120 K - a typical _Euclid_ temperature - increases \(j_{\rm d}\) by a factor of 2.6 (see the sketch below). Thus \(j_{\rm d}\) is highly susceptible to measurement errors of \(E_{\rm d}\) and to the addition of aggregates.

Next, we consider the mobility of dissolved water molecules in ice. In amorphous ice I\({}_{\rm al}\), the porous structure greatly facilitates diffusion jumps of water molecules, resulting in a low \(E_{\rm d}=0.08\)-0.25 eV (Ghesquière et al., 2015). The mean-square displacement of a particle due to bulk diffusion is given by

\[\langle(\Delta l)^{2}\rangle=D\,t\,. \tag{3}\]

Accordingly, and using the computations in Ghesquière et al. (2015), it would take a water molecule \(\sim\) 0.5 s to cross an amorphous ice film of 10 nm thickness at 120 K. In crystalline ice I\({}_{\rm h}\), this would take 120 s. Brown and George (1996) report even lower diffusion rates for ice I\({}_{\rm h}\), finding \(E_{\rm d}=0.7\) eV at 160 K. Using the Arrhenius law to compute the respective \(D\) at 120 K, we find that bulk diffusion in ice I\({}_{\rm h}\) in _Euclid_ is essentially incapacitated (see also Labello, 2011), at least on mission timescales.

This means that an existing film of amorphous ice on Kapton does not slow down the diffusion flux \(j_{\rm d}\) from Kapton at all, nor from any other substrate in _Euclid_. Water molecules easily reach the top of the ice surface, where they eventually sublime, unless they get more permanently integrated into the bulk amorphous ice. Therefore, amorphous ice films should grow continuously by substrate diffusion from below and by deposition on top.

Contiguous crystalline ice films, on the other hand, act as an effective diffusion barrier with \(E_{\rm d}=0.7\) eV. Considering the lower sublimation energy of water (\(E_{\rm sub}=0.45\)-0.53 eV; Sack and Baragiola, 1993; Feistel and Wagner, 2007; Shakeel et al., 2018), any water flux emanating from a surface contaminated with crystalline ice is due to sublimation of this ice, and not due to substrate outgassing. Yet, efficient diffusion channels from the substrate to the surface of the bulk ice could still exist, for example along fault lines and domain walls in polycrystalline ice, or if the surface roughness is very high - such as on Ni(111) - exposing large areas of thin wetting layers (Sect. 3.5).
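To make the sensitivity to \(E_{\rm d}\) concrete, the following minimal sketch (our own) evaluates the Arrhenius ratio of Eq. (2) and the film-crossing time implied by Eq. (3). The diffusion coefficient for amorphous ice is back-derived from the 0.5 s crossing time quoted above, not taken from the literature:

```python
# Sketch: sensitivity of the Arrhenius factor in Eq. (2) to E_d, and the
# film-crossing time implied by Eq. (3). Values are illustrative.
import numpy as np

k_B = 8.617333e-5            # Boltzmann constant [eV/K]
T = 120.0                    # typical Euclid temperature [K]

# D_0 and the concentration gradient cancel in the ratio of fluxes:
ratio = np.exp((0.40 - 0.39) / (k_B * T))
print(f"j_d(0.39 eV) / j_d(0.40 eV) at {T:.0f} K = {ratio:.2f}")   # ~2.6

# Crossing time of a 10 nm amorphous ice film from <(Delta l)^2> = D t.
D_amorph = 2e-16             # [m^2/s], assumed order-of-magnitude value
L = 10e-9                    # film thickness [m]
print(f"crossing time: {L**2 / D_amorph:.1f} s")                   # ~0.5 s
```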
The take-home message is that estimates of the outgassing rates in a spacecraft are highly uncertain at low temperatures: (i) They depend strongly on the substrate's nanoscopic structure and aggregates. (ii) Small temperature changes of a few kelvin result in an order-of-magnitude change in \(j_{\rm d}\). Temperatures of spacecraft sub-systems are difficult to estimate prior to launch and may change over time due to radiation damage and mechanical erosion of the insulation. (iii) Small measurement errors in \(E_{\rm d}\) at the percent level change \(j_{\rm d}\) by a factor of a few. (iv) Outgassing databases use measurements at room temperature because of the much higher signal and the simpler non-cryogenic experimental setup. Extrapolations down to 120 K span many orders of magnitude in \(j_{\rm d}\) and ignore all restructuring processes at a microscopic level and below that might occur during cool-down from thermal contraction. For example, thermal stress-induced micro-fractures likely caused the sudden contamination of _Cassini_ / NAC (see Sect. 2.4.2, and Haemmerle and Gerhard, 2006).

Accurate diffusion and outgassing forecasts for _Euclid_ are therefore not feasible. However, we can still build a sublimation model, knowing that contiguous crystalline ice films act as effective diffusion barriers. Therefore we adopt a 'glacial' scenario, in which all surfaces in _Euclid_ are already contaminated by crystalline ice films. The model shall be stationary, that is we do not consider self-depletion by sublimation. Such a glacial scenario could be the case immediately after launch, or after a long period without decontamination, that is a worst-case scenario. We recall that amorphous ice deposited at 120 K crystallises within a few weeks to months (Fig. 8).

We use this glacial model to forecast the change of ice thickness and the amount of water escaping into space through the telescope front aperture. Since the model ignores diffusion, it cannot forecast the contamination rate of an initially uncontaminated spacecraft, nor the depletion times of the various water reservoirs.

### Sublimation-condensation rates

#### 4.3.1 General approach with the Hertz-Knudsen equation

Deposition and sublimation happen simultaneously, and their rate is commonly described by the Hertz-Knudsen equation from classical kinetic gas theory. In the case of equal temperature \(T\) of a substrate and its surrounding gas phase, we have

\[j_{\rm s}(T)=\sqrt{\frac{m}{2\pi k_{\rm B}\,T}}\,\left[\sigma_{\rm s}\,p_{\rm sat}(T)-\sigma_{\rm c}\,p(T)\right]\,. \tag{4}\]

Here, \(j_{\rm s}(T)\) is the sublimation flux (in kg m\({}^{-2}\) s\({}^{-1}\)), \(m\) the mass of the subliming molecule, \(p_{\rm sat}(T)\) the equilibrium saturation-vapour pressure for which sublimation and deposition rates are equal, and \(p(T)\) the pressure in the gas phase. The sublimation and condensation coefficients, \(\sigma_{\rm s}\) and \(\sigma_{\rm c}\), are the fractions of molecules that sublime and that re-condense (backscatter) upon reaching the surface; they are difficult to determine accurately. Persad & Ward (2016) derive a quantum-mechanical formulation for \(j_{\rm s}(T)\), but its computation requires knowledge of the local curvatures of the substrate-gas interface, which are not known for ices on _Euclid_'s surfaces.
The back-scattering term \(\sigma_{\rm c}\,p(T)\) accounts for sublimed molecules that immediately redeposit on the surface after collisions with other sublimed water molecules in the vapour phase. This is negligible for _Euclid_, where the mean free path length is thousands of kilometres (Sect. 4.4). Subliming molecules hit other surfaces and stick to them (Sect. 4.4) before colliding with other molecules in the gas phase, and thus \(\sigma_{\rm c}\,p=0\) in Eq. (4).

#### 4.3.2 Theoretical and empirical estimates for hexagonal ice

To evaluate Eq. (4) for a vacuum, we can replace \(\sigma_{\rm s}\,p_{\rm sat}(T)\) with the sublimation pressure \(p_{\rm sub}(T)\). Wagner et al. (2011) derive \(p_{\rm sub}(T)\) for a planar surface of monocrystalline ice I\({}_{\rm h}\) based on the thermodynamics of the sublimation zone, valid from 50 K to \(T_{\rm t}=273.16\) K,

\[p_{\rm sub}(T)=p_{\rm t}\,\exp\left[\mathcal{T}^{-1}\sum_{i=1}^{3}\,a_{i}\,\mathcal{T}^{b_{i}}\right]\,, \tag{5}\]

where \(\mathcal{T}=T/T_{\rm t}\), \(p_{\rm t}=611.657\) Pa, and

\[a_{1}=-0.212\,144\,006\times 10^{2},\qquad b_{1}=0.333\,333\,333\times 10^{-2},\]
\[a_{2}=\phantom{-}0.273\,203\,819\times 10^{2},\qquad b_{2}=0.120\,666\,667\times 10^{1},\]
\[a_{3}=-0.610\,598\,130\times 10^{1},\qquad b_{3}=0.170\,333\,333\times 10^{1}.\]

Frequently used is Murphy & Koop (2005), also based on thermodynamic considerations. We rewrite their result as

\[p_{\rm sub}(T)=\exp\left[c_{1}+c_{2}/\mathcal{T}+c_{3}\ln\left(\mathcal{T}\right)+c_{4}\,\mathcal{T}\right]\,\mathrm{Pa} \tag{6}\]

with \(c_{1}=29.3577\), \(c_{2}=-20.9521\), \(c_{3}=3.53068\), and \(c_{4}=-1.98951\). This agrees to within 0.3% with Wagner et al. (2011) in the 90-210 K range, and hereafter we collectively refer to Eqs. (5) and (6) as the WMK models. The resulting sublimation fluxes \(j_{\rm s}(T)\) are shown in Fig. 12, expressed as a loss rate for the ice film thickness; a numerical sketch is given below.

But how accurate are these theoretical models? The surface roughness of polycrystalline ice I\({}_{\rm h}\) (Sect. 3) enlarges the effective surface area and increases the sublimation flux. Surface roughness also means larger nano-scale surface curvatures, thus higher internal vapour pressure (Andreas, 2007; Nachbar et al., 2018a,b) and higher sublimation. The WMK models are in very good agreement with the sublimation fluxes measured by Woronowicz & Meadows (2012) at \(T=120\)-140 K to understand the effect of ice on the thermal performance of the JWST sunshield. They also match the data from Bryson et al. (1974), but only down to a temperature of 140 K, below which the measured sublimation flux begins to exceed the WMK models by factors 2-4. A similar trend is seen in the data from Sack & Baragiola (1993), where for \(T\leq 140\) K the measurements taken shortly after deposition8 showed sublimation rates temporarily increased by factors 2-5. This is explained by more volatile amorphous constituents that have not yet annealed into a more stable crystalline form upon heating the ice films to their desired temperature.

Footnote 8: A clear statement is missing in Sack & Baragiola (1993), but from their description we estimate about 15-90 min between deposition and measurement, depending on the chosen deposition temperature and thermal warm-up time.

Figure 12: Sublimation-flux models for amorphous and crystalline (hexagonal) ice. Overlaid are various measurements. The model for amorphous ice is shown up to 120 K by the dashed pink line; it is obtained by shifting the Murphy & Koop (2005) curve by 3 K to lower temperatures.
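For convenience, a minimal numerical sketch of Eq. (6) and the free-sublimation limit of Eq. (4) (with \(\sigma_{\rm c}\,p=0\) and \(\sigma_{\rm s}=1\)) follows. It is our own illustration; the ice density of 920 kg m\({}^{-3}\) is an assumed round value:

```python
# Sketch: Murphy & Koop (2005) sublimation pressure in the rewritten form of
# Eq. (6), converted to a free-sublimation mass flux via Eq. (4) and then to
# an ice-thickness loss rate.
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant [J/K]
m_H2O = 2.99e-26          # mass of a water molecule [kg]
T_t = 273.16              # triple-point temperature [K]
rho_ice = 920.0           # density of crystalline ice [kg/m^3] (assumption)

def p_sub(T):
    """Sublimation pressure [Pa], Eq. (6); valid in the ~90-210 K range."""
    tau = T / T_t
    return np.exp(29.3577 - 20.9521 / tau + 3.53068 * np.log(tau) - 1.98951 * tau)

def j_s(T):
    """Free-sublimation mass flux [kg m^-2 s^-1], Eq. (4) with sigma_c p = 0."""
    return p_sub(T) * np.sqrt(m_H2O / (2.0 * np.pi * k_B * T))

for T in (120.0, 200.0, 220.0):
    loss = j_s(T) / rho_ice * 1e6     # ice-thickness loss rate [um/s]
    print(f"T = {T:5.1f} K: p_sub = {p_sub(T):8.2e} Pa, loss = {loss:8.2e} um/s")
# The 200 K and 220 K values reproduce the ~0.23 um/s and ~3.6 um/s
# decontamination rates quoted in Sect. 4.7.
```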
We note that the values measured by Sack & Baragiola (1993) systematically exceed the WMK models by respective factors of 1.4 and 2.1 at \(T=180\) K and \(T=140\) K (see Fig. 12). This is also seen in measurements done by ESA to scale _Euclid_'s decontamination mode (Szmolka & Bras 2021, private communication; red dots in Fig. 12). A first explanation is that the measurements were done too soon after deposition, when the restrained amorphous ice or stacking-disordered ice still experience considerable annealing, in particular if the deposition rates were high (Sects. 3.1.3 and 3.3, and Sack & Baragiola 1993; Pratte et al. 2006; Smith et al. 2011; Rosu-Finsen et al. 2022). Indeed, the measurements by Woronowicz & Meadows (2012) showing lower sublimation fluxes were done over 40-60 h, compared to 15 min for Sack & Baragiola (1993); no information about this is given in Bryson et al. (1974). A second explanation is that different coatings on the QCMs affected the ices' surface topography (Sect. 3.5) and thus altered the sublimation flux. The QCM used by Sack & Baragiola (1993) was coated with gold; the QCM9 used by ESA for our tests was also gold-plated; Woronowicz & Meadows (2012) did not comment on possible coatings of their QCM. Sack & Baragiola (1993) accounted for this by including an effective surface-area factor in their fit.

Footnote 9: A CrystalTek Cryo QCM, https://crystaltekcorp.com/products/cqcm

Sack & Baragiola (1993) fitted their measured sublimation flux (in molecules m\({}^{-2}\) s\({}^{-1}\)) with a semi-empirical model,

\[\Phi_{\rm SB93}(T)=a\,T^{3.5}\,\exp\left(\frac{-E_{\rm sub}}{k_{\rm B}\,T}\right)\,, \tag{7}\]

where \(a=1.82\times 10^{25}\,\mathrm{molecules\,m^{-2}\,s^{-1}\,K^{-3.5}}\) is a constant prefactor. The model is shown as the blue line in Fig. 12 for \(E_{\rm sub}=0.45\) eV. We note that changing \(E_{\rm sub}\) to 0.46 eV makes this fit consistent with the WMK models to within 20% in the 120-160 K range (see the sketch below). Considerable discrepancies below 120 K arise because \(E_{\rm sub}\) is actually temperature-dependent: Feistel & Wagner (2007) compute that the sublimation enthalpy \(E_{\rm sub}\) decreases by 0.008 eV from 140 K to 90 K, which has a pronounced effect on the sublimation curve. We conclude that Eq. (7) is less suitable to accurately describe sublimation fluxes over a very large temperature range that extends below 120 K, and that the WMK models are preferred.
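The following sketch (ours) makes the comparison of Eq. (7) with the WMK models explicit; it restates the \(j_{\rm s}\) helper from the previous sketch so that it is self-contained:

```python
# Sketch: the semi-empirical Sack & Baragiola (1993) flux of Eq. (7) versus
# the WMK flux, for E_sub = 0.45 eV and 0.46 eV.
import numpy as np

k_B_J, k_B_eV = 1.380649e-23, 8.617333e-5
m_H2O, T_t = 2.99e-26, 273.16
a = 1.82e25                              # [molecules m^-2 s^-1 K^-3.5]

def j_s(T):
    tau = T / T_t
    p = np.exp(29.3577 - 20.9521 / tau + 3.53068 * np.log(tau) - 1.98951 * tau)
    return p * np.sqrt(m_H2O / (2.0 * np.pi * k_B_J * T))

def phi_SB93(T, E_sub):
    return a * T**3.5 * np.exp(-E_sub / (k_B_eV * T))

for T in (120.0, 140.0, 160.0, 180.0):
    wmk = j_s(T) / m_H2O                 # WMK flux in molecules m^-2 s^-1
    r45, r46 = (phi_SB93(T, E) / wmk for E in (0.45, 0.46))
    print(f"T = {T:.0f} K: SB93/WMK = {r45:.2f} (0.45 eV), {r46:.2f} (0.46 eV)")
# -> with 0.45 eV the ratio is ~2.1 at 140 K and ~1.4 at 180 K, as quoted;
#    with 0.46 eV it stays within ~20% of unity at 120-160 K.
```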
#### 4.3.3 Estimates for amorphous ice

Amorphous ice can absorb large quantities of gas thanks to its porosity (Talewar et al. 2019), making it an important constituent in the colder parts of the Solar System (Guilbert-Lepoutre 2012). Large amounts of gas can be released when the pores of amorphous ice collapse during crystallisation. This may enlarge the sublimation rate of amorphous ice by many orders of magnitude down to 50 K (e.g. Notesco et al. 2003; Bar-Nun et al. 2007; Drobyshev et al. 2007; Prialnik & Jewitt 2022), a phenomenon referred to as 'molecular volcano' (May et al. 2013) and seen mostly in amorphous ice exceeding several micrometer thickness. Even small fractions of a few percent of absorbed trace gases can increase the sublimation rate of water substantially. This is not relevant for _Euclid_, where ice films are expected to be thinner (Sect. 4.6) and decontamination will occur sooner (second paper). Sublimation measurements of pure amorphous ice at temperatures below 120 K are difficult due to limited instrumental sensitivity.

Kouchi (1987) shows that the saturation vapour pressure in amorphous ice depends strongly on the deposition temperature and the rate of deposition, and estimates it to be 10-100 times higher than in crystalline ice (see also Sack & Baragiola 1993). More recent work suggests that the sublimation flux of amorphous ice is enhanced by a factor of ten or less compared to crystalline ice, once annealing effects immediately after deposition have settled. Fraser et al. (2001) compute that amorphous ice has a 4.7 times shorter half-life compared to crystalline ice at 120 K, increasing to 7.6 times at 90 K. Smith et al. (2011) measure the desorption rates at 137-150 K for amorphous and crystalline ice. Using their estimates of \(E_{\rm sub}\) and ignoring its temperature dependence (Feistel & Wagner 2007), we extrapolate to lower temperatures and find that the sublimation rate of amorphous ice is increased by factors of 3.3 and 5.1 at \(T=120\) and 90 K, respectively. Nachbar et al. (2018b) find a factor 2-3 increase of the saturation vapour pressure at 130 K for amorphous ice on flat gold and copper substrates, with an upward trend towards lower temperatures. Hence the sublimation flux of amorphous ice gradually increases over that of crystalline ice for decreasing temperatures.

Given the uncertainties just outlined, we estimate the sublimation flux for amorphous ice by shifting the WMK models - that is Eqs. (5) and (6) - by 3 K to lower temperatures,

\[j_{\rm s}^{\rm amorph}(T)=j_{\rm s}^{\rm crystal}(T+3\,\mathrm{K})\,. \tag{8}\]

This results in respective enhancement factors of 8.4 and 3.4 for the sublimation flux at 90 K and 120 K (see the sketch below), and is shown by the dashed pink line in Fig. 12. We assume that below 110-115 K any ice deposits in _Euclid_ are amorphous and will remain amorphous (Fig. 8), applicable to M2, the external baffle, and the NISP detectors, all of which are at \(T<110\) K (see Table 1, and Sect. 3.1.2).

We summarise that the sublimation flux is a very steep function of temperature (Fig. 12). Estimates for various PLM components are given in Table 1 using Eqs. (6) and (8) for operational and decontamination temperatures. The actual sublimation fluxes in _Euclid_ might deviate by a factor of a few, depending on the substrates and the in-flight temperatures.
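A short check of the shifted model of Eq. (8) above (our own sketch, again restating the WMK flux helper):

```python
# Sketch: enhancement factors implied by Eq. (8) - the crystalline WMK flux
# evaluated 3 K warmer, divided by the flux at the actual temperature.
import numpy as np

k_B, m_H2O, T_t = 1.380649e-23, 2.99e-26, 273.16

def j_s(T):
    tau = T / T_t
    p = np.exp(29.3577 - 20.9521 / tau + 3.53068 * np.log(tau) - 1.98951 * tau)
    return p * np.sqrt(m_H2O / (2.0 * np.pi * k_B * T))

for T in (90.0, 120.0):
    print(f"T = {T:.0f} K: amorphous/crystalline = {j_s(T + 3.0) / j_s(T):.1f}")
# -> 8.4 at 90 K and 3.4 at 120 K, the factors quoted in the text.
```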
### Vapour pressure in Euclid cavities

The last information we need for our water transport model is whether the pressure in the sublimate is negligible. Indeed, the molecules are in free molecular flow, that is they travel along straight lines between their point of sublimation and their point of adsorption, without collisions. This is shown as follows.

The probability \(f_{\rm stick}\) of a water molecule to adhere to an ice surface upon impact - the 'sticking coefficient' - has been analysed by Batista et al. (2005) and Gibson et al. (2011); see also Suliga et al. (2020). Dependencies on kinetic energy, impact angle, surface topography, and temperature can be safely ignored in _Euclid_ conditions, resulting in high values of \(f_{\rm stick}=0.98\)-1.00. This is because the energy transfer from the impinging molecule to molecules of equal mass in the bulk ice is maximal, and because the kinetic energy quickly dissipates in the bulk ice (Brown et al., 1996). Thus the molecules are effectively removed from the gas phase upon surface contact in _Euclid_. We adopt a conservative \(f_{\rm stick}=0.97\), measured at 120 K and \(p=10^{-10}\) mbar by Brown et al. (1996).

To estimate the gas pressure and mean free path length, we approximate _Euclid_'s telescope cavity with a cylinder (Table 6 and Fig. 15 in the Appendix). We also assume that the cavity wall is in thermal equilibrium with the gas phase - which is incorrect (Sect. 4.5) - but this has no practical implications for our deduction of the mean free path length. The wall of the cylinder (_Euclid_'s external baffle) has a temperature of 100 K and its bottom (PLM baseplate and M1) 120 K. All surfaces are assumed to be iced. Using Eqs. (6) and (8), the total sublimation flux into the cylinder is \(n_{\rm sub}=3.24\times 10^{13}\) molecules s\({}^{-1}\), 99.9% of which comes from the warmer bottom. The escape fraction, \(f_{\rm esc}\), through the front telescope aperture on direct paths is 3.5% (Appendix 6). We adopt a typical distance of \(s=1.0\) m, travelled by a water molecule before its adsorption, with a mean velocity of \(\langle v\rangle=374\) m s\({}^{-1}\), that is the mean of the Maxwell-Boltzmann distribution at 120 K. The number \(N\) of molecules in the cylinder at any time is then

\[N=n_{\rm sub}\,\frac{s}{\langle v\rangle}\,(1-f_{\rm esc})\,\sum_{k=0}^{\infty}\,(1-f_{\rm stick})^{k}=n_{\rm sub}\,\frac{s}{\langle v\rangle}\,\frac{1-f_{\rm esc}}{f_{\rm stick}}\,, \tag{9}\]

where the rapidly converging sum represents the molecules that do not stick after \(k\) surface impacts. With these conditions we have \(N=8.6\times 10^{10}\) molecules in the cylinder at any time. In an ideal gas, the pressure is then \(p=3.1\times 10^{-13}\) mbar and the mean free path length is 167 000 km, using 0.28 nm for the diameter of the water molecule. Therefore, the gas in the telescope cavity is in free molecular flow; sublimed molecules travel in straight lines from their point of sublimation to their point of impact, where they stick.

The realisation of free molecular flow implies that the sublimate is not in thermal equilibrium with the mechanical surfaces, and that its velocity distribution is dominated by the processes in the surface-gas interface (Sect. 4.5). Any effects from the resulting non-Maxwellian velocity distribution are negligible for the conclusion of free molecular flow.
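The numbers above follow from Eq. (9) and the ideal-gas law; a sketch (ours) is given below. The effective cavity volume of \(\sim 4.6\) m\({}^{3}\) is our assumption standing in for the appendix geometry:

```python
# Sketch: molecules, pressure, and mean free path in the cylindrical model of
# the telescope cavity, Eq. (9). V is an assumed effective cavity volume.
import numpy as np

k_B = 1.380649e-23        # [J/K]
n_sub = 3.24e13           # sublimed molecules per second
s, v_mean = 1.0, 374.0    # typical path [m] and mean molecular speed [m/s]
f_esc, f_stick = 0.035, 0.97
T, V = 120.0, 4.6         # gas temperature [K], assumed cavity volume [m^3]
d = 0.28e-9               # diameter of a water molecule [m]

N = n_sub * (s / v_mean) * (1.0 - f_esc) / f_stick        # Eq. (9)
p = N * k_B * T / V                                        # ideal gas [Pa]
mfp = k_B * T / (np.sqrt(2.0) * np.pi * d**2 * p)          # mean free path [m]
print(f"N = {N:.1e}, p = {p / 100.0:.1e} mbar, mfp = {mfp / 1e3:.0f} km")
# -> N ~ 8.6e10 and p ~ 3e-13 mbar; the mean free path comes out at the
#    1e5 km level, of the same order as the 167 000 km quoted above.
```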
### Computation of incident water flux

We consider the total flux \(\Phi_{\rm tot}(T)\) of water molecules subliming from a surface element \(\mathrm{d}S\) (Fig. 13), in units of molecules m\({}^{-2}\) s\({}^{-1}\),

\[\Phi_{\rm tot}(T)=j_{\rm s}(T)/m\,. \tag{10}\]

Here, \(m=2.99\times 10^{-26}\) kg is the mass of a water molecule, and \(j_{\rm s}(T)\) is computed from Eqs. (4) and (6) for crystalline ice; for amorphous ice, we use Eq. (8).

The dependence of the emitted flux on the angle \(\theta_{\rm S}\) with respect to the surface normal is commonly described as \(\cos\theta_{\rm S}\) (Knudsen cosine law, see also Greenwood, 2002). Lower-resolution experimental data initially supported this, as was shown by Bryson et al. (1974) for H\({}_{2}\)O and CO ice at _Euclid_ temperatures and by Padowitz & Sibener (1989) for NO ice. This is questionable though, given the complex surface topography of ice (Sects. 3.5 and 3.6); newer experiments suggest angular dependencies that are considerably more - or less - focused (Todorov & Bloch, 2017, and references therein). Closely related to the violation of the Knudsen cosine law is the fact that the velocity distribution of subliming particles can be sub- or super-Maxwellian; this is a consequence of the complex short- and long-range atomic forces at play in the desorption processes and in the surface-gas interface (Kann & Skinner, 2016).

In the absence of experimental data providing more realistic angular and velocity distributions for the sublimates in _Euclid_, we revert to the Knudsen cosine law and assume that the sublimate and the cavity are in thermal equilibrium. This, and the free molecular flow established in Sect. 4.4, allow us to treat the problem in analogy to the photon emission of a luminous surface10.

Footnote 10: In principle, a code that computes the radiative heat exchange between surfaces can also compute contamination, by replacing the photon flux with the sublimation flux (as e.g. in Brieda et al., 2022).

Accordingly, the flux \(f\) (in molecules s\({}^{-1}\)) received by a surface element \(\mathrm{d}A\) from the sublimating surface element \(\mathrm{d}S\) is

\[f(x,\theta,T)=\mathrm{d}S\,\Phi_{0}(T)\,x^{-2}\cos\theta_{\rm S}\cos\theta_{\rm A}\,\mathrm{d}A\,, \tag{11}\]

where \(\theta_{\rm S}\) and \(\theta_{\rm A}\) are the respective angles to the surface normal vectors, \(x\) is the distance between the two surface elements, and \(\Phi_{0}(T)\) is the peak sublimation flux emitted at \(\theta_{\rm S}=0\). We compute \(\Phi_{0}(T)\) by determining the total sublimation flux emitted by the unit area into the hemisphere above,

\[\Phi_{\rm tot}(T)=\int\limits_{0}^{2\pi}\int\limits_{0}^{\pi/2}\Phi_{0}(T)\cos\theta_{\rm S}\sin\theta_{\rm S}\,\mathrm{d}\theta_{\rm S}\,\mathrm{d}\varphi=\pi\,\Phi_{0}(T)\,. \tag{12}\]

Here, we assumed azimuthal symmetry in the angle \(\varphi\).

Figure 13: Parametrisation of the sublimation geometry. The water flux is emitted by the surface element \(\mathrm{d}S\) and received at the surface element \(\mathrm{d}A\). The blue vectors represent the respective surface normal vectors. For details, see Sect. 4.5.
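Equations (11) and (12) can be checked numerically. In the sketch below (ours), a small receiver element faces a large coaxial emitting disc; as the disc fills the hemisphere, the received flux per unit area approaches \(\pi\,\Phi_{0}=\Phi_{\rm tot}\), the property exploited by the hemispherical model in Sect. 4.6.2:

```python
# Sketch: integrate Eq. (11) over an emitting disc of radius R seen by a
# parallel, coaxial receiver element at distance h; for R >> h the result
# tends to pi * Phi_0, i.e. Eq. (12).
import numpy as np

Phi_0 = 1.0   # peak sublimation flux (arbitrary units)

def received_flux(R, h, n=200_000):
    r = np.linspace(0.0, R, n)
    x2 = r**2 + h**2                       # squared distance to the ring at r
    # cos(theta_S) = cos(theta_A) = h / x for parallel, coaxial elements
    integrand = Phi_0 * (h**2 / x2**2) * 2.0 * np.pi * r
    return np.trapz(integrand, r)

for R_over_h in (1.0, 10.0, 100.0):
    f = received_flux(R_over_h, 1.0)
    print(f"R/h = {R_over_h:6.1f}: flux / (pi * Phi_0) = {f / np.pi:.4f}")
# -> 0.5000, 0.9901, 0.9999; the analytic result is R^2 / (R^2 + h^2).
```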
### Contamination forecasts

The telescope cavity consists of mirrors M1, M2, and the external telescope baffle, and is directly exposed to open space at the telescope's front aperture. The instrument cavity is mounted on the PLM baseplate and is located below M1, containing folding optics and the instruments (see Fig. 5). The instrument cavity is connected to open space only through the bore hole in M1, limiting the capability of water escape.

#### 4.6.1 Telescope cavity

In Appendix 4.6 we introduce a cylindrical model of the telescope cavity to compute the contamination rates for M1 and M2 (see Table 2), using the formalism developed earlier in this section. In this model, M1 can be contaminated by ice subliming from the interior wall of the external baffle, from M2, and from a 'front ring' that reduces the telescope aperture. Likewise, M2 can be contaminated by sublimation from the baffle, from M1, and from a 'back ring', that is the structural parts visible between M1 and the baffle wall. We compute the contamination for the nominal temperatures and the warm comparison case (Table 1). In flight, temperatures are expected to stay within a few kelvin of the nominal case.

The following are some of our findings for the nominal temperatures and the glacial scenario: (i) 99.6% of the water escaping through the front aperture sublimes from M1 and the back ring. (ii) 11% (6%) of the ice subliming from the baffle (M1) escapes the telescope cavity on direct paths; the rest will redeposit. (iii) M1 slowly decontaminates at \(-0.33\) nm month\({}^{-1}\). Despite being very cold, M2 will contaminate only slowly, at \(+0.13\) nm month\({}^{-1}\). (iv) The thickness variation of the ice on M1 and M2 is about 1% or less, and the films are thus very uniform (Fig. 2).

#### 4.6.2 Instrument cavity

In Appendix E we present a hemispherical model to compute contamination rates in the instrument cavity (Table 2). In a hemispherical model, the flux of water is incoming from the \(2\pi\) sr solid angle above the point under consideration and is independent of the hemisphere's radius. For a simple estimate we can thus ignore the much more complex geometry of the instrument cavity (Figs. 1 and 2), as long as the solid angle is filled with emitting surfaces at the same temperature.

For nominal operating temperatures and the glacial scenario we find: (i) If the NISP optics are initially free of ice, then they will stay free of ice. A surface in the NISP optics will effectively sublime 101 nm month\({}^{-1}\), since it is comparatively warm. (ii) The NISP optics can decontaminate themselves during the time between launch and the arrival at Lagrange point L2, unless they get initially contaminated with more than 100 nm per surface. (iii) The NISP detectors will accumulate a substantial 10 nm month\({}^{-1}\), as they are considerably colder than their environment. (iv) Any contamination on FoM1 will remain unchanged. (v) FoM2, M3, the dichroic, and in particular FoM3 will accumulate ice. (vi) The VIS detector effectively decontaminates itself, as it is much warmer than its environment. (vii) Actual contamination rates are highly sensitive to temperature changes as small as 1-2 K.

We note that the above statements about the NISP optics remaining free of ice only hold if the optics are exposed to the instrument cavity. In the as-built instrument this is not the case. The NISP optics, the filter wheel, and the grism wheel are encapsulated in a SiC box that has only very small venting holes. NISP itself is wrapped in MLI (Fig. 2), effectively forming a closed system with its own contamination dynamics.

#### 4.6.3 Industry forecast by Airbus Defence and Space (ADS)

ADS has performed a molecular contamination analysis of the PLM, using 3D geometric models and a distribution of various outgassing materials and molecular species. The expected water contamination ranges from 0.1 nm month\({}^{-1}\) for the telescope cavity to 1 nm month\({}^{-1}\) for the instrument cavity. This is about the same order of magnitude as the water exchange from sublimation in our stationary glacial model (see Table 2). We note that the ADS estimates are subject to the same uncertainties as outlined in Sects. 4.2 and 4.3, and could be a factor of a few (or more) higher or lower. More quantitative estimates of the actual uncertainty cannot be made with the data at hand.

#### 4.6.4 Ammonia contamination from thruster firings

In this subsection we deviate briefly from water ice. _Euclid_ carries 137.5 kg of pure hydrazine propellant (N\({}_{2}\)H\({}_{4}\)), sufficient for an L2 halo-orbit insertion, a six-year mission, a potential 1-2 year mission extension, and an end-of-life insertion into a heliocentric graveyard orbit (Racca et al. 2016).
Halo-orbit correction manoeuvres are carried out every four weeks during a reserved 6 h window (Euclid Collaboration: Scaramella et al. 2022). Thruster firings will in general contaminate a spacecraft through expansion of the supersonic flow in a vacuum (Chen et al. 2000; Dettleff & Grabe 2011; Lee 2017; Yargin et al. 2017). Some of _Euclid_'s hydrazine thrusters are shown in Fig. 5. Thales Alenia Space - who built _Euclid_'s SVM - have modelled _Euclid_'s thruster contamination and found it to be negligible, but no details could be communicated that would allow us to independently verify their conclusions. Therefore, here we make a simple worst-case estimate of the expected contamination, and confirm that it is negligible.

Hydrazine is a monopropellant - that is, it does not need an oxidiser - with the following two reactions when pushed through the catalytic bed of a thruster (Price & Evans 1968; Makled & Belal 2009),

\[3\,\mathrm{N_{2}H_{4}}\longrightarrow 4\,\mathrm{NH_{3}}+\mathrm{N_{2}}\qquad\mathrm{and} \tag{13}\]
\[4\,\mathrm{NH_{3}}\longrightarrow 2\,\mathrm{N_{2}}+6\,\mathrm{H_{2}}\,. \tag{14}\]

The first reaction is fast and exothermic and happens at the beginning of the catalyst bed, whereas the second reaction is slow and endothermic and occurs at the end of the catalyst bed. For thruster purposes the second reaction should be suppressed, that is, as much NH\({}_{3}\) as possible should be preserved to achieve a hot exhaust jet with high specific impulse (Pakdehi et al. 2019). The fraction of unspent NH\({}_{3}\) is controlled by the thruster design11.

Footnote 11: A comparatively cold but very gas-rich stream emerges if most NH\({}_{3}\) is spent, in which case the catalytic chamber serves as a gas producer for various different technical purposes.

\begin{table}
\begin{tabular}{l c c c}
\hline
Common to & \multicolumn{2}{c}{d\(z\)/d\(t\) (\(T_{\mathrm{nominal}}\)) [nm month\({}^{-1}\)]} & d\(z\)/d\(t\) (\(T_{\mathrm{warm}}\)) [nm month\({}^{-1}\)] \\
VIS and NISP & WMK & Sack \& Baragiola (1993) & WMK \\
\hline
M1 & \(-0.33\) & \(-1.1\) & \(-4.0\) \\
M2 & \(+0.13\) & \(+0.3\) & \(+1.1\) \\
FoM1 & \(+0.08\) & \(-0.16\) & \(-14\) \\
FoM2 & \(+1.4\) & \(+3.4\) & \(+0.50\) \\
M3 & \(+1.4\) & \(+3.4\) & \(-25\) \\
Dichroic & \(+1.4\) & \(+3.4\) & \(+0.50\) \\
\hline
NISP path & & & \\
\hline
NISP lenses & \(-101\) & \(-238\) & \(-89\) \\
Detector & \(+9.9\) & \(+27\) & \(+22\) \\
\hline
VIS path & & & \\
\hline
FoM3 & \(+3.6\) & \(+9.6\) & \(+9.3\) \\
Detector & \(-44\,000\) & \(-79\,000\) & \(-122\,000\) \\
\hline
\end{tabular}
\end{table} Table 2: Total contamination rates for the nominal and warm cases (Table 1).

For the worst-case L2 halo-orbit correction manoeuvre we assume the following: 2 kg of hydrazine are used to achieve a velocity change of \(\Delta v=0.5\) m s\({}^{-1}\), the latter being a worst-case assumption by _Euclid_'s flight-dynamics team; all NH\({}_{3}\) is preserved, that is a maximum of 1.42 kg of NH\({}_{3}\) is produced; and the entire rarefied backflow (Fig. 14) from the thruster's jet gets deposited uniformly on all _Euclid_ surfaces.

To estimate the amount of NH\({}_{3}\) that could contaminate _Euclid_, we must determine the fraction of mass contained in the backflow. We digitised12 the measurements in figure 28 of Dettleff & Grabe (2011), showing the particle flux density in the supersonic flow, and reproduce it in our Fig. 15.
Footnote 12: Using WebPlotDigitizer (Rohatgi, 2022).

We then approximate the flow as

\[\log_{10}\left[\frac{p(\theta)}{1\,\mathrm{m^{-2}\,s^{-1}}}\right]=\left\{\begin{array}{ll}-2.30\,|\theta|+24.25\,,&\mathrm{for}\ |\theta|\leq 0.75\,\pi\ (135^{\circ})\,,\\ 18.85\,,&\mathrm{for}\ 0.75\,\pi<|\theta|\leq\pi\,.\end{array}\right.\]

Here, \(\theta\) is the ejection angle with respect to the nozzle's axis and \(p(\theta)\) is the particle flux per area, measured in m\({}^{-2}\) s\({}^{-1}\). Assuming radial symmetry around the nozzle's ejection axis, we integrate over the particle flux density and find that the backflow (\(|\theta|>\pi/2\)) contains 0.65% of the total mass ejected. This translates to 9.2 g of NH\({}_{3}\) in the backflow. We ignored any mass-segregation effects (Price & Evans, 1968), that is NH\({}_{3}\) and N\({}_{2}\) are homogeneously distributed in the flow.

Approximating _Euclid_ with a cylinder of 4.5 m height and 3.1 m width, it would have a surface area of 59 m\({}^{2}\), of which 1.2 m\({}^{2}\) are for the M1 mirror. Assuming NH\({}_{3}\) is uniformly distributed over this surface, M1 would then accumulate 0.19 g of NH\({}_{3}\). Solid NH\({}_{3}\) has a density of 0.9 g cm\({}^{-3}\) at 100 K (Satorre et al., 2013), similar to the density of crystalline water ice; the NH\({}_{3}\) layer would be 169 nm thick. Brown & Bolina (2007) show that the desorption rate of solid NH\({}_{3}\) in a vacuum at 100-120 K is 6-8 orders of magnitude higher than that of water ice (see also Zhang & Paige, 2009). Hence this layer of NH\({}_{3}\) would sublime in about 4 h at 110 K (see also Fig. 12). That is consistent with Dawes et al. (2007) and references therein, who report the occurrence of multilayer desorption of NH\({}_{3}\) at temperatures of 100 K and above.

In reality, not all of the backflow will deposit on _Euclid_, and only a very small fraction will enter the telescope aperture, which faces away from the thrusters' nozzle axes (see Fig. 15). Any NH\({}_{3}\) deposits from orbit maintenance will have sublimed before science operations resume. N\({}_{2}\) and H\({}_{2}\) ices from hydrazine breakdown cannot form on _Euclid_ due to its comparatively high temperature. We have not considered unspent hydrazine, which may constitute 1% of the mass in the exhaust (Chen et al., 2000), but we note that hydrazine is dissociated by UV photons with wavelengths shorter than 250 nm (Vaghjiani, 1993).

Figure 14: Schematic view of the flow regime of a typical small chemical or cold-gas thruster on a spacecraft, expanding into vacuum. The backflow might contaminate the spacecraft. Figure credit: Dettleff & Grabe (2011).

Figure 15: Measured particle flux density across the flow of a cold-gas thruster. Data taken from Dettleff & Grabe (2011). The red line shows our piece-wise fit, symmetric around \(\theta=0\), and extrapolated to the edges of the \([-\pi,+\pi]\) range.
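The arithmetic chain above can be reproduced in a few lines (our own sketch; the backflow fraction is the 0.65% from the fit, and the NH\({}_{3}\) yield follows from the stoichiometry of Eq. (13)):

```python
# Sketch: worst-case NH3 contamination of M1 from one halo-orbit manoeuvre.
import numpy as np

m_hydrazine = 2.0                        # kg of N2H4 burned (worst case)
m_NH3 = m_hydrazine * 68.12 / 96.14      # Eq. (13): 3 N2H4 (96 g) -> 4 NH3 (68 g)
f_back = 0.0065                          # backflow mass fraction from the fit

# Euclid approximated as a cylinder, 4.5 m high and 3.1 m wide:
A_euclid = np.pi * 3.1 * 4.5 + 2.0 * np.pi * (3.1 / 2.0)**2   # ~59 m^2
A_M1 = 1.2                               # m^2
m_on_M1 = m_NH3 * f_back * A_M1 / A_euclid

rho_NH3 = 900.0                          # kg/m^3 for solid NH3 at 100 K
layer = m_on_M1 / (rho_NH3 * A_M1)       # uniform layer thickness on M1
print(f"NH3 produced: {m_NH3 * 1e3:.0f} g; in backflow: {m_NH3 * f_back * 1e3:.1f} g")
print(f"on M1: {m_on_M1 * 1e3:.2f} g -> {layer * 1e9:.0f} nm layer")
# -> ~1417 g NH3, ~9.2 g in the backflow, ~0.19 g on M1, a layer of ~170 nm,
#    matching the ~169 nm quoted above to within rounding.
```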
### Decontamination procedure

The thermal decontamination scheme for _Euclid_ foresees temperatures of 140-270 K, using heaters and partial Sun exposure. The thermal cycle alone takes about 18 days. During the first three days, the spacecraft's decontamination heaters are turned on with full power demand. Because of _Euclid_'s compact design, it can generate only a limited amount of heating power from its on-board solar cells. Therefore, on the fourth day, the telescope's solar aspect angle13 will be reduced to 45\({}^{\circ}\). This allows the external telescope baffle to reach a temperature of up to 200 K. Only a small part of the baffle is directly exposed to the Sun, but since it is made of aluminium, which conducts heat easily, the parts remaining in the shadow will also decontaminate. The demanded heater power is reduced during this time, acknowledging the reduced effective area of the solar panel. Footnote 13: The solar aspect angle is the angle between _Euclid_'s viewing direction and the Sun. During routine operations it is kept between 87\({}^{\circ}\)-120\({}^{\circ}\) to maximise thermal stability (Euclid Collaboration: Scaramella et al., 2022).

Figure 14: Schematic view of the flow regime of a typical small chemical or cold-gas thruster on a spacecraft, expanding into vacuum. The backflow might contaminate the spacecraft. Figure credit: Dettleff & Grabe (2011).

Figure 15: Measured particle flux density across the flow of a cold-gas thruster. Data taken from Dettleff & Grabe (2011). The red line shows our piece-wise fit, symmetric around \(\theta=0\), and extrapolated to the edges of the \([-\pi,+\pi]\) range.

The telescope stays at full decontamination power for two days. The sublimation itself takes only a few minutes once maximum temperatures are reached: 0.23 \(\mu\)m and 3.6 \(\mu\)m of ice sublimate per second at 200 K and 220 K, respectively. While the ice may evaporate rapidly, additional time is required to give sublimates a chance to find their way out of the cavities and leave the spacecraft. For example, according to our model only 6% of the water molecules evaporating in the telescope cavity escape on direct paths; the rest will undergo numerous redeposition and sublimation cycles before eventually escaping. Furthermore, the high decontamination temperatures result in a decreased sticking coefficient (Kossacki et al., 1999; Batista et al., 2005; Gibson et al., 2011; Brieda et al., 2022) and a massively increased evaporation rate. Consequently, the mean free path length might become comparable to - or even smaller than - the size of the spacecraft, in which case pressure effects would have to be taken into account for more accurate evacuation-time estimates.
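The sublimation rates quoted above can be reproduced from first principles. The following sketch combines the Murphy & Koop (2005) saturation vapour pressure over hexagonal ice with the Hertz-Knudsen equation, assuming an evaporation coefficient of unity and an ice density of 930 kg m\({}^{-3}\):

```python
import numpy as np

def p_ice(T):
    """Saturation vapour pressure over hexagonal ice [Pa],
    Murphy & Koop (2005); valid above ~110 K."""
    return np.exp(9.550426 - 5723.265 / T + 3.53068 * np.log(T)
                  - 0.00728332 * T)

def recession_rate(T, rho=930.0):
    """Free-sublimation recession rate [m/s] from the Hertz-Knudsen equation,
    assuming an evaporation coefficient of 1 and ice density rho [kg m^-3]."""
    kB = 1.380649e-23                     # Boltzmann constant [J/K]
    m = 18.015 * 1.66054e-27              # mass per H2O molecule [kg]
    flux = p_ice(T) / np.sqrt(2.0 * np.pi * m * kB * T)  # molecules m^-2 s^-1
    return flux * m / rho

for T in (200.0, 220.0):
    print(f"T = {T:.0f} K: {1e6 * recession_rate(T):.2f} um of ice per second")
# -> about 0.23 um/s at 200 K and 3.6 um/s at 220 K, matching the rates above.
```

Both values agree with the rates quoted above and illustrate the steep temperature dependence that the decontamination scheme exploits.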
Six days after the beginning of the decontamination, the spacecraft is restored to a nominal solar aspect angle, and a controlled cool-down begins. The latter takes approximately twelve days, keeping optical elements warmer than their surroundings so that any water residuals condense on colder surfaces. A series of recalibration steps is executed as soon as the instruments have reached operational temperatures. A possible optical realignment and further on-sky recalibrations can only occur once the telescope optics are stable again. The duration of a full decontamination cycle including all recalibrations is expected to last up to 25 days and potentially longer.

We expect essentially all superficial ice to evacuate from the telescope cavity during a full decontamination cycle. The situation for the instrument cavity is different, as the opening to the telescope cavity is comparatively small and the geometry of the instrument cavity complex. In particular in NISP, which is mostly enclosed in MLI, water has reduced escape capabilities and could eventually recontaminate the detector. Given a clear indication of ice on the NISP detectors - for example modulated quantum efficiency, spectral absorption, or structures in the flat fields, as elaborated in Holmes et al. (2016) and in our second paper - a partial decontamination of only the NISP detectors could be considered. This would mean a smaller thermal disturbance to the telescope than a full decontamination, yet preliminary thermal considerations indicate that 3-4 days would still be required. Other than the NISP detectors, surgical decontamination of individual components is not possible with _Euclid_, as the mounted optics and the PLM baseplate are fully constructed in SiC (see also Fig. 15). Any heat applied locally to an optical element would quickly propagate within the instrument cavity to other areas due to the high thermal conductivity of SiC, thus introducing a global thermal state change. Fine temperature control of optical elements in _Euclid_ is not possible, as most heater controllers operate in an on-off fashion, providing full power when on. To mitigate uncertainties in contamination, _Euclid_ will undergo an immediate post-launch thermal decontamination, being kept at 200-273 K for four days. We expect that all water trapped on surfaces will desorb, and that a large fraction of it will evacuate the telescope and instrument cavities.

## 5 Results from thermal vacuum tests

### Vacuum tests of the PLM

In 2021 the _Euclid_ PLM underwent extensive vacuum tests for 60 days at a pressure of \(10^{-6}\) mbar, simulating space conditions in a vacuum chamber at the Centre Spatial de Liège (CSL), Belgium. To cool down to its operational temperatures, the PLM must see a colder object, provided by a liquid-helium shroud, which itself sits in a nitrogen shroud. Once the chamber was evacuated, everything was kept at ambient temperature for 4.5 days for initial outgassing of all components in the chamber. _Euclid_ was then cooled down and kept at operating temperatures for 30 days. Afterwards, a full decontamination was run (11 days), followed by another cool-down to operating temperatures (9 days) before the final warm-up. Witness samples for non-volatile organic contaminants were placed inside the PLM's instrument and telescope cavities. These contaminants are heavier than water and outgas at higher temperatures. No organic contamination could be found after the tests. This confirms the efficient bake-out of all components during construction, in a vacuum and at temperatures of \(80\,^{\circ}\mathrm{C}\) to \(120\,^{\circ}\mathrm{C}\), much higher than _Euclid_'s decontamination temperatures. While heavier organic compounds might still be dissolved in some materials after bake-out, they are not expected to outgas in flight at cryogenic temperatures, nor during the thermal decontamination that reaches at most room temperature. We are thus confident that _Euclid_ will not be contaminated by organic species; water ice remains the only concern. No signatures of contamination - from water ice or otherwise - were detected in the test data taken by the cold PLM instruments, for example NISP flat fields and images of an artificial star. However, the test was short compared to _Euclid_'s in-orbit life; slowly growing ice films could simply not have had enough time to become thick enough for detection during the test. Moreover, the in-flight calibration observations in zero gravity and with low background, with the optics at full performance, will be much more powerful in detecting contamination, as we show in our second paper. During the vacuum tests the same type of witness samples were placed inside the shrouds. A residual gas analyser faced the helium shroud MLI from nearby, as it must sample the contamination plume from a close distance. No emission from the helium shroud was detected, and the witness samples remained clean. While these measurements did not probe the PLM, they show that the vacuum tests were nominal from a contamination perspective.

### Vacuum tests of the fully assembled spacecraft

In 2022 the fully integrated spacecraft was tested in space conditions for another month at Thales Alenia Space in France (TAS-F).
At this point in time, with the PLM having been handed over by ADS to TAS-F, the entrance aperture of the PLM was sealed to avoid particulate contamination of interior surfaces. No measurement probes or witness samples were allowed inside the PLM anymore. A QCM placed in the chamber showed no excess contamination during the test in comparison to a reference blank run without the flight hardware. For more details about the vacuum tests see for example Poidomani et al. (2020).

## 6 Conclusions and outlook

This paper is the first in a series of two about water-ice contamination processes in spacecraft, and _Euclid_ specifically. To the best of our knowledge, this is the first presentation of the subject from a first-principles perspective. We review the outgassing and contamination records of a dozen different spacecraft and instruments, and we conclude that contamination is a highly dynamic and very long-lived process. The dominant reservoir of water in spacecraft such as _Euclid_ is the MLI used for thermal insulation. In worst-case conditions, it will take years for the MLI to fully dry up. Consequently, we expect molecular contamination to be active throughout _Euclid_'s six-year mission duration (Sect. 2), with a forecast of low water contamination overall, albeit with considerable uncertainty.

To better understand the contamination process of the optical surfaces themselves, and ultimately the performance impact on the data (evaluated in our second paper), we have reviewed the current knowledge of the creation of thin ice films on different substrates. We find that the structure and topography of the ice films are highly dependent on the substrate material. Most of the coating materials used for _Euclid_'s optical surfaces are not disclosed to us by the manufacturers, hence we cannot make accurate forecasts about their optical properties - such as scattering losses - when iced. Even if the coatings were known, including the exact crystalline or amorphous atomic structure exposed at their surfaces, current theories are not able to reliably predict the growth and structure of deposited ice films (Sect. 3).

Quantitative estimates of the in-flight outgassing and contamination rates remain rather uncertain. At _Euclid_'s typical temperatures, even small changes of a few kelvin accelerate the diffusion speed of water in the MLI, and the subsequent sublimation flux, by a factor of a few. There is also a strong dependence on the MLI's chemical composition and molecular structure, and there are uncertainties when extrapolating the outgassing rates measured at room temperature to _Euclid_ temperatures. Small deviations of _Euclid_'s in-flight temperatures from their pre-launch expectations can therefore have a considerable impact on the actual contamination rates (Sects. 4.2 and 4.3). The matter is complicated further since crystalline ice layers on top of outgassing substrates may act as diffusion barriers (Sect. 4.2). Thus, at low temperatures, existing thin ice films on _Euclid_'s non-optical surfaces might actually be beneficial. However, _Euclid_ is only 10-20 K below the point of 140-150 K where sublimation and diffusion accelerate rapidly in an exponential fashion. Forecasts of the absolute amount of contamination are therefore hard; they also require full 3D modelling of the emitting and contaminating surfaces, as was done for example for JWST in Brieda et al. (2022), which is well beyond the scope of this paper.
To estimate the contamination dynamics from a sublimation perspective alone, we assumed that all surfaces in _Euclid_ are already iced and that these ice layers act as effective diffusion barriers, such that diffusion can be neglected. We could then compute the water exchange rates between surfaces using a semi-empirical model of the sublimation flux at cryogenic temperatures, without the uncertainties inherent to direct material outgassing. We find typical water-contamination rates of up to 10 nm month\({}^{-1}\) for the various optical surfaces (Table 2). The coldest surfaces in _Euclid_ are at greatest risk of contamination because they have the lowest sublimation fluxes. The NISP detectors at 95 K will act as a cold trap for water vapour (see also Holmes et al., 2016), which does not have many escape paths from NISP's MLI enclosure (Sects. 4.5 and 4.6). Fortunately, the NISP detectors have separate heaters, and a partial decontamination can be considered if necessary, instead of a full decontamination of the entire spacecraft. Our contamination rates estimated from sublimation alone are comparable to the water contamination rates estimated by ADS. The latter are computed directly from diffusion outgassing and range from 0.1 nm month\({}^{-1}\) in the telescope cavity to 1.0 nm month\({}^{-1}\) in the instrument cavity. These estimates are uncertain by a factor of a few or more, as argued in Sects. 4.2 and 4.3.

Organic contamination is not expected for _Euclid_, owing to extensive bake-out at high temperatures, which was confirmed during the on-ground thermal vacuum tests (Sect. 5). The Gaia and XMM-_Newton_ OM experiences, though, caution us that considerable in-flight contamination is not necessarily anticipated by on-ground tests, and suitable calibration and decontamination plans must be in place for _Euclid_'s operational phase.

In the second paper, we examine the optical effects of water ice on _Euclid_'s spectrophotometric data. We look at absorption, interference, scattering, polarisation, apodisation, and phase shifts, and investigate _Euclid_'s sensitivity to these effects. Our estimates are based on theoretical calculations as well as dedicated optical experiments on contaminated mirror-coating samples. Overall, we find _Euclid_ in a great position to detect even very small amounts - a few to a few tens of nanometres - of water ice in its optical path, using, among others, regular observations of a stellar self-calibration field. This sensitivity, however, also implies that already small amounts of ice must be tracked and accounted for in the data analysis. Decontamination must occur when our calibration requirements cannot be met anymore by the corrected data.

For future missions where contamination could be relevant and which cannot be decontaminated easily, the installation of QCMs with suitable viewing angles near critical surfaces would be beneficial. QCMs are capable of detecting even fractional monolayers of water ice and other contaminants, thus providing accurate real-time knowledge of actual contamination levels. This was demonstrated successfully by the MSX experiment (Uy et al., 1998, 2003; Wood et al., 2003).

###### Acknowledgements.

The authors at MPIA acknowledge direct funding by the German DLR under grant numbers 50 QE 2003 and 50 QE 2303, and the support of our librarian Simons Kreonsetter for providing the full texts of numerous non-astronomical references. Most figures in this paper were prepared with Matplotlib (Hunter, 2007).
The Euclid Consortium acknowledges the European Space Agency and a number of agencies and institutes that have supported the development of _Euclid_, in particular the Academy of Finland, the Agenzia Spaziale Italiana, the Belgian Science Policy, the Canadian Euclid Consortium, the French Centre National d'Études Spatiales, the Deutsches Zentrum für Luft- und Raumfahrt, the Danish Space Research Institute, the Fundação para a Ciência e a Tecnologia, the Ministerio de Ciencia e Innovación, the National Aeronautics and Space Administration, the National Astronomical Observatory of Japan, the Nederlandse Onderzoekschool Voor Astronomie, the Norwegian Space Agency, the Romanian Space Agency, the State Secretariat for Education, Research and Innovation (SERI) at the Swiss Space Office (SSO), and the United Kingdom Space Agency. A complete and detailed list is available on the _Euclid_ web site ([http://www.euclid-ec.org](http://www.euclid-ec.org)).
2306.09106
Audio Tagging on an Embedded Hardware Platform
Convolutional neural networks (CNNs) have exhibited state-of-the-art performance in various audio classification tasks. However, their real-time deployment remains a challenge on resource-constrained devices like embedded systems. In this paper, we analyze how the performance of large-scale pretrained audio neural networks designed for audio pattern recognition changes when deployed on a hardware such as Raspberry Pi. We empirically study the role of CPU temperature, microphone quality and audio signal volume on performance. Our experiments reveal that the continuous CPU usage results in an increased temperature that can trigger an automated slowdown mechanism in the Raspberry Pi, impacting inference latency. The quality of a microphone, specifically with affordable devices like the Google AIY Voice Kit, and audio signal volume, all affect the system performance. In the course of our investigation, we encounter substantial complications linked to library compatibility and the unique processor architecture requirements of the Raspberry Pi, making the process less straightforward compared to conventional computers (PCs). Our observations, while presenting challenges, pave the way for future researchers to develop more compact machine learning models, design heat-dissipative hardware, and select appropriate microphones when AI models are deployed for real-time applications on edge devices. All related assets and an interactive demo can be found on GitHub
Gabriel Bibbo, Arshdeep Singh, Mark D. Plumbley
2023-06-15T13:02:41Z
http://arxiv.org/abs/2306.09106v1
# Audio Tagging on an Embedded Hardware Platform ###### Abstract Convolutional neural networks (CNNs) have exhibited state-of-the-art performance in various audio classification tasks. However, their real-time deployment remains a challenge on resource-constrained devices like embedded systems. In this paper, we analyze how the performance of large-scale pretrained audio neural networks designed for audio pattern recognition changes when deployed on a hardware such as Raspberry Pi. We empirically study the role of CPU temperature, microphone quality and audio signal volume on performance. Our experiments reveal that the continuous CPU usage results in an increased temperature that can trigger an automated slowdown mechanism in the Raspberry Pi, impacting inference latency. The quality of a microphone, specifically with affordable devices like the Google AIY Voice Kit, and audio signal volume, all affect the system performance. In the course of our investigation, we encounter substantial complications linked to library compatibility and the unique processor architecture requirements of the Raspberry Pi, making the process less straightforward compared to conventional computers (PCs). Our observations, while presenting challenges, pave the way for future researchers to develop more compact machine learning models, design heat-dissipative hardware, and select appropriate microphones when AI models are deployed for real-time applications on edge devices. All related assets and an interactive demo can be found on GitHub1. Footnote 1: [https://github.com/gbibbo/ai4s-embedded](https://github.com/gbibbo/ai4s-embedded) Gabriel Bibbo, Arshdeep Singh, Mark D. Plumbley Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK, {g.bibbo, arshdeep.singh, m.plumbley}@surrey.ac.uk Audio Event Detection, Pre-trained Audio Neural Networks, Edge Computing, Google AIY Voice Kit, Raspberry Pi. ## 1 Introduction Artificial Intelligence (AI) based frameworks have been successfully employed in many real-life applications. For example, AI-based systems achieve state-of-the-art performance in problems including audio classification [1] and speech recognition [2], and can be used in various applications such as assisted living, surveillance, healthcare and activity recognition. However, the performance of such AI-based software systems, when deployed on edge devices, has not been explored much, despite the advancement of compact devices such as embedded systems and micro-controllers. Even though AI has taken impressive strides in academia in the past couple of years, a gap still exists between theoretical advancements and practical applications. Therefore, this paper aims to analyse the performance of AI models designed for audio classification when such models are deployed onto a compact and portable hardware device. We explore the feasibility and challenges of deploying convolutional neural networks (CNNs) on edge devices like the Raspberry Pi [3]. We use pre-trained audio neural networks (PANNs) [1, 4] for classifying audio events. Despite developments in audio tagging since PANNs, we use this model for its deployment simplicity on Linux systems and its promising performance in recognising audio activities. We assess the performance of PANNs on an edge device by capturing audio using different microphones, aiming to quantify the effect of microphones on the performance.
Additionally, we identify difficulties related to device temperature and audio signal volume that affect the performance of PANNs on the edge device. We also encounter challenges in installing the PANNs code on a Raspberry Pi with the Raspbian operating system. Unlike installations on conventional computers (PCs), we require specific libraries and specific compilations for the Raspberry Pi's processor architecture. We find that running AI algorithms can significantly increase the temperature of the CPU in the Raspberry Pi after a few minutes of operation. To prevent overheating, the Raspberry Pi automatically reduces the CPU clock speed, resulting in an increased inference latency. To summarise our contributions, we aim to answer the following questions: * What are the challenges in deploying AI models on edge devices, particularly the Raspberry Pi? * How does device temperature affect the inference latency? * How do microphone quality and volume impact PANNs performance in recognising audio events on an edge device? We hope our work shines a light on existing and emerging challenges for the detection and classification of acoustic scenes and events (DCASE) research community, promoting discussion on deploying advanced algorithms in real-world settings. Despite the hurdles, we envision our work as a practical guide for deploying similar systems on embedded devices, aiming to ease the path for their broader adoption. The rest of the paper is organised as follows. Section 2 introduces some background on assisted living applications and edge devices. Section 3 presents the various components used in developing the hardware-based demonstration and the objectives of the experiments. Next, the experimental setup is explained in Section 4. Section 5 presents the experimental analysis. Finally, Section 6 discusses the issues encountered in development and Section 7 concludes the paper. ## 2 Related Work ### Everyday ambient assisted living using audio DCASE frameworks have shown significant advances in applications including the monitoring of elderly or dependent individuals [5, 6, 7] and surveillance [8, 9, 10]. Recognizing certain interior sounds has proven beneficial in multiple surveillance contexts, particularly those related to human behaviour [5]. This becomes practically important when integrated into smart homes for ambient assisted living [6], to aid the elderly or individuals with disabilities in their daily lives, improving their overall quality of life. Furthermore, technological progress, particularly the advent of the Internet of Things (IoT), has enabled these DCASE algorithms to be deployed in real-time wireless acoustic sensor networks [10]. This has allowed for innovative projects such as CIRDO [11] and homeSound [12]. These initiatives focus on home safety, enabling medical staff to remotely monitor the status of dependent individuals using a decentralised intelligence architecture. Moreover, detecting sounds in outdoor environments has several applications, including outdoor surveillance [9], traffic noise mapping [13] and soundscape modelling [14]. ### A brief overview on edge devices and their challenges While developing real-world applications as explained previously, it is crucial to understand the landscape of available hardware options for deployment, in order to optimize cost, hardware size, and the software used in the development stage.
One of the common solutions is to use edge devices, which may offer a promising platform for DCASE algorithms in real-time applications due to their light weight, small size, low energy consumption, and affordability [15, 16, 17]. However, there are several challenges in deploying resource-hungry AI models such as CNNs on resource-constrained edge devices. These challenges include the limited memory and compute of edge devices [18], hardware heterogeneity, software fragmentation, and the necessity of co-optimization and collaboration across different layers of the development and deployment stack. Additionally, their operation on small, low-capacity batteries imposes further restrictions [19]. Therefore, it is important to understand the limitations of the edge devices used to deploy AI-based software in real life. ## 3 System Requirements and Functionality ### Hardware used for experimentation For our design, we require the ability to handle real-time processing tasks, such as audio monitoring, and to allow complex digital signal processing to be performed on the device itself, minimizing the need to transmit large amounts of audio data for centralized computation. With these requirements in mind, at the core of our system is the Raspberry Pi 4 [3] processor, which is a 64-bit quad-core Cortex-A72 (ARM v8) 1.8 GHz system on chip, with 4 GB RAM, 64 GB flash storage, USB I/O, and Wi-Fi connectivity. The hardware platform provides a range of AI libraries on its Linux-based operating system and community support, and is easy to use, requiring minimal software code adaptation. During our quest for suitable hardware, we also identified the Google AIY Voice Kit [20], an AI project designed for voice detection, which is closely aligned with our requirements. The kit encourages hands-on learning about artificial intelligence and programming in an accessible manner and was originally created to facilitate the prototyping of voice AI applications on the Raspberry Pi. We adapt the AIY Voice Kit hardware and software package to build an audio tagging framework. The AIY Kit and its various components are shown in Figure 1 (a). The various components, as shown in Figure 1 (b), include a Voice HAT (Hardware Attached on Top), an ICS-43432 stereo microphone optimized for speech detection, cables and a speaker, all housed in an environmentally friendly cardboard box [20]. ### Software used for experimentation **Google AIY Voice Kit software:** The base operating system in the Google AIY Voice Kit is Raspbian, which comes preinstalled with a default image. Raspbian is an open-source, Linux-based operating system specifically designed for Raspberry Pi hardware. It consumes few computational resources and gives control over the various functionalities that the kit offers: integration with the push-button control libraries (both the press action and the LED light), the speaker and microphone, as well as text-to-speech capabilities. **Audio tagging software:** For recognising audio events in the surroundings, we use the PANNs-based AI4S demonstration software [4], a software suite that includes a graphical interface for visualising and assigning confidence/probability values corresponding to detected audio events. The AI model in the software comprises a pre-trained convolutional neural network (PANNs) [1] designed for audio pattern recognition. We use the CNN9 model from PANNs, which achieves a mean average precision (mAP) of 0.37 and has 4.96 million parameters. This PANNs model has been trained on AudioSet, a large-scale dataset of approximately 2 million audio clips with 527 distinct audio event classes. PANNs take as input the log-mel spectrogram of sound signals. We apply certain usability modifications to the code demonstration to better align it with our project's specific needs, leaving the core audio event detection algorithm unaltered. The graphical user interface (GUI) shows the top few predicted audio event classes with their confidence values.
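For illustration, a minimal sketch of such an audio tagging call is shown below. It uses the open-source `panns_inference` package rather than our adapted demonstration code, and the package's default checkpoint is Cnn14 rather than the CNN9 model used here; the audio file name is a placeholder.

```python
import numpy as np
import librosa
from panns_inference import AudioTagging, labels  # pip install panns-inference

# PANNs models expect mono audio sampled at 32 kHz.
audio, _ = librosa.load("chunk.wav", sr=32000, mono=True)

model = AudioTagging(checkpoint_path=None, device="cpu")  # downloads a default checkpoint
clipwise_output, _ = model.inference(audio[None, :])      # shape: (1, 527)

probs = clipwise_output[0]
for idx in np.argsort(probs)[::-1][:7]:                   # Top-7 predicted classes
    print(f"{labels[idx]:30s} {probs[idx]:.3f}")
```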
### Functionality of audio tagging system on hardware The audio tagging system on hardware is controlled by a push button, without the need for a keyboard, mouse, or screen. Upon booting, it automatically connects to the network and plays a welcome audio message: _This is the AI4S demo. Press the button to start recording_. When the push button (ON/OFF) is pressed, an LED lights up and sound recording begins in a buffer. Then, the recorded signal is processed by the audio tagging software explained previously to generate and announce the top few predicted audio classes corresponding to the recorded audio signal in the buffer. The audio tagging system on hardware hosts a webpage accessible from any device on the same local network for debugging and remote access. It provides a real-time view of both the CPU temperature, as reported by the Raspberry Pi, and a log of detected audio events, with a ranking of the most frequently appearing sounds.

Figure 1: Google AIY Voice Kit: (a) assembled; (b) components.

Additionally, the system has the capability to send "email alerts" when specific sounds, such as a fire alarm or a running water tap, are detected. This feature offers an additional layer for real-time safety and can be personalised depending on the use case. ### What do we want to measure? With the audio tagging system on hardware or software, we aim to measure or compare the following: **(a) Hardware v/s software performance:** The performance of the Raspberry Pi based audio tagging system and the software-based audio tagging system which runs only on a computer; **(b) Performance v/s volume of audio signal:** The effect of audio signal volume on performance; **(c) Performance v/s microphone quality:** The effect of different microphones on the performance; **(d) Temperature v/s latency:** The temperature and latency in the Raspberry Pi based audio tagging system. ## 4 Experimental setup ### Dataset preparation & Experimental environment For experimentation, we selected five audio categories which are closely aligned with the objectives of the AI4S project on wellbeing and sound at home. The five audio categories are "Speech", "Baby cry", "Water", "Fire alarm" and "Music" from AudioSet [21]. Next, we created continuous two-minute-long audio clips by concatenating 10 audio examples for each class. For example, we selected 10 examples of alarm sounds and concatenated them to create 2 minutes of alarm sounds. After obtaining the two-minute concatenated audio file, we removed silence and normalized the volume. In the end, we have five different audio recordings, each 2 minutes long, one for each class. **Varying playback volume**: We prepare 50, 60, and 70 dB volume levels for each audio recording to measure the impact of volume on system performance. These volumes were determined using the N05CC sound level meter [22] in dBA mode with the drivers at a distance of one meter from the microphone. The dBA mode is used for general sound level measurements.
We choose these three different volume levels based on the potential range of sound events encountered in home or office environments [23]. **Acoustic Environment and audio playback**: We perform experiments in the Audio Booth of the Centre for Vision, Speech, and Signal Processing (CVSSP) at the University of Surrey, UK. The experimental setup is shown in Figure 2. The audio booth is an isolated space without any interference from outside noise and has a base noise level of 34 dB sound pressure level inside the room. For audio playback, we simulate a wide-frequency sound source located one meter away from the microphones by utilizing a Genelec 8020B Monitoring System loudspeaker and a 7060A Active Subwoofer at a single point. ### _Systems used to compare_ We use three distinct scenarios to compare the performance of the PANNs model in classifying the five audio recordings on hardware and software: **1. Google AIY Mic + Rasp_Pi**: We utilize the assembled device, equipped with the low-cost AIY ICS-43432 [24] microphone, and a Raspberry Pi running the audio tagging software. For audio recording playback at different volumes, the audio events are played through loudspeakers. **2. Yeti Mic + PC**: We utilize a Logitech Yeti USB microphone to capture the surrounding sounds and the audio tagging software running on a dedicated PC. As in scenario (1) above, the audio events at different volumes are played through loudspeakers. **3. PC with recorded audio**: We evaluate the performance of the audio tagging software directly on the computer (PC) without involving any microphone or playback. The audio recordings at different volume levels are stored on the computer and used directly to compute performance. ### _Performance metrics_ To evaluate the performance of the audio tagging software on the Raspberry Pi or on the computer (PC), we computed the confidence or probability scores of a specific sound event using the predictions obtained for every two seconds of a given audio recording. Then, we compute the mean and standard deviation of the probabilities for the specific audio event. Also, we report the percentage of occurrence of the specific audio event in the given audio recording. For this, we use the Top-7 predictions from every two seconds of the whole audio clip. We consider a specific event to have occurred in a two-second chunk if it is present as one of the Top-7 predictions; otherwise it is considered absent. In the end, we count the total number of times the specific event occurred over the whole audio recording and report its percentage of occurrence.
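A compact sketch of these metrics, assuming the per-chunk confidences have been collected into a NumPy array (names are illustrative):

```python
import numpy as np

def event_metrics(probs, class_idx, k=7):
    """probs: (n_chunks, n_classes) array with one row of confidences per
    two-second chunk. Returns the mean and standard deviation of the target
    class confidence and its percentage of occurrence in the Top-k."""
    target = probs[:, class_idx]
    topk = np.argsort(probs, axis=1)[:, -k:]  # Top-k class indices per chunk
    occurrence = 100.0 * np.mean(np.any(topk == class_idx, axis=1))
    return target.mean(), target.std(), occurrence
```

For a two-minute recording evaluated every two seconds, `probs` has 60 rows.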
## 5 Experimental analysis Table 1 shows confidence scores from the different systems explained in Section 4.2 at varying audio signal playback volumes. Our experiments led to the following observations: **Hardware v/s software performance:** The PC with recorded audio (scenario (3)) shows an improvement of at least 5 percentage points in confidence and a minimum of 3 percentage points improvement in Top-7 predictions compared to both the Raspberry Pi-based system (scenario (1)) and the Yeti microphone-based PC system (scenario (2)) across all audio classes and audio volume levels, except for the "Water" class at 70 dB. **Impact of playback volume:** Generally, the performance obtained using the different systems increases with an increase in the volume of the audio signals for all sound classes except for "Water" sounds. We perceptually find that increasing the volume for "Water" makes the audio recording more noisy. We also confirm using the Audacity audio software [25] that the "Water" sound at 70 dB saturates, resulting in audio clipping that may cause the performance degradation compared to the lower-volume sounds.

Figure 2: Experimental setup.

**Impact of different microphones:** We find that the AIY microphone outperforms the Logitech Yeti in the detection of "Baby cry" and "Speech" sounds for the majority of the sound volume levels. It is worth mentioning that the AIY microphone is specifically designed for spoken voice detection [24], which may explain why it is able to detect speech and baby cry sounds better than the Logitech microphone. On the other hand, the Logitech Yeti microphone performs better than the Google AIY microphone for the other sounds. **Performance of the audio tagging system:** The audio tagging system using the PANNs model on the Raspberry Pi or PC shows significantly better performance in detecting "Music" and "Speech" sounds compared to other sounds. This might be due to the pre-trained PANNs model being trained with more speech and music audio examples than examples of the other sounds [21]. ## 6 Discussion: Development Issues **Installation in ARM-based CPUs:** Deploying the PANNs code on a Raspberry Pi with the Raspbian operating system is not a straightforward process compared to a conventional computer. On the Raspberry Pi, specific library versions need to be used, and in some cases, they need to be compiled specifically for the ARM architecture of the Raspberry Pi. We provide a GitHub repository containing a detailed step-by-step guide on how to install the demonstration correctly on the Raspberry Pi, to promote easier usability and deployment of hardware-based AI frameworks. **Temperature and latency of Raspberry Pi:** We find that the temperature of the CPU in the Raspberry Pi increases when we run the AI algorithms on it. To prevent overheating, the Raspberry Pi has an automatic CPU clock control mechanism that slows down the processor whenever the temperature gets too high, resulting in increased inference latency. Figure 3 shows the temperature and latency in the Raspberry Pi when the PANNs model runs for several minutes. We observe that there is a rise in temperature of more than 25\({}^{\circ}\)C after 14 minutes of continuous operation. After 8 minutes of running the PANNs model, the CPU temperature stabilises around 79\({}^{\circ}\)C. Meanwhile, as the temperature increases, the average inference latency also increases from approximately 0.5 s to 0.6 s. To mitigate this effect, it is important to incorporate heat sinks or a ventilation system in the Raspberry Pi hardware container to provide sufficient cooling.
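A small monitoring script of the kind used to produce Figure 3 might look as follows; `run_inference` is a placeholder for one forward pass of the audio tagging model, and the temperature is read with the Raspberry Pi's `vcgencmd` utility:

```python
import time
import subprocess

def cpu_temp_c():
    """CPU temperature in degrees Celsius, parsed from e.g. "temp=79.0'C"."""
    out = subprocess.check_output(["vcgencmd", "measure_temp"]).decode()
    return float(out.split("=")[1].split("'")[0])

for _ in range(450):                 # ~15 minutes at one sample every 2 s
    t0 = time.time()
    run_inference()                  # placeholder: one PANNs forward pass
    latency = time.time() - t0
    print(f"{cpu_temp_c():.1f} C, latency {latency:.3f} s")
    time.sleep(2.0)
```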
## 7 Conclusions and Future Work This paper presents a deployment of a PANNs-based audio tagging framework on a Raspberry Pi hardware system, to stimulate further exploration of the role of hardware quality in AI applications and to promote the development of practical sound recognition technologies. We analyse how the performance of the audio tagging software changes when deployed on hardware. We also analyse the role of microphone quality and audio signal volume in performance. We observe that effective device temperature management is important to maintain optimal performance and reduce inference latency. Attaining high accuracy is a balancing act involving microphone type, audio signal volume, and the chosen embedded system. Our findings suggest that the performance degrades when AI models are deployed on Raspberry Pi hardware compared to the software (PC) based performance. We also find that the selection of an appropriate microphone is an important factor in recognising audio events. The volume levels of the audio events may also limit the performance of the AI models on the Raspberry Pi. Therefore, AI models should be more robust to such volume variations, which can be correlated with the distance of the audio source or the microphone quality. In the future, we would like to explore more experiments covering a wider variety of audio event classes to understand the impact of real-world versus recorded sound sources on performance. Also, we are interested in identifying the optimal volume range and scale for edge device operation, measuring energy consumption to understand performance-power trade-offs, and designing appropriate heat control measures to increase the efficiency of hardware-based edge devices. In terms of software, we would like to design efficient AI models that reduce computations for faster inference by exploring techniques like pruning or knowledge distillation. ## 8 Acknowledgements This work was supported by Engineering and Physical Sciences Research Council (EPSRC) Grant EP/T019751/1 "AI for Sound (AI4S)". For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. We would like to thank Dr Tim Brookes, from the Institute of Sound Recording (IoSR) at the University of Surrey, for his invaluable advice and for providing the volume meter Precision Gold IEC 651.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Sound & Vol & \multicolumn{2}{c|}{Google AIY Mic + Rasp\_Pi} & \multicolumn{2}{c|}{Yeti Mic + PC} & \multicolumn{2}{c|}{PC with recorded audio} \\ Class & (dB) & \(\mu\pm\sigma\) & Top-7 & \(\mu\pm\sigma\) & Top-7 & \(\mu\pm\sigma\) & Top-7 \\ \hline \multirow{3}{*}{Speech} & 50 & \(0.69\pm 0.17\) & 100\% & \(0.51\pm 0.21\) & 99.7\% & \(0.75\pm 0.12\) & 100\% \\ & 60 & \(0.72\pm 0.15\) & 100\% & \(0.52\pm 0.21\) & 100\% & \(0.77\pm 0.12\) & 100\% \\ & 70 & \(0.73\pm 0.14\) & 100\% & \(0.54\pm 0.23\) & 99.7\% & \(0.78\pm 0.12\) & 100\% \\ \hline \multirow{3}{*}{Music} & 50 & \(0.15\pm 0.15\) & 70.8\% & \(0.20\pm 0.16\) & 62.7\% & \(0.30\pm 0.24\) & 93\% \\ & 60 & \(0.22\pm 0.19\) & 82.8\% & \(0.19\pm 0.15\) & 64.5\% & \(0.34\pm 0.25\) & 95\% \\ & 70 & \(0.32\pm 0.25\) & 84.0\% & \(0.23\pm 0.18\) & 66.4\% & \(0.37\pm 0.26\) & 95\% \\ \hline \multirow{3}{*}{Water} & 50 & \(0.13\pm 0.10\) & 46.5\% & \(0.21\pm 0.16\) & 81.9\% & \(0.28\pm 0.21\) & 85\% \\ & 60 & \(0.16\pm 0.15\) & 60.1\% & \(0.33\pm 0.21\) & 85.3\% & \(0.28\pm 0.22\) & 83\% \\ & 70 & \(0.22\pm 0.16\) & 85.0\% & \(0.32\pm 0.20\) & 80.2\% & \(0.17\pm 0.20\) & 62\% \\ \hline \multirow{3}{*}{Fire alarm} & 50 & \(0.11\pm 0.08\) & 70.4\% & \(0.42\pm 0.21\) & 95.9\% & \(0.21\pm 0.25\) & 92\% \\ & 60 & \(0.15\pm 0.09\) & 68.9\% & \(0.55\pm 0.21\) & 97.5\% & \(0.39\pm 0.27\) & 92\% \\ & 70 & \(0.22\pm 0.13\) & 71.2\% & \(0.68\pm 0.21\) & 98.5\% & \(0.45\pm 0.27\) & 95\% \\ \hline \multirow{3}{*}{Baby cry} & 50 & \(0.49\pm 0.26\) & 93.7\% & \(0.56\pm 0.23\) & 98.0\% & \(0.67\pm 0.23\) & 100\% \\ & 60 & \(0.55\pm 0.26\) & 96.9\% & \(0.64\pm 0.22\) & 99.1\% & \(0.72\pm 0.24\) & 100\% \\ & 70 & \(0.61\pm 0.24\) & 97.8\% & \(0.71\pm 0.21\) & 99.5\% & \(0.76\pm 0.25\) & 100\% \\ \hline \end{tabular} \end{table} Table 1: Mean (\(\mu\)) and standard deviation (\(\sigma\)) of confidence values, and percentage of occurrence as one of the Top-7 predictions, using different systems for various sound events at varying volumes.
Figure 3: Temperature and inference latency over time when the audio tagging system is running on the Raspberry Pi.
2306.06235
Resolving the Steiner Point Removal Problem in Planar Graphs via Shortcut Partitions
Recently the authors [CCLMST23] introduced the notion of shortcut partition of planar graphs and obtained several results from the partition, including a tree cover with $O(1)$ trees for planar metrics and an additive embedding into small treewidth graphs. In this note, we apply the same partition to resolve the Steiner point removal (SPR) problem in planar graphs: Given any set $K$ of terminals in an arbitrary edge-weighted planar graph $G$, we construct a minor $M$ of $G$ whose vertex set is $K$, which preserves the shortest-path distances between all pairs of terminals in $G$ up to a constant factor. This resolves in the affirmative an open problem that has been asked repeatedly in literature.
Hsien-Chih Chang, Jonathan Conroy, Hung Le, Lazar Milenkovic, Shay Solomon, Cuong Than
2023-06-09T20:11:49Z
http://arxiv.org/abs/2306.06235v2
# Resolving the Steiner Point Removal Problem in Planar Graphs via Shortcut Partitions ###### Abstract Recently the authors [CCL\({}^{+}\)23] introduced the notion of _shortcut partition_ of planar graphs and obtained several results from the partition, including a _tree cover_ with \(O(1)\) trees for planar metrics and an _additive embedding_ into small treewidth graphs. In this note, we apply the same partition to resolve the _Steiner point removal (SPR)_ problem in planar graphs: Given any set \(K\) of _terminals_ in an arbitrary edge-weighted planar graph \(G\), we construct a minor \(M\) of \(G\) whose vertex set is \(K\), which preserves the shortest-path distances between all pairs of terminals in \(G\) up to a _constant_ factor. This resolves in the affirmative an open problem that has been asked repeatedly in literature.
2305.00314
InfraDet3D: Multi-Modal 3D Object Detection based on Roadside Infrastructure Camera and LiDAR Sensors
Current multi-modal object detection approaches focus on the vehicle domain and are limited in the perception range and the processing capabilities. Roadside sensor units (RSUs) introduce a new domain for perception systems and leverage altitude to observe traffic. Cameras and LiDARs mounted on gantry bridges increase the perception range and produce a full digital twin of the traffic. In this work, we introduce InfraDet3D, a multi-modal 3D object detector for roadside infrastructure sensors. We fuse two LiDARs using early fusion and further incorporate detections from monocular cameras to increase the robustness and to detect small objects. Our monocular 3D detection module uses HD maps to ground object yaw hypotheses, improving the final perception results. The perception framework is deployed on a real-world intersection that is part of the A9 Test Stretch in Munich, Germany. We perform several ablation studies and experiments and show that fusing two LiDARs with two cameras leads to an improvement of +1.90 mAP compared to a camera-only solution. We evaluate our results on the A9 infrastructure dataset and achieve 68.48 mAP on the test set. The dataset and code will be available at https://a9-dataset.com to allow the research community to further improve the perception results and make autonomous driving safer.
Walter Zimmer, Joseph Birkner, Marcel Brucker, Huu Tung Nguyen, Stefan Petrovski, Bohan Wang, Alois C. Knoll
2023-04-29T17:59:55Z
http://arxiv.org/abs/2305.00314v1
# InfraDet3D: Multi-Modal 3D Object Detection based on Roadside Infrastructure Camera and LiDAR Sensors ###### Abstract Current multi-modal object detection approaches focus on the vehicle domain and are limited in the perception range and the processing capabilities. Roadside sensor units (RSUs) introduce a new domain for perception systems and leverage altitude to observe traffic. Cameras and LiDARs mounted on gantry bridges increase the perception range and produce a full digital twin of the traffic. In this work, we introduce _InfraDet3D_, a multi-modal 3D object detector for roadside infrastructure sensors. We fuse two LiDARs using early fusion and further incorporate detections from monocular cameras to increase the robustness and to detect small objects. Our monocular 3D detection module uses HD maps to ground object yaw hypotheses, improving the final perception results. The perception framework is deployed on a real-world intersection that is part of the _A9 Test Stretch_ in Munich, Germany. We perform several ablation studies and experiments and show that fusing two LiDARs with two cameras leads to an improvement of \(+1.90\) mAP compared to a camera-only solution. We evaluate our results on the A9 infrastructure dataset and achieve 68.48 mAP on the test set. The dataset and code will be available at [https://a9-dataset.com](https://a9-dataset.com) to allow the research community to further improve the perception results and make autonomous driving safer. 3D Perception, Camera-LiDAR Fusion, Roadside Sensors, Infrastructure Sensors, Autonomous Driving ## I Introduction Roadside perception is vital to improve the situation awareness and to provide a far-reaching view for automated vehicles. Roadside sensors installed on infrastructure systems like the A9 Test Stretch [2, 3] increase the perception range drastically. They perceive objects around the corner, e.g. to warn drivers performing a left or right turn. A cost-effective solution is needed to process perception models in real time and provide accurate results at the same time. Positional data captured from roadside sensors is sent through high-performance units to all traffic participants to decrease blind spots and prevent accidents. It has been shown that roadside sensors increase the situation awareness by sending important notifications and warnings to vulnerable road users (VRUs) and drivers [4, 5, 6]. In this work, we contribute to the challenge of sparse point clouds in the domain of roadside perception in the following way: * We propose a real-time point cloud registration algorithm to register infrastructure LiDARs which enhances the point density. Our experiments show that early fusion of point clouds leads to an increase of \(+1.32\) mAP. * Fusing supervised and unsupervised LiDAR 3D object detectors increases the robustness and reduces the number of false positive detections. * We connect our perception module to real HD maps (+2.7 mAP) of the _A9 Testbed_ to extract road information, as well as to validate and filter the perception results. * Our camera-LiDAR fusion module further enhances the robustness of our whole perception toolbox (+1.62 mAP) by providing perception results during day and night time. * Finally, we evaluate all 3D detectors on the A9-I dataset and introduce a leaderboard to allow the research community to benchmark their models on our dataset. Fig. 1: Early and late fusion of two roadside cameras and LiDARs.
We register point clouds from two LiDARs using G-ICP [1] and project them, together with the camera-LiDAR detections, into the image. _Left column:_ Detection at night results in more, and better classified, LiDAR detections. _Right column:_ Detections during daytime demonstrate a 41.67% increase in detections using the fusion approach. Moreover, even occluded objects, like the car behind the trailer (right) or the truck behind the gantry bridge (left), can be detected with our _InfraDet3D_ fusion framework. ## II Related Work Much research has been done in the area of roadside 3D perception. Traditional approaches [7] increase the robustness of roadside LiDAR perception systems because of the similarity and the lack of diversity in the background point cloud. Furthermore, they do not require labeled data and process point clouds efficiently. In [8] a 3D vehicle detection approach is proposed that uses a single camera. First, they segment the instance mask in the image, extract the bottom contour, and project it on the road plane to get the 3D position. Then, they cluster the projected points into objects by applying K-means clustering. Afterwards, they estimate the dimensions (length and width) and orientation (heading angle) of vehicles by fitting a box to each cluster. Finally, they refine the 3D box to fit within the 2D box by maximizing the posterior probability. Bai et al. propose a learning-based approach [9] that requires huge labeled datasets and performs poorly in domains where no labeled data is available. The authors introduce a real-time LiDAR-based traffic surveillance system to detect objects in 3D. They develop _3DSORT_, a 3D multi-object tracker, by extending _DeepSORT_ [10]. The limitation of all mentioned approaches is that they have no labeled training data of roadside LiDARs and use open-source datasets like _nuScenes_ [11] to train the model. To the best of our knowledge, there is no roadside 3D perception framework available that is able to fuse data from multiple roadside sensor units. Furthermore, there is no solution that combines different fusion levels (early and late fusion), as well as traditional and learning-based approaches, into a single framework. ## III A9 Intersection Dataset The A9 Intersection (A9-I) dataset is an extension of the A9 Dataset [12]. It contains labeled data (in _OpenLABEL_ format) of two cameras and two LiDAR sensors mounted on the S110 gantry bridge that is part of the _A9 Test Stretch for Autonomous Driving_. It contains 9,600 labeled point clouds and images with 57,743 labeled 3D objects (\(\varnothing\) 12 per frame) and is split into a training (80%), validation (10%), and test set (10%). The test set contains a sequence with labeled track IDs and sampled frames from four different scenarios. We applied stratified sampling to balance the dataset among sensor types and scenarios. The set contains 25% night data with severe weather conditions like heavy rain, which allows models to perform well under challenging weather conditions. Our dataset was created by labeling experts, and some improvements were made to further enhance the label quality using the _proAnno_ labeling toolbox, which is based on [13]. ## IV Sensor Calibration In our framework, multiple roadside LiDAR and camera sensors are fused and processed together for the detection task. Our automatic calibration of infrastructure LiDARs and cameras, which outputs the precise pose of these sensors, is the most fundamental part of the framework.
In order to calibrate the sensors in the real world, we propose an automatic target-less LiDAR-camera calibration model. We use the calibration method proposed in [14] as a baseline and extend it to outdoor scenes captured by infrastructure roadside sensors of a different manufacturer. To improve the robustness of the model under different external conditions, such as different scene complexities, lighting conditions, or sensor conditions, we introduce various automatic preprocessing submodules (see Figure 3). First, we undistort the input images. After that, automatic background cropping (based on monocular depth estimation [15]) is employed to remove the background objects. If there is shadow on the ground, the automatic shadow filtering module is activated to filter the shadow. After the preprocessing, the _Canny_ edge detector [16] is adopted to extract 2D edges in images.

Fig. 2: _InfraDet3D_ perception framework architecture. Our proposed model is deployed on a real intersection (S110) that is part of the A9 Test Stretch for Autonomous Driving in Munich, Germany.

For LiDAR preprocessing, point clouds from three LiDARs are registered to the target LiDAR. The input point cloud is cropped and only four dimensions are preserved (x, y, z, and intensity). Scattering is applied to increase the density of single-frame point cloud scans. Afterwards, the point cloud is automatically subdivided into ground and non-ground point clouds. Outlier removal is applied to the ground point cloud to filter the noise in order to preserve more points of the gantry bridge. We also use point upsampling [17] to improve the surface texture of the point clouds. After the preprocessing, voxels are extracted from the point clouds. For faster extraction, adaptive voxelization [18] is introduced. _RANSAC_ plane fitting is applied to extract planes within each voxel. The intersections among planes are extracted as LiDAR edge clouds. After the edges are extracted from the point cloud, they are projected into the image and correspondences between LiDAR and camera edges are established. A cost based on a maximum likelihood estimate is optimized and the qualitative result is generated. Our automatic calibration model demonstrates good robustness against different weather conditions and traffic scenarios at the intersection and provides accurate extrinsic calibration values for the perception framework.
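Once the intrinsic matrix and the LiDAR-to-camera extrinsics are available, the qualitative overlays of point clouds on camera images reduce to a standard pinhole projection. A minimal sketch (matrix names are illustrative):

```python
import numpy as np

def project_points(points, T_lidar_to_cam, K):
    """Project Nx3 LiDAR points to pixel coordinates.
    T_lidar_to_cam: 4x4 extrinsic matrix, K: 3x3 camera intrinsics."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    cam = (T_lidar_to_cam @ pts_h.T)[:3]                    # 3xN in camera frame
    keep = cam[2] > 0.1                                     # points in front of the camera
    uv = K @ cam[:, keep]
    return (uv[:2] / uv[2]).T, keep                         # perspective divide
```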
## V Monocular 3D Object Detection Due to their low cost and high output information density, monocular RGB cameras are incorporated as sensors into the _InfraDet3D_ architecture. The monocular detection pipeline is based on an augmented _L-Shape-Fitting_ algorithm as proposed by [19]. The basic _L-Shape-Fitting_ algorithm has also been used in other recent roadside infrastructure perception architectures, such as the detector for the MONA dataset [20] and the Cooperative Vehicle Infrastructure System 3D detector [8]. However, the augmentation of this algorithm with object tracking, to score yaw hypotheses based on historical plausibility, is novel. Furthermore, we propose the integration of a High-Definition (HD) map to limit yaw hypotheses to matching lanes. Both features are inspired by the TrafficNet [21] and UrbanNet [22] architectures. An overview of the full monocular detection pipeline is given in Figure 4. ### _From 2D Instance Masks to 3D Bottom Contours_ We use the YOLOv7 Instance Segmentation model [23] on RGB camera frames. The RGB frames are downscaled to 1280\(\times\)720 pixels to accelerate the instance segmentation runtime. The instance masks output by the model are processed to extract each mask's bottom image contour. The 2D bottom contour coordinates for each mask are then projected from screen space to 3D intersection space via raycasting. Finally, the _DBSCAN_ (Density-Based Spatial Clustering of Applications with Noise) [24] algorithm is applied to denoise each detection's 3D bottom outline. ### _HD Map Yaw Candidate Lookup_ Using lane geometry from an HD map of the sensor-covered areas, each lane's road surface is rasterized into a heading lookup grid covering the field of view of the respective camera. The heading lookup grids are rendered at a resolution of 10\(\times\)10 cm grid cells. Each grid cell \(C_{ij}\) is a set \(\{(\text{lane\_id}_{k},\theta_{k})\}_{k=0}^{k\leq N_{ij}}\) of lane ID and heading pairs which apply to the respective cell. The heading for a lane at the position of the grid cell is interpolated from the direction of the surrounding lane borders. At inference time, for each 3D bottom contour point of a detected object, the grids are queried to compute a set \(L=\{(\text{lane\_id}_{i},\theta_{i})\}_{i=0}^{i\leq N}\) of possible heading values along the bottom contour. This set is aggregated into a histogram with hit counts and average heading angle per lane ID. The hit counts for each lane ID are normalized into confidence values in the range \([0,1]\) through division by the maximum hit count. This yields one three-tuple set \(H_{j}=\{(\text{lane\_id}_{i},\theta_{i},\text{confidence}_{i})\}_{i=0}^{i<M}\) of possible heading values for each instance \(j\).
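A sketch of this lookup, assuming the rasterised grid is stored as a dictionary from integer cell coordinates to (lane_id, heading) pairs (a data-structure assumption for illustration):

```python
from collections import defaultdict
import numpy as np

def heading_hypotheses(contour_xy, grid, cell=0.10):
    """contour_xy: Nx2 bottom-contour points in metres. grid: dict mapping
    integer cell coordinates (i, j) -> list of (lane_id, heading) pairs.
    Returns [(lane_id, mean_heading, confidence)] with confidences in [0, 1]."""
    hits, headings = defaultdict(int), defaultdict(list)
    for x, y in contour_xy:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        for lane_id, theta in grid.get(key, []):
            hits[lane_id] += 1
            headings[lane_id].append(theta)
    max_hits = max(hits.values(), default=1)
    return [(lane, float(np.mean(headings[lane])), hits[lane] / max_hits)
            for lane in hits]
```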
### _Augmented L-Shape-Fitting_

The _L-Shape-Fitting_ (LSF) algorithm searches for a rectangle that fits a specific bottom contour by maximizing a score value, which is calculated as a function of a rectangle yaw (\(\theta\)) hypothesis and the 3D bottom contour points. In its basic form, the algorithm simply goes through several \(\theta\) values from the range \([0,\pi]\) at fixed increments. In our augmented version of the algorithm, we only run LSF for the \(\theta\) values present in the HD map lookup histogram for each 3D bottom contour. Furthermore, we multiply the calculated score value for each yaw hypothesis with the respective confidence value from the normalized map lookup histogram. Finally, the score is also multiplied with a historical plausibility factor, whose calculation is explained in the following.

Fig. 3: Automatic calibration pipeline. We integrate four camera image and seven LiDAR point cloud preprocessing modules into our pipeline in order to increase the robustness of real-world outdoor calibration of roadside sensors. The algorithm takes the image and point cloud that are published continuously on the live system as input and outputs both the calibration results and qualitative projections of the point clouds into the camera images.

Using a screen-space SORT tracker [25], we match a detected object's bounding box to detections from previous frames. For a successfully matched detection, historical 3D position values \(L=\{\vec{l}_{t-1},\ldots,\vec{l}_{t-T}\}\) are retrieved. Given the historical positions \(L\) and a position hypothesis \(\vec{l}^{\,\prime}_{t}(\theta_{t})\), the historical plausibility score HP for a yaw hypothesis \(\theta_{t}\) is calculated as

\[\text{HP}=\prod_{\delta_{t}=1}^{T}\left(\frac{\pi}{2}-\Delta_{\angle}\!\left(\left|\theta_{t}-\operatorname{atan2}\!\left(\vec{l}^{\,\prime}_{t}(\theta_{t})-\vec{l}_{t-\delta_{t}}\right)\right|\bmod\pi\right)\right),\]

where \(\operatorname{atan2}\) applied to a 2D displacement vector denotes the heading angle of that vector, i.e. \(\operatorname{atan2}(\Delta y,\Delta x)\). The Delta-Angle function \(\Delta_{\angle}\colon[0,\pi)\rightarrow[0,\pi/2)\) converts the passed raw angular difference, which is already less than \(\pi\), into a value less than \(\pi/2\) by returning angular deltas \(\delta_{>\pi/2}\) larger than \(\pi/2\) as \(\pi-\delta_{>\pi/2}\). This ensures that a yaw hypothesis which is parallel, yet opposed, to a historical orientation is not erroneously punished. In practice, we have implemented a threshold of six historical positions that are evaluated to determine the plausibility of a yaw hypothesis.
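A direct transcription of this score into code could look as follows; interpreting \(\operatorname{atan2}\) of the displacement vector as \(\operatorname{atan2}(\Delta y,\Delta x)\) is our reading of the formula above:

```python
import math

def delta_angle(d):
    """Map a raw angular difference in [0, pi) to [0, pi/2): parallel-but-opposed
    headings must not be punished."""
    return math.pi - d if d > math.pi / 2 else d

def historical_plausibility(theta_t, pos_hyp, history):
    """HP score of a yaw hypothesis theta_t for the hypothesised 2D position
    pos_hyp, given past positions (most recent first, at most six are used)."""
    hp = 1.0
    for past in history[:6]:
        dx, dy = pos_hyp[0] - past[0], pos_hyp[1] - past[1]
        track_heading = math.atan2(dy, dx)
        raw = abs(theta_t - track_heading) % math.pi
        hp *= math.pi / 2 - delta_angle(raw)
    return hp
```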
### _Height Estimation and Dimension Filtering_

The height of each detection is initialized from a fixed value for the object type of the detection. Both the height and the location are then jointly optimized through binary search, until the estimated projected 2D object height and the original mask height agree to within \(\epsilon<1\) px. The length and width values, as estimated by the _L-Shape-Fitting_ algorithm for each 3D bottom contour, are limited to minimum and maximum values, which are also looked up per object category.

## VI LiDAR 3D Object Detection

### _Unsupervised 3D Object Detection_

LiDAR sensors are a popular choice for roadside object detection as they provide accurate 3D information in a large field of view and are invariant to lighting. Studies on roadside LiDAR object detection favor traditional approaches based on clustering. Before clustering an extracted foreground point cloud into individual objects, these studies discard the ground, walls, trees, and other background artifacts from the raw point cloud. To discard the irrelevant background, our first 3D LiDAR object detector uses a fast four-step procedure. First, the detector crops a predefined region of interest, which always remains the same as the LiDAR sensor is installed statically on roadside infrastructure. This first step removes 69.9% of the points on average. Second, the detector finds points belonging to the ground by considering the Euclidean distance to a predefined plane model together with a threshold of 0.2 m. Third, the detector filters background artifacts within the region of interest based on the coarse-fine triangle algorithm [26]. The fourth step is radius outlier removal (\(n=15\), \(r=0.8\)), which refines the extraction of the foreground point cloud. The remaining foreground point cloud represents all traffic objects, including stationary ones. It is divided into distinct point clusters, each corresponding to a potential road user, by _DBSCAN_ (\(\epsilon=0.8\), \(n_{min}=3\)). Around each point cluster, the detector fits an oriented 3D bounding box using its convex hull and principal component analysis. Finally, the detector classifies the localized objects by means of object dimensions and point density.

### _Supervised 3D Object Detection_

For the data-driven approach, we use _PointPillars_ [27], which runs with a fast inference rate of \(38\) FPS. In comparison to the unsupervised approach, we can input the registered point cloud (262k points) directly into the model, which consists of three modules. In the first step, the _PillarFeatureNet_ converts the point cloud into a sparse pseudo-image. After obtaining the pseudo-image, the 2D backbone produces features at a small spatial resolution. These features are then upsampled and concatenated. In the last step, an anchor-based detection head tries to match the bounding boxes to the ground truth. We used the _PointPillars_ implementation of _OpenPCDet_ [28] and adapted it to our A9 intersection dataset. For training, we limited the point cloud range from \(-64\) to \(64\) m in the x-y direction and from \(-8\) to 0 m in the z direction. In the feature extraction step, we set the voxel size to \([0.16,0.16,0.8]\). The model was trained on 10 classes for \(160\) epochs and optimized using Adam with a learning rate of \(\alpha=0.003\), a weight decay of \(0.01\), and a cyclic momentum of \(\beta=0.9\).

## VII Multi-Modal 3D Object Detection

For the fusion of both modalities (LiDAR and camera detections), a late fusion technique is applied (see Fig. 5).

Fig. 4: Monocular 3D object detection pipeline, grounding shape hypotheses via tracking and the HD map.

### _Data Association_

A widely adopted method for combining and matching sensor data at the later stage is data association, also defined as the linear assignment problem (LAP). It finds a one-to-one mapping between two sets of elements, such that the sum of the assigned pairwise costs is minimized. The _Jonker-Volgenant_ algorithm [29] is a method for solving the LAP and is based on augmenting paths. The algorithm starts by finding an initial feasible solution, e.g. by using the _Hungarian_ algorithm [30]. Then, it repeatedly searches for an augmenting path, a path of alternating unmatched and matched elements that starts and ends at an unmatched element, and increases the number of assigned elements by one. The algorithm stops when no augmenting path can be found; the solution is then optimal. The _modified Jonker-Volgenant_ algorithm [31] is a variation of the original that improves its performance by using a heuristic search strategy. The heuristic builds on the idea of prioritizing the search for augmenting paths that are expected to have a high gain in terms of reducing the total cost. In this work, the _modified Jonker-Volgenant_ algorithm is chosen due to its increased speed (\(O(n^{3})\) [31]) in comparison to its variants. It also works well with non-integer costs. In our case, the matching process took \(0.008\) ms on average per frame on the test set on an AMD Ryzen 5800X 8-core CPU, with an average number of \(14.55\) objects per frame.
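SciPy's `linear_sum_assignment`, which implements a modified Jonker-Volgenant algorithm, can serve as the LAP solver here; the sketch below additionally applies the 3 m gating distance used in the fusion stage (Section VII-D), and the function name is our own:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cam_centers, lidar_centers, max_dist=3.0):
    """Match camera and LiDAR detections by center distance via the LAP;
    pairs farther apart than max_dist are discarded after the assignment."""
    cost = np.linalg.norm(cam_centers[:, None, :] - lidar_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    unmatched_cam = set(range(len(cam_centers))) - {r for r, _ in matches}
    unmatched_lidar = set(range(len(lidar_centers))) - {c for _, c in matches}
    return matches, unmatched_cam, unmatched_lidar
```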
### _Early Fusion of LiDAR Sensors_

Our first fusion module combines multiple point cloud scans from different LiDAR sensors at time step \(t\) into a single dense point cloud. We preprocess the point clouds as described in [5]. First, we downsample the point cloud and estimate point normals. Then, we compute a 33-dimensional FPFH (Fast Point Feature Histogram) feature vector [32] for each point. This feature describes the local geometric property of each 3D point. Afterwards, we register several point clouds from roadside LiDARs that are time-synchronized with an NTP time server. The point cloud registration algorithm makes use of Fast Global Registration [1] to provide an initial transformation. For the refinement of the transformation, we use point-to-point ICP [33], as it leads to a lower RMSE value (0.448 m) than point-to-plane ICP. The full registration pipeline of two Ouster OS1-64 LiDARs takes 18.36 ms (54 FPS) on an Intel Core i7-9750H CPU with a voxel size of 2 m.

### _Late Fusion of LiDAR Sensors_

For the LiDAR-to-LiDAR late fusion, we operate in LiDAR coordinate space. We transform the detections obtained by the unsupervised LiDAR detector and the supervised LiDAR detector into a common coordinate system. We match detections based on a distance of 3 m between their central positions. Matched detections are merged by selecting the central position and yaw vector of the detected object from the LiDAR sensor closest to the detection. The dimensions of merged detections are computed as the mean of the detections from both detectors. Additionally, all unmatched detections are also included in the final result, resulting in an increase of 12.93% in the number of detections compared to using only a single LiDAR sensor.

### _Camera-LiDAR Late Fusion_

For the camera-LiDAR fusion, we transform the LiDAR detections into the base coordinate system of the gantry bridge, which serves as the coordinate system for obtaining the monocular detections. This step is crucial for computing the inter-detection distances between camera and LiDAR instances based on their respective center positions. After the linear sum assignment, the matched detections are further filtered by a distance threshold of 3 m. The attributes of matched detections are merged by eliminating the matched camera detections and retaining only the matched LiDAR detections, as they demonstrate greater accuracy on average during evaluation. The integration of the HD map leads to a substantial improvement (see Table III) in the camera yaw result; however, it remains inferior to the results obtained from LiDAR. Table II displays the dependence of the mAP increase on the various attributes.

## VIII Evaluation

### _Monocular Perception - L-Shape-Fitting Augmentations_

To determine the impact of the aforementioned augmentations on the quality of the 3D pose estimation, we evaluated the _L-Shape-Fitting_ algorithm in several configurations on the categories (_Car_, _Bus_, _Truck_, _Motorcycle_) of the A9 infrastructure dataset. The results of the ablation study of the _L-Shape-Fitting_ augmentations are presented in Table III. The ablation study confirms that tracking and historical plausibility alone are not sufficient to improve over basic _L-Shape-Fitting_. With the addition of the HD map, however, the risk that an earlier bad yaw choice propagates into the future is greatly reduced, and the historical plausibility further increases the gain in mAP from \(+2.7\) to \(+5.64\).

Fig. 5: Multi-modal 3D object detection pipeline. We apply camera field-of-view filtering to all detections.

### _Monocular 3D Perception - Performance Considerations_

As presented, the monocular 3D object detection pipeline achieves a throughput of 22 FPS in our test bench setup using an RTX 2080S GPU with 1280x720 24-bit RGB input frames. This is limited by the inference time of the _YOLOv7_ instance segmentation. At 640x480 resolution, the frame rate increases to 66 FPS using _TensorRT_.

### _LiDAR 3D Perception - Runtime Evaluation_

Our unsupervised 3D detector achieves a processing speed of \(47\) FPS, as Table IV demonstrates. Table V shows the runtime of _PointPillars_ on the A9-I dataset. Occluded objects, such as the car in the first row of our qualitative results, can be detected by fusing camera and LiDAR detections. The final perception results are visualized in the CARLA simulation environment, which contains a full reconstruction of the A9 Test Stretch.
## IX Conclusion

_InfraDet3D_ is a novel perception architecture that increases the situation awareness and range of traditional single-sensor systems by combining data from multiple sensors distributed on a 20 m long infrastructure gantry bridge. We show that our multi-modal perception framework, fusing multiple roadside LiDARs and cameras, is able to achieve better results (\(+1.62\) mAP) than object detectors using only the camera input. The distributed sensors combine their perception results and make it possible to detect partially and even fully occluded objects. Our solution is deployed on high-performance edge units and is very cost-effective, since the workload is distributed between the CPU (calibration, unsupervised point cloud detection, fusion) and the GPU (instance segmentation, supervised detection in point clouds). Future trends and challenges include better perception in adverse weather conditions such as heavy rain, snow, and fog. These conditions reduce the range, reflection intensity, and resolution of point clouds, increase noise, and produce outliers. In [34] and [35], methods to filter snow points are proposed that will be incorporated in the future. A point cloud compression module will be integrated for real-time communication and data sharing between RSUs and vehicles. In the future, we plan to extend our framework into a deep fusion architecture. Finally, our goal is to evaluate our models on other infrastructure roadside datasets like DAIR-V2X-I [36], Rope3D [37], LUMPI [38], and IPS300+ [39]. We will also label more roadside sensor data and apply few-shot and active learning [40] to deal with small datasets and limited information. To improve domain adaptation, we will adapt our solution to other roadside LiDAR sensors and different operational design domains (ODDs) to achieve a domain-invariant data representation.

## Acknowledgment

This research was supported by the Federal Ministry of Education and Research in Germany within the project _AUTOtech.agil_, Grant Number: 01IS22088U. We thank Christian Cress, Venkatnarayanan Lakshminarasimhan, and Leah Strand for the collective work on the A9 infrastructure system. Moreover, we thank 3D Mapping Solutions for providing the HD map.
2303.02855
Friedman's "Long Finite Sequences": The End of the Busy Beaver Contest
Harvey Friedman gives a comparatively short description of an "unimaginably large" number $n(3)$, beyond, e.g., the values $$ A(7,184) < A(7198,158386) < n(3) $$ of Ackermann's function - but finite. We implement Friedman's combinatorial problem about subwords of words over a 3-letter alphabet on a family of Turing machines, which, starting on an empty tape, run (more than) $n(3)$ steps, and then halt. Examples include a (44,8) (state, symbol count) machine as well as a (276,2) and a (2,1840) one. In total, there are at most 37022 non-trivial pairs $(n,m)$ with Busy Beaver values ${\tt BB(n,m)} < A(7198,158386)$. We give algorithms to map any $(|Q|,|E|)$ TM to another, where we can choose freely either $|Q'|\geq 2$ or $|E'|\geq 2$ (the case $|Q'|=2$ for an empty initial tape is the tricky one). Given the size of $n(3)$ and the fact that these TMs are not {\it holdouts}, but assured to stop, Friedman's combinatorial problem provides a definite upper bound on what might ever be possible to achieve in the Busy Beaver contest. We also treat $n(4) > A^{(A(187196))}(1)$.
Michael Vielhaber, Mónica del Pilar Canales Chacón, Sergio Jara Ceballos
2023-03-06T03:17:03Z
http://arxiv.org/abs/2303.02855v1
# Friedman's "Long Finite Sequences": The End of the Busy Beaver Contest

###### Abstract

Harvey Friedman gives a comparatively short description of an "unimaginably large" number \(n(3)\), beyond, e.g., the values \[A(7,184)<A(7198,158386)<n(3)\] of Ackermann's function - but finite. We implement Friedman's combinatorial problem about subwords of words over a 3-letter alphabet on a family of Turing machines, which, starting on an empty tape, run (more than) \(n(3)\) steps, and then halt. Examples include a (44,8) (state, symbol count) machine as well as a (276,2) and a (2,1840) one. In total, there are at most 37022 non-trivial pairs \((n,m)\) with Busy Beaver values \(\mathtt{BB}(\mathtt{n},\mathtt{m})<A(7198,158386)\). We give algorithms to map any \((|Q|,|E|)\) TM to another, where we can choose freely either \(|Q^{\prime}|\geq 2\) or \(|E^{\prime}|\geq 2\) (the case \(|Q^{\prime}|=2\) for an empty initial tape is the tricky one). Given the size of \(n(3)\) and the fact that these TMs are not _holdouts_, but assured to stop, Friedman's combinatorial problem provides a definite upper bound on what might ever be possible to achieve in the Busy Beaver contest. We also treat \(n(4)>A^{(A(187196))}(1)\).

**Keywords:** Busy beaver, long finite sequences, Turing machine.

## Introduction

Harvey Friedman describes in [4] the problem to decide, for any alphabet \(B\) of size \(k\in\mathbb{N}\), the length of a largest word \(s\in B^{*}\) such that property

\[(*)\quad\nexists\,1\leq i<j\leq n/2\colon s^{(i)}:=s_{i}s_{i+1}\ldots s_{2i}\text{ is a subword of }s^{(j)}:=s_{j}s_{j+1}\ldots s_{2j}\]

is satisfied. For \(k=1\), _i.e._ \(B=\{1\}\), 111 satisfies \((*)\), since \(s_{2}s_{3}s_{4}\) is not even defined, but \(s=1111\), the only word in \(B^{4}\), already violates \((*)\) for \(i=1,j=2\). Thus, \(n(1)=3\). Similarly, for \(k=2\) and \(B=\{1,2\}\), we find the word \(s=12221111111\) of length 11 satisfying \((*)\), but all \(2^{12}\) words of length 12 (and thus all larger ones) violate \((*)\) (see [4, p. 3f.]). This gives us \(n(1)=3,n(2)=11\). How does it go on thereafter with \(n(3),n(4),\ldots\)? Unexpected! Friedman [4, Sect. 4] first shows a lower bound of \(n(3)>A(7,184)\) and in [4, Sect. 6] presents results by Dougherty yielding even \[n(3)>A(7198,158386).\] This current lower bound of \(A(7198,158386)\) is (by far!) larger than _e.g._ \(A(3,158386)=2^{2^{\cdots^{2^{2}}}}\), with 158386 2's stacked exponentially. One bit of \(n(3)\) we know, though: the last one, since all \(n(k)\) are odd. We describe Turing machines guaranteed to halt after more than \(n(3)\) steps, starting with an empty tape, for _all_ non-trivial pairs \((n,m)\in\mathbb{N}^{2}\) of (state, symbol) counts with only 37022 exceptions. Thus the Busy Beaver contest is effectively a _finite_ matter. Section 1 covers the relevant work of Ackermann, Turing, Rado and Friedman. Section 2 describes the algorithm and an 8-symbol implementation with 44 states. In Sections 3 and 4, we show how to get either the symbol count or the state count down to 2, for _any_ Turing machine. Section 5 briefly treats the case \(n(4)\) over 4 symbols and in Section 6, we obtain the main result: all but at most 37022 non-trivial \((n,m)\) pairs have a BB(n,m) value above \(n(3)\), and at most 51671 BB(n,m) lie below \(n(4)\).
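Property \((*)\) is easy to check by brute force for \(k\leq 2\); the following sketch (our own) verifies \(n(1)=3\) and \(n(2)=11\), while for \(k=3\) the same search is hopeless by the bounds above:

```python
from itertools import product

def is_subword(a, b):
    """True iff a is a (scattered) subword of b, i.e. a subsequence."""
    it = iter(b)
    return all(c in it for c in a)

def satisfies_star(s):
    """Friedman's property (*): no block s^(i) = s_i..s_2i is a subword of a
    later block s^(j), for 1 <= i < j <= n/2 (1-based indices)."""
    half = len(s) // 2
    blocks = [s[i - 1:2 * i] for i in range(1, half + 1)]
    return not any(is_subword(blocks[i], blocks[j])
                   for i in range(half) for j in range(i + 1, half))

def n_of_k(k, limit=12):
    """Longest length up to `limit` admitting a (*)-word over k letters.
    If a word of length n satisfies (*), so does its length-(n-1) prefix,
    hence we may stop at the first length without any (*)-word."""
    best = 0
    for length in range(1, limit + 1):
        if any(satisfies_star(w)
               for w in map(''.join, product('123'[:k], repeat=length))):
            best = length
        else:
            break
    return best

assert n_of_k(1) == 3
assert n_of_k(2) == 11   # e.g. 12221111111; all 2^12 words of length 12 fail
```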
## 1 Four Mathematicians and their Crucial Results

### Wilhelm Ackermann, 1926: "Ackermann function"

The (modified) Ackermann function [1] used here as defined in [4] is \(A(1,c)=2c\), \(A(f,1)=2\), and recursively \(A(f,c)=A(f-1,A(f,c-1))\), which is equivalent to a \(c\)-fold nesting of \(A(f-1,\cdot)\). The parameter \(f\) can be seen as defining a function family, while the counter \(c\) is a pointer into this family. Every sequence \(A(f,\mathbb{N})\) is a subsequence of the previous \(A(f-1,\mathbb{N})\). Increasing \(f\) is what lets the values explode. For a nice overview of _really_ large numbers, even beyond \(|\mathbb{N}|=\aleph_{0}\), see [https://sites.google.com/site/largenumbers/home](https://sites.google.com/site/largenumbers/home). One easily obtains \(A(1,c)=2\cdot c\), \(A(2,c)=2^{c}\), \(A(3,c)=2^{2^{\cdot\cdot\cdot^{2}}}\) with \(c\) copies of 2 stacked onto each other. Also, \(A(f,1)=2\) and \(A(f,2)=4\) for all \(f\). \(A(4,3)=A(3,A(3,A(3,1)))=A(3,A(3,2))=A(3,4)=2^{2^{2^{2}}}=65536\). \(A(4,4)=A(3,A(4,3))=A(3,65536)\), a tower of 65536 2's. Thus \(A(4,5)=A(3,A(4,4))\) will be a tower of 2's whose height is described by a tower of \(65536\) **2's**, "and so on". \(A(5,3)=A(4,A(5,2))=A(4,4)\) as above, while \(A(5,4)=A(4,A(5,3))=A(4,A(4,4))\) is an \(A(4,4)\)-fold iterated evaluation of \(A(3,\cdot)\) - we have no idea of its value, and not even a means to visualize that number. Friedman [4, p. 106] calls \(A(5,5)\) an "unimaginably large number". We have nothing to add.

### Alan M. Turing, 1936: "Turing Machine"

We use Turing's invention [11] with the following modifications:
- no output tape or F (figures) cells
- bi-infinite tape, no (left) end markers needed or provided

There is a finite set \(Q\) of states with \(n:=|Q|\), the fixed tape alphabet \(E=\{0,1\}\) for Rado's original problem, or any larger, but finite, alphabet \(E\) with \(m:=|E|\). The three relevant functions are \(Q^{+}\colon Q\times E\to Q\) for the next state, \(E^{+}\colon Q\times E\to E\) for the new symbol to be written onto the tape, and \(D^{+}\colon Q\times E\to\{R,L\}\) for the movement of the tape's head.

### Rado Tibor, 1962: "Busy Beaver"

First some nomenclature:
1. A Busy Beaver is _any_ Turing machine which, starting with an empty tape, eventually halts. _When_ it halts is _not_ relevant for being a Busy Beaver (see Rado [9], Michel [7, p. 4], or Green [6]). The generalized form allows for larger symbol sets (tape alphabets) than the original \(B=\{0,1\}\) of Rado.
2. The Busy Beaver Contest suggested by Rado consists in providing a TM configuration's QED\({}^{+}\) values and a purported halting time T. If said configuration on empty tape stops after exactly T steps, the entry is valid.
3. The current Busy Beaver Champion of its class \((n,m)=(|Q|,|E|)\) is the halting machine with the - up to that time - largest T.
4. The Busy Beaver function BB(n) = BB(n,2), or BB(n,m), is the time T of the BB champion for the class (n,m) -- provided it is proved by any means, _e.g._ exhaustive search, that no other machine in that class exists that might halt on empty tape and do so after more than \(T\) steps. Otherwise the T of the current champion is a lower bound for BB(n,m).

Known results are given in Figure 1, see Michel [7]. Numbers are exact figures, values of the Ackermann function are lower bounds (with \(A(2,2^{16})=A(3,5)\)).
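A sketch of the modified Ackermann function of Section 1.1 (our own, with memoization; only tiny arguments are computable, which is the whole point):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(f, c):
    """Modified Ackermann function: A(1,c)=2c, A(f,1)=2,
    A(f,c)=A(f-1, A(f,c-1)). Values explode already for f=4."""
    if f == 1:
        return 2 * c
    if c == 1:
        return 2
    return A(f - 1, A(f, c - 1))

assert A(1, 5) == 10        # A(1,c) = 2c
assert A(2, 10) == 1024     # A(2,c) = 2^c
assert A(3, 3) == 16        # tower of three 2's
assert A(4, 3) == 65536     # = A(3,4), a tower of four 2's
```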
Figure 1: Known values or lower bounds for BB(n,m)

### Harvey Friedman, 2001: "Long Finite Sequences"

We shall show in this paper that there are at most 37022 non-trivial (\(|Q|,|E|\geq 2\)) Busy Beaver contests, since all other pairs lead to a lower bound of \(A(7198,158386)<\texttt{BB}(\texttt{n},\texttt{m})\) steps for a halting configuration implementing Friedman's combinatorial problem "_Long Finite Sequences_". Friedman considers finite words over a \(k\)-letter alphabet, in particular \(k=3\) and \(B=\{1,2,3\}\). His crucial definition \((*)\) is: a word \(s=s_{1}s_{2}\ldots s_{n}\) from \(B^{n}\) of length \(n\) satisfies property \((*)\) whenever the set of subwords \(s^{(i)}=s_{i}s_{i+1}\ldots s_{2i}\), \(1\leq i\leq n/2\), that is \(s_{1}s_{2},s_{2}s_{3}s_{4},\ldots,(s_{i}\ldots s_{2i}),\ldots,(s_{n/2}\ldots s_{n})\), does _not_ contain two words with \(i<j\) such that \(s^{(i)}\) is a subword of \(s^{(j)}\). A word \(a_{1}\ldots a_{m}\) is called a subword of \(b_{1}\ldots b_{n}\) whenever there are indices \(1\leq\iota_{1}<\iota_{2}<\cdots<\iota_{m}\leq n\) with \(a_{k}=b_{\iota_{k}}\). For every \(k\in\mathbb{N}\), let \(n(k)\) be the length of a largest word from \(\{1,\ldots,k\}^{*}\) satisfying \((*)\). One easily verifies \(n(1)=3\). Friedman shows that \(12221111111\in\{1,2\}^{*}\) satisfies \((*)\), but no larger word over two letters does, and thus \(n(2)=11\). All \(n(k)\) are odd, by the way: the last odd-indexed letter does not generate a new subword, and thus the status of \((*)\) does not change.

After \(n(1)=3\) and \(n(2)=11\), we have quite a jump:

**Theorem 1**.: (Friedman [4, Theorem 4.7]) \[n(3)>A(7,184).\]

Proof.: See Theorem 4.7 in [4].

This paper will implement the search for the first word not satisfying \((*)\) over \(B=\{1,2,3\}\); this search will take more than \(n(3)\) steps - way above the "incomprehensibly large number" \(A(5,5)\) - and then halt. Dougherty even obtains the following fantastically large bound:

**Theorem 2**.: (Friedman [4, Theorem 6.9]) \[n(3)>A(7198,158386).\]

Proof.: See Theorem 6.9 in [4].

Friedman furthermore conjectures [5, p. 7] the upper bound \[n(3)<A(A(5,5),A(5,5)).\]

## 2 The Algorithm and an 8-Symbol Implementation

We shall start with the symbol set \(E=\{Y,X,1,2,3,-,\$,+\}\), where \(Y\) is the blank. The active part of the tape is divided into 3 segments:

I. Two unary counters \(0\leq i\leq imax=N/2\) and \(0\leq l\leq lmax\leq N/2\).

II. The current word \(s\in B^{N}\) in the form \(s_{1}s_{2}\ldots s_{N}\in\{1,2,3\}^{N}\), or with certain symbols replaced by their prime equivalent, with \(-\equiv 1^{\prime},\$\equiv 2^{\prime},+\equiv 3^{\prime}\).

III. \(N/2\) copies of \(s\), separated by '+'s, which are then trimmed to \(s^{(1)}=(s_{1}s_{2}),\ldots,s^{(i)}=(s_{i}..s_{2i}),\ldots,s^{(N/2)}=(s_{N/2}..s_{N})\).

Segment bounds are given by the markers Y,Y,X,Y, that is, the whole tape has a structure like \[{}^{\omega}\mbox{Y}\cdots\mbox{I}\cdots\mbox{Y}\cdots\mbox{II}\cdots\mbox{X}\cdots\mbox{III}\cdots\mbox{Y}^{\omega}\] This marker sequence, YYXY, has the advantage - defining the tape's blank symbol as Y - that both ends are immersed within the \({}^{\omega}\)Y...Y\({}^{\omega}\) of the bi-infinite tape and are thus automatically correct upon extension of segments. The algorithm consists of 9 lines as given in Figure 2. States in \(Q\) are named 'ql-c', where \(l\in\{1,\ldots,9\}\) refers to the line of the algorithm, and \(c\in\mathbb{N}_{0}\) is just a counter within the line.
On pages 8-12, we describe the 44 states implementing the algorithm. For each program line, we indicate the segment dealt with (I, II, or III) and the starting position: @I denotes the l.h.s. end of I, III@ the r.h.s. end of III.

Figure 2: Algorithm: Long Finite Sequences

1. Copy \(s\) from II to III, N/2 times, separated by \(+\).
2. In III, cut away the initial triangle \(\varepsilon,s_{1},s_{1}s_{2},\ldots,s_{1}\cdots s_{i-1},\ldots,s_{1}\cdots s_{N/2-1}\).
3. In III, cut away the double trailing triangle \(s_{3}\cdots s_{N},s_{5}\cdots s_{N},\ldots,s_{N-1}s_{N},\varepsilon\), leaving \(s_{1}s_{2},s_{2}s_{3}s_{4},\ldots,s^{(i)},\ldots,s^{(N/2)}\) on the tape as III.
4. Remove the \((i-1)\) patterns \(s^{(1)},\ldots,s^{(i-1)}\).
5. Check \(s^{(i)},\ldots,s^{(N/2)}\) for a subword match, property \((*)\).
6. Clear segment III to X\(\varepsilon\)Y\({}^{\omega}\). IF match in l. 5, GOTO 7 ELSE GOTO 8.
7. \(s\)++; IF \(s=(1)^{N+1}\) HALT ELSE \(i:=0,l:=0\). GOTO 1.
8. \(i:=0\); \(lmax\)++; IF \(lmax<N/2\) GOTO 1 ELSE GOTO 9.
9. \((N/2)\)++; \(s:=(1)^{N}\); GOTO 1.

Each entry gives the relevant symbols from \(E=\{Y,X,-,\$,+,1,2,3\}\) in the first line; an asterisk '*' stands for all symbols not yet mentioned before. An entry like \(*\backslash 3,X\) stands for all other symbols, where 3 and X are not actually used (think \(Q^{+}\)(q1-1,3)=\(Q^{+}\)(q1-1,X) = ERROR). Symbols that are omitted behave like the 3,X in \(*\backslash 3,X\): they will not appear with this state. The 2nd line of each entry gives the new symbol \(E^{+}\) ('=' meaning no change in symbol) and the direction \(D^{+}\), left (L) or right (R), of the tape head. The third line is the next state \(Q^{+}\). We start with \(i=l=0\), I = II = III = \(\varepsilon\) in line 3. The TM starts in state q1-4, initializes a Y to X, then moves on to q3-1, q4-1, q4-2, q5-0, q5-1, q6-2, q8-0, q8-1, q8-2, q9-1 (the tape is empty, all Y, except that one X), and in line 9 we start to increase the support from empty to length \(N/2=1\) in Segment I, and \(s=---1^{\prime}1^{\prime}\) in Segment II. State q7-1 is the finishing state, going into HALT with Y. States q9-3 and q9-5 have been merged into q3-3 and q1-9, respectively, to save on state count. In q9-3 and q9-5 we only deal with the symbols "-" and "Y", which are absent in (the original) q3-3 and q1-9. 1,2,3 stand for themselves; also \(-\),$,+ stand for 1',2',3' in Segment II. The two counters \(l,i\) in Segment I are coded as in Figure 3. The example value is \(i=7\) with a maximum of \(imax=9\), and \(l=4\) with a maximum of \(lmax=6\) (the 0s can change to 1s, the '-' can not). We get \(|E|\) down to 7 by removing the marker X. This requires a double scan \(\cdots\to Y\to Y\) instead of passing over the other symbol like \(\ldots\stackrel{{X}}{{\longrightarrow}}Y\) or \(\ldots\stackrel{{Y}}{{\longrightarrow}}X\), and affects states q1-C1, q1-C2, q1-C3, q1-6, q1-9, q2-1, q4-1, q4-3, q5-0, and q9-4, ten states in all. Duplicating these states leads to an implementation with \(|Q|=54,|E|=7\) for \(n(3)\). We account for these 10 states in Section 3 by a parameter \(\Delta\): \(\Delta=0\) for \(X\in E,|E|=2k+2\) and \(\Delta=1\) for \(X\not\in E,|E|=2k+1\).

Figure 3: Counters \(l\) and \(i\) and their encoding

2. @II. Cut away left triangle in III.
q2-1 (find the r.h.s. of III):
  Y: =,L -> q2-2;  *: =,R -> q2-1

q2-2:
  X: =,R -> q3-1;  +: $,R -> q2-3;  *\Y: =,L -> q2-2

q2-3:
  1,2,3: -,R -> q2-4;  Y: =,L -> q2-2;  -: =,R -> q2-3

q2-4:
  -: -,R -> q2-3;  Y: =,L -> q2-2;  *\+,X: =,R -> q2-4

q3-1:
  $: +,R -> q3-5;  Y: =,L -> q4-1;  *\X: =,R -> q3-1

q3-5:
  Y: =,L -> q4-1;  -: =,L -> q3-5;  +: =,L -> q3-2

q3-2:
  1,2,3: -,L -> q3-3;  *\XY$: =,L -> q3-2

q3-3:
  1,2,3: -,L -> q3-4;  [Y]: =,R -> q9-4;  [-]: =,R -> q3-3

q3-4:
  -,+: =,L -> q3-2;  X: =,R -> q3-1;  *\Y,$: =,L -> q3-4
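These tables are ordinary \(Q^{+}/E^{+}/D^{+}\) transition tables in the sense of Section 1.2. As a minimal illustration of how such a table executes, here is a generic simulator sketch (our own), run on the classical 2-state Busy Beaver champion with BB(2,2) = 6 from Figure 1 rather than on one of the large machines above:

```python
def run_tm(delta, q0='A', blank=0, max_steps=10**7):
    """Simulate a TM given delta: (state, symbol) -> (E+, D+, Q+).
    The machine halts when no transition is defined for (state, symbol)."""
    tape, head, q, steps = {}, 0, q0, 0
    while (q, tape.get(head, blank)) in delta:
        sym, move, q = delta[(q, tape.get(head, blank))]
        tape[head] = sym
        head += 1 if move == 'R' else -1
        steps += 1
        if steps > max_steps:
            raise RuntimeError('step budget exhausted')
    return steps, sum(1 for v in tape.values() if v == 1)

# 2-state, 2-symbol Busy Beaver champion: halts after 6 steps with four 1s.
bb22 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
        ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'HALT')}
print(run_tm(bb22))   # -> (6, 4); 'HALT' has no outgoing transitions
```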
## 3 Varying \(|E|\) down to \(|E^{\prime}|=2\)

We now have a \(|Q\times E|=44\cdot 8\) implementation (as well as a \(54\cdot 7\) one). Next, to bring the symbol count down to \(|E^{\prime}|=2\) or 3, we may map up to 8 symbols to triples of bits and up to 9 symbols to pairs of ternary "trits". The general case has the following relevant parameters: the new alphabet size \(b=|E^{\prime}|\), usually 2 or 3; the length \(l=\lceil\log_{b}|E|\rceil\) of the \(l\)-tuple of \(b\)-ary digits simulating one original symbol from \(E\); and \(k\) from our problem \(n(k)\), usually \(k=3\) or 4. Every state is simulated in up to 4 sweeps through the \(l\)-tuple:

Sweep-0: If a state is entered alternately from both sides, Sweep-0 uses \(l-1\) substates to bring the head from the "other" end to the normal one. If a state is always entered from the same side, Sweep-0 is skipped.

Sweep-1 moves through the \(l\)-tuple to obtain the current symbol \(e\), with \(1,b,b^{2},\ldots,b^{l-1}\) states in the successive positions. For "scan states", where we search for one symbol and all others are combined in the wildcard case '*', the count is upper-bounded by \(2l-1\) substates, since in each position (after the first) we only have to distinguish "scan symbol still possible" vs. "is some other symbol".

Sweep-2 moves back to replace \(e\) by \(E^{+}(q,e)\neq e\) and/or just to reach the other end to leave the \(l\)-tuple. It uses \(l-1\) substates per case.

Sweep-3 moves forth again, in \(l-1\) substates per case, if we had to replace \(e\) by \(E^{+}(q,e)\) but leave opposite to the entry side.
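A small sketch of this \(l\)-tuple encoding (ours; the concrete symbol-to-tuple assignment is an arbitrary choice, which is harmless since the bounds of Theorem 3 below are independent of the mapping):

```python
def tuple_length(m, b):
    """Smallest l with b**l >= m, i.e. l = ceil(log_b m), in exact integer arithmetic."""
    l = 1
    while b ** l < m:
        l += 1
    return l

def encode_tape(tape, E, b=2):
    """Replace every symbol of alphabet E by an l-tuple of b-ary digits."""
    l = tuple_length(len(E), b)
    index = {e: i for i, e in enumerate(E)}
    def digits(x):
        return [(x // b ** p) % b for p in reversed(range(l))]
    return [d for e in tape for d in digits(index[e])]

E = ['Y', 'X', '1', '2', '3', '-', '$', '+']   # the 8-symbol alphabet of Section 2
print(encode_tape(['Y', '+', '2'], E, b=2))    # triples of bits:  9 digits
print(encode_tape(['Y', '+', '2'], E, b=3))    # pairs of trits:   6 digits
```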
Our \(35+3k\) states have between \(v=2\) and \(v=6\) different cases. \(\#_{Sweep\text{-}0}=3+\Delta\) of them (q1-9, q3-3, q3-5; q1-9 is duplicated for \(\Delta=1\)) are entered from both sides. \(\#_{Scan>}=6+\Delta\cdot 3\) states (q1-2, q4-1, q4-3, q5-0, q7-3, q8-0) are scan states that are left opposite of the entry side and do not change the symbol; hence only Sweep-1 with \((1,2,2,2,\dots)\), _i.e._ \(2l-1\), substates is needed. Another \(\#_{Scan<}=6+k+\Delta(2+k)\) states (q1-Ch, q1-6, q1-7, q1-11, q2-1, q5-2, q6-1) are scan states left at the entry end in case of a match, with or without changing \(e\) to \(E^{+}\). Here we have Sweep-1 as before, and \(l-1\) substates for Sweep-2 of the match case. Then there are \(\#_{Scan\neq}=5+\Delta\) states (q4-4, q6-3, q7-4, q9-1, q9-4) that scan in Sweep-1, but afterwards change \(e\) to \(E^{+}(q,e)\neq e\). Here we have \(2l-1\) substates for Sweep-1, but Sweep-2 and -3 are counted as in the general case.

Figure 4: Details of states

The further \(\#_{Sweep\text{-}1}=18+2k+\Delta\cdot 1\) states require \(1+b+\cdots+b^{l-1}=(b^{l}-1)/(b-1)\) substates for Sweep-1. They and the \(\#_{Scan\neq}\) states are given in Figure 4. The total number of substates according to this upper bound is then

\[(l-1)\cdot\#_{Sweep\text{-}0}\;+\;(2l-1)\cdot(\#_{Scan>}+\#_{Scan\neq})+(3l-2)\cdot\#_{Scan<}+(1+b+b^{2}+\cdots+b^{l-1})\cdot\#_{Sweep\text{-}1}\;+\;(l-1)\cdot\bigl(0\cdot v_{>=}+1\cdot(v_{<=}+v_{<\neq})+2\cdot v_{>\neq}\bigr),\]

where the three groups count the substates of Sweep-0, Sweep-1, and Sweep-2/3, respectively. Note the dual use of the factor \(k\) in Figure 4: in q1-4, q1-10, q5-1, we have to distinguish \(k\) symbols in \(k\) different cases; in q1-Ch, q5-Vh, q5-Kh, there are \(k\) different states, each one with a constant number of cases.

For \(b=2,l=3\) we have \(2\cdot 3+5\cdot(6+6)+7\cdot(6+k)+(1+2+4)\cdot(17+2k)+2\cdot(2k+15+k+9+2\cdot(6k+17))=343+51k\) substates. For \(k=3\) that is 496 substates, to be compared with the actual 276 states for the "hand-wired" version.

Figure 5: Synopsis of symbols over various alphabets

Figure 6: Substate counts per sweep

For \(b=3,l=2\) we have \(1\cdot 3+3\cdot(6+6)+4\cdot(6+k)+(1+3+9)\cdot(17+2k)+1\cdot(2k+15+k+9+2\cdot(6k+17))=351+45k\) substates, which is 486 for \(k=3\), while the "hand-wired" version for pairs of trits \((l=2,b=3)\) needs only 155 states. For \(b=2\) and \(l=4\) we have \(3\cdot 3+7\cdot(6+6)+10\cdot(6+k)+(1+2+4+8)\cdot(17+2k)+3\cdot(2k+15+k+9+2\cdot(6k+17))=582+85k\). With \(k=4\), we get a \((922,2)\) implementation for \(n(4)\).

**Theorem 3**.: \((i)\) _Given any TM with \(|Q|=n,|E|=m\), we obtain another TM with \(|E^{\prime}|=b\), \(l:=\lceil\log_{b}|E|\rceil\) and a state count of at most_

\[|Q^{\prime}|\leq n\cdot\left[(l-1)+(1+b+b^{2}+\cdots+b^{l-1})+|E|\cdot 2(l-1)\right]<2(m+1)\,n\,\lceil\log_{b}(m)\rceil.\]

\((ii)\) _With \(n,m,b,l\) as before and the numbers \(\#_{Sweep\text{-}0},\#_{Scan},\#_{>=},\#_{>\neq},\#_{<=},\#_{<\neq}\) given as in the text, we obtain the sharper upper bound_

\[|Q^{\prime}|\leq\#_{Sweep\text{-}0}\cdot(l-1)+\#_{Scan}\cdot(2l-1)+(n-\#_{Scan})\cdot(1+b+\cdots+b^{l-1})+(\#_{<=}+\#_{<\neq}+2\#_{>\neq})\cdot(l-1)<n\left(\frac{b^{l}-1}{b-1}+l-1\right)+\#_{Sweep\text{-}0}\cdot(l-1)-\#_{Scan}\left(\frac{b^{l}-1}{b-1}-2l+1\right).\]

Proof.: \((i)\) We use \(l-1<\log_{b}(m)\) and \(1+b+\cdots+b^{l-1}=\frac{b^{l}-1}{b-1}<2m\). \((ii)\) We use \((\#_{<=}+\#_{<\neq}+1\cdot\#_{>\neq})\leq n\).

The bounds in Theorem 3 are independent of the mapping \(E\to E^{\prime}\), which may improve the numbers.

## 4 Varying \(|Q|\) down to \(|Q^{\prime}|=2\)

### \(|Q^{\prime}|=3\)

Reduction in states down to 3 is achieved via simulation. We use only three states, qX (expansion), qL (go left), and qR (go right), so \(|Q^{\prime}|=3\). The new symbol set is \(E^{\prime}=Q\times\{X,L,R\}\times E\ni(q,d,e)\), where we assemble the new state by transferring it bit-by-bit via qL or qR, successively yielding the \(q\) part of \(e^{\prime}=(q,d,e)\). The generalized direction \(d\) tells whether the state still has to be expanded, \(d=X\), or the direction is \(d\in\{L,R\}\) as in the original TM. The \(e\) part is the original symbol; at expansion, it is replaced by \(E^{+}(q,e)\).
The direction is \(d=X\) during the assembly of \(q\), and then \(d=D^{+}(q,e)\). We first use transition (1) from Figure 7 with \(d\in\{L,R\}\), that is, we move to the left with \(qL\) for \(d=L\) or to the right with \(qR\) for \(d=R\), respectively. Then we return from the neighbour cell, whose \(q\) is incremented, using transition (2) or (3). We repeat until \(q=0\) with \(qX\), and then release the current cell, \(d:=X\), via (4), thus finishing this current cell. We move over to the neighbour as new current cell by (4), staying in state \(qX\). Here, we expand the symbol \((q,e,X)\) by transition (5): we first calculate the new values \(\tilde{q}:=Q^{+}(q,e),\tilde{e}:=E^{+}(q,e)\), and \(\tilde{d}:=D^{+}(q,e)\) with the functions of the original machine. The intermediate result would be \((qX,[\tilde{q},\tilde{d},\tilde{e}],\tilde{d})\). However, we immediately include the first move, decrementing \(\tilde{q}\), and obtain the overall value, starting the bitwise transfer of \(\tilde{q}\in\mathbb{N}\) in the new direction \(\tilde{d}\). Two remarks: the tape's blank symbol is \((0,X,-)\), where \(-\) is the original blank from \(E\), and upon halting of the original machine, \(Q^{+}=\)HALT, the simulator actually halts as well.

**Theorem 4**.: _Given any TM with \(|Q|=n,|E|=m\), we obtain another TM with \(|Q^{\prime}|=3\) and at most_

\[|E^{\prime}|\leq 3(n+1)\cdot m\]

_symbols._

Proof.: From Figure 7, we have a possible \(Q^{\prime}:=\{qX,qL,qR\}\) and \(E^{\prime}:=(Q\cup\{0\})\times\{L,R,X\}\times E\) with the given sizes.

### \(|Q^{\prime}|=2b+1,b\geq 2\)

With \(|Q^{\prime}|=3\), we used a one-letter alphabet for the "information transfer", that is, \(\log(1)=0\) bits of information to be transferred in each move. As Chaitin [2] points out, the information about the _length_ of the transmission, or the _end_ of the transfer, is a necessary and important piece of information. Here it was the _only_ information. We can, however, use larger alphabets with \(b\) letters and \(Q^{\prime}=\{qX,qL_{0},qL_{1},\ldots,qL_{b-1},qR_{0},\ldots,qR_{b-1}\}\), thus \(|Q^{\prime}|=2b+1\). Our symbol set then is:

**Definition 5**.: Symbol set \(E^{\prime}\) for \(|Q^{\prime}|=2b+1\):

\[E^{\prime}:=\{-,(q^{\prime}_{l-1}q_{l-2}\ldots q_{1})_{b},\ldots,(q^{\prime}_{l-1}q_{l-2})_{b},(q^{\prime}_{l-1})_{b}\}\times E\times\{L,R\}\;\dot{\cup}\;\bigl(\{-\}\,\dot{\cup}\,\{(q^{\prime}_{l-1}q_{l-2}\ldots q_{1}q_{0})_{b}\}\,\dot{\cup}\,[b]^{l-1}\,\dot{\cup}\,[b]^{l-2}\,\dot{\cup}\,\ldots\,\dot{\cup}\,[b]\bigr)\times\{-\}\times E,\]

where \(l:=\lceil\log_{b}|Q|\rceil\), \((q^{\prime}_{l-1}q_{l-2}\ldots q_{0})_{b}\leq|Q|\) and \([b]:=\{0,1,\ldots,b-1\}\).

The numbers in the first part describe \(|Q|/b,|Q|/b^{2},\ldots,|Q|/b^{l-1}\) prefixes, the numbers in the second part \(|Q|,b^{l-1},\ldots,b\) suffixes of the elements from \(Q\equiv\{1,\ldots,|Q|\}\). We have \(|E^{\prime}|\leq(1+\lceil n/b\rceil+\lceil n/b^{2}\rceil+\cdots+\lceil n/b^{l-1}\rceil)\cdot 2m+nm+\frac{b^{l}-1}{b-1}\cdot m<\left[n\cdot\frac{b+1}{b-1}+2(l-1)+\frac{b^{l}-1}{b-1}\right]\cdot m\), using \(1+\frac{2}{b-1}\cdot\frac{b^{l-1}-1}{b^{l-1}}<\frac{b+1}{b-1}\). The state \(X\) corresponds to being in the current tape cell; the states \(L_{i},R_{i},0\leq i\leq b-1\), are used in the adjacent cell to the left or right, respectively. Let the current state and symbol be \(qX\) and \([(q_{f}q_{f-1}\ldots q_{1}q_{0}),L,e],f\geq 1\). Then \(D^{+}=L\) from the last component of the symbol.
We cut off one \(b\)-ary digit, \(q_{f}\), which goes into the next state \(Q^{+}=qL_{q_{f}}\), and obtain \(E^{+}=(q_{f-1}\ldots q_{1}q_{0},e,L)\), transition (2) in Figure 8. Let now the current state be \(qL_{i}\) after moving to the left, and let the symbol there be \([(q_{l-1}q_{l-2}\ldots q_{f+1}),-,e]\). Then \(Q^{+}=qX\) and \(D^{+}=R\), the opposite direction from \(q=L_{i}\). Also, let \(E^{+}=[(q_{l-1}q_{l-2}\ldots q_{f+1}i),-,e]\), appending the \(b\)-ary digit transferred as index \(i\) to the _right_ of the current first component, there becoming \(q_{f}\), as in transition (1) of Figure 8.

Figure 8: Transitions for \(|Q|=2b+1\)

**Example** (see Figure 9). We run two transitions \(QED^{+}(7,e_{1})=(15,e_{2},L)\) and \(QED^{+}(15,e_{3})=(4,e_{5},R)\) of the original machine, where \(7,15,4\in Q\) and \(e_{1},e_{2},e_{3},e_{5}\in E\). We assume to be _e.g._ in position 102, with position 101 holding some symbol \(e_{3}\) and position 103 holding \(e_{4}\); hence the simulator has 101: \((-,-,e_{3})\), 102: \((021,-,e_{1})\), 103: \((-,-,e_{4})\) on its tape, where \((021)_{3}=7\) is the current state. In \((*)\) of Figure 9 we use transition (3) of Figure 8 and expand according to the original transition \((15,e_{3})\rightarrow(4,e_{5},R)\). The new state \((011)_{3}\) is divided into (01) in the symbol and the trailing 1 as index to \(q=R_{1}\). The same happens in \((**)\), where now QED\({}^{+}(4,e_{2})=(q^{\prime}|i,e_{x},d)\) defines the new parts \((3\cdot q^{\prime}+i)\in Q,e_{x}\in E,d\in\{L,R\}\). _E.g._ for QED\({}^{+}(4,e_{2})=(8,e_{6},L)\) with \(8=(022)_{3}\), we have \(q^{\prime}=02,i=2,e_{x}=e_{6},d=L\) and thus \((L_{2},[02,L,e_{6}],L)\) as the r.h.s.

Figure 9: Example for \(b=3\), \(|Q|=7\)

**Theorem 6**.: _Given any TM with \(|Q|=n,|E|=m\), there is another TM with \(|Q^{\prime}|=2b+1\) states and at most_

\[|E^{\prime}|\leq\left(n\cdot\frac{b+1}{b-1}+2(l-1)+\frac{b^{l}-1}{b-1}\right)\cdot m\]

_symbols, where \(l:=\lceil\log_{b}(n)\rceil\)._

Proof.: Set \(Q^{\prime}=\{X,L_{0},\ldots,L_{b-1},R_{0},\ldots,R_{b-1}\}\), \(E^{\prime}\) as in Def. 5, and QED\({}^{+}\) as in Figure 8.

### \(|Q^{\prime}|=2\), One Initial Non-Blank Symbol

We use \(Q^{\prime}=\{L,R\}\), \(E^{\prime}:=\{0,1,2,\ldots,|Q|\}\times\{-,L_{new},L_{old},R_{new},R_{old}\}\times E\), and the transitions from Figure 10. The meaning of state L (respectively R) here is _being in_ the left (right) of the two active cells. If we have a transfer to the right (for \(D^{+}=R\)), the \(L\) state in \(R_{old}\) decrements the \(q\) part of its symbol (transition (3) with \(X=L,\overline{X}=R\)), while the R state in \(R_{new}\) increments it, a transition (2) with \(X=R,\overline{X}=L\). We start in state \(L\) on the non-blank symbol \((1,R_{new},e_{0})\), the 1 denoting the start state, whereas the rest of the tape is blanked out with \((0,-,e_{0})\) or any other initial contents \((0,-,e_{n})\). We immediately execute a QED\({}^{+}\), transition (5), according to \((q_{1},e_{0})\) of the original TM.
### \(|Q^{\prime}|=2\), Starting on Empty Tape \({}^{\omega}0^{\omega}\): Introducing a Third Option via "Overflow Error"

Apparently, when seeing a blank symbol \([0,-,e_{0}]\) (and the BB rules demand _only_ blanks initially), we have to distinguish 3 situations:

- we are to the left of the current position, activate this cell, increase the \(q\) counter from the blank value 0 to 1 by a transition of type (1) in Figure 10 with \(X=L\), and return to the right;
- we are to the right of the current position and behave symmetrically;
- at start, we have to convert that blank into \((q_{1},-,0)\) to start the computation, since there is no state yet on the tape.

When working with only \(|Q^{\prime}|=2\) states and initially only 1 symbol, the blank, we, also apparently, can _not_ distinguish 3 cases. If, however, we are allowed a single non-blank tape cell, we can do just fine, see Figure 10. As we have seen, three states, or two states plus one non-blank, are sufficient to get the simulation started. Our task, however, is starting with a blank tape and \(Q=\{L,R\}\). What really happens then:

\((L,[0,-,e_{0}])\rightarrow(R,[1,L_{new},e_{0}],R)\)  \((R,[0,-,e_{0}])\rightarrow(L,[1,R_{new},e_{0}],L)\)

\((L,[1,L_{new},e_{0}])\rightarrow(R,[2,L_{new},e_{0}],R)\)  \((R,[1,R_{new},e_{0}])\rightarrow(L,[2,R_{new},e_{0}],L)\)

\((L,[2,L_{new},e_{0}])\rightarrow(R,[3,L_{new},e_{0}],R)\)  \((R,[2,R_{new},e_{0}])\rightarrow(L,[3,R_{new},e_{0}],L)\)

... and so on, _ad infinitum_. Actually, there is no symbol \([q,\ldots]\) in \(E^{\prime}\) for \(q>|Q|\), and this yields the third option: let \(X:=D^{+}(q_{1},0)\) be the first move of the original TM. Then we start in X and set the equivalences \(X,[|Q|+1,X_{new},0]:\equiv X,[q_{1},\overline{X}_{new},0]\) (bootstrap start) and \(\overline{X},[|Q|+1,\overline{X}_{new},0]:\equiv\overline{X},[0,-,0]\) (blank). That gives us a 2-state empty-initial-tape simulation for any other TM with empty initial tape.

**Example** (see Figure 11). Let the original machine start in position 1 on an empty tape and execute \((q_{1},0)\rightarrow(q_{2},1,R)\), \((q_{2},0)\rightarrow(q_{3},2,L)\), \((q_{3},1)\rightarrow(q_{4},3,L)\), \((q_{4},0)\rightarrow(\)HALT\()\). We start with an all-blank \((0,-,e_{0})\) tape in state \(R=D^{+}(q_{1},0)\) from the original machine. We assume \(|Q|+1=5\). See Figures 11/12 for details.

**Theorem 7**.: \((i)\) _A TM with \(|Q|=n,|E|=m\) and empty initial tape can be simulated by another TM with \(n^{\prime}=2\), \(Q^{\prime}:=\{L,R\}\) and \(m^{\prime}=5m(n+2)\), \(E^{\prime}:=\{0,1,2,\ldots,|Q|,|Q|+1\}\times\{-,L_{new},L_{old},R_{new},R_{old}\}\times E\)._

\((ii)\) _If the simulating TM does not require an empty initial tape, we have \(|E^{\prime}|=5m(n+1)\), omitting the symbols \([|Q|+1,\ldots,\ldots]\)._

Proof.: The simulating TM is given by Figure 10.

Figure 11: Two states, empty tape (part 1)

## 5 Larger symbol alphabets: \(n(k)\)

### The number \(n(4)\)

For an alphabet with 4 symbols, Friedman gives the "remarkable" lower bound:

**Theorem 8**.: (Friedman [5, Theorem 8.4]) \[n(4)>A^{(A(187196))}(1),\] _where \(A(k)=A(k,k)\) and \(A^{(n)}(1)=A(A^{(n-1)}(1)),A^{(1)}\equiv A\)._

Proof.: The statement appears in [5, p. 7] as a theorem; no proof is given there. We have \(A(1)=2,A(2)=4,A(3)=2^{2^{2}}=16\), \(A(4)=A^{(3)}(1)\) (see below), ... up to \(A(187196)\) to generate the exponent.
Then we do \(A(187196)\) recursions, starting with \(A^{(1)}(1)=2,A^{(2)}(1)=4\),

\[A^{(3)}(1)=2^{2^{\cdots^{2^{2}}}},\qquad A^{(4)}(1)=A\big{(}2^{2^{\cdots^{2^{2}}}},2^{2^{\cdots^{2^{2}}}}\big{)},\]

each with 65536 2's stacked, ... to yield the lower bound for \(n(4)\). This number is indeed, to quote Friedman, "_a whole 'nother kettle of fish_" [5, p. 7].

Figure 12: Two states, empty tape (part 2)

### TMs for \(n(k)\)

How do our TMs change? We need two more symbols, the 4 and a 4' in part II. Also, three new states q1-C4, q5-K4, q5-V4 deal with the additional symbol. Hence, the \((44,8)\) implementation for \(n(3)\) yields a \((47,10)\) implementation for \(n(4)\), and in general there is a \((35+3k,2+2k)\) implementation for computing \(n(k),k\geq 3\in\mathbb{N}\). For general \(k\geq 3\), \(\Delta\in\{0,1\}\), including the case \(X\not\in E,\Delta=1\), we have TMs of size \((35+3k+\Delta(7+k),2k+2-\Delta)\). The (47,10) implementation immediately yields implementations with sizes (2,2450) and (3,1440) as well as (922,2) and (353,3), where we use the somewhat crude upper bounds from Theorems 7, 4 and 3.

**Theorem 9**.: _There are TMs with the following sizes to compute \(n(3)\) and \(n(4)\), respectively:_

| \((n,m)\) | Case | \(k=3\) | \(k=4\) |
| --- | --- | --- | --- |
| \((2,\bullet)\) | C1 | TM(2,1840) | TM(2,2450) |
| \((3,\bullet)\) | C2 | TM(3,1080) | TM(3,1440) |
| \((9,\bullet)\) | C3 | TM(9,800) | TM(9,1030) |
| \((\bullet,2k+2)\) | A | TM(44,8) | TM(47,10) |
| \((\bullet,2k+1)\) | A | TM(54,7) | TM(58,9) |
| \((\bullet,4)\) | -/A | -- | TM(160,4) |
| \((\bullet,3)\) | A/B | TM(155,3) | TM(353,3) |
| \((\bullet,2)\) | A/B | TM(276,2) | TM(922,2) |

Proof.: By construction in case A; by applying Theorem 3 in case B, Theorem 7 in case C1, Theorem 4 in case C2, and Theorem 6 in case C3.

## 6 Infeasible Busy Beaver contests

Whenever an \((n,m)\) contest lies beyond an \((n^{\prime},m^{\prime})\) implementation of \(n(3)\) or \(n(4)\), _i.e._ \(n\geq n^{\prime},m\geq m^{\prime}\), it can safely be considered infeasible. On the other hand, all BB(1,m) and BB(n,1) contests are trivial. That leaves us with the interesting cases, _i.e._ those both feasible and non-trivial, as collected in the left column, \(n(3)\), of Figure 13, where we use only the cases from Theorem 9 to interpolate. The right column, \(n(4)\), already goes beyond feasibility. In summary, all but 37022 non-trivial cases \((n,m)\) lead to \(\texttt{BB}(\texttt{n},\texttt{m})>A(7198,158386)\), and all but 51671 cases even have \(\texttt{BB}(\texttt{n},\texttt{m})>A^{(A(187196))}(1)\).

## Conclusion

We have shown that at most 51671 BB(n,m) values are below \(A^{(A(187196))}(1)\). This gives a third description of the difficulty of the Busy Beaver problem:

**Goldbach** CodeGolfAddict [3] gives a (27,2) implementation to search for a counterexample to Goldbach's conjecture that every even number is the sum of two primes. The value BB(27,2) thus depends on a mathematician's (number theory, _not_ TCS) success in resolving Goldbach's conjecture.

**Long Finite Sequences** Our contribution shows that BB(44,8) and BB(276,2) are computationally infeasible, lying beyond \(A(7198,158386)\).

**ZFC** Yedidia and Aaronson [12] have given a (7910,2) TM that checks ZFC for consistency -- going back to another work of Friedman. Hence, resolving BB(7910,2) is outside the scope of ZFC. Stefan O'Rear has a (748,2) implementation [8].
Note that Friedman's problem is a definite lower bound, while the other two problems can be put aside as holdouts that might never halt and thus never enter the BB contest, if you have confidence in Christian Goldbach, Ernst Zermelo and Adolf Fraenkel. We have furthermore described algorithms to obtain, for any given TM, Turing machines with either the state count or the symbol count freely selectable down to the value 2. (C++ implementations are available upon request from the first author.)

Figure 13: Interesting BB cases, as bounded by \(n(3)\) (left) and \(n(4)\) (right)
2307.01115
MeT: A Graph Transformer for Semantic Segmentation of 3D Meshes
Polygonal meshes have become the standard for discretely approximating 3D shapes, thanks to their efficiency and high flexibility in capturing non-uniform shapes. This non-uniformity, however, leads to irregularity in the mesh structure, making tasks like segmentation of 3D meshes particularly challenging. Semantic segmentation of 3D meshes has been typically addressed through CNN-based approaches, leading to good accuracy. Recently, transformers have gained enough momentum both in NLP and computer vision fields, achieving performance at least on par with CNN models, supporting the long-sought architecture universalism. Following this trend, we propose a transformer-based method for semantic segmentation of 3D meshes, motivated by a better modeling of the graph structure of meshes by means of global attention mechanisms. In order to address the limitations of standard transformer architectures in modeling relative positions of non-sequential data, as in the case of 3D meshes, as well as in capturing the local context, we perform positional encoding by means of the Laplacian eigenvectors of the adjacency matrix, replacing the traditional sinusoidal positional encodings, and we introduce clustering-based features into the self-attention and cross-attention operators. Experimental results, carried out on three sets of the Shape COSEG Dataset, on the human segmentation dataset proposed in Maron et al., 2017 and on the ShapeNet benchmark, show how the proposed approach yields state-of-the-art performance on semantic segmentation of 3D meshes.
Giuseppe Vecchio, Luca Prezzavento, Carmelo Pino, Francesco Rundo, Simone Palazzo, Concetto Spampinato
2023-07-03T15:45:14Z
http://arxiv.org/abs/2307.01115v1
# _MeT_: A Graph Transformer for Semantic Segmentation of 3D Meshes

###### Abstract

Polygonal meshes have become the standard for discretely approximating 3D shapes, thanks to their efficiency and high flexibility in capturing non-uniform shapes. This non-uniformity, however, leads to irregularity in the mesh structure, making tasks like segmentation of 3D meshes particularly challenging. Semantic segmentation of 3D meshes has been typically addressed through CNN-based approaches, leading to good accuracy. Recently, transformers have gained enough momentum both in NLP and computer vision fields, achieving performance at least on par with CNN models, supporting the long-sought architecture universalism. Following this trend, we propose a transformer-based method for semantic segmentation of 3D meshes, motivated by a better modeling of the graph structure of meshes by means of global attention mechanisms. In order to address the limitations of standard transformer architectures in modeling relative positions of non-sequential data, as in the case of 3D meshes, as well as in capturing the local context, we perform positional encoding by means of the Laplacian eigenvectors of the adjacency matrix, replacing the traditional sinusoidal positional encodings, and we introduce clustering-based features into the self-attention and cross-attention operators. Experimental results, carried out on three sets of the Shape COSEG Dataset [1], on the human segmentation dataset proposed in [2] and on the ShapeNet benchmark [3], show how the proposed approach yields state-of-the-art performance on semantic segmentation of 3D meshes.

## I Introduction

Three-dimensional (3D) shapes are at the core of computer graphics and play an important role in many daily-life applications such as vision, robotics, medicine, augmented reality, and virtual reality. In recent years, many approaches have been proposed to encode real-world shapes, including 3D meshes [4] and point clouds [5]. Meshes have become widely adopted to represent complex real-world objects, which are commonly composed of continuous surfaces, through a discrete approximation. The mesh is an efficient way to represent non-uniform surfaces, from simple shapes that generally require only a small number of polygons, up to arbitrarily complex objects, where the number of required polygons may increase significantly. The advantages presented by a mesh are particularly evident when compared to other forms of representation, like point clouds, which fall short when higher quality and preservation of sharp shape features are required. With the increasing spread of deep learning techniques in many fields, research has tried to apply approaches from computer vision to 3D shape analysis. Convolutional neural networks (CNNs), in particular, have demonstrated outstanding performance on a variety of image-related tasks such as classification [6, 7, 8] and semantic segmentation [9, 10, 11]. However, CNNs are designed to work on images, which are represented on a regular grid of discrete values, far from the irregular representation of 3D shapes. On the other hand, representing 3D objects through volumetric grids, e.g. mapping 3D shapes to multiple 2D projections [12] or 3D voxel grids [13], is extremely inefficient and leads to computational costs that increase exponentially with higher resolution. Recent approaches have tried to directly apply CNNs to the sparse point cloud representation [14, 15].
These approaches bring a substantial gain in terms of efficiency, but present an ill-defined notion of neighborhoods and connectivity and are inherently oblivious to the local surface. This issue makes the application of convolution and pooling operations non-trivial. To overcome this limitation, several works have recently tried to generalize CNN architectures to non-Euclidean domains such as graphs and to incorporate neighborhood information [16, 17, 18]. Other approaches have tried to apply deep neural networks to 3D meshes [19, 20, 21]. One recent example is MeshCNN [21], which obtained state-of-the-art results on several segmentation datasets.

A recent trend in computer vision revolves around the use of transformer-based architectures [22], originally developed for NLP, for vision tasks [23, 24]. The success of transformers lies in their extensive attention mechanism, which allows the network to learn global correlations between inputs. This property makes transformers able to intrinsically operate on fully-connected graphs. However, when dealing with sparse graphs, transformers show evident limitations, mainly because the sinusoidal positional encoding is not able to exploit graph topology, and because of the lack of local attention operators. Recently, [25] proposed an approach to extend the transformer architecture to arbitrary graphs. It introduces a graph transformer architecture which leverages the graph connectivity inductive bias, exploiting the graph topology. In particular, they 1) propose a new attention mechanism, 2) replace the positional encoding with the Laplacian eigenvectors, 3) re-introduce batch normalization layers, and 4) take into account edge feature representations. Inspired by this work, and leveraging the structure of a mesh, which can be represented as a graph where the nodes correspond to vertices connected by polygon edges, we propose MeT, a transformer-based architecture for semantic mesh segmentation. In particular, our approach embeds locality features by means of the Laplacian operator (as in [25]) and by combining polygon features with clustering-based features into a novel two-stream transformer layer architecture, where features from the two modalities are extracted through self-attention and combined through cross-attention. Additionally, we ensure that the graph structure induced by the input mesh affects the attention operators, by injecting adjacency and clustering information as attention masks. We evaluate our method on a variety of publicly-available mesh datasets of 3D objects and human bodies; in our experiments, the proposed approach is able to outperform previous works, both qualitatively and quantitatively. To sum up, the key contributions of this work are:

* We enforce graph locality in the transformer by combining a clustering-based information operator with Laplacian positional encoding in place of the standard sinusoidal positional encoding.
* We introduce novel self-attention and cross-attention mechanisms, specifically designed for mesh segmentation, that take into account adjacency and clustering information to mask elements and further impose locality.
* Experimental results on multiple standard benchmarks with different types of meshes show that our model outperforms, both quantitatively and qualitatively, existing mesh segmentation methods, setting new state-of-the-art performance on the task.

## II Related work

Meshes represent a way to describe 3D objects. They consist of vertices, edges and faces that define the shape of a polyhedral object. In this work we focus on triangular meshes, i.e., meshes in which all faces are triangles.
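For concreteness, a minimal sketch of this representation (our own Python/NumPy illustration, not code from the paper): a triangular mesh is stored as a vertex-coordinate array plus a face-index array, and per-face normals follow from a cross product of edge vectors.

```python
import numpy as np

# A tiny triangular mesh: the unit square split into two triangles.
vertices = np.array([[0.0, 0.0, 0.0],   # (V, 3) vertex coordinates
                     [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2],            # (N, 3) vertex indices per face
                  [0, 2, 3]])

# Per-face normals: cross product of two edge vectors, then normalization.
e1 = vertices[faces[:, 1]] - vertices[faces[:, 0]]
e2 = vertices[faces[:, 2]] - vertices[faces[:, 0]]
normals = np.cross(e1, e2)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(normals)  # both faces point along +z
```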
### _Mesh segmentation_

The semantic segmentation of a 3D mesh is the process of assigning a label to each face. The task of semantic segmentation for meshes has applications in many fields such as robotics, autonomous driving, augmented reality and medical image analysis. Following the success of deep learning, several CNN-based methods have been applied to 3D meshes to tackle the task of mesh segmentation [26, 27]. We hereby present an overview of relevant work on 3D data analysis using neural networks, grouped by input representation type.

**Volumetric.** A common approach represents the 3D shape in a binary voxel form, the 3D analogue of a 2D grid such as an image. This allows operations applied on 2D grids to be extended to 3D grids, so that common image-based approaches can be applied to the shape domain. This concept was first introduced by [13], who present a CNN that processes voxelized shapes for classification and completion. Following this approach, [28] introduce a shape reconstruction method using a voxel-based variational autoencoder. In 2019, [29] present ALIGNet, which estimates a deformation on a voxel representation and then applies it to the original mesh. Although voxels are easy to process and allow extending existing methods, this kind of representation is computationally and memory expensive. Resource-efficient methods to process volumetric representations are an open research field, with several approaches being proposed [30, 31]. Sparse convolutions further reduce computational and memory requirements, leading to more efficient approaches [32, 33, 34, 35, 36], but suffer from inaccurate position information due to voxelization.

**Graph.** Another family of approaches leverages the ability to represent meshes as a graph structure. We distinguish two main approaches for graph processing: one relies on the spectral properties of graphs [37, 38, 39, 40]; the other directly processes graphs by extracting locally connected regions and transforming them into a canonical form for a neural network [41]. In 2017, [20] propose a new architecture called Directionally Convolutional Network (DCN) that extends CNNs by introducing a rotation-invariant convolution and a pooling operation on the surface of 3D shapes. In particular, they propose a two-stream segmentation framework: one stream uses the proposed DCN with face normals as the input, while the other one is implemented by a neural network operating on the face distance histogram. The learned shape representations from the two streams are fused by an element-wise product. Finally, a Conditional Random Field (CRF) is applied to optimize the segmentation. [42] propose SyncSpecCNN, a spectral CNN with weight sharing in the spectral domain spanned by graph Laplacian eigenbases, to tackle the task of 3D segmentation. [40] propose a Graph Neural Network (GNN) which exploits the Dirac operator to leverage extrinsic differential geometry properties of three-dimensional surfaces. These methods generally operate on the vertices of a graph.

**Manifold.** [43], with the Geodesic Convolutional Neural Networks, and [19], with the Anisotropic Convolutional Neural Networks, proposed two different CNN-based architectures for triangular mesh segmentation. In 2019, MeshCNN was proposed by [21]. This architecture differs from the previous ones by working on mesh edges rather than faces.
MeshCNN combines specialized convolution and pooling layers that operate on the mesh edges by leveraging their intrinsic geodesic connections. Convolutions are applied on edges and the four edges of their incident triangles, and pooling is applied via an edge collapse operation that retains surface topology, thereby generating new mesh connectivity for the subsequent convolutions. MeshCNN learns which edges to collapse, thus forming a task-driven process where the network exposes and expands the important features while discarding the redundant ones. Other approaches [44, 45, 46] propose alternative solutions to the segmentation task. MeshWalker [44] represents the mesh's geometry and topology by a set of random walks along the surface; these walks are fed to a recurrent neural network. HodgeNet [45], instead, tackles the problem relying on spectral geometry, and proposes parallelizable algorithms for differentiating eigencomputation, including approximate backpropagation without sparse computation. Finally, DiffusionNet [46] introduces a general-purpose approach to deep learning on 3D surfaces, using a simple diffusion layer to agnostically represent any mesh.

### _Graph transformers_

Since their introduction, transformers [22] have demonstrated their wide applicability to many different tasks, from NLP to computer vision. The original transformer was designed for handling sequential data in NLP, and operates on fully connected graphs representing all connections between the words in a sentence. However, when dealing with sparse graphs, transformers perform poorly. Recently, several attempts to adapt transformers to graphs have been proposed [47, 48, 49], focusing on heterogeneous graphs, temporal networks and generative modeling [50, 51, 52, 53]. In 2019, [47] introduce a model employing attention on all graph nodes, instead of a node's local neighbors, to capture global information. This approach limits the exploitation of sparsity, which is a good inductive bias for learning on graph datasets, as shown in [25]. To learn global information, other approaches use graph-specific positional features [49], node Laplacian positional eigenvectors [54, 25], relative learnable positional information [55] and virtual nodes [56]. [25] propose an approach to extend the transformer architecture to arbitrary graphs. It introduces a graph transformer architecture with four new properties compared to the standard model: 1) an attention mechanism which is a function of the neighborhood connectivity for each node in the graph; 2) a positional encoding represented by the Laplacian eigenvectors, which naturally generalize the sinusoidal positional encoding often used in NLP; 3) a batch normalization layer in place of layer normalization; 4) edge feature representations. MeshFormer [57] proposes a mesh segmentation method based on graph transformers, which uses a boundary-preserving simplification to reduce the data size, a Ricci flow-based clustering algorithm for constructing hierarchical structures of meshes, and a graph transformer with cross-resolution convolutions, which extracts richer high-resolution semantics. Recently, [58] introduced a novel method for 3D mesh segmentation named Navigation Geodesic Distance Transformer (NGD-Transformer). It exploits the manifold properties of the mesh through a novel positional encoding called navigation geodesic distance positional encoding, which encodes the geodesic distance between vertices.
Our work takes inspiration from [25] and proposes a transformer-based architecture for tackling 3D meshes represented as graphs. As in [25], we employ a positional encoding represented by the Laplacian eigenvectors of the adjacency matrix and pre-layer normalization. However, we extend the original approach by adapting the architecture to 3D meshes, in particular by proposing two cross-attention modules (similar to decoder layers) that learn local and global representations on 3D meshes and clusters thereof.

## III Method

In this work, we propose a novel transformer-based architecture for semantic segmentation of 3D meshes. The proposed method takes inspiration from recent vision transformer architectures [59] and spectral methods for graphs, in order to create an embedding in a Euclidean space with the topological features of the mesh. Given a triangular mesh, described as a set of \(V\) vertices \(\left\{\mathbf{v}_{k}=(x_{k},y_{k},z_{k})\right\}_{k=1,\ldots,V}\) and a set of \(N\) triangles \(\left\{\mathbf{f}_{i}=(k_{i,1},k_{i,2},k_{i,3},\mathbf{n}_{i})\right\}_{i=1,\ldots,N}\), where each triangle is defined by its three vertices and the normal direction \(\mathbf{n}_{i}\) of its surface, the goal is to assign a class \(c_{i}\in\mathcal{C}\) to each triangle \(\mathbf{f}_{i}\), representing the dominant class on the surface of the triangle.

### _Feature extraction_

For each triangle, we initially extract a set of features based on spectral properties of the triangle graph (where triangles are nodes, and shared sides are edges), which is the dual of the mesh (where vertices are nodes, and triangle sides are edges). The process starts by building the adjacency matrix \(\mathbf{A}\), of size \(N\times N\) (\(N\) being the number of triangles in the mesh), such that \(A_{ij}=1\) if the \(i\)-th and the \(j\)-th triangles share an edge, and \(A_{ij}=0\) otherwise. From the adjacency matrix \(\mathbf{A}\), we then compute the symmetric normalized Laplacian matrix \(\mathbf{L}\) as:

\[\mathbf{L}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}, \tag{1}\]

where \(\mathbf{I}\) is the identity matrix and \(\mathbf{D}\) is the degree matrix for \(\mathbf{A}\), i.e., a diagonal matrix such that \(D_{ii}\) is the number of edges connected to \(i\) (equivalently, the sum of the elements in the \(i\)-th row of \(\mathbf{A}\)). Then, we compute the \(E+1\) (with \(E\leq N\)) eigenvectors of \(\mathbf{L}\) with the smallest eigenvalues and discard the trivial one associated with the zero eigenvalue. The \(i\)-th components of the \(E\) remaining eigenvectors, collected in the vector \(\mathbf{l}_{i}\), are then used to encode the location of the \(i\)-th triangle within the mesh. We employ these features as a positional encoding in the transformer, as described by [25], and to identify local neighborhoods by means of clustering (described in the next section). Formally, given a triangle \(\mathbf{f}_{i}=(k_{1},k_{2},k_{3},\mathbf{n}_{i})\), where \(\mathbf{n}_{i}\) is its normal vector direction, obtained by computing the vector product \(\tilde{\mathbf{n}}_{i}=(\mathbf{v}_{k_{2}}-\mathbf{v}_{k_{1}})\times(\mathbf{v}_{k_{3}}-\mathbf{v}_{k_{1}})\) and then normalizing it as \(\mathbf{n}_{i}=\tilde{\mathbf{n}}_{i}/\left\|\tilde{\mathbf{n}}_{i}\right\|\), we obtain the feature representation \(\mathbf{t}_{i}\) for triangle \(i\) as \(\mathbf{t}_{i}=(\mathbf{v}_{k_{1}},\mathbf{v}_{k_{2}},\mathbf{v}_{k_{3}},\mathbf{n}_{i},\mathbf{l}_{i})\). Fig. 1 shows the visualization of the Laplacian eigenvectors for a mesh.

Fig. 1: Visualization of the first three eigenvectors of the Laplacian of the mesh dual graph.
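This feature-extraction step can be sketched as follows (our illustrative Python code, with SciPy standing in for whatever the authors actually used; the function name is ours): build the dual-graph adjacency from shared edges, form the symmetric normalized Laplacian of Eq. (1), and keep the eigenvectors of the smallest non-zero eigenvalues.

```python
import numpy as np
from collections import defaultdict
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def laplacian_positional_encoding(faces, E):
    """Laplacian eigenvector features l_i for each triangle of a mesh.

    faces: (N, 3) integer array of vertex indices per triangle.
    Returns an (N, E) array whose i-th row encodes triangle i's position.
    """
    # Dual-graph adjacency: triangles are nodes, shared edges are links.
    edge_to_faces = defaultdict(list)
    for f, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_to_faces[(min(u, v), max(u, v))].append(f)
    rows, cols = [], []
    for fs in edge_to_faces.values():
        if len(fs) == 2:                  # interior edge shared by two triangles
            rows += fs
            cols += fs[::-1]
    N = len(faces)
    A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(N, N))

    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}  (Eq. 1).
    L = laplacian(A, normed=True)

    # E+1 smallest eigenpairs; drop the trivial zero-eigenvalue eigenvector.
    vals, vecs = eigsh(L, k=E + 1, which="SM")
    return vecs[:, 1:]                    # (N, E) positional features
```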
### _Clustering_

Before being processed by the network, triangle features are clustered into \(M=V/\lambda\) clusters, where \(\lambda\) is a configurable parameter controlling the average number of mesh vertices per cluster. As we describe in detail when presenting our transformer architecture, we introduce clustering as an additional and more explicit way than positional encoding to enforce locality on the features extracted for each triangle. Clustering is carried out using the Ward method [60], applying connectivity constraints dictated by the dual-graph adjacency matrix, which generates clusters that are geometrically and topologically connected and cohesive. The result is a matrix \(\mathbf{J}\) with shape \(N\times M\), such that \(J_{im}=1\) if the \(i\)-th triangle is in the \(m\)-th cluster, and \(J_{im}=0\) otherwise. Each row \(\mathbf{j}_{i}\) in \(\mathbf{J}\) can be interpreted as the one-hot cluster representation for the \(i\)-th triangle. Fig. 2 shows an example of mesh triangles clustering.

Fig. 2: Example of triangles clustering with \(\lambda=8\).
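A minimal sketch of this clustering step with scikit-learn (an illustration under our reading of the paper; the authors' exact implementation may differ): Ward agglomerative clustering accepts a connectivity matrix, so passing the dual-graph adjacency only merges triangles that are adjacent on the surface.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_triangles(features, adjacency, n_clusters):
    """Connectivity-constrained Ward clustering of per-triangle features.

    features:  (N, d) array of triangle features t_i.
    adjacency: (N, N) sparse dual-graph adjacency matrix A.
    Returns the (N, M) one-hot assignment matrix J.
    """
    ward = AgglomerativeClustering(n_clusters=n_clusters,
                                   linkage="ward",
                                   connectivity=adjacency)
    labels = ward.fit_predict(features)    # cluster index per triangle
    J = np.eye(n_clusters)[labels]         # one-hot rows j_i
    return J
```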
### _Network architecture_

**Mesh Transformer.** The proposed MeT architecture implements a transformer model with two internal feature extraction streams, one for triangle features and one for cluster features, organized in matching sets (i.e., the \(i\)-th element of the triangle set corresponds to the \(i\)-th element of the cluster set). The two sets of features are processed by a cascade of transformer layers; only features from the triangle stream are finally used for prediction through a two-layer feedforward network, which predicts for each triangle a score vector \(\mathbf{s}_{i}\), of size equal to the number of segmentation classes. Given the extracted features \(\mathbf{t}_{i}\) and cluster identifier \(\mathbf{j}_{i}\) for each mesh triangle, we first convert them into two sequences of _tokens_, to be provided as input to the transformer layers. Triangle tokens \(\mathbf{e}_{i}\) are obtained as:

\[\mathbf{e}_{i}=\mathrm{FF}_{t}\left(\mathbf{t}_{i}\right) \tag{2}\]

where \(\mathrm{FF}_{t}\) is a feedforward layer with ReLU activation1, of output size \(d_{t}\). Cluster tokens \(\mathbf{p}_{i}\), of size \(d_{p}\), are obtained by a learnable embedding layer on the corresponding one-hot cluster identifier \(\mathbf{j}_{i}\). Matrices \(\mathbf{E}\in\mathbb{R}^{N\times d_{t}}\) and \(\mathbf{P}\in\mathbb{R}^{N\times d_{p}}\) are defined by laying each token as a row in the corresponding matrix.

Footnote 1: All feedforward layers in our model have ReLU activations.

Each network layer, illustrated in Fig. 3, can thus be defined as a function \(L_{i}\left(\cdot,\cdot\right)\) on token sequences:

\[L_{i}\left(\mathbf{E},\mathbf{P}\right)=\left(\mathrm{R}_{i}\Big{(}\mathrm{SA}_{t,i}\big{(}\mathrm{TC}_{i}\left(\mathbf{E},\mathbf{P}\right)\big{)}\Big{)},\mathrm{R}_{i}\Big{(}\mathrm{SA}_{p,i}\big{(}\mathrm{CT}_{i}\left(\mathbf{E},\mathbf{P}\right)\big{)}\Big{)}\right) \tag{3}\]

where \(\mathrm{SA}_{t,i}\) and \(\mathrm{SA}_{p,i}\) are, respectively, the triangle and cluster self-attention functions, \(\mathrm{R}_{i}\) is a residual connection function, and \(\mathrm{TC}_{i}\) and \(\mathrm{CT}_{i}\) are, respectively, the function updating triangle tokens from cluster tokens and vice versa. The output of each layer has the same dimensions as the input, allowing for arbitrary lengths of encoder sequences.

Fig. 3: Representation of an encoder layer of the Mesh Transformer.

**Multi-head attention.** Before introducing the details of the encoder layers, let us present a general formulation of multi-head attention, which is extensively employed in the proposed architecture. An attention function \(A\) receives three matrices \(\mathbf{Q}\in\mathbb{R}^{N_{q}\times d_{k}}\) (query), \(\mathbf{K}\in\mathbb{R}^{N_{k}\times d_{k}}\) (key) and \(\mathbf{V}\in\mathbb{R}^{N_{k}\times d_{v}}\) (value), and returns a matrix \(\mathbf{O}\in\mathbb{R}^{N_{q}\times d_{v}}\), where each row is computed as a linear combination of rows from \(\mathbf{V}\), weighted by normalized dot-product similarity between rows of \(\mathbf{Q}\) and \(\mathbf{K}\), as follows:

\[A(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_{k}}}\right)\mathbf{V} \tag{4}\]

where softmax is applied on rows of the input matrix. In multi-head attention, in order to capture several possible attention patterns between elements, \(\mathbf{Q}\), \(\mathbf{K}\) and \(\mathbf{V}\) are usually computed by linearly projecting a set of input matrices \(\hat{\mathbf{Q}}\in\mathbb{R}^{N_{q}\times d_{k}}\), \(\hat{\mathbf{K}}\in\mathbb{R}^{N_{k}\times d_{k}}\) and \(\hat{\mathbf{V}}\in\mathbb{R}^{N_{k}\times d_{v}}\) through multiple sets of projection matrices \(\left\{\left(\mathbf{W}_{q,i},\mathbf{W}_{k,i},\mathbf{W}_{v,i}\right)\right\}_{i=1,\dots,h}\), with \(h\) being the number of heads. The attention outputs for each set of projection matrices are then concatenated and linearly projected to produce the final output, as follows:

\[\text{MA}(\hat{\mathbf{Q}},\hat{\mathbf{K}},\hat{\mathbf{V}})=\text{concat}\left(H_{1},\dots,H_{h}\right)\mathbf{W}_{O} \tag{5}\]

where \(\mathbf{W}_{O}\in\mathbb{R}^{d_{v}\times d_{o}}\) is a linear projection to the desired output dimension, and \(H_{i}\) is the output of the \(i\)-th attention head:

\[H_{i}=A\left(\hat{\mathbf{Q}}\mathbf{W}_{q,i},\hat{\mathbf{K}}\mathbf{W}_{k,i},\hat{\mathbf{V}}\mathbf{W}_{v,i}\right) \tag{6}\]

The amount of computation required for multi-head attention is approximately the same as in single-head attention, obtained by uniformly splitting the dimensions \(d_{q}\), \(d_{k}\) and \(d_{v}\) among the \(h\) heads. In this work, for simplicity, we set \(d_{q}=d_{k}=d_{v}=d_{o}=d\), whose specific value depends on where multi-head attention is employed in the network, as described below.

**Self-attention for cluster tokens.** The architecture of the self-attention module for cluster tokens is presented in Fig. 4. The module receives the set of cluster tokens \(\mathbf{P}\) and applies a function \(\mathrm{SA}_{p}\) defined as:

\[\mathbf{P}_{n}=\text{PLN}\left(\mathbf{P}\right) \tag{7}\]

\[\text{SA}_{p}\left(\mathbf{P}\right)=\text{MA}\left(\mathbf{P}_{n},\mathbf{P}_{n},\mathbf{P}_{n}\right)+\mathbf{P} \tag{8}\]

where PLN is Pre-Layer Normalization [61], which has been shown to improve the training of transformer architectures, and the query, key and value matrices are all set to \(\mathbf{P}_{n}\), as is typical of self-attention. A final residual connection is applied to improve gradient flow. The \(d\) size is set to \(d_{p}\), i.e., the size of the input cluster tokens.

Fig. 4: Architecture of the self-attention module for cluster tokens.
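A compact PyTorch sketch of the block defined by Eqs. (7)-(8) (module and argument names are our own): pre-layer normalization, standard multi-head self-attention with query, key and value all set to the normalized tokens, and a residual connection.

```python
import torch
import torch.nn as nn

class ClusterSelfAttention(nn.Module):
    """SA_p: pre-LN multi-head self-attention with a residual connection."""
    def __init__(self, d_p: int, n_heads: int = 8):
        super().__init__()
        self.pln = nn.LayerNorm(d_p)                        # PLN, Eq. (7)
        self.mha = nn.MultiheadAttention(d_p, n_heads, batch_first=True)

    def forward(self, P: torch.Tensor) -> torch.Tensor:    # P: (B, N, d_p)
        Pn = self.pln(P)
        out, _ = self.mha(Pn, Pn, Pn)                       # Q = K = V = P_n
        return out + P                                      # residual, Eq. (8)
```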
**Self-attention for triangle tokens**, illustrated in Fig. 5, shares the same architecture as the self-attention module for clusters, but employs the adjacency matrix \(\mathbf{A}\) as a mask for the multi-head attention computation. The choice to adopt an adjacency-based attention masking mechanism is due to the need to preserve the capacity of the model to capture both local composition and long-range dependencies [59], and to reduce computation requirements for high-resolution meshes by exploiting the sparsity of the \(\mathbf{A}\) matrix. To carry out masked multi-head attention, the attention function in Eq. 4 is modified by adding \(-\infty\) to the masked positions of the query-key similarity matrix, in order to nullify the corresponding softmax terms. The resulting attention function \(A_{\text{mask}}\) is defined as:

\[A_{\text{mask}}(\mathbf{Q},\mathbf{K},\mathbf{V},\mathbf{M})=\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}+\mathbf{M}}{\sqrt{d_{k}}}\right)\mathbf{V} \tag{9}\]

where the elements of \(\mathbf{M}\) are either 0 or \(-\infty\). We can thus define our self-attention module for triangles as:

\[\mathbf{E}_{n}=\text{PLN}\left(\mathbf{E}\right) \tag{10}\]

\[\text{SA}_{t}\left(\mathbf{E}\right)=\text{MA}_{\text{mask}}\left(\mathbf{E}_{n},\mathbf{E}_{n},\mathbf{E}_{n},\hat{\mathbf{A}}\right)+\mathbf{E} \tag{11}\]

where \(\text{MA}_{\text{mask}}\) is the variant of MA employing \(A_{\text{mask}}\) as attention function, and \(\hat{\mathbf{A}}=\log\mathbf{A}\), so that \(\hat{A}_{ij}=-\infty\) where \(A_{ij}=0\), and \(\hat{A}_{ij}=0\) where \(A_{ij}=1\). The \(d\) size for multi-head attention is set to \(d_{t}\), i.e., the size of the input triangle tokens.

Fig. 5: Representation of the self-attention module for triangles.

**Updating cluster representations from triangle tokens.** The cluster-triangle update module is introduced to update the clusters' representation w.r.t. the triangles', thus allowing the network to exchange information between the two different modalities employed for modeling the graph structure, i.e., Laplacian eigenvectors and clustering. To this aim, we employ masked multi-head attention using cluster tokens for computing query vectors, and triangle tokens to compute keys and values; in order to aggregate, for each cluster, only information of the triangles contained in it, we compute a symmetric matrix \(\mathbf{C}\) from the \(\mathbf{J}\) clustering matrix by setting \(C_{ij}=1\) if triangles \(i\) and \(j\) belong to the same cluster, i.e., \(\mathbf{j}_{i}=\mathbf{j}_{j}\), and \(C_{ij}=0\) otherwise. The architecture of the cluster-triangle update module is presented in Fig. 6, and implements the following function:

\[\mathbf{P}_{n}=\text{PLN}\left(\mathbf{P}\right) \tag{12}\]

\[\text{CT}\left(\mathbf{E},\mathbf{P}\right)=\text{MA}_{\text{mask}}\left(\mathbf{P}_{n},\mathbf{E},\mathbf{E},\hat{\mathbf{C}}\right)+\mathbf{P} \tag{13}\]

where the mask \(\hat{\mathbf{C}}=\log\mathbf{C}\), as above. The \(d\) dimension for multi-head attention is set to \(d_{p}\), i.e., the size of the input cluster tokens.

Fig. 6: Representation of the cluster-triangle update module.
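The masked attention function \(A_{\text{mask}}\) of Eq. (9) can be sketched directly (our illustration; single-head for brevity): an additive mask with entries 0 or \(-\infty\), such as \(\hat{\mathbf{A}}=\log\mathbf{A}\) or \(\hat{\mathbf{C}}=\log\mathbf{C}\), zeroes out the softmax weights of non-adjacent (or differently clustered) pairs.

```python
import math
import torch

def masked_attention(Q, K, V, mask):
    """A_mask (Eq. 9): scaled dot-product attention with an additive mask.

    Q: (Nq, dk), K: (Nk, dk), V: (Nk, dv);
    mask: (Nq, Nk) with entries 0 (keep) or -inf (drop).
    """
    scores = (Q @ K.T + mask) / math.sqrt(Q.shape[-1])
    return torch.softmax(scores, dim=-1) @ V

# Building the mask from a dense 0/1 adjacency matrix A:
# torch.log maps 1 -> 0 and 0 -> -inf, exactly as A_hat = log A requires.
# (A row of all -inf would give NaNs; interior triangles always have neighbors.)
A = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
A_hat = torch.log(A)
```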
**Updating triangle representations from cluster tokens.** A triangle-cluster update module is also used to update the triangle representations with respect to the clusters. Similarly to the cluster-triangle case, each triangle is affected only by elements belonging to the same cluster. The cross-attention module computes the sum between each triangle token and a projection of the average of the corresponding cluster tokens through a single feed-forward layer, as follows:

\[\mathbf{E}_{n}=\text{PLN}\left(\mathbf{E}\right) \tag{14}\]

\[\text{TC}\left(\mathbf{E},\mathbf{P}\right)=\mathbf{E}_{n}+\text{FF}_{\text{TC}}\left(\mathbf{CP}\right) \tag{15}\]

where \(\text{FF}_{\text{TC}}\) is a single feedforward layer. The architecture of the triangle-cluster update module is presented in Fig. 7. This operation can be interpreted as a form of cross-attention between triangle tokens and cluster tokens, where the former attend to the latter by means of a constant attention factor defined by cluster membership.

Fig. 7: Representation of the triangle-cluster update module.

**Layer residual connection.** The output of each token stream of a network layer finally undergoes a feedforward residual transformation, to independently transform each token, as follows:

\[SS_{n}=\text{PLN}\left(SS\right) \tag{16}\]

\[\mathrm{R}\left(SS\right)=\text{FF}_{R}\left(SS_{n}\right)+SS \tag{17}\]

where \(SS\) is either \(\mathbf{E}\) or \(\mathbf{P}\), and \(\text{FF}_{R}\) is a feedforward layer.
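Putting the modules together, one encoder layer \(L_{i}\) of Eq. (3) composes them as in the following schematic sketch (our reading of the paper; the sub-modules and their signatures are assumptions, passed in as callables):

```python
import torch.nn as nn

class MeTLayer(nn.Module):
    """One encoder layer L_i (Eq. 3): cross-stream updates TC/CT, masked
    self-attention on each stream, then the residual feedforward R."""
    def __init__(self, tc, ct, sa_t, sa_p, r_t, r_p):
        super().__init__()
        self.tc, self.ct = tc, ct          # TC_i and CT_i update modules
        self.sa_t, self.sa_p = sa_t, sa_p  # triangle / cluster self-attention
        self.r_t, self.r_p = r_t, r_p      # residual feedforward blocks R_i

    def forward(self, E, P, A_hat, C_hat):
        # Both streams read the *input* pair (E, P), as in Eq. (3).
        E_new = self.r_t(self.sa_t(self.tc(E, P), A_hat))
        P_new = self.r_p(self.sa_p(self.ct(E, P), C_hat))
        return E_new, P_new
```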
## IV Experimental results

In this section, we first introduce the datasets employed in our work: the Shape COSEG dataset [1] and the human segmentation dataset proposed by [2]. Then, we evaluate the accuracy of our approach on the different datasets. First, we assess how the model performs on three categories of the COSEG dataset, namely _Chairs_, _Vases_ and _Tele-Aliens_; afterwards, we evaluate our method on the segmentation of human body meshes as well as on the ShapeNet dataset [3]. An ablation study then follows to substantiate our architectural choices. As a methodological note on the evaluation, for the comparison of MeT with existing methods, we report the performance values published in the original papers on the considered benchmarks.

### _Datasets and metrics_

We test the performance of MeT and compare it with that yielded by existing models on three standard benchmarks, namely the Shape COSEG dataset [1], the Human dataset [2] and the ShapeNet dataset [3]. The Shape COSEG dataset [1] consists of 11 sets of shapes with a consistent ground-truth segmentation and labeling: 8 sets are rather small and come from the dataset by [62], while the 3 remaining ones contain, respectively, tele-aliens, vases and chairs. Given the scale of the tele-aliens, vases and chairs sets compared to the other eight sets, we use only them to evaluate the performance of MeT. Train and test splits are the same as defined in MeshCNN [21] for a fair comparison. As validation set, we use 6% of the training set. We also evaluate our method on the human segmentation dataset introduced by [2]. It consists of human meshes from several datasets, in particular SCAPE, FAUST, MIT Animation and SHREC 2007. The last one is used as test set, as in the MeshCNN [21] paper. ShapeNet [3] is a large-scale repository of shapes represented by 3D models of objects categorized following the WordNet taxonomy. ShapeNet contains semantic annotations about object parts as well as rigid alignments, bilateral symmetry planes, physical sizes and other annotations.

### _Model training and evaluation_

We train our model with mini-batch gradient descent, using the AdamW [63] optimizer and a batch size of 12. The learning rate is set to \(5\cdot 10^{-5}\) with a weight decay of 0.01. Dropout with probability 0.1 is used after each feedforward layer and each multi-head attention in the transformer encoder, and after each feedforward layer in the classification network. The value of the \(\lambda\) parameter controlling feature clustering, described in Sec. III, is 8 for all the experiments, while token dimensions are set as \(d_{t}=512\) and \(d_{p}=1024\). All these parameters were set by measuring performance on a validation set extracted from the COSEG dataset. The cross-entropy loss is used, weighted for each triangle based on its surface area (larger triangles have more weight). We perform data augmentation by applying random translation, rotation and scaling to each mesh in a mini-batch. Accuracy is computed, as in DCN [20], as the total surface of the triangles correctly classified over the entire surface.

**Data preprocessing.** Similarly to MeshCNN, each mesh in the dataset is preprocessed by reducing the number of vertices to a maximum of 1200, using the algorithm proposed by [64]. Duplicated vertices are merged and "padding" triangles are added to allow batched processing of meshes. After preprocessing, each mesh consists of 2412 triangles. Padding triangles are not adjacent to any mesh triangle and do not influence the final prediction. Vertex coordinates are standardized between -1 and 1. The \(\mathbf{A}\) and \(\mathbf{P}\) matrices are extended to include the padding triangles.

We first evaluate our model on the Chairs, Vases and Tele-Aliens subsets of the COSEG dataset. For each set, we report the performance in terms of accuracy. Tab. I shows that our approach achieves a higher global accuracy on all the COSEG sets w.r.t. state-of-the-art methods. Fig. 8 shows segmentation examples for each mesh set. Mesh segmentation performances on the Human dataset [2] are reported in Tab. II, showing better performance of our approach also on this benchmark when compared with three state-of-the-art algorithms. Fig. 9 shows qualitative results for the predicted segmentation. Finally, we compute mesh segmentation performances on the ShapeNet dataset [3], which are shown in Tab. III. Also on this benchmark, MeT yields better accuracy than state-of-the-art methods.

### _Ablation study_

We perform an ablation study, on the three subsets of the _COSEG_ dataset, to substantiate our design choices. We first assess how each component in the triangle representation affects performance, namely, triangle coordinates, surface normals and Laplacian positional encoding. Results in Tab. IV show that all the input features positively affect accuracy. However, the highest contribution to the final performance is provided by the Laplacian. We then assess the importance of the cluster-related stream described in Sec. III, i.e., cluster self-attention and cluster-triangle cross-attention. A comparison of the model accuracy with and without the cluster modules is presented in Tab. V, where we can see that the cluster modules lead to a gain of 3.6 percentage points over the baseline, i.e., the model using only triangle information.

## V Conclusion

In this work, we introduce a novel transformer-based architecture for 3D mesh segmentation. Our approach successfully and significantly extends standard transformers with features specifically designed for the task at hand.
First, we introduce a two-stream processing pipeline within each transformer layer, designed to enforce locality through the combination of mesh triangle features and clustering-based features, and by integrating spectral graph properties, through Laplacian eigenvectors, in place of the classic sinusoidal positional encoding. Additionally, we adapt the typical attention mechanisms of transformers by taking into account graph properties, in particular by using the adjacency matrix and triangle clustering to explicitly mask multi-head self- and cross-attention. Experimental results, evaluated on multiple object categories, show that the resulting approach is able to outperform state-of-the-art methods on mesh segmentation, and demonstrate the positive impact of our architectural novelties by means of extended ablation studies. To conclude, we show that transformer models, in spite of their global processing nature and their limitations in representing locality in sparse graphs, can be successfully adapted to mesh analysis by carefully integrating methodological adjustments designed to capture mesh properties in a complex task such as segmentation.

## VI Acknowledgements

This research is supported by the project Future Artificial Intelligence Research (FAIR) PNRR MUR Cod. PE0000013-CUP: E63C22001940006 and by the project "LEGO.AI: LEarning the Geometry of knOwledge in AI systems", n. 2020TA3K9N.
2306.09660
Homogenization of eigenvalues for problems with high-contrast inclusions
We study quantitative homogenization of the eigenvalues of elliptic systems with periodically distributed inclusions, where the conductivity of the inclusions contrasts strongly with that of the matrix. We propose a quantitative version of the periodic unfolding method; based on this and on recent results on high-contrast homogenization, the convergence rates of the eigenvalues are studied for any contrast $\delta \in (0,\infty)$.
Xin Fu
2023-06-16T07:28:57Z
http://arxiv.org/abs/2306.09660v1
# Homogenization of eigenvalues for problems with high-contrast inclusions

###### Abstract.

We study quantitative homogenization of the eigenvalues of elliptic systems with periodically distributed inclusions, where the conductivity of the inclusions contrasts strongly with that of the matrix. We propose a quantitative version of the periodic unfolding method; based on this and on recent results on high-contrast homogenization, the convergence rates of the eigenvalues are studied for any contrast \(\delta\in(0,\infty)\).

**Key words**: Periodic homogenization, high contrast media, double porosity problem, eigenvalue asymptotics, periodic unfolding method, perforated domains.

**Mathematics subject classification (MSC 2020)**: 35B27, 35J70, 35P20

## 1. Introduction

In this paper, we consider homogenization for eigenvalues of the operator \(\mathcal{L}_{\varepsilon,\delta}\), which is described as follows: Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^{d}\); we define the unbounded operator \(\mathcal{L}_{\varepsilon,\delta}\) on \([L^{2}(\Omega)]^{m}\) by

\[\mathcal{L}_{\varepsilon,\delta}:=-\frac{\partial}{\partial x_{i}}\left[\Lambda_{\delta}^{\varepsilon}(x)a_{ij}^{\alpha\beta}\left(\frac{x}{\varepsilon}\right)\frac{\partial}{\partial x_{j}}\right]=-\mathrm{div}\left[\Lambda_{\delta}^{\varepsilon}(x)A\left(\frac{x}{\varepsilon}\right)\nabla\right] \tag{1.1}\]

for \(1\leq i,j\leq d\), \(1\leq\alpha,\beta\leq m\), \(0<\varepsilon<1\), \(0<\delta<\infty\), with the domain

\[\mathcal{D}(\mathcal{L}_{\varepsilon,\delta}):=\Big{\{}u\in[H_{0}^{1}(\Omega)]^{m}:\mathcal{L}_{\varepsilon,\delta}\,u\in[L^{2}(\Omega)]^{m}\Big{\}}. \tag{1.2}\]

The coefficient tensor \(A(y)=\big{(}a_{ij}^{\alpha\beta}(y)\big{)}\in H^{1}(Y)\) is assumed to be real and to satisfy:

1. (Ellipticity) There exists \(\nu>0\) such that \[\nu|\xi|^{2}\leq a_{ij}^{\alpha\beta}(y)\xi_{i}^{\alpha}\xi_{j}^{\beta}\leq\frac{1}{\nu}|\xi|^{2}\quad\text{for }y\in\mathbb{R}^{d}\text{ and }\xi=(\xi_{i}^{\alpha})\in\mathbb{R}^{dm}.\] (1.3)
2. (Periodicity) \[A(y+\mathbf{n})=A(y)\quad\text{for }y\in\mathbb{R}^{d}\text{ and }\mathbf{n}\in\mathbb{Z}^{d}.\] (1.4)
3. (Hölder continuity) For any \(y,w\in\mathbb{R}^{d}\), \[|A(y)-A(w)|\leq\tau|y-w|^{\lambda}\quad\text{for some }\lambda\in(0,1)\text{ and }\tau>0.\] (1.5)
4. (Symmetry) For any \(y\in\mathbb{R}^{d}\), \[a_{ij}^{\alpha\beta}(y)=a_{ji}^{\beta\alpha}(y)\quad\text{for }1\leq i,j\leq d\text{ and }1\leq\alpha,\beta\leq m.\] (1.6)

The scalar function \(\Lambda_{\delta}^{\varepsilon}(x):=\delta\mathbb{1}_{D_{\varepsilon}}(x)+\mathbb{1}_{\Omega_{\varepsilon}}(x)\) models the contrast between the inclusions and the matrix: Let \(Y=[0,1)^{d}\) be the unit cell, and let \(\omega\subset Y\) be an open subset with connected Lipschitz boundary such that \(\mathrm{dist}(\omega,\partial Y)>0\); for simplicity, assume \(\omega\) is simply connected. \(\omega\) then serves as the model inclusion at the unit scale. Given \(\varepsilon>0\) and \(\mathbf{n}\in\mathbb{Z}^{d}\), we denote \(\varepsilon(\mathbf{n}+Y)\) and \(\varepsilon(\mathbf{n}+\omega)\) by \(Y_{\varepsilon}^{\mathbf{n}}\) and \(\omega_{\varepsilon}^{\mathbf{n}}\), respectively. Let \(\Pi_{\varepsilon}\) be the set of lattice points \(\mathbf{n}\) such that \(\overline{Y_{\varepsilon}^{\mathbf{n}}}\) is contained in \(\Omega\), i.e.,

\[\Pi_{\varepsilon}:=\left\{\mathbf{n}\in\mathbb{Z}^{d}:\overline{Y_{\varepsilon}^{\mathbf{n}}}\subset\Omega\right\}. \tag{1.7}\]
Then the inclusion set \(D_{\varepsilon}\) and the matrix part \(\Omega_{\varepsilon}\) are defined by

\[D_{\varepsilon}:=\bigcup_{\mathbf{n}\in\Pi_{\varepsilon}}\omega_{\varepsilon}^{\mathbf{n}},\qquad\Omega_{\varepsilon}:=\Omega\setminus\overline{D_{\varepsilon}}. \tag{1.8}\]

Under these conditions, we study the quantitative asymptotic behavior of the eigenvalues of \(\mathcal{L}_{\varepsilon,\delta}\) as \(\varepsilon\to 0\). To state the main results, let \(\widehat{\mathcal{L}}_{\delta}\) be the homogenized operator defined on \([L^{2}(\Omega)]^{m}\) by

\[\widehat{\mathcal{L}}_{\delta}:=-\frac{\partial}{\partial x_{i}}\left[\widehat{a}_{ij,\delta}^{\alpha\beta}\frac{\partial}{\partial x_{j}}\right]=-\mathrm{div}\,(\widehat{A}_{\delta}\nabla), \tag{1.9}\]

with the domain

\[\mathcal{D}(\widehat{\mathcal{L}}_{\delta}):=\Big{\{}u\in[H_{0}^{1}(\Omega)]^{m}:\widehat{\mathcal{L}}_{\delta}\,u\in[L^{2}(\Omega)]^{m}\Big{\}}, \tag{1.10}\]

where the coefficient tensor \(\widehat{A}_{\delta}=\big{(}\widehat{a}_{ij,\delta}^{\alpha\beta}\big{)}\) is defined by

\[\widehat{a}_{ij,\delta}^{\alpha\beta}=\int_{Y}\Lambda_{\delta}(y)\Big{[}a_{ij}^{\alpha\beta}(y)+a_{ik}^{\alpha\gamma}(y)\frac{\partial}{\partial y_{k}}\chi_{j,\delta}^{\gamma\beta}\Big{]}\,dy. \tag{1.11}\]

Here, for each \(i\) and \(\alpha\), \(\chi_{i,\delta}^{\alpha}=\big{(}\chi_{i,\delta}^{\alpha 1},\cdots,\chi_{i,\delta}^{\alpha m}\big{)}\) is the solution of the _cell problem_

\[\begin{cases}\mathcal{L}_{1,\delta}\big{(}\chi_{i,\delta}^{\alpha}+y_{i}e^{\alpha}\big{)}=0\qquad\text{in }Y,\\ \chi_{i,\delta}^{\alpha}\text{ is }Y-\text{periodic and mean zero},\end{cases} \tag{1.12}\]

where \(e^{\alpha}=(0,\cdots,1,\cdots,0)\in\mathbb{R}^{m}\) with \(1\) in the \(\alpha^{\text{th}}\) position. We denote the tensor \(\chi_{\delta}=\big{(}\chi_{i,\delta}^{\alpha\beta}\big{)}\). Let

\[\widehat{\Pi}_{\varepsilon}:=\Big{\{}\mathbf{n}\in\mathbb{Z}^{d}:\overline{Y_{\varepsilon}^{\mathbf{n}}}\cap\Omega\neq\emptyset\Big{\}}\,,\quad\text{and}\quad\widehat{\Omega}_{\varepsilon}:=\bigcup_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\overline{Y_{\varepsilon}^{\mathbf{n}}}. \tag{1.13}\]
**Theorem 1.1**.: _Let \(\lambda_{\varepsilon,\delta}^{i}\) be the \(i\)-th eigenvalue of \(\mathcal{L}_{\varepsilon,\delta}^{-1}\) in decreasing order, and let \(\eta_{\varepsilon,\delta}^{i}\) be the \(i\)-th eigenvalue of_

\[\widehat{\mathcal{L}}_{\delta}^{-1}\mathbbm{1}_{\Omega}\langle\cdot\rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}:[L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)]^{m}\to[L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)]^{m} \tag{1.14}\]

_in decreasing order. Then there exists a constant \(C>0\), depending only on \(\Omega\) and \(\omega\), such that_

\[\left|\lambda_{\varepsilon,\delta}^{i}-\eta_{\varepsilon,\delta}^{i}\right|\leq C\varepsilon^{\frac{1}{2}}, \tag{1.15}\]

_where \(\langle\cdot\rangle_{Y}\) is the integral operator with respect to the \(y\) variable defined in Proposition 2.4, \(\kappa=\delta^{-1}\varepsilon^{2}\), \(\mathcal{P}_{\varepsilon}\) is the projection operator in \([L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)]^{m}\) onto functions piecewise constant in \(x\) on each cell \(Y_{\varepsilon}^{\mathbf{n}}\), defined in Proposition 2.2, and \(\mathcal{L}_{\omega,y}^{-1}\,f\) is the solution of_

\[\begin{cases}\mathcal{L}_{1,1}\,u=f&\text{in }\omega,\\ u=0&\text{on }Y\setminus\omega.\end{cases} \tag{1.16}\]

It is clear that \(\widehat{\mathcal{L}}_{\delta}^{-1}\mathbbm{1}_{\Omega}\langle\cdot\rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}\) is a positive compact self-adjoint operator on \([L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)]^{m}\); hence its spectrum is discrete. Given \(\varepsilon>0\), we decompose the spectrum into two parts:

1. The first part comprises all eigenvalues for which the corresponding eigenfunction \(u\) has zero mean in \(Y\), i.e. \(\int_{Y}u(x,y)\,dy=0\). This part is referred to as the pure Bloch spectrum.
2. The second part consists of all eigenvalues such that the corresponding eigenfunction \(u\) has nonzero mean in \(Y\), i.e. \(\int_{Y}u(x,y)\,dy\neq 0\). We call this part the residual spectrum.

The next two theorems characterize the pure Bloch spectrum and the residual spectrum, respectively.

**Theorem 1.2**.: _The pure Bloch spectrum is ordered by_

\[\underbrace{\kappa\alpha_{1}=\cdots=\kappa\alpha_{1}}_{|\hat{\Pi}_{\varepsilon}|\ \text{terms}}\geq\cdots\geq\underbrace{\kappa\alpha_{i}=\cdots=\kappa\alpha_{i}}_{|\hat{\Pi}_{\varepsilon}|\ \text{terms}}\geq\cdots\searrow 0, \tag{1.17}\]

_where the sequence \((\alpha_{i})_{i\geq 1}\), ordered decreasingly, denotes the eigenvalues of \(\mathcal{L}_{\omega,y}^{-1}\) with associated mean-zero eigenfunctions._

Let

\[\gamma_{\kappa}(\lambda)=-\int_{Y}(\kappa\mathcal{L}_{\omega,y}^{-1}-\lambda)^{-1}[I_{m}](y)\,dy;\]

then we observe that:

1. If \(\delta\gg\varepsilon^{2}\), then \(\lim_{\varepsilon\to 0}\kappa=0\), and \[\lim_{\varepsilon\to 0}\gamma_{\kappa}(\lambda)=\lambda^{-1}I_{m},\] since \(\kappa\mathcal{L}_{\omega,y}^{-1}\to 0\) in operator norm and \(|Y|=1\).
2. If \(\delta\ll\varepsilon^{2}\), then \(\lim_{\varepsilon\to 0}\kappa=\infty\), and \[\lim_{\varepsilon\to 0}\gamma_{\kappa}(\lambda)=\lambda^{-1}(1-\theta)I_{m},\] where \(\theta=|\omega|\) is the Lebesgue measure of the unit inclusion \(\omega\).
3. In the critical case where \(\delta\approx\varepsilon^{2}\), i.e., \(\kappa=O(1)\), \(\gamma_{\kappa}(\lambda)\) is a nontrivial symmetric matrix.

The residual spectrum can be described using \(\gamma_{\kappa}\). Given \(\varepsilon>0\), we denote the residual spectrum by \(\operatorname{RSpec}_{\varepsilon}\).
**Theorem 1.3**.: _For any \(\gamma_{\kappa}(\lambda)\in\operatorname{Spec}\widehat{\mathcal{L}}_{\delta}\) with multiplicity \(k\), there exists some \(\lambda_{\varepsilon}\in\operatorname{RSpec}_{\varepsilon}\) with multiplicity \(k\), such that_

\[\|\gamma_{\kappa}(\lambda_{\varepsilon})^{-1}-\gamma_{\kappa}(\lambda)^{-1}\|\leq C\varepsilon\|I_{m}-\lambda_{\varepsilon}^{-1}\gamma_{\kappa}(\lambda_{\varepsilon})^{-1}\|. \tag{1.18}\]

_Moreover, we have_

\[\gamma_{\kappa}(\operatorname{RSpec}_{\varepsilon})^{-1}\subset\bigcup_{\gamma_{\kappa}(\lambda)\in\operatorname{Spec}\widehat{\mathcal{L}}_{\delta}}B\Big{(}\gamma_{\kappa}(\lambda)^{-1},C\varepsilon\|I_{m}-\lambda_{\varepsilon}^{-1}\gamma_{\kappa}(\lambda_{\varepsilon})^{-1}\|\Big{)}. \tag{1.19}\]

**Remark 1**.: As an immediate corollary of Theorem 1.1, Theorem 1.2 and Theorem 1.3, we obtain the qualitative result:

\[\lim_{\varepsilon\to 0}\operatorname{Spec}\mathcal{L}_{\varepsilon,\delta}=\begin{cases}\operatorname{Spec}\widehat{\mathcal{L}}_{\delta}&\text{for }\delta\gg\varepsilon^{2},\\ \Big{(}\lim_{\varepsilon\to 0}\delta\varepsilon^{-2}\{\alpha_{m}^{-1}\}_{m\geq 1}\Big{)}\cup(1-\theta)^{-1}\operatorname{Spec}\widehat{\mathcal{L}}_{0}&\text{for }\delta\ll\varepsilon^{2},\\ \kappa^{-1}\{\alpha_{m}^{-1}\}_{m\geq 1}\cup\overline{\{\lambda^{-1}:\gamma_{\kappa}(\lambda)\in\operatorname{Spec}\widehat{\mathcal{L}}_{0}\}}&\text{for }\delta\approx\varepsilon^{2}.\end{cases} \tag{1.20}\]

**Remark 2**.: A more careful analysis of the convergence rates is presented in Section 4 in the scalar case \(m=1\); see Theorem 4.1.

This paper is organized as follows: In Section 2, we introduce the quantitative periodic unfolding method. In Section 3, we prove Theorem 1.1, Theorem 1.2 and Theorem 1.3. In Section 4, we give a more delicate analysis in the scalar case \(m=1\). This completes the outline of the paper. For the sake of clarity and simplicity, we will proceed by omitting the upper symbol \(m\) from our notation. Instead of the product space \(H^{m}\), we will use \(H\). Here, \(H\) denotes a variety of function spaces, including but not limited to \(L^{2}(\Omega)\), \(H^{1}(\Omega)\), and \(L^{2}(\Omega\times Y)\).

## 2. Quantitative periodic unfolding method

In this section, we introduce the quantitative periodic unfolding method. In 1990, Arbogast, Douglas and Hornung [2] introduced a 'dilation' operation to study homogenization in a periodic medium with double porosity. This technique reduces two-scale convergence to weak convergence in an appropriate space. Combining this approach with ideas from finite element approximations, Cioranescu, Damlamian and Griso [3] proposed the periodic unfolding method to study homogenization of multiscale periodic problems. For further details, we refer to [1, 4]. The most significant advantage of the periodic unfolding method is that it lifts the notion of (weak) two-scale convergence in \(L^{2}(\Omega)\) to (weak) convergence in the unfolded space \(L^{2}(\Omega\times Y)\). Nevertheless, to study convergence rates in homogenization problems, quantitative estimates for the corresponding operators are required. These estimates are provided in this section.

**Definition 2.1**.: For \(x\in\mathbb{R}^{d}\), let \([x]_{Y}\in\mathbb{Z}^{d}\) be the integer part of \(x\), and \(\{x\}_{Y}=x-[x]_{Y}\in Y\) be the fractional part of \(x\).
* The unfolding operator \(\widetilde{\mathcal{T}}_{\varepsilon}:L^{2}(\widehat{\Omega}_{\varepsilon})\to L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\) is defined by \[\widetilde{\mathcal{T}}_{\varepsilon}u(x,y)=\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\mathbbm{1}_{Y_{\varepsilon}^{\mathbf{n}}}(x)u\left(\varepsilon\left[\frac{x}{\varepsilon}\right]_{Y}+\varepsilon y\right).\]
* The local averaging operator \(\widetilde{\mathcal{U}}_{\varepsilon}:L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\to L^{2}(\widehat{\Omega}_{\varepsilon})\) is defined by \[\widetilde{\mathcal{U}}_{\varepsilon}\phi(x)=\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\mathbbm{1}_{Y_{\varepsilon}^{\mathbf{n}}}(x)\int_{Y}\phi\left(\varepsilon\left[\frac{x}{\varepsilon}\right]_{Y}+\varepsilon z,\left\{\frac{x}{\varepsilon}\right\}_{Y}\right)\,dz.\]

The following proposition states the basic properties of \(\widetilde{\mathcal{T}}_{\varepsilon}\) and \(\widetilde{\mathcal{U}}_{\varepsilon}\); its proof can be found in [1].

**Proposition 2.2**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded Lipschitz domain._

* \(\widetilde{\mathcal{T}}_{\varepsilon}\) _and_ \(\widetilde{\mathcal{U}}_{\varepsilon}\) _are both bounded by norm_ \(1\)_, and_ \(\widetilde{\mathcal{T}}_{\varepsilon}\) _is the adjoint of_ \(\widetilde{\mathcal{U}}_{\varepsilon}\)_, i.e.,_ \(\widetilde{\mathcal{U}}_{\varepsilon}^{*}=\widetilde{\mathcal{T}}_{\varepsilon}\)_._
* \(\widetilde{\mathcal{U}}_{\varepsilon}\circ\widetilde{\mathcal{T}}_{\varepsilon}=\operatorname{Id}_{L^{2}(\widehat{\Omega}_{\varepsilon})}\) _is the identity operator on_ \(L^{2}(\widehat{\Omega}_{\varepsilon})\)_._
* \(\mathcal{P}_{\varepsilon}:=\widetilde{\mathcal{T}}_{\varepsilon}\circ\widetilde{\mathcal{U}}_{\varepsilon}\) _is a projection operator in_ \(L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\) _onto functions piecewise constant in_ \(x\) _on each cell_ \(Y_{\varepsilon}^{\mathbf{n}}\) _for_ \(\mathbf{n}\in\widehat{\Pi}_{\varepsilon}\)_. More precisely, for any_ \(\phi\in L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\)_,_ \[\mathcal{P}_{\varepsilon}\phi(x,y)=\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\mathbbm{1}_{Y_{\varepsilon}^{\mathbf{n}}}(x)\varepsilon^{-d}\int_{Y_{\varepsilon}^{\mathbf{n}}}\phi(x^{\prime},y)\,dx^{\prime}.\] (2.1)

We set

\[\mathcal{T}_{\varepsilon}=\widetilde{\mathcal{T}}_{\varepsilon}\circ\mathbbm{1}_{\Omega},\quad\text{and}\quad\mathcal{U}_{\varepsilon}=\mathbbm{1}_{\Omega}\circ\widetilde{\mathcal{U}}_{\varepsilon}; \tag{2.2}\]

then \(\mathcal{T}_{\varepsilon}\) is the adjoint of \(\mathcal{U}_{\varepsilon}\), and \(\mathcal{U}_{\varepsilon}\circ\mathcal{T}_{\varepsilon}=\mathbbm{1}_{\Omega}\). As we have mentioned above, the periodic unfolding method transforms the notion of two-scale convergence in \(L^{2}(\Omega)\) into ordinary convergence in \(L^{2}(\Omega\times Y)\). Hence, two-scale compactness becomes the usual compactness in \(L^{2}(\Omega\times Y)\). This facilitates an easy transition from the equation under consideration to the homogenized equation in the homogenization limit. However, to obtain the rate of convergence, one must study the asymptotic behavior of the unfolding operators quantitatively. This is presented in Proposition 2.4.
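As an informal numerical illustration of Definition 2.1 and Proposition 2.2 (a toy 1D discretization of our own, not part of the proofs): sampling on \(K\) cells with \(J\) offsets per cell, the unfolding operator repeats each cell's samples in the \(x\) variable, the local averaging operator inverts it, and \(\widetilde{\mathcal{T}}_{\varepsilon}\circ\widetilde{\mathcal{U}}_{\varepsilon}\) reproduces the cell average of (2.1).

```python
import numpy as np

K, J = 8, 16                      # number of cells, samples per cell (d = 1)
eps = 1.0 / K
y = (np.arange(J) + 0.5) / J      # sample offsets in the unit cell Y = [0, 1)
# u[n, j] = u(eps*n + eps*y_j): a smooth function sampled on the grid
u = np.sin(2 * np.pi * (eps * np.arange(K)[:, None] + eps * y[None, :]))

def unfold(u):
    # (T u)(x, y) = u(eps*[x/eps]_Y + eps*y): constant in x within each cell,
    # so phi[n, jx, jy] = u[n, jy] for every in-cell offset jx.
    return np.repeat(u[:, None, :], J, axis=1)

def local_average(phi):
    # (U phi)(x) = average over z of phi(eps*[x/eps]_Y + eps*z, {x/eps}_Y):
    # average phi[n, jz, jx] over jz, read at the offset y = {x/eps}_Y.
    return phi.mean(axis=1)

assert np.allclose(local_average(unfold(u)), u)   # U o T = Id  (Prop. 2.2)

phi = np.random.rand(K, J, J)                     # a general phi(x, y)
P_phi = unfold(local_average(phi))                # P = T o U, cf. (2.1)
assert np.allclose(P_phi, P_phi.mean(axis=1, keepdims=True))  # constant in x per cell
```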
Before that, we need a lemma:

**Lemma 2.3**.: _Let \(D\subset\mathbb{R}^{d}\) be a bounded Lipschitz domain. There exists a constant \(C>0\), depending only on \(D\), such that for any \(u\in H^{1}(D)\),_

\[\int_{D}\int_{D}|u(x)-u(y)|^{2}\,dxdy\leq C\|\nabla u\|_{L^{2}(D)}^{2}.\]

Proof.: This is a corollary of the Poincaré-Wirtinger inequality. We note that

\[|u(x)-u(y)|^{2}\leq 2|u(x)-\langle u\rangle_{D}|^{2}+2|u(y)-\langle u\rangle_{D}|^{2},\]

where \(\langle u\rangle_{D}=\frac{1}{|D|}\int_{D}u(x)\,dx\). Therefore,

\[\int_{D}\int_{D}|u(x)-u(y)|^{2}\,dxdy\leq 2|D|\Big{(}\int_{D}|u(x)-\langle u\rangle_{D}|^{2}\,dx+\int_{D}|u(y)-\langle u\rangle_{D}|^{2}\,dy\Big{)}\leq C\|\nabla u\|_{L^{2}(D)}^{2},\]

where the second inequality uses the Poincaré-Wirtinger inequality.

**Proposition 2.4** (Quantitative estimates).: _Let \(\langle\cdot\rangle_{Y}:L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\to L^{2}(\widehat{\Omega}_{\varepsilon})\) be the integral operator with respect to the \(y\) variable, i.e.,_

\[\langle\cdot\rangle_{Y}:\phi(x,y)\mapsto\int_{Y}\phi(x,y)\,dy.\]

_Let \(\iota:L^{2}(\widehat{\Omega}_{\varepsilon})\to L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\) be the embedding operator, i.e.,_

\[\iota:u(x)\mapsto u(x,y).\]

_It is clear that \(\iota^{*}=\langle\cdot\rangle_{Y}\). Then there exists a constant \(C>0\), depending only on \(\Omega\), such that_

* \(\|\widetilde{\mathcal{U}}_{\varepsilon}-\langle\cdot\rangle_{Y}\|_{L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\to H^{-1}(\widehat{\Omega}_{\varepsilon})}\leq C\varepsilon\)_._
* \(\|\widetilde{\mathcal{T}}_{\varepsilon}-\iota\|_{H^{1}(\widehat{\Omega}_{\varepsilon})\to L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)}\leq C\varepsilon\)_._
* \(\|\mathcal{P}_{\varepsilon}-\operatorname{Id}_{L^{2}(\widehat{\Omega}_{\varepsilon})}\|_{H^{1}(\widehat{\Omega}_{\varepsilon})\to L^{2}(\widehat{\Omega}_{\varepsilon})}\leq C\varepsilon\)_._

Proof.: We prove the proposition item by item.

(a) We verify that \(\|\widetilde{\mathcal{U}}_{\varepsilon}-\langle\cdot\rangle_{Y}\|_{L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\to H^{-1}(\widehat{\Omega}_{\varepsilon})}\leq C\varepsilon\).
Given \(v\in L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\) and \(\phi\in H^{1}_{0}(\widehat{\Omega}_{\varepsilon})\), by definition we have

\[\big{\langle}\widetilde{\mathcal{U}}_{\varepsilon}v-\langle v\rangle_{Y},\phi\big{\rangle}_{L^{2}(\widehat{\Omega}_{\varepsilon})}=\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y_{\varepsilon}^{\mathbf{n}}}\Big{(}\int_{Y}v\,\Big{(}\varepsilon\left[\frac{x}{\varepsilon}\right]_{Y}+\varepsilon z,\Big{\{}\frac{x}{\varepsilon}\Big{\}}_{Y}\Big{)}\,dz-\int_{Y}v(x,w)\,dw\Big{)}\phi(x)\,dx;\]

using the change of variable \(x=\varepsilon\mathbf{n}+\varepsilon y\), we get

\[\big{\langle}\widetilde{\mathcal{U}}_{\varepsilon}v-\langle v\rangle_{Y},\phi\big{\rangle}_{L^{2}(\widehat{\Omega}_{\varepsilon})}=\varepsilon^{d}\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}V_{\varepsilon}^{\mathbf{n}}(y)\phi(\varepsilon\mathbf{n}+\varepsilon y)\,dy,\]

where

\[V_{\varepsilon}^{\mathbf{n}}(y):=\int_{Y}v(\varepsilon\mathbf{n}+\varepsilon z,y)\,dz-\int_{Y}v(\varepsilon\mathbf{n}+\varepsilon y,w)\,dw.\]

Since \(\int_{Y}V_{\varepsilon}^{\mathbf{n}}(y)\,dy=0\), we obtain that

\[\big{|}\big{\langle}\widetilde{\mathcal{U}}_{\varepsilon}v-\langle v\rangle_{Y},\phi\big{\rangle}_{L^{2}(\widehat{\Omega}_{\varepsilon})}\big{|} \leq\varepsilon^{d}\left|\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}V_{\varepsilon}^{\mathbf{n}}(y)\Big{(}\phi(\varepsilon\mathbf{n}+\varepsilon y)-\int_{Y}\phi(\varepsilon\mathbf{n}+\varepsilon s)\,ds\Big{)}\,dy\right|\]
\[\leq\varepsilon^{d}\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\|V_{\varepsilon}^{\mathbf{n}}\|_{L^{2}(Y)}\left\|\phi(\varepsilon\mathbf{n}+\varepsilon y)-\int_{Y}\phi(\varepsilon\mathbf{n}+\varepsilon s)\,ds\right\|_{L^{2}(Y)}\]
\[\leq C\varepsilon^{\frac{d}{2}+1}\left(\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}|V_{\varepsilon}^{\mathbf{n}}|^{2}\right)^{1/2}\|\phi\|_{H^{1}(\widehat{\Omega}_{\varepsilon})},\]

where the last inequality follows from the Poincaré-Wirtinger inequality on \(Y\). Now by definition, we have

\[\left|V_{\varepsilon}^{\mathbf{n}}(y)\right|^{2}\leq 2\int_{Y}\left|v(\varepsilon\mathbf{n}+\varepsilon z,y)\right|^{2}dz+2\int_{Y}\left|v(\varepsilon\mathbf{n}+\varepsilon y,w)\right|^{2}dw,\]

so

\[\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}\left|V_{\varepsilon}^{\mathbf{n}}(y)\right|^{2}dy \leq 4\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}\int_{Y}\left|v(\varepsilon\mathbf{n}+\varepsilon z,y)\right|^{2}dzdy=4\varepsilon^{-d}\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}\int_{Y_{\varepsilon}^{\mathbf{n}}}\left|v(x,y)\right|^{2}dxdy=4\varepsilon^{-d}\int_{\widehat{\Omega}_{\varepsilon}\times Y}\left|v(x,y)\right|^{2}dxdy,\]

where the two terms are bounded by the same integral after renaming the variables. This yields the desired conclusion.

(b) We verify that \(\left\|\widetilde{\mathcal{T}}_{\varepsilon}-\iota\right\|_{H^{1}(\widehat{\Omega}_{\varepsilon})\to L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)}\leq C\varepsilon\).
Given \(u\in H^{1}(\widehat{\Omega}_{\varepsilon})\), we have

\[\int_{\widehat{\Omega}_{\varepsilon}\times Y}\left|\widetilde{\mathcal{T}}_{\varepsilon}u(x,y)-u(x)\right|^{2}dxdy=\varepsilon^{d}\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y\times Y}\left|u(\varepsilon\mathbf{n}+\varepsilon y)-u(\varepsilon\mathbf{n}+\varepsilon s)\right|^{2}dsdy;\]

by Lemma 2.3 with \(D=Y\), we obtain that

\[\int_{\widehat{\Omega}_{\varepsilon}\times Y}\left|\widetilde{\mathcal{T}}_{\varepsilon}u(x,y)-u(x)\right|^{2}dxdy \leq C\varepsilon^{d+2}\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y}\left|\nabla u(\varepsilon\mathbf{n}+\varepsilon x)\right|^{2}dx\leq C\varepsilon^{2}\|u\|_{H^{1}(\widehat{\Omega}_{\varepsilon})}^{2},\]

which is the desired conclusion.

(c) Finally, we verify that \(\left\|\mathcal{P}_{\varepsilon}-\mathrm{Id}_{L^{2}(\widehat{\Omega}_{\varepsilon})}\right\|_{H^{1}(\widehat{\Omega}_{\varepsilon})\to L^{2}(\widehat{\Omega}_{\varepsilon})}\leq C\varepsilon\). Given \(u\in H^{1}(\widehat{\Omega}_{\varepsilon})\), we compute

\[\left\|\mathcal{P}_{\varepsilon}u-u\right\|_{L^{2}(\widehat{\Omega}_{\varepsilon})}^{2}=\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}}\int_{Y_{\varepsilon}^{\mathbf{n}}}\left|\varepsilon^{-d}\int_{Y_{\varepsilon}^{\mathbf{n}}}u(x^{\prime})\,dx^{\prime}-u(x)\right|^{2}dx\leq C\varepsilon^{2}\|u\|_{H^{1}(\widehat{\Omega}_{\varepsilon})}^{2},\]

where the last inequality follows from the Poincaré-Wirtinger inequality on \(Y\).

## 3. Homogenization of the eigenvalues

This section is devoted to providing proofs of Theorem 1.1, Theorem 1.2 and Theorem 1.3. Here is an overview of our method:

_Step 1._ Fu and Jing proved in [5] the following:

**Lemma 3.1**.: _There exists a constant \(C>0\), depending only on \(d,m,\mu,\lambda,\tau,\kappa,\Omega\) and \(\omega\), such that_

\[\left\|\mathcal{L}_{\varepsilon,\delta}^{-1}-\widehat{\mathcal{L}}_{\delta}^{-1}-\delta^{-1}\mathcal{L}_{D_{\varepsilon}}^{-1}\right\|_{L^{2}(\Omega)\to L^{2}(\Omega)}\leq C\varepsilon^{1/2}. \tag{3.1}\]

Lemma 3.1 suggests that the spectra of \(\mathcal{L}_{\varepsilon,\delta}^{-1}\) and \(\widehat{\mathcal{L}}_{\delta}^{-1}+\delta^{-1}\mathcal{L}_{D_{\varepsilon}}^{-1}\) are nearly the same. Recalling that \(\mathcal{T}_{\varepsilon}\) is the adjoint of \(\mathcal{U}_{\varepsilon}\) and \(\mathcal{U}_{\varepsilon}\circ\mathcal{T}_{\varepsilon}=\mathbb{1}_{\Omega}\), the lifted (conjugated) operator \(\mathcal{T}_{\varepsilon}\big{(}\widehat{\mathcal{L}}_{\delta}^{-1}+\delta^{-1}\mathcal{L}_{D_{\varepsilon}}^{-1}\big{)}\mathcal{U}_{\varepsilon}\) is self-adjoint and has a spectrum identical to that of \(\widehat{\mathcal{L}}_{\delta}^{-1}+\delta^{-1}\mathcal{L}_{D_{\varepsilon}}^{-1}\). These insights guide us towards studying the spectrum of the lifted operator. We then utilize the quantitative periodic unfolding method introduced in Section 2 to show that the lifted operator approaches \(\widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{\Omega}\langle\cdot\rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}\) in operator norm with an error of order \(O(\varepsilon)\). This establishes Theorem 1.1.

_Step 2._ Proving Theorem 1.2 is straightforward.
In order to prove Theorem 1.3, intuitively we infer that the spectrum of \(\widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{\Omega}\langle\cdot\rangle_{Y }+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}\) converges to that of \(\widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{\Omega}\langle\cdot\rangle_{Y }+\kappa\mathbb{1}_{\Omega}\mathcal{L}_{\omega,y}^{-1}\) by Proposition 2.4 (c). Regrettably, the latter (limit) operator is not compact, and the operator norm of their difference does not converge to zero. To circumvent this issue, we construct an auxiliary operator \(\widehat{\mathcal{L}}_{\delta}^{-1}+B_{\varepsilon,\delta,\lambda}\) on \(H^{-1}(\Omega)\) equipped with a new inner product \(\langle\cdot,\cdot\rangle_{\delta}\). By showing that \(\|B_{\varepsilon,\delta,\lambda}\|_{\delta}\to 0\), we prove Theorem 1.3.

We now give the proof of Theorem 1.1.

Proof of Theorem 1.1.: Since \(\mathcal{T}_{\varepsilon}\) is the adjoint of \(\mathcal{U}_{\varepsilon}\) and \(\mathcal{U}_{\varepsilon}\circ\mathcal{T}_{\varepsilon}=\mathbb{1}_{\Omega}\), the spectrum of \(\mathcal{L}_{\varepsilon,\delta}^{-1}\) coincides with the spectrum of \(\mathcal{T}_{\varepsilon}\mathcal{L}_{\varepsilon,\delta}^{-1}\mathcal{U}_{\varepsilon}\). By Lemma 3.1, we obtain that \[\big{\|}\mathcal{T}_{\varepsilon}\mathcal{L}_{\varepsilon,\delta}^{-1}\mathcal{ U}_{\varepsilon}-\mathcal{T}_{\varepsilon}\big{(}\widehat{\mathcal{L}}_{ \delta}^{-1}+\delta^{-1}\mathcal{L}_{D_{\varepsilon}}^{-1}\big{)}\mathcal{U}_{ \varepsilon}\big{\|}_{L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\to L^{2}( \widehat{\Omega}_{\varepsilon}\times Y)}\leq C\varepsilon^{1/2}.\] We first observe that the commutation relation \(\varepsilon^{-2}\mathcal{L}_{D_{\varepsilon}}^{-1}\mathcal{U}_{\varepsilon}= \mathcal{U}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}\) holds, so \(\mathcal{T}_{\varepsilon}\big{(}\delta^{-1}\mathcal{L}_{D_{\varepsilon}}^{-1 }\big{)}\mathcal{U}_{\varepsilon}=\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_ {\omega,y}^{-1}\). Moreover, by Proposition 2.4 (a) and (b), we have \[\|\mathcal{T}_{\varepsilon}\widehat{\mathcal{L}}_{\delta}^{-1} \mathcal{U}_{\varepsilon}-\widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{ \Omega}\langle\cdot\rangle_{Y}\|_{L^{2}(\widehat{\Omega}_{\varepsilon}\times Y )\to L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)}\] \[\leq\|(\widetilde{\mathcal{T}}_{\varepsilon}-\iota)\widehat{ \mathcal{L}}_{\delta}^{-1}\mathcal{U}_{\varepsilon}\|_{L^{2}(\widehat{\Omega}_ {\varepsilon}\times Y)\to L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)}+\| \widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{\Omega}(\widetilde{\mathcal{U} }_{\varepsilon}-\langle\cdot\rangle_{Y})\|_{L^{2}(\widehat{\Omega}_{ \varepsilon}\times Y)\to L^{2}(\Omega)}\] \[\leq C\varepsilon.\] These estimates, combined with the standard stability theorem for self-adjoint operators, yield the conclusion (1.15).

**Remark 3**.: We note that the rate \(O(\varepsilon^{1/2})\) follows from Lemma 3.1; in fact, by exploiting the regularity properties of \(\mathcal{L}_{\varepsilon,\delta}\), one may show that the optimal \(L^{2}\) convergence rate is \(O(\varepsilon)\), as studied in [6, 7]. We may study this aspect in future work.
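As a sanity check on the quantitative unfolding estimates invoked above, the \(O(\varepsilon)\) rate of Proposition 2.4 (b) can be observed numerically in one dimension. The following sketch is our own illustration (with \(\Omega=(0,1)\), \(Y=[0,1)\), and a smooth test function), not part of the paper's argument:

```python
import numpy as np

# Illustrative 1D check of || T_eps u - u ||_{L^2(Omega x Y)} <= C eps ||u||_{H^1},
# where (T_eps u)(x, y) = u(eps * floor(x/eps) + eps * y).  Names are ours.

def unfolding_error(u, du, eps, n=2000):
    x = (np.arange(n) + 0.5) / n            # quadrature nodes in Omega = (0, 1)
    y = (np.arange(n) + 0.5) / n            # quadrature nodes in Y = (0, 1)
    X, Y = np.meshgrid(x, y, indexing="ij")
    T = u(eps * np.floor(X / eps) + eps * Y)       # (T_eps u)(x, y)
    err = np.sqrt(np.mean((T - u(X)) ** 2))        # L^2((0,1) x Y) norm
    h1 = np.sqrt(np.mean(u(x) ** 2 + du(x) ** 2))  # H^1(0,1) norm of u
    return err / h1

u = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)

for eps in [0.1, 0.05, 0.025, 0.0125]:
    print(eps, unfolding_error(u, du, eps))  # the ratio halves as eps halves
```

The printed ratios decay linearly in \(\varepsilon\), consistent with the stated operator-norm bound.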
Proof of Theorem 1.2.: Assume that there exists a nonzero \(\psi\) such that \(\int_{\omega}\psi(y)\,dy=0\) and \[\mathcal{L}_{\omega,y}^{-1}\,\psi=\lambda\psi.\] We then define \(|\widehat{\Pi}_{\varepsilon}|\) independent functions \(u^{\mathbf{n}}(x,y)=\mathbb{1}_{Y^{\mathbf{n}}_{\varepsilon}}(x)\psi(y)\), where \(\mathbf{n}\in\widehat{\Pi}_{\varepsilon}\); then \[\big{(}\widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{\Omega}\langle\cdot \rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}\big{)}u ^{\mathbf{n}}=\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}u^{ \mathbf{n}}=\kappa\lambda u^{\mathbf{n}},\] which shows that (1.17) is contained in the pure Bloch spectrum. Conversely, for any \(\lambda\) in the pure Bloch spectrum, by definition, there exists an eigenfunction \(u\in L^{2}(\widehat{\Omega}_{\varepsilon}\times Y)\) such that \(\int_{Y}u(x,y)\,dy=0\) and \[\big{(}\widehat{\mathcal{L}}_{\delta}^{-1}\mathbb{1}_{\Omega}\langle\cdot \rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}\big{)}u =\lambda u,\] which yields that \(\kappa\mathcal{P}_{\varepsilon}\mathcal{L}_{\omega,y}^{-1}u=\lambda u\). Applying the projection \(\mathcal{P}_{\varepsilon}\) to both sides and noting that \(\mathcal{P}_{\varepsilon}\) commutes with \(\mathcal{L}_{\omega,y}^{-1}\), we get \[\kappa\mathcal{L}_{\omega,y}^{-1}\mathcal{P}_{\varepsilon}u=\lambda\mathcal{P} _{\varepsilon}u. \tag{3.2}\] We write \(\mathcal{P}_{\varepsilon}u\) in the form \[\mathcal{P}_{\varepsilon}u(x,y)=\sum_{\mathbf{n}\in\widehat{\Pi}_{\varepsilon}} \mathbb{1}_{Y^{\mathbf{n}}_{\varepsilon}}(x)u^{\mathbf{n}}_{\varepsilon}(y), \qquad\text{where }\langle u^{\mathbf{n}}_{\varepsilon}\rangle_{Y}=0, \tag{3.3}\] and then (3.2) implies that \[\mathcal{L}^{-1}_{\omega,y}u^{\mathbf{n}}_{\varepsilon}=\kappa^{-1}\lambda u^ {\mathbf{n}}_{\varepsilon},\quad\langle u^{\mathbf{n}}_{\varepsilon}\rangle_{ Y}=0,\quad\text{for any }\varepsilon>0\text{ and }\mathbf{n}\in\widehat{\Pi}_{\varepsilon}.\] Therefore, there exists \(i\in\mathbb{N}\) such that \(\lambda=\kappa\alpha_{i}\). This completes the proof. We then have the following proposition.
**Proposition 3.2**.: \(\lambda\in\mathbb{C}\) _is in the residual spectrum if and only if \(\gamma_{\kappa}(\lambda)^{-1}\) is the (matrix-valued) eigenvalue of \(\widehat{\mathcal{L}}^{-1}_{\delta}+B_{\varepsilon,\delta,\lambda}\), where \(B_{\varepsilon,\delta,\lambda}\) is defined by_ \[B_{\varepsilon,\delta,\lambda}:=\big{(}I_{m}-\lambda^{-1}\gamma_{\kappa}( \lambda)^{-1}\big{)}\mathbb{1}_{\Omega}(\mathcal{P}_{\varepsilon}-\mathrm{Id })\widehat{\mathcal{L}}^{-1}_{\delta}.\] _Moreover, the multiplicity of \(\lambda\) is the same as the multiplicity of \(\gamma_{\kappa}(\lambda)^{-1}\)._

Proof.: Let \(\lambda\in\mathbb{C}\) be in the residual spectrum. By definition, there exists an eigenfunction \(u\) such that \(\int_{Y}u(x,y)\,dy\neq 0\) and \[\big{(}\widehat{\mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle\cdot \rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}^{-1}_{\omega,y}\big{)} u=\lambda u. \tag{3.4}\] Applying \(\mathcal{P}_{\varepsilon}\) to both sides of the above and noting that \(\mathcal{P}_{\varepsilon}\) commutes with \(\mathcal{L}^{-1}_{\omega,y}\), we get \[\kappa\mathcal{L}^{-1}_{\omega,y}\mathcal{P}_{\varepsilon}u-\lambda\mathcal{P }_{\varepsilon}u=-\mathcal{P}_{\varepsilon}\widehat{\mathcal{L}}^{-1}_{ \delta}\mathbb{1}_{\Omega}\langle u\rangle_{Y}.\] For fixed \(x\), the above equation depends only on \(y\), so we may solve it as \[\mathcal{P}_{\varepsilon}u(x,y)=b_{\kappa,\lambda}(y)\mathcal{P}_{\varepsilon }\widehat{\mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle u\rangle_{Y}( x), \tag{3.5}\] where \(b_{\kappa,\lambda}(y)=-(\kappa\mathcal{L}^{-1}_{\omega,y}-\lambda)^{-1}[I_{m} ](y)\). Now substituting (3.5) into (3.4), we obtain \[\widehat{\mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle u\rangle_{Y}(x )+(\lambda b_{\kappa,\lambda}(y)-I_{m})\mathcal{P}_{\varepsilon}\widehat{ \mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle u\rangle_{Y}(x)=\lambda u (x,y). \tag{3.6}\] Integrating (3.6) with respect to \(y\) over \(Y\) and applying \(\mathbb{1}_{\Omega}\), we get \[\widehat{\mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle u\rangle_{Y}( x)+\big{(}I_{m}-\lambda^{-1}\gamma_{\kappa}(\lambda)^{-1}\big{)}\mathbb{1}_{ \Omega}(\mathcal{P}_{\varepsilon}-\mathrm{Id})\widehat{\mathcal{L}}^{-1}_{ \delta}\mathbb{1}_{\Omega}\langle u\rangle_{Y}(x)=\gamma_{\kappa}(\lambda)^{- 1}\mathbb{1}_{\Omega}\langle u\rangle_{Y}(x).\] This shows that \(\gamma_{\kappa}(\lambda)^{-1}\) is the (matrix-valued) eigenvalue of \(\widehat{\mathcal{L}}^{-1}_{\delta}+B_{\varepsilon,\delta,\lambda}\). Now assume that \(u_{1},\cdots,u_{n}\) are independent eigenfunctions of \(\big{(}\widehat{\mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle\cdot \rangle_{Y}+\kappa\mathcal{P}_{\varepsilon}\mathcal{L}^{-1}_{\omega,y}\big{)}\) for the eigenvalue \(\lambda\) and that \(a_{1}\mathbb{1}_{\Omega}\langle u_{1}\rangle_{Y}+\cdots+a_{n}\mathbb{1}_{ \Omega}\langle u_{n}\rangle_{Y}=0\); then by (3.6) we obtain \(a_{1}u_{1}+\cdots+a_{n}u_{n}=0\), hence \(a_{i}=0\). This shows that the multiplicity of \(\lambda\) is not larger than that of \(\gamma_{\kappa}(\lambda)^{-1}\).
Conversely, suppose that \(\gamma_{\kappa}(\lambda)^{-1}\) is an eigenvalue of \(\widehat{\mathcal{L}}^{-1}_{\delta}+B_{\varepsilon,\delta,\lambda}\). Then there exists a nonzero \(f\in L^{2}(\Omega)\) such that \[\widehat{\mathcal{L}}^{-1}_{\delta}f+B_{\varepsilon,\delta,\lambda}f=\gamma_{ \kappa}(\lambda)^{-1}f.\] We define \[u(x,y):=\lambda^{-1}\widehat{\mathcal{L}}^{-1}_{\delta}f(x)+\big{(}b_{\kappa, \lambda}(y)-\lambda^{-1}I_{m}\big{)}\mathcal{P}_{\varepsilon}\widehat{ \mathcal{L}}^{-1}_{\delta}f(x);\] then \(\mathbb{1}_{\Omega}\langle u\rangle_{Y}=f\); in particular, \(\langle u\rangle_{Y}\neq 0\). Moreover, \[\widehat{\mathcal{L}}^{-1}_{\delta}\mathbb{1}_{\Omega}\langle u \rangle_{Y}-\kappa\mathcal{P}_{\varepsilon}\mathcal{L}^{-1}_{\omega,y}u\] \[=\widehat{\mathcal{L}}^{-1}_{\delta}f(x)-\kappa\lambda^{-1} \mathcal{L}^{-1}_{\omega,y}[I_{m}](y)\mathcal{P}_{\varepsilon}\widehat{ \mathcal{L}}^{-1}_{\delta}f(x)-\kappa\big{(}\mathcal{L}^{-1}_{\omega}b_{\kappa, \lambda}(y)-\lambda^{-1}\mathcal{L}^{-1}_{\omega}[I_{m}](y)\big{)}\mathcal{P}_{ \varepsilon}\widehat{\mathcal{L}}^{-1}_{\delta}f(x)\] \[=\widehat{\mathcal{L}}^{-1}_{\delta}f(x)+\big{(}\lambda b_{ \kappa,\lambda}(y)-I_{m}\big{)}\mathcal{P}_{\varepsilon}\widehat{\mathcal{L}}^{-1}_{ \delta}f(x)\] \[=\lambda u,\] which shows that \(\lambda\) is in the residual spectrum. Moreover, if \(f_{1},\cdots,f_{n}\) are independent eigenfunctions, then \(u_{1},\cdots,u_{n}\) are of course independent since \(f_{i}=\mathbb{1}_{\Omega}\langle u_{i}\rangle_{Y}\), so the multiplicity of \(\lambda\) is not smaller than the multiplicity of \(\gamma_{\kappa}(\lambda)^{-1}\). The proof is complete.

Proof of Theorem 1.3.: We define an inner product on \(H^{-1}(\Omega)\) by \[\langle u,v\rangle_{\delta}:=\langle u,\widehat{\mathcal{L}}_{\delta}^{-1}v \rangle_{H^{-1}(\Omega),H_{0}^{1}(\Omega)} \tag{3.7}\] for \(u,v\in H^{-1}(\Omega)\). We show that \(\|\cdot\|_{\delta}\) is equivalent to \(\|\cdot\|_{H^{-1}(\Omega)}\). For any \(u\in H^{-1}(\Omega)\), since \(\widehat{\mathcal{L}}_{\delta}^{-1}\) is a bijection from \(H^{-1}(\Omega)\) to \(H_{0}^{1}(\Omega)\), we have \[\|u\|_{H^{-1}(\Omega)}\sim\|\widehat{\mathcal{L}}_{\delta}^{-1}u\|_{H_{0}^{1}( \Omega)}.\] It then follows from the definition of \(\|\cdot\|_{\delta}\) that \[\|u\|_{\delta}^{2}\leq\|u\|_{H^{-1}(\Omega)}\|\widehat{\mathcal{L}}_{\delta}^ {-1}u\|_{H_{0}^{1}(\Omega)}\leq C\|u\|_{H^{-1}(\Omega)}^{2}. \tag{3.8}\] On the other hand, the Cauchy-Schwarz inequality for \(\langle\cdot,\cdot\rangle_{\delta}\), together with (3.8), yields \[\|u\|_{H^{-1}(\Omega)}=\sup_{0\neq v\in H_{0}^{1}(\Omega)}\frac{\langle u,v \rangle_{H^{-1}(\Omega),H_{0}^{1}(\Omega)}}{\|v\|_{H^{1}(\Omega)}}=\sup_{0 \neq w\in H^{-1}(\Omega)}\frac{\langle u,\widehat{\mathcal{L}}_{\delta}^{-1}w \rangle_{H^{-1}(\Omega),H_{0}^{1}(\Omega)}}{\|w\|_{H^{-1}(\Omega)}}\leq C\|u \|_{\delta}. \tag{3.9}\] (3.8) and (3.9) imply that \(\|\cdot\|_{\delta}\) is equivalent to \(\|\cdot\|_{H^{-1}(\Omega)}\). Since \(\|\cdot\|_{\delta}\) is equivalent to \(\|\cdot\|_{H^{-1}(\Omega)}\) and \(\widehat{\mathcal{L}}_{\delta}^{-1}\) is compact on \(H^{-1}(\Omega)\), we obtain that \(\widehat{\mathcal{L}}_{\delta}^{-1}+B_{\varepsilon,\delta,\lambda}:(H^{-1}( \Omega),\|\cdot\|_{\delta})\to(H^{-1}(\Omega),\|\cdot\|_{\delta})\) is compact.
The self-adjointness follows from \[\langle\widehat{\mathcal{L}}_{\delta}^{-1}u,v\rangle_{\delta}=\langle\widehat {\mathcal{L}}_{\delta}^{-1}u,\widehat{\mathcal{L}}_{\delta}^{-1}v\rangle_{H^{ -1}(\Omega),H_{0}^{1}(\Omega)}=\langle u,\widehat{\mathcal{L}}_{\delta}^{-1} v\rangle_{\delta}\] and \[\langle\mathbb{1}_{\Omega}\mathcal{P}_{\varepsilon}\widehat{\mathcal{L}}_{ \delta}^{-1}u,v\rangle_{\delta}=\langle\mathcal{P}_{\varepsilon}\widehat{ \mathcal{L}}_{\delta}^{-1}u,\widehat{\mathcal{L}}_{\delta}^{-1}v\rangle_{H^{-1 }(\Omega),H_{0}^{1}(\Omega)}=\langle\widehat{\mathcal{L}}_{\delta}^{-1}u, \mathcal{P}_{\varepsilon}\widehat{\mathcal{L}}_{\delta}^{-1}v\rangle_{H^{-1}( \Omega),H_{0}^{1}(\Omega)}=\langle u,\mathbb{1}_{\Omega}\mathcal{P}_{ \varepsilon}\widehat{\mathcal{L}}_{\delta}^{-1}v\rangle_{\delta}.\] Therefore, \(\widehat{\mathcal{L}}_{\delta}^{-1}+B_{\varepsilon,\delta,\lambda}:(H^{-1}( \Omega),\|\cdot\|_{\delta})\to(H^{-1}(\Omega),\|\cdot\|_{\delta})\) is a compact self-adjoint operator on \((H^{-1}(\Omega),\|\cdot\|_{\delta})\). The conclusion of Theorem 1.3 immediately follows from the standard stability theorem for compact self-adjoint operators and the following estimates: \[\|B_{\varepsilon,\delta,\lambda}\,f\|_{\delta} \leq C\|B_{\varepsilon,\delta,\lambda}\,f\|_{L^{2}(\Omega)}\] \[\leq C\big{(}1+|\lambda|^{-1}\|\gamma_{\kappa}(\lambda)^{-1}\| \big{)}\|\mathcal{P}_{\varepsilon}-\mathrm{Id}_{L^{2}(\widehat{\Omega}_{ \varepsilon})}\|_{H^{1}(\Omega)\to L^{2}(\widehat{\Omega}_{\varepsilon})}\| \widehat{\mathcal{L}}_{\delta}^{-1}f\|_{H^{1}(\Omega)}\] \[\leq C\varepsilon\big{(}1+|\lambda|^{-1}\|\gamma_{\kappa}(\lambda) ^{-1}\|\big{)}\|f\|_{\delta}.\] We are done.

## 4. The scalar case of \(m=1\)

We assume that \(\mathcal{L}_{\omega,y}^{-1}\,\psi_{i}=\beta_{i}\psi_{i}\), where the associated normalized eigenfunction \(\psi_{i}\) has nonzero mean. Then \[(\kappa\mathcal{L}_{\omega,y}^{-1}-\lambda)^{-1}[I_{m}](y)=\left\{\begin{aligned} & \sum_{i}\frac{\int_{Y}\psi_{i}(y)\,dy}{\kappa\beta_{i}-\lambda}\otimes\psi_{i }&&\text{in }\omega,\\ &-\lambda^{-1}I_{m}&&\text{in }Y\setminus\omega.\end{aligned}\right. \tag{4.1}\] The \(pq\)-th element of \(\gamma_{\kappa}(\lambda)\) is \[(\gamma_{\kappa}(\lambda))_{pq}=\sum_{i\geq 1}\frac{(\int_{Y}\psi_{i}^{p}(y)\,dy)( \int_{Y}\psi_{i}^{q}(y)\,dy)}{\lambda-\kappa\beta_{i}}+\frac{1-\theta}{\lambda }\delta_{pq}.\] We denote \(\beta_{\kappa}(\lambda)=\gamma_{\kappa}(\lambda^{-1})\). From now on, we assume that \(m=1\); then \[\beta_{\kappa}(\lambda)=\lambda\sum_{i\geq 1}\frac{(\int_{Y}\psi_{i}(y)\,dy)^{2}}{ 1-\kappa\beta_{i}\lambda}+(1-\theta)\lambda. \tag{4.2}\] Hence \(\beta_{\kappa}(\lambda)\) is increasing in each interval \(\big{(}(\kappa\beta_{i})^{-1},(\kappa\beta_{i+1})^{-1}\big{)}\), and \(\beta_{\kappa}^{\prime}(\lambda)\) has the lower bound \(1-\theta\), which is easily seen by the simple computation \[\beta_{\kappa}^{\prime}(\lambda)=\sum_{i=1}^{\infty}\frac{(\int_{Y}\psi_{i}(y )\,dy)^{2}}{(1-\kappa\beta_{i}\lambda)^{2}}+1-\theta\geq 1-\theta>0.\]

**Theorem 4.1**.: _For any \(i\geq 0\) and \(j\geq 1\), let \(\lambda_{i,j}\) be the unique solution of \(\beta_{\kappa}(\lambda)=\theta_{j}\) in the interval \(\big{(}(\kappa\beta_{i})^{-1},(\kappa\beta_{i+1})^{-1}\big{)}\), where \(\theta_{j}\) is the \(j\)-th eigenvalue of \((\widehat{\mathcal{L}}_{\delta}^{-1}+B_{\varepsilon,\delta,\lambda})^{-1}\)._
_There exists \(C>0\), depending only on \(\Omega\), such that if_ \[\varepsilon<\frac{C}{\theta_{j}}, \tag{4.3}\] _then_ \[|\lambda_{i,j}-\lambda_{i,j}^{\varepsilon}|\leq C\varepsilon\theta_{j}\big{(} \theta_{j}+(\kappa\beta_{i+1})^{-1}\big{)}. \tag{4.4}\]

Proof.: By Theorem 1.3 for \(m=1\), there exists \(M>1\), depending only on \(\Omega\), such that \[|\beta_{\kappa}(\lambda_{i,j})^{-1}-\beta_{\kappa}(\lambda_{i,j}^{\varepsilon })^{-1}|\leq M\varepsilon\big{|}1-\lambda_{i,j}^{\varepsilon}\,\beta_{\kappa} (\lambda_{i,j}^{\varepsilon})^{-1}\big{|}. \tag{4.5}\] Since \(\beta_{\kappa}\) is smooth in each interval \(\big{(}(\kappa\beta_{i})^{-1},(\kappa\beta_{i+1})^{-1}\big{)}\), we then have (writing \(\beta=\beta_{\kappa}\) for brevity) \[|\lambda_{i,j}-\lambda_{i,j}^{\varepsilon}| \leq\frac{1}{\inf_{\Lambda}\beta_{\kappa}^{\prime}(\lambda)}|\beta_ {\kappa}(\lambda_{i,j})-\beta_{\kappa}(\lambda_{i,j}^{\varepsilon})|\] \[\leq\frac{|\beta(\lambda_{i,j})\beta(\lambda_{i,j}^{\varepsilon}) |}{1-\theta}\left|\frac{1}{\beta(\lambda_{i,j})}-\frac{1}{\beta(\lambda_{i,j}^ {\varepsilon})}\right| \tag{4.6}\] \[\leq\frac{M\varepsilon}{1-\theta}\beta(\lambda_{i,j})|\beta( \lambda_{i,j}^{\varepsilon})-\lambda_{i,j}^{\varepsilon}|.\] Then we have three cases: 1. If \(|\beta(\lambda_{i,j}^{\varepsilon})|\leq 2M\lambda_{i,j}^{\varepsilon}\), then (4.6) gives that \[|\lambda_{i,j}-\lambda_{i,j}^{\varepsilon}|\leq C\varepsilon\beta(\lambda_{i,j})\lambda_{i,j}^{\varepsilon}\leq C\varepsilon\theta_{j}(\kappa\beta_{i+1} )^{-1}.\] (4.7) 2. If \(\beta(\lambda_{i,j}^{\varepsilon})>2M\lambda_{i,j}^{\varepsilon}\), then, noting that \(M>1\), (4.5) implies that \[\frac{1}{\beta(\lambda_{i,j})}-\frac{1}{\beta(\lambda_{i,j}^{\varepsilon})} \leq M\varepsilon^{\frac{1}{2}}\Big{(}\frac{\beta(\lambda_{i,j}^{\varepsilon })-\lambda_{i,j}^{\varepsilon}}{\beta(\lambda_{i,j}^{\varepsilon})}\Big{)},\] which yields \[\big{(}\beta(\lambda_{i,j}^{\varepsilon})-\lambda_{i,j}^{\varepsilon}\big{)} \big{(}1-M\varepsilon^{\frac{1}{2}}\beta(\lambda_{i,j})\big{)}\leq\beta( \lambda_{i,j})-\lambda_{i,j}^{\varepsilon}.\] (4.8) We choose \(C\) in (4.3) by letting \[C=\frac{1}{2M},\] so that \(2M\varepsilon^{\frac{1}{2}}\beta(\lambda_{i,j})<1\); then (4.8) implies that \[\beta(\lambda_{i,j}^{\varepsilon})-\lambda_{i,j}^{\varepsilon}\leq 2\big{(} \beta(\lambda_{i,j})-\lambda_{i,j}^{\varepsilon}\big{)},\] and (4.6) gives \[|\lambda_{i,j}-\lambda_{i,j}^{\varepsilon}|\leq C\varepsilon^{\frac{1}{2}} \beta(\lambda_{i,j})\big{(}\beta(\lambda_{i,j})-\lambda_{i,j}^{\varepsilon} \big{)}\leq C\varepsilon\beta^{2}(\lambda_{i,j})=C\varepsilon\theta_{j}^{2}.\] (4.9) 3.
If \(\beta(\lambda_{i,j}^{\varepsilon})<-2M\lambda_{i,j}^{\varepsilon}\), then (4.5) implies that \[\frac{1}{\beta(\lambda_{i,j})}+\frac{1}{|\beta(\lambda_{i,j}^{\varepsilon})|} \leq M\varepsilon\Big{(}\frac{|\beta(\lambda_{i,j}^{\varepsilon})|+\lambda_{i, j}^{\varepsilon}}{|\beta(\lambda_{i,j}^{\varepsilon})|}\Big{)},\] which yields \[\big{(}|\beta(\lambda_{i,j}^{\varepsilon})|+\lambda_{i,j}^{\varepsilon}\big{)} \big{(}1-M\varepsilon^{\frac{1}{2}}\beta(\lambda_{i,j})\big{)}\leq\lambda_{i,j}^{\varepsilon}-\beta(\lambda_{i,j}).\] (4.10) We again choose \(C\) in (4.3) by letting \[C=\frac{1}{2M};\] then \[|\beta(\lambda_{i,j}^{\varepsilon})|+\lambda_{i,j}^{\varepsilon}\leq 2\big{(} \lambda_{i,j}^{\varepsilon}-\beta(\lambda_{i,j})\big{)},\] and (4.6) gives \[|\lambda_{i,j}-\lambda_{i,j}^{\varepsilon}|\leq C\varepsilon^{\frac{1}{2}} \beta(\lambda_{i,j})\big{(}\lambda_{i,j}^{\varepsilon}-\beta(\lambda_{i,j}) \big{)}\leq C\varepsilon^{\frac{1}{2}}\theta_{j}(\kappa\beta_{i+1})^{-1}.\] (4.11) In summary, with the constant \(C=(2M)^{-1}\) in (4.3), we have \[|\lambda_{i,j}-\lambda_{i,j}^{\varepsilon}|\leq C\varepsilon\theta_{j}(\theta_ {j}+(\kappa\beta_{i+1})^{-1}),\] and the proof is complete.
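As an illustration of Theorem 4.1, the solutions \(\lambda_{i,j}\) of \(\beta_{\kappa}(\lambda)=\theta_{j}\) can be located numerically by bisection on each interval between consecutive poles \((\kappa\beta_{i})^{-1}\), since \(\beta_{\kappa}\) is increasing there and ranges over all of \(\mathbb{R}\). The sketch below uses a truncated version of (4.2); the eigendata \(\beta_{i}\), the squared means \((\int_{Y}\psi_{i}\,dy)^{2}\), and the values of \(\kappa\), \(\theta\), \(\theta_{j}\) are made-up stand-ins of ours, not the authors' data:

```python
import numpy as np

# Hypothetical truncated data standing in for the eigenpairs of L_{omega,y}^{-1}.
beta = np.array([1.0, 0.5, 0.25, 0.125])   # beta_i, decreasing (made up)
a2   = np.array([0.30, 0.15, 0.08, 0.04])  # (int_Y psi_i dy)^2 (made up)
kappa, theta_vol = 2.0, 0.4                # kappa and theta = |omega| (made up)

def beta_kappa(lam):
    # Truncated (4.2): lam * sum_i a_i / (1 - kappa*beta_i*lam) + (1 - theta)*lam
    return lam * np.sum(a2 / (1.0 - kappa * beta * lam)) + (1.0 - theta_vol) * lam

def solve(theta_j, i):
    # Bisection on the interval between the poles 1/(kappa*beta[i]) and
    # 1/(kappa*beta[i+1]); beta_kappa increases from -inf to +inf there.
    lo = 1.0 / (kappa * beta[i]) + 1e-10
    hi = 1.0 / (kappa * beta[i + 1]) - 1e-10
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if beta_kappa(mid) < theta_j:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve(theta_j=3.0, i=0)   # root in (1/(kappa*beta[0]), 1/(kappa*beta[1]))
print(lam, beta_kappa(lam))     # beta_kappa(lam) is approximately theta_j
```

Because \(\beta_{\kappa}^{\prime}\geq 1-\theta>0\) on each interval, the bisection root is unique there, exactly as the theorem requires.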
2301.02792
Linguistic-style-aware Neural Networks for Fake News Detection
We propose the hierarchical recursive neural network (HERO) to predict fake news by learning its linguistic style, which is distinguishable from the truth, as psychological theories reveal. We first generate the hierarchical linguistic tree of news documents; by doing so, we translate each news document's linguistic style into its writer's usage of words and how these words are recursively structured as phrases, sentences, paragraphs, and, ultimately, the document. By integrating the hierarchical linguistic tree with the neural network, the proposed method learns and classifies the representation of news documents by capturing their locally sequential and globally recursive structures that are linguistically meaningful. To the best of our knowledge, it is the first work offering the hierarchical linguistic tree and the neural network preserving the tree information. Experimental results based on public real-world datasets demonstrate the proposed method's effectiveness, which can outperform state-of-the-art techniques in classifying short and long news documents. We also examine the differential linguistic style of fake news and the truth and observe some patterns of fake news. The code and data are publicly available.
Xinyi Zhou, Jiayu Li, Qinzhou Li, Reza Zafarani
2023-01-07T06:48:41Z
http://arxiv.org/abs/2301.02792v1
# Linguistic-style-aware Neural Networks for Fake News Detection

###### Abstract

We propose the hierarchical recursive neural network (HERO) to predict fake news by learning its linguistic style, which is distinguishable from the truth, as psychological theories reveal. We first generate the hierarchical linguistic tree of news documents; by doing so, we translate each news document's linguistic style into its writer's usage of words and how these words are recursively structured as phrases, sentences, paragraphs, and, ultimately, the document. By integrating the hierarchical linguistic tree with the neural network, the proposed method learns and classifies the representation of news documents by capturing their locally sequential and globally recursive structures that are linguistically meaningful. To the best of our knowledge, it is the first work offering the hierarchical linguistic tree and the neural network preserving the tree information. Experimental results based on public real-world datasets demonstrate the proposed method's effectiveness, which can outperform state-of-the-art techniques in classifying short and long news documents. We also examine the differential linguistic style of fake news and the truth and observe some patterns of fake news.3

Footnote 3: The code and data are available at [https://github.com/Code4Graph/HERO](https://github.com/Code4Graph/HERO).

fake news, neural network, linguistic style

## I Introduction

"Fake news," as deceptive and misleading news articles (or statements at times), has been broadly discussed along with its influence on democracies and economies [1]. Public health has also been negatively impacted, especially with the "infodemic" that we face along with the pandemic [2]. Effective fake news detection has thus become an urgent task to mitigate such detrimental impacts. Psychological theories, such as the _Undeutsch hypothesis_ [3], have suggested that the linguistic style of fake news is distinguishable from that of the truth. Therefore, effective techniques can be designed to identify fake news by analyzing the linguistic style of news articles [1]. Linguistic style can be captured by looking at the writer's usage of words (lexically and semantically) and the way these words are further formed into sentences (syntactic level) and the document (discourse level) [1]. Within a machine learning framework, existing studies have captured a news article's linguistic style by computing the frequencies of each word [4, 5], part of speech (POS, at the syntactic level) [6, 7, 5], and rhetorical relationship (RR, at the discourse level) [8, 5]. These frequencies form a news article's representation, which is further classified by, e.g., support vector machines (SVM) and random forests to predict the news as fake news or the truth. These studies have advanced linguistic-style-aware fake news prediction. However, translating a news article's linguistic style into the appearances of words, POSs, and RRs overlooks the linguistic structure that reveals how the article's words, POSs, and RRs are assembled. Specifically, we can form a hierarchical linguistic tree for each news article; see Section III-A for the details and Figure 1 for an illustrated tree for the news piece "Vitamin D determines severity in COVID-19, so government advice needs to change, experts urge: Researchers point to changes in government advice in Wales, England, and Scotland."
This tree explicitly presents _the order of words_ used in the article, _syntactic structure_ revealing how these words are recursively structured as the elementary discourse units (EDUs, which are meaningful phrases, sentences, or paragraphs) through POSs, and _discourse structure_ exhibiting how these EDUs are recursively structured as the entire article through RRs. Previous approaches paid full attention to the tree's _node_ information by looking at whether this news piece used a specific word (e.g., "COVID-19"), POS (e.g., "NNP"), or RR (e.g., "NS-elaboration") in the corpus, or at how many times it appears, without considering the _relational (edge)_ information among the nodes. Although Zhou et al. [9] and Perez-Rosas et al. [4] computed the frequencies of production rules (at the syntactic level only), each rule can merely show the structure within a fundamental component of the tree (i.e., parent-children, such as VP \(\rightarrow\) VBZ NP). Each fundamental component is investigated independently, overlooking how components are connected to form the tree; the tree's structure is hence preserved _locally_ rather than _globally_. In addition, the representation of news articles obtained by the frequencies of these local structures is often high-dimensional and sparse, which can be adverse to the prediction task.

**Present work.** To address the above problems, we propose the hierarchical recursive neural network (HERO) for fake news prediction. The architecture of the proposed neural network adaptively preserves the global structure of the hierarchical linguistic tree of various news articles. To the best of our knowledge, this is the first work that develops hierarchical linguistic trees. Leveraging the developed trees, the proposed neural network can learn the linguistic-style-aware representations of news articles by explicitly capturing the writers' usage of words and the linguistically meaningful ways in which these words are structured as phrases, sentences, paragraphs, and, ultimately, the documents. We conduct extensive experiments on real-world datasets with well-established and state-of-the-art approaches, which demonstrate the effectiveness of the proposed neural network in predicting fake news. Additionally, we examine the differential linguistic style of fake news and the truth and identify statistically significant and consistent patterns of fake news across datasets.

The rest of this paper is organized as follows. We review related work in Section II. We introduce the proposed method in Section III and detail the experiments designed and conducted to evaluate the proposed method in Section IV. We conclude in Section V.

## II Related Work

Fake news prediction methods can be categorized as content-based or propagation-based depending on whether the method focuses on investigating news content or its propagation on social media. Propagation-based methods can utilize rich auxiliary social media information, including news spreaders' intent [2] or profiles [10], relationships between news spreaders and their posts [11], social feedback [12, 13, 14], social networks [15], and propagation paths [16, 17]. Nevertheless, they can only be deployed after news articles published on news outlets have been disseminated on social media. In comparison, content-based methods have the primary advantage of predicting fake news early, when news articles have been published online but have not been spread [5].
Additionally, an effective content-based method can be easily extended further by incorporating social context information. With this consideration, we focus on analyzing news content to predict fake news and review related work on content-based fake news prediction approaches. As news articles are mainly text, content-based methods start by manually extracting linguistic features and predicting fake news using common classifiers such as SVM [4]. Such linguistic features have been related to lexicons (e.g., bag-of-words) [5], POSs [5, 6], context-free grammars (production rules) [4, 5], RRs [5, 8], readability [6, 18], and \(n\)-grams that preserve the sequences of words or POSs [7]. Though news features can be easily interpreted within this machine learning framework, features cannot be automatically extracted, which can significantly impact the prediction performance; hence, the performance heavily relies on experts' involvement and experience. More importantly, as detailed in Section I, it is difficult to capture the global structure of news text (language) at any of the syntactic and discourse levels with these hand-crafted features. Compared to these methods, the proposed neural network can learn the features of news articles, which capture the global and hierarchical structures that news linguistic styles carry.

Recently, neural networks (e.g., Bi-LSTM [7, 19] and Text-CNN [20]) have been frequently employed to identify fake news. These models can learn the features of news text (sometimes, combined with other modalities in news content, such as images [20, 21]). These neural networks have focused on the sequentiality or locality of news text but not on its linguistic structure. In comparison, the proposed neural network explicitly catches this structure; it also captures text's sequentiality and locality, which will be detailed in Section III-B. We point out that the proposed neural network provides a fundamental approach to news text representation learning and thus can be easily extended for multimodal fake news prediction.

Fig. 1: The hierarchical linguistic tree for the news piece "Why did the US in 2017 give $3.7m to the Wuhan Lab in China? Such grants were prohibited in 2014. Did President Obama grant an exception?" verified as a false statement by PolitiFact.\({}^{2}\) Blue nodes: RRs. Green nodes: POSs.

## III Methodology

We specify the proposed model in this section, which can be divided into three steps. For each news document, we first construct its hierarchical linguistic tree (see Section III-A), then extract its features via the proposed hierarchical recursive neural network that preserves the hierarchical linguistic tree information (see Section III-B), and finally predict it as fake news or the truth (see Section III-C). Figure 2 presents the framework overview.

### _Hierarchical Linguistic Tree Construction_

Given a news document \(D\), we first generate its hierarchical linguistic tree. The tree can explicitly present the order of words used in the document and how these words shape EDUs (meaningful phrases, sentences, or paragraphs) and further shape the entire document. An example is shown in Figure 1. Specifically, we first obtain \(D\)'s discourse (rhetorical) structure, which identifies \(D\)'s EDUs and reveals how these EDUs recursively form the document \(D\). To this end, we first utilize Stanford CoreNLP [22] to segment \(D\) into EDUs.
Then, we apply a modified transition-based system [23] to identify span (S) and nuclearity (N), based on which \(D\)'s rhetorical structure can be obtained without recognizing specific RRs (e.g., _elaboration_ in Figure 1). This semi-naked tree structure allows us to divide each RR node into within-sentence, across-sentence, and across-paragraph levels in terms of its left and right subtrees, extract structural features for each RR node, and ultimately adopt level-specific SVM classifiers [23] to predict the node attribute (i.e., the specific RR). This multi-stage approach outperforms the well-established one [24] in our experiments, where [24] works as an integrated system. Finally, we employ a state-of-the-art discriminative constituency parser for each identified EDU of the document \(D\) to obtain its syntactic structure [25]. The parser consists of a self-attentive encoder [25] and a chart decoder [26] (see Figure 2 for the detailed architecture). The syntactic structure reveals how the EDU's words recursively form the entire EDU. ### _Feature Extraction via Hierarchical Recursive Neural Network_ We propose the hierarchical recursive neural network to extract features of news documents, whose architecture adaptively maintains the global structure of the hierarchical linguistic trees of news documents. Fig. 2: Framework overview, which contains a top-bottom building process of hierarchical linguistic trees, a bottom-top feature extraction process using the proposed hierarchical recursive neural network, and a classifier to predict fake news. The neural network’s architecture adaptively preserves various news documents’ global and hierarchical linguistic tree structures. The Bi-GRU aggregator catches text’s local sequentiality that is linguistically valuable and often short (explained in Section III-B), which is more effective than self-attention here (see Section IV-B for details). Given a news document \(D\), its feature extraction using the hierarchical recursive neural network is bottom-top. We first encode \(D\)'s words, which are the leaf nodes of \(D\)'s hierarchical linguistic tree. Then, we aggregate the obtained embeddings of the words that attach to the same parent node, forming the embedding of their parent node. The hierarchical recursive neural network will repeat such aggregations from the lower (syntactic) level to the upper (discourse) level until the document \(D\) as the tree's root node is embedded. Hence, the question arises of how the aggregator performs on a recurring fundamental component (i.e., a depth-one parent-children structure). Note that for each parent node in a hierarchical linguistic tree, its children contain _local_ and _sequential_ information of the corresponding news document. The information is local because it reveals partial information about the overall news content that is linguistically valuable. It is sequential as we keep the order of words of the news document. Naturally, recurrent neural networks can be adopted as the aggregator to catch the sequentiality of children sharing the same parent. The information locality further relieves the pressure on recurrent neural networks to keep the dependency of long entities since the number of children for a parent node is no more than the EDU length and essentially less than the document length. As seen from Figure 1, the maximum length that recurrent neural networks require to process is four, whereas the document has 29 tokens. 
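Before fixing the aggregator, the tree representation and the bottom-up recursion themselves can be pictured with a small sketch. The following Python fragment is our illustration only; the node structure, names, and the placeholder mean aggregator are assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

# A minimal, hypothetical encoding of a hierarchical linguistic tree:
# leaves hold word embeddings; internal nodes hold a POS or RR attribute.
@dataclass
class Node:
    attribute: str                           # a word at a leaf; a POS or RR otherwise
    children: List["Node"] = field(default_factory=list)
    embedding: Optional[np.ndarray] = None   # set for leaves (e.g., GloVe vectors)

def embed(node: Node, aggregate) -> np.ndarray:
    # Bottom-up recursion: a leaf returns its word embedding; an internal
    # node aggregates its ordered children's embeddings.
    if not node.children:
        return node.embedding
    return aggregate([embed(c, aggregate) for c in node.children], node.attribute)

# Placeholder aggregator (mean over children); HERO replaces this with a Bi-GRU.
mean_agg = lambda child_embs, parent_attr: np.mean(child_embs, axis=0)

leaf = lambda w: Node(w, embedding=np.random.randn(100))  # 100-dim, as in the paper
tree = Node("NS-elaboration",
            [Node("S", [leaf("Vitamin"), leaf("D")]),
             Node("S", [leaf("experts"), leaf("urge")])])
print(embed(tree, mean_agg).shape)  # (100,)
```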
With the above considerations, we adopt the Bi-GRU (bidirectional gated recurrent unit), one of the well-established recurrent neural networks [27], to aggregate the embeddings of all the child nodes into a representation of their parent node. We also empirically compare Bi-GRU with multi-head self-attention, which has performed remarkably in many tasks; Bi-GRU is more effective for the proposed model. Formally, the embedding of a parent node is computed as \[\textbf{x}_{p}=\frac{\sum_{c\in\mathcal{C}_{p}}[\overrightarrow{GRU}\{ \textbf{x}_{c}\}\oplus\overleftarrow{GRU}\{\textbf{x}_{c}\}]}{|\mathcal{C}_{ p}|}, \tag{1}\] where \(p\) denotes the parent node and \(\mathcal{C}_{p}\) is the set of child nodes of \(p\). Vectors \(\textbf{x}_{c}\in\mathbb{R}^{d}\) and \(\textbf{x}_{p}\in\mathbb{R}^{d}\) refer to the features of the child and parent node, respectively. The operator \(\oplus\) denotes concatenation. The GRU is formulated as follows: \[\begin{array}{l}\textbf{r}_{i}=\sigma(\textbf{W}_{r}\textbf{x}_{i}+\textbf{U }_{r}\textbf{h}_{i-1}),\\ \textbf{z}_{i}=\sigma(\textbf{W}_{z}\textbf{x}_{i}+\textbf{U}_{z}\textbf{h}_{ i-1}),\\ \hat{\textbf{h}}_{i}=\tanh(\textbf{W}_{h}\textbf{x}_{i}+\textbf{U}_{h}(\textbf{h}_{i-1} \odot\textbf{r}_{i})),\\ \textbf{h}_{i}=(1-\textbf{z}_{i})\odot\textbf{h}_{i-1}+\textbf{z}_{i}\odot \hat{\textbf{h}}_{i},\end{array} \tag{2}\] where \(\textbf{h}_{i}\in\mathbb{R}^{d/2}\) is the output hidden state of the \(i\)-th child, with \(\textbf{h}_{0}=\textbf{0}\). The symbol \(\odot\) denotes the Hadamard product. Matrices \(\textbf{W}_{*}\in\mathbb{R}^{(d/2)\times d}\) and \(\textbf{U}_{*}\in\mathbb{R}^{(d/2)\times(d/2)}\) (\(*\in\{r,z,h\}\)) are learnable parameters. \(\textbf{r}_{i}\) and \(\textbf{z}_{i}\) are the reset gate and update gate, respectively. \(\sigma\) and \(\tanh\) are the sigmoid and hyperbolic tangent activation functions, respectively. In a nutshell, the above architecture first employs the Bi-GRU to capture "deep" sequential feature interactions of all the child features and then uses a mean pooling layer over all the hidden states to obtain the parent node's features. After determining the aggregation within each recurring fundamental component, we introduce three specific hierarchical recursive neural networks (HEROs):

* _Unified HERO_: The first hierarchical recursive neural network is the one with unified aggregators. In other words, all the Bi-GRUs in the neural network share the same set of \(\textbf{W}_{*}\) and \(\textbf{U}_{*}\) (\(*\in\{r,z,h\}\), see Equation (2)).
* _Level-specific HERO_: It is the hierarchical recursive neural network with level-specific aggregators. As detailed, the hierarchical linguistic tree presents both syntactic and rhetorical structures of news content, and the hierarchical recursive neural network preserves such structures. Hence, all the Bi-GRUs in a hierarchical recursive neural network can be grouped by the level (syntax or discourse) to which they belong in the corresponding tree. We define \(L(v)\) as the function that maps a certain vertex in the hierarchical linguistic tree to its linguistic level (i.e., \(L(v)\in\{\text{syntax},\text{discourse}\}\)). Then, Equation (1) can be reformulated as \(\textbf{x}_{p}=\frac{1}{|\mathcal{C}_{p}|}\sum_{c\in\mathcal{C}_{p}}[ \overrightarrow{GRU}_{L(c)}\{\textbf{x}_{c}\}\oplus\overleftarrow{GRU}_{L(c)} \{\textbf{x}_{c}\}]\).
* _Attribute-specific HERO_: It stands for the hierarchical recursive neural network with attribute-specific aggregators.
In other words, we categorize the hierarchical recursive neural network's recurring fundamental components according to the attributes of their parent nodes in the corresponding hierarchical linguistic tree, which can be various POSs and RRs. We deploy the same Bi-GRU for the components within each category and a different Bi-GRU for the components falling into different categories (see the illustrative sketch after this section). Mathematically, we define \(A(v)\) as the function that maps a certain vertex in the hierarchical linguistic tree to its attribute. Assuming there are \(m\) different POSs and \(n\) RRs, we have \(A(v)\in\{POS_{i},RR_{j}:i=1,2,\cdots,m,j=1,2,\cdots,n\}\). The root vertex is assigned an RR in discourse parsing. EDU vertices are not assigned any RRs in discourse parsing but receive POSs in constituency parsing. In this way, Equation (1) is rewritten as \(\textbf{x}_{p}=\frac{1}{|\mathcal{C}_{p}|}\sum_{c\in\mathcal{C}_{p}}[ \overrightarrow{GRU}_{A(p)}\{\textbf{x}_{c}\}\oplus\overleftarrow{GRU}_{A( p)}\{\textbf{x}_{c}\}]\).

### _Fake News Prediction_

We add a softmax classifier on top of the proposed hierarchical recursive neural network to predict the document \(D\) as fake news or the truth. Let \(\textbf{h}_{D}\) denote \(D\)'s features extracted via the proposed hierarchical recursive neural network. The softmax function maps \(\textbf{h}_{D}\) to the probability of \(D\) being a fake news document by \(p_{D}=\text{Softmax}(\textbf{W}\textbf{h}_{D}+\textbf{b})\), where \(\textbf{W}\) and \(\textbf{b}\) are learnable parameters. To learn the parameters \(\Theta=\{\textbf{W}_{*},\textbf{U}_{*},\textbf{W},\textbf{b}\}\) within the neural network and classifier, we employ cross-entropy to calculate the classification loss in the model training process. Assume we have \(q\) verified news documents \(\mathcal{D}=\{D_{i}\}_{i=1}^{q}\) with the ground-truth labels \(\mathcal{Y}=\{y_{i}:y_{i}\in\{0,1\}\}_{i=1}^{q}\) (\(y_{i}=0\) for true news, and \(y_{i}=1\) for fake news); the loss is computed by \(L=-\frac{1}{q}\sum_{i=1}^{q}[y_{D_{i}}\log p_{D_{i}}+(1-y_{D_{i}})\log(1-p_{D_{i}})]\). Based on it, the parameter set \(\Theta\) is estimated by \(\tilde{\Theta}=\arg\min_{\Theta}L\).

## IV Empirical Evaluation

We aim to evaluate the proposed method by answering the following three questions.

RQ1. How effective is the proposed model in fake news prediction compared to the state-of-the-art approaches?
RQ2. Is the hierarchical linguistic structure of news documents essential in representing their linguistic styles?
RQ3. What characterizes the linguistic style of fake news as distinguishable from the truth?

To that end, we first detail our experimental setup in Section IV-A and then compare the proposed unified, level-specific, and attribute-specific HEROs in predicting fake news (see Section IV-B). Subsequently, we compare the proposed model with the baselines to verify its effectiveness in predicting fake news (to answer RQ1, see Section IV-C) and conduct the ablation study to assess the importance of our developed hierarchical linguistic trees (to answer RQ2, see Section IV-D). Finally, we characterize the linguistic style of fake news as distinguishable from the truth by doing quantitative and comparative analyses (to answer RQ3, see Section IV-E).

### _Experimental Setup_

We first introduce the datasets used for evaluation (see Section IV-A1), followed by the baselines for comparison (see Section IV-A2). Finally, we present our implementation details in Section IV-A3.
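Before turning to the experiments, we make the aggregation of Equations (1)-(2) and the objective of Section III-C concrete with a minimal, hypothetical PyTorch sketch. This is our illustration, not the authors' released code: the module names, the two example attributes, and the toy input are assumptions, and the two-way cross-entropy below is the standard equivalent of the binary loss \(L\) above (log-softmax plus negative log-likelihood).

```python
import torch
import torch.nn as nn

# Sketch of Eqs. (1)-(2): a Bi-GRU runs over the ordered child embeddings,
# the hidden states are mean-pooled into the parent embedding, and the root
# embedding feeds a 2-way classifier.  Names and sizes are ours.
d = 100  # embedding size (the GloVe dimension used in the paper)

class ParentAggregator(nn.Module):
    def __init__(self, d):
        super().__init__()
        # hidden size d//2 per direction, so forward + backward concatenate to d
        self.gru = nn.GRU(d, d // 2, bidirectional=True, batch_first=True)

    def forward(self, child_embs):       # child_embs: (1, n_children, d)
        h, _ = self.gru(child_embs)      # (1, n_children, d): forward (+) backward
        return h.mean(dim=1)             # mean pooling over children -> (1, d)

# Attribute-specific HERO: one aggregator per parent attribute A(p).
aggregators = nn.ModuleDict({"NP": ParentAggregator(d),
                             "NS-elaboration": ParentAggregator(d)})

classifier = nn.Linear(d, 2)             # softmax is folded into the loss below
loss_fn = nn.CrossEntropyLoss()

children = torch.randn(1, 4, d)          # embeddings of 4 ordered children (toy)
h_parent = aggregators["NP"](children)   # Eq. (1) for a parent with A(p) = "NP"
loss = loss_fn(classifier(h_parent), torch.tensor([1]))  # label 1 = fake
loss.backward()
```

In a full model, the aggregator would be applied recursively over the tree (as in the earlier sketch), and the unified and level-specific variants would simply share one aggregator globally or per linguistic level.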
#### IV-A1 Datasets

We conduct experiments on two benchmark datasets in fake news prediction: Recovery [9] and MM-COVID [28]. Both datasets contain labeled news documents. In contrast, the news documents collected in Recovery are articles (long text, often including multiple paragraphs), while those in MM-COVID are statements (short text, often formed by one or two sentences). We present the detailed statistics of the two datasets in Table I.

#### IV-A2 Baselines

We include the following well-received and state-of-the-art methods as baselines in our experiments.

* _HCLF_ [5]: HCLF stands for hand-crafted linguistic feature. Each news document's HCLFs include the frequencies of words (i.e., bag-of-word features), POSs, RRs, and production rules. The extracted features are used to predict fake news by employing well-established classifiers. Here we examine a comprehensive list of classifiers (logistic regression, SVM, \(k\)-nearest neighbors, decision trees, naive Bayes, random forest, and AdaBoost) and select the one performing best.
* _EANN_ [20]: The event adversarial neural network contains three components: feature extraction by Text-CNN (for text) and VGG-19 (for images), event discrimination to learn event-invariant features of news content, and fake news prediction. We exclude the visual features for a fair comparison.
* _HAN_ [29]: HAN exploits attention-GRU for news classification. It captures the hierarchical sequence of documents; i.e., each document is a sequence of its sentences, and each of its sentences is a sequence of words.
* _DRNN_ [30]: DRNN is a discourse-structure-aware neural network, which focuses on the tree with rhetorical relationships as edge attributes and leverages an attention mechanism for news classification. Hence, DRNN differs from HERO in the aggregation rule. Compared to DRNN's tree, the hierarchical linguistic tree integrates syntax-level structures and has RRs as nodes and non-attributed edges at the discourse level. DRNN is developed to categorize news documents with more than one elementary discourse unit; otherwise, it is reduced to Bi-LSTM.
* _Text-GCN_ [31]: The approach develops the graph convolutional neural network for news classification. The graph investigates the co-occurrence relationship among news documents and the words within the documents.
* _Transformer_ [32]: It is a deep neural network model with a self-attention-based encoder-decoder architecture, which has performed excellently in diverse natural language processing tasks. Here, we consider the Transformer's encoder, which is applicable to classification tasks, as a baseline to predict fake news. We use a non-pretrained version rather than pretrained models (e.g., BERT) for a fair comparison, as pretrained Transformers have learned from large-scale external resources.

#### IV-A3 Implementation Details

We randomly divide each dataset into 0.7:0.1:0.2 proportions for model training, validation, and testing. Macro-F1, micro-F1, and AUC are used to evaluate the performance of methods in news classification. The discourse parser is pretrained using RST-DT [23], and the constituency parser is pretrained using the Penn Treebank [25]. For the neural-network-based models, we uniformly utilize the pretrained GloVe [33] to obtain semantic-aware embeddings of words, with 100 as the embedding dimension. The hidden dimension within the neural networks is correspondingly set to 100. We deploy the Adam optimizer to learn parameters, with 50 as the maximum number of epochs.
We perform a grid search over the learning rate \(\in\{0.1,0.01,0.001,0.0001\}\) with validation data. In the end, 0.0001 performs best for our models and most of the baselines other than Transformer (0.001) and Text-GCN (0.01). All the experiments of the neural networks are implemented with PyTorch and are conducted on an NVIDIA Quadro RTX 6000 GPU (\(24\) GB memory), Intel(R) Xeon(R) Gold 6248R CPU (\(3.00\) GHz), and with \(64\) GB of RAM. For HCLFs, classifiers are used with the default hyperparameters presented in the scikit-learn library. \(Z\)-score normalization is applied to the feature matrix to enhance the classification performance.

\begin{table} \begin{tabular}{l r r} \hline \hline & **Recovery** & **MM-COVID** \\ \hline **\# news documents** & 2,029 & 3,536 \\ **- true news** & 1,364 & 1,444 \\ **- fake news** & 665 & 2,092 \\ **Avg. \# words per EDU** & 24 & 17 \\ **Avg. \# EDUs per document** & 38 & 2 \\ **Avg. \# words per document** & 841 & 16 \\ \hline \hline \end{tabular} \end{table} TABLE I: Data statistics.

### _Determining the Best HERO_

We compare the performance of the proposed neural networks with unified, level-specific, and attribute-specific Bi-GRUs in predicting fake news. Table II presents the results. The results indicate that with Recovery data, the performance ranking is attribute-specific HERO \(>\) level-specific HERO \(>\) unified HERO. Specifically, attribute-specific HERO correctly predicts news as fake or true with 0.85 macro-F1 and 0.87 micro-F1 and AUC, outperforming unified HERO by \(\sim\)4% and level-specific HERO by \(\sim\)3%. With MM-COVID data, the performance ranking is attribute-specific HERO \(\approx\) unified HERO \(>\) level-specific HERO. Attribute-specific and unified HEROs achieve \(\sim\)0.89-0.90 in macro-F1, micro-F1, and AUC, outperforming level-specific HERO by \(\sim\)1%. In conclusion, _attribute-specific HERO_ performs best in classifying long articles and short statements as fake news or the truth. This result demonstrates the importance of the node attributes (POSs or RRs) in the developed hierarchical linguistic trees. Additionally, we compare Bi-GRU and self-attention (#heads=10) as aggregators in the proposed hierarchical recursive neural network for fake news prediction. The results indicate that Bi-GRU performs better than self-attention by at least 1% in AUC on both datasets.

### _Comparing HERO with Baselines_

We compare the proposed model with the baselines in predicting fake news. The results presented in Table III reveal that the proposed model can generally outperform the baselines. Specifically, with Recovery data, the proposed model has an AUC score approaching 0.87, outperforming HAN by more than 2%, Text-GCN by more than 3%, Transformer by more than 5%, EANN by more than 7%, HCLF by 12%, and DRNN by 17%. With MM-COVID data, the proposed model has an AUC score approaching 0.90, outperforming EANN by more than 6%, DRNN and HAN by \(\sim\)5%, Text-GCN and Transformer by \(\sim\)8-9%, and HCLF by more than 30%. From the table, we also observe that the proposed model outperforms EANN by 6-7% in macro-F1 and AUC but underperforms it by \(\sim\)3% in micro-F1 on MM-COVID. This result suggests that EANN tends to predict given news statements as the major class.

### _Ablation Study_

We compare the proposed model, HERO, which contains hierarchical linguistic (syntax- and discourse-level) structures, with the following variants.

* _HERO\(\backslash\)Dis_: It stands for the variant of HERO with only syntax-level structures.
In this variant, the embedding of a news document is obtained by averaging the embeddings of its EDUs.
* _HERO\(\backslash\)Syn_: It stands for the variant of HERO with only discourse-level structures. In this variant, the embedding of each EDU of a news document is obtained by averaging its word embeddings.
* _HERO\(\backslash\)(Syn+Dis)_: It stands for the variant of HERO with no structures; the embedding of a news document is directly obtained by averaging its word embeddings.

\begin{table} \begin{tabular}{r c c c c c c} \hline \hline & \multicolumn{3}{c}{**Recovery**} & \multicolumn{3}{c}{**MM-COVID**} \\ \cline{2-7} **HERO** & **MAF1** & **MIF1** & **AUC** & **MAF1** & **MIF1** & **AUC** \\ \hline **Unified** & 0.801 & 0.822 & 0.827 & 0.889 & 0.891 & **0.899** \\ **Level-specific** & 0.817 & 0.838 & 0.841 & 0.878 & 0.878 & 0.892 \\ **Attribute-specific** & **0.850** & **0.869** & **0.866** & **0.894** & **0.896** & 0.896 \\ \hline \hline \end{tabular} \end{table} TABLE II: Performance of unified, level-specific, and attribute-specific HEROs in fake news prediction. Attribute-specific HERO performs best, demonstrating that the node attributes (POSs or RRs) in hierarchical linguistic trees are essential. MAF1: Macro-F1. MIF1: Micro-F1.

Fig. 3: Ablation study. (a) The proposed HERO outperforms HERO\(\backslash\)Dis by 1% in AUC for unified HERO and by 3% for level- and attribute-specific HEROs. It outperforms HERO\(\backslash\)Syn by 7–9% and HERO\(\backslash\)(Syn+Dis) by 30%+ in AUC. (b) HERO performs similarly to HERO\(\backslash\)Dis as MM-COVID contains short statements having minimal discourse structures (i.e., syntax-level structures dominate). It outperforms HERO\(\backslash\)Syn and HERO\(\backslash\)(Syn+Dis) by 10%+ in AUC. Thus, syntax- and discourse-level structures are both essential.

\begin{table} \begin{tabular}{r c c c c c c} \hline \hline & \multicolumn{3}{c}{**Recovery**} & \multicolumn{3}{c}{**MM-COVID**} \\ \cline{2-7} & **MAF1** & **MIF1** & **AUC** & **MAF1** & **MIF1** & **AUC** \\ \hline **HCLF** & 0.752 & 0.801 & 0.746 & 0.566 & 0.624 & 0.577 \\ **Transformer** & 0.774 & 0.793 & 0.810 & 0.804 & 0.809 & 0.806 \\ **Text-GCN** & 0.841 & 0.869 & 0.835 & 0.826 & 0.836 & 0.817 \\ **EANN** & 0.811 & 0.864 & 0.795 & 0.825 & **0.926** & 0.833 \\ **HAN** & 0.847 & 0.869 & 0.844 & 0.840 & 0.856 & 0.846 \\ **DRNN** & 0.711 & 0.778 & 0.698 & 0.845 & 0.846 & 0.848 \\ **HERO** & **0.850** & **0.869** & **0.866** & **0.894** & 0.896 & **0.896** \\ \hline \hline \end{tabular} \end{table} TABLE III: Performance of the proposed model, HERO, and baselines in fake news prediction. HERO outperforms the baselines by 2–17% in AUC on Recovery and by 3–30% on MM-COVID. MAF1: Macro-F1. MIF1: Micro-F1.

The results are visualized in Figure 3. We observe that with Recovery data, the proposed HERO outperforms HERO\(\backslash\)Dis by 1% in AUC for unified HERO and by 3% for level- and attribute-specific HEROs. It outperforms HERO\(\backslash\)Syn by 7-9% and notably outperforms HERO\(\backslash\)(Syn+Dis) by above 30% in AUC. With MM-COVID data, the proposed HERO performs similarly to HERO\(\backslash\)Dis since the statements presented in MM-COVID are short, with two EDUs on average, and hence have minimal discourse structures (i.e., syntax-level structures dominate hierarchical linguistic structures). Meanwhile, it outperforms HERO\(\backslash\)Syn and HERO\(\backslash\)(Syn+Dis) by more than 10% in AUC.
Therefore, we conclude that the proposed HERO is better than its variants, demonstrating the importance of hierarchical linguistic structures.

### _Characterizing Linguistic Style of Fake News_

Fake news has been theoretically identified with a linguistic style distinguishable from the truth [3]. This experiment aims to specify this different linguistic style of fake news. We compare the hierarchical linguistic trees generated by fake news and the truth, which we develop to represent the linguistic style of news documents systematically. The comparison covers (i) the children of parent nodes, (ii) the attributes of nodes, and (iii) the size, width, and depth of trees.

_Children of Parent Nodes._ We compare fake news with real news in the average and the maximum number of children of parent nodes in hierarchical linguistic trees. Results that are statistically significant with a \(p\)-value \(<0.001\) (using the t-test, unless otherwise specified) are shown in Figure 4a. We observe that the hierarchical linguistic trees of fake news have more child nodes for each parent node than true news on average. News here indicates long news articles in the Recovery dataset and short statements in the MM-COVID dataset.

_Attributes of Nodes._ Considering that the nodes within a hierarchical linguistic tree can indicate the document (as the root), RRs, EDUs, POSs, and words (as the leaf nodes), we first compare fake news with the truth in the proportion of RRs, EDUs, POSs, and words, respectively. The reason for computing their proportions rather than the numbers is to eliminate the impact of the size of trees (discussed in the next paragraph). We observe that compared to true news, the hierarchical linguistic trees of fake news have a significantly smaller proportion of EDU nodes (\(p\)-value \(<0.05\)) and POS nodes indicating NNS (noun in the plural, \(p\)-value \(<0.001\)) but have a significantly larger proportion of nodes indicating specific POSs such as IN (preposition or subordinating conjunction), PP (prepositional phrase), and DT (determiner, \(p\)-value \(<0.001\)). We illustrate the results in Figure 4b.

_Size, Width, and Depth of Trees._ We compare fake news with the truth in the size, maximum width, and depth of hierarchical linguistic trees. Since hierarchical linguistic trees contain two-level structures, we also compare fake and true news in the size, maximum width, and depth of discourse- and syntactic-level trees. We observe that the syntactic-level tree of fake news is generally greater with more nodes, broader, and deeper than true news. In particular, the syntactic-level tree of fake news has more leaf nodes than true news, which reveals that fake news often has longer EDUs with more words than true news. The above conclusions hold for long news articles (using Recovery data) and short statements (with MM-COVID data) with a \(p\)-value \(<0.01\); news documents in both datasets are rich in syntactic information. Figure 4c (the upper panels) presents the details. Moreover, we observe that fake news articles generate smaller and narrower discourse-level trees that lead to smaller and narrower hierarchical linguistic trees than true news articles (\(p\)-value \(<0.01\), see the bottom panels in Figure 4c). We point out that the discourse structures of short statements are plain, with two EDUs on average, and hence have trivial impacts on the shape of the entire hierarchical linguistic structures.

Fig. 4: Hierarchical linguistic trees of fake and true news. Orange solid line: Median. Green dashed line: Mean.
Lastly, we point out that comparing trees' maximum and average widths leads to the same conclusions. Comparing the longest (i.e., depth) and the average distance between the root and leaves also leads to the same conclusions.

## V Conclusion

We propose a psychology-informed neural network to predict fake news. The proposed neural network learns the linguistic style of news documents represented by hierarchical linguistic trees, which explicitly captures the writers' usage of words and the linguistically meaningful ways these words are structured as phrases, sentences, paragraphs, and, ultimately, documents. We conduct experiments on public real-world datasets. The results demonstrate the effectiveness of the proposed neural network, with 0.87-0.90 AUC scores, and the importance of the developed hierarchical linguistic tree. The proposed neural network can outperform the previous (recurrent, convolutional, graph, and self-attentive) neural networks and the feature-engineering-based approach in predicting news, whether long articles or short statements, as fake news or the truth. We observe from the data that the hierarchical linguistic trees of fake news can significantly differ from true news in the children of parent nodes, the attributes of nodes, and the size, width, and depth of the trees. In our future work, we aim to enhance the proposed model's performance with multimodal and social-context information.
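As a closing aside, the statistical comparisons of Section IV-E reduce to two-sample t-tests on per-document tree statistics. The following is a minimal sketch of that methodology with synthetic stand-in numbers: the sample sizes mirror Table I's Recovery counts, but the depth values are made up, not the paper's measurements.

```python
import numpy as np
from scipy import stats

# Two-sample t-test on a per-document tree statistic (here: tree depth),
# comparing fake and true news, in the style of Section IV-E.
rng = np.random.default_rng(0)
depth_fake = rng.normal(12.0, 2.0, 665)   # one synthetic value per fake article
depth_true = rng.normal(11.0, 2.0, 1364)  # one synthetic value per true article

t, p = stats.ttest_ind(depth_fake, depth_true, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3g}")        # significant at p < 0.001 here
```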
2303.14789
5 wave interactions in internal gravity waves
We use multiple-scale analysis to study a 5-wave system (5WS) composed of two different internal gravity wave triads. Each of these triads consists of a parent wave and two daughter waves, with one daughter wave common between the two triads. The parent waves are assumed to have the same frequency and wavevector norm co-existing in a region of constant background stratification. Such 5-wave systems may emerge in oceans, for example, via tide-topography interactions, generating multiple parent internal waves that overlap. Two 2D cases are considered: Case 1(2) has parent waves with the same horizontal (vertical) wavenumber but with different vertical (horizontal) wavenumber. For both cases, the 5WS is more unstable than triads for $f/\omega_1\gtrapprox0.3$, where $\omega_1$ and $f$ are the parent wave and the local Coriolis frequency, respectively. For $f/\omega_1\gtrapprox0.3$, the common daughter wave's frequency is $\approx \omega_1-f $ and $f$ respectively for Cases 1 and 2. For 3D cases, 5WSs become more unstable as the angle ($\theta$) between the horizontal wavevectors of the parent waves is decreased. Moreover, for any $\theta$, 5WSs have higher growth rates than triads for $f/\omega_1\gtrapprox0.3$. Numerical simulations match the theoretical growth rates of 5WSs for a wide range of latitudes, except when $f/\omega_1\approx0.5$ (critical latitude). More than three daughter waves are forced by the two parent waves when $f/\omega_1\approx0.5$. We formulate a reduced order model which shows that for any $\theta$, the maximum growth rate near the critical latitude is approximately twice the maximum growth rate of all triads.
Saranraj Gururaj, Anirban Guha
2023-03-26T18:08:22Z
http://arxiv.org/abs/2303.14789v1
# 5 wave interactions in internal gravity waves

###### Abstract

We use multiple-scale analysis to study a 5-wave system (5WS) composed of two different internal gravity wave triads. Each of these triads consists of a parent wave and two daughter waves, with one daughter wave common between the two triads. The parent waves are assumed to have the same frequency and wavevector norm co-existing in a region of constant background stratification. Such 5-wave systems may emerge in oceans, for example, via tide-topography interactions, generating multiple parent internal waves that overlap. Two 2D cases are considered: Case 1(2) has parent waves with the same horizontal (vertical) wavenumber but with different vertical (horizontal) wavenumber. For both cases, the 5WS is more unstable than triads for \(f/\omega_{1}\gtrapprox 0.3\), where \(\omega_{1}\) and \(f\) are the parent wave and the local Coriolis frequency, respectively. For \(f/\omega_{1}\gtrapprox 0.3\), the common daughter wave's frequency is \(\approx\omega_{1}-f\) and \(f\) respectively for Cases 1 and 2. For 3D cases, 5WSs become more unstable as the angle (\(\theta\)) between the horizontal wavevectors of the parent waves is decreased. Moreover, for any \(\theta\), 5WSs have higher growth rates than triads for \(f/\omega_{1}\gtrapprox 0.3\). Numerical simulations match the theoretical growth rates of 5WSs for a wide range of latitudes, except when \(f/\omega_{1}\approx 0.5\) (critical latitude). More than three daughter waves are forced by the two parent waves when \(f/\omega_{1}\approx 0.5\). We formulate a reduced order model which shows that for any \(\theta\), the maximum growth rate near the critical latitude is approximately twice the maximum growth rate of all triads.

Saranraj Gururaj\({}^{1}\)† and Anirban Guha\({}^{1}\)

Footnote †: Email address for correspondence: [email protected]

## 1 Introduction

Internal waves play a major role in sustaining the Meridional Overturning Circulation by causing diapycnal mixing (Munk & Wunsch, 1998; Ferrari & Wunsch, 2009). Wave-wave interaction is estimated to be one of the most dominant mechanisms through which internal waves' energy cascades to small length scales (de Lavergne _et al._, 2019), where it can cause mixing. As a result, understanding wave-wave interactions can be important for modelling the internal waves' energy cascade. The stability of a plane internal gravity wave has been studied extensively. A primary/parent internal wave with small steepness is unstable to secondary (daughter) waves through triad interactions if the secondary waves' frequencies are less than the parent wave's frequency (Hasselmann, 1967). Moreover, the three waves' frequencies and wavevectors should also satisfy the resonant triad conditions: \(\mathbf{k}_{1}=\mathbf{k}_{2}+\mathbf{k}_{3}\) and \(\omega_{1}=\omega_{2}+\omega_{3}\) (Thorpe, 1966; Davis & Acrivos, 1967; Hasselmann, 1967), where daughter waves are denoted by subscripts 2 and 3, while the parent wave by subscript 1. For parent waves with small steepness, a 2D stability analysis is sufficient to find the most dominant instability (Klostermeyer, 1991), and the most unstable daughter wave combination depends on \(\omega_{1}/N\) (Sonmor & Klaassen, 1997), the kinematic viscosity (\(\nu\)) (Bourget _et al._, 2013, 2014), and the Coriolis frequency (\(f\)) (Young _et al._, 2008; Maurer _et al._, 2016).
Without any rotational effects and under inviscid conditions, for \(\omega_{1}/N<0.68\), the wavevectors of the most unstable daughter wave combination satisfy \(|\mathbf{k}_{3}|<|\mathbf{k}_{1}|<|\mathbf{k}_{2}|\). However, for \(\omega_{1}/N>0.68\), the most unstable daughter waves' wavevectors satisfy \(|\mathbf{k}_{2}|\approx|\mathbf{k}_{3}|\gg|\mathbf{k}_{1}|\). This instability is called Parametric Subharmonic Instability (PSI) (MacKinnon & Winters, 2005; Young _et al._, 2008). PSI is a special type of triad interaction where \(\omega_{2}\approx\omega_{3}\approx\omega_{1}/2\). For internal wave triads, rotational effects can be very important for a wide range of latitudes, and especially near the critical latitude. Near the critical latitude (where \(f\approx\omega_{1}/2\)), for any \(\omega_{1}/N\), the primary wave gives its energy to waves whose frequency is close to the inertial frequency. Moreover, the inertial waves have very small vertical length scales, which can lead to increased kinetic energy dissipation (Richet _et al._, 2018). Semidiurnal mode-1 internal waves have been observed to lose a non-negligible portion of their energy as they pass through the critical latitude (MacKinnon & Winters, 2005; Alford _et al._, 2007; Hazewinkel & Winters, 2011). Moreover, when semidiurnal internal wave modes interact with an ambient wave field that follows the Garrett-Munk spectrum, their decay is fastest near the critical latitude (Hibiya _et al._, 1998; Onuki & Hibiya, 2018; Olbers _et al._, 2020).

In this paper, we study the stability of two weakly nonlinear plane parent waves that coexist in a region. The motivation for this study stems from the fact that parent internal waves generated in different locations often meet/overlap in the oceans. For example, tide-topography interactions result in the generation of internal waves that propagate in horizontally opposite directions, and these waves overlap/coexist above the topography, cf. Nikurashin & Legg (2011, figure 7). When two energetic parent waves meet in a region, they can resonantly interact with each other. Internal wave beam collision is an example of such direct interaction between the parent waves, and it has been studied extensively over the last few decades (Tabaei _et al._, 2005; Jiang & Marcus, 2009; Akylas & Karimi, 2012). Parent waves, however, do not always resonantly interact with each other and form a triad. In the absence of direct interaction, each parent wave would still be susceptible to triad interactions leading to the growth of daughter waves, and this is the setting explored in this paper. Specifically, we focus on the 5-wave system instability. In this instability, five waves (two parent waves and three daughter waves) are involved, and two distinct triads are formed between the five waves. Note that this implies one daughter wave is forced by both parent waves and is a part of two different triads. Some examples of parent waves overlapping are given in figures 1(a)-1(b), and the examples shown can easily occur in the oceans when internal waves are generated by tide-topography interactions. In both figures, the region enclosed inside the green box would be a potential location for a 5-wave system. The wavevector and frequency conditions satisfied in a 5-wave system are given in figure 1(c). In the context of internal gravity waves, 5-wave systems have been studied recently (Pan _et al._, 2021_a_,_b_).
Pan _et al._ (2021_a_) focus on 5-wave systems where the same parent wave generates four different daughter waves, which is not the focus of this paper. Pan _et al._ (2021_b_) explore 5-wave interactions that consist of two parent waves and three daughter waves, but their focus is on rogue wave generation. They study the 5-wave systems in a 2D setting without rotational effects. Moreover, no detailed study was conducted on the growth rates. In this paper, we consider a 3D setting with rotational effects, which is observed to be important in our case. The primary focus is on the growth rates of the daughter waves and on understanding scenarios in which the 5-wave system instability is faster than the 3-wave system instability (standard triads). In our study, the frequencies of the two parent waves are always assumed to be the same, and this assumption can be important in an oceanographic context since internal waves generated by the same tide have the same frequency.

The paper is organized as follows. In §2, we use multiple-scale analysis to simplify the 3D, Boussinesq Navier-Stokes equations in the \(f\)-plane and derive the wave amplitude equations. Expressions for growth rates are provided. In §3, theoretical comparisons between the growth rates of 3-wave systems and 5-wave systems for different combinations of parent waves are provided. In §4, numerical validations are provided for the 5-wave systems, and specific focus is also given to the fate of the parent waves near the critical latitude. The paper is summarized in §5.

## 2 Governing equations

The 3D, incompressible, Boussinesq, Navier-Stokes equations in the \(f\)-plane in primitive variables are given by
\[\frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t}+f\hat{z}\times\mathbf{u}=-\frac{1}{\rho_{0}}\nabla p+b\hat{z}+\nu\Delta\mathbf{u}, \tag{2.1a}\]
\[\frac{\mathrm{D}b}{\mathrm{D}t}+N^{2}w=\kappa\Delta b, \tag{2.1b}\]
\[\nabla\cdot\mathbf{u}=0. \tag{2.1c}\]
Here \(\mathbf{u}=(u,v,w)\), where the components respectively denote the zonal, meridional, and vertical velocities. Moreover, \(f\) is the local Coriolis frequency, \(\rho_{0}\approx 1000\,\mathrm{kg}\,\mathrm{m}^{-3}\) is the reference density, \(p\) is the perturbation pressure, \(b\) is the buoyancy perturbation, \(\nu\) is the kinematic viscosity, \(N\) is the background buoyancy frequency, and \(\kappa\) is the diffusion coefficient. The operator \(\Delta\) is defined as \(\Delta\equiv\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}+\partial^{2}/\partial z^{2}\), while D/D\(t\) is the material derivative.

Figure 1: Examples of different orientations of propagating parent waves: in (a) vertically and (b) horizontally opposite directions, with the intersection region marked in green. (c) Frequency and wavevector triad conditions that are satisfied between the 5 waves that are involved in the interaction region. Waves 1 and 5 are parent waves, while waves 2, 3, and 4 are daughter waves, with wave-3 being the common daughter wave.

We intend to study wave-wave interactions using multiple-scale analysis. To this end, we combine equations (2.1a)-(2.1c) into a single equation. After some simple manipulations, the single equation describing the evolution of vertical velocity is written in a compact form:
\[\frac{\partial^{2}}{\partial t^{2}}(\Delta w)+N^{2}(\nabla_{h}^{2}w)+f^{2}\frac{\partial^{2}w}{\partial z^{2}}+\mathrm{NLT}=\mathrm{VT}, \tag{2}\]
where \(\nabla_{h}^{2}\equiv\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}\).
NLT denotes all the nonlinear terms:
\[\mathrm{NLT}=\nabla_{h}^{2}\frac{\partial(\mathbf{u}\cdot\nabla w)}{\partial t}+\nabla_{h}^{2}(\mathbf{u}\cdot\nabla b)-\frac{\partial^{3}(\mathbf{u}\cdot\nabla u)}{\partial x\partial z\partial t}+f\frac{\partial^{2}(\mathbf{u}\cdot\nabla u)}{\partial y\partial z}-f\frac{\partial^{2}(\mathbf{u}\cdot\nabla v)}{\partial x\partial z}-\frac{\partial^{3}(\mathbf{u}\cdot\nabla v)}{\partial y\partial z\partial t}. \tag{3}\]
Moreover, VT denotes the viscous and molecular diffusion terms:
\[\mathrm{VT}=\nu\frac{\partial}{\partial t}\left(\Delta^{2}w\right)+\nabla_{h}^{2}(\kappa\Delta b)+f\nu\frac{\partial^{2}(\Delta u)}{\partial y\partial z}-f\nu\frac{\partial^{2}(\Delta v)}{\partial x\partial z}. \tag{4}\]
For simplicity, we assume \(\kappa=0\). Furthermore, we mainly focus on plane waves. Similar to the procedure used in Bourget _et al._ (2013), the vertical velocity of the \(j\)-th wave (\(j=1,2,\ldots,5\)) is assumed to be a product of a rapidly varying phase and an amplitude that slowly varies in time. Mathematically, this can be written as
\[w_{j}(x,y,z,t)=a_{j}(\epsilon_{t}t)\exp[\mathrm{i}(k_{j}x+l_{j}y+m_{j}z-\omega_{j}t)]+\mathrm{c.c.}, \tag{5}\]
where \(k_{j},l_{j},m_{j}\), and \(\omega_{j}\) are respectively the zonal wavenumber, meridional wavenumber, vertical wavenumber, and frequency of the \(j\)-th internal wave. 'c.c.' denotes the complex conjugate. The amplitude is assumed to evolve on a slow time scale \(\epsilon_{t}t\), where \(\epsilon_{t}\) is a small parameter. Moreover, \(a_{j}\) itself is \(\mathcal{O}(\epsilon_{a})\), where \(\epsilon_{a}\ll 1\). For weakly nonlinear wave-wave interactions, the wave steepness should be a small quantity (Koudella & Staquet, 2006), and \(\epsilon_{a}\) is chosen accordingly. On substituting (5) in (2), at the leading order (\(\mathcal{O}(\epsilon_{a})\)) we obtain the dispersion relation in 3D:
\[\omega_{j}^{2}=\frac{N^{2}(k_{j}^{2}+l_{j}^{2})+f^{2}m_{j}^{2}}{k_{j}^{2}+l_{j}^{2}+m_{j}^{2}}. \tag{6}\]
All 5 waves involved in the interaction must satisfy this dispersion relation. Energy transfer between the waves due to weakly nonlinear wave-wave interactions occurs at \(\mathcal{O}(\epsilon_{a}^{2})\). For the \(j\)-th wave, the amplitude evolution equation reads
\[\mathcal{D}_{j}\frac{\partial a_{j}}{\partial t}=-\mathrm{NLT}_{j}+\mathrm{VT}_{j}, \tag{7}\]
where \(\mathcal{D}_{j}\equiv 2\mathrm{i}\omega_{j}(k_{j}^{2}+l_{j}^{2}+m_{j}^{2})\) is defined for convenience. \(\mathrm{NLT}_{j}\) and \(\mathrm{VT}_{j}\) represent all the nonlinear and viscous terms with the phase of the \(j\)-th wave, respectively. The expression for \(\mathrm{VT}_{j}\) is given by
\[\mathrm{VT}_{j}=-\mathcal{D}_{j}\nu/2\left(\frac{f^{2}m_{j}^{2}}{\omega_{j}^{2}}+m_{j}^{2}+l_{j}^{2}+k_{j}^{2}\right). \tag{8}\]
\(\mathrm{NLT}_{j}\) is obtained by substituting the fields \((u_{j},v_{j},w_{j},b_{j})\) in NLT, and by retaining all the nonlinear terms that have the same phase as the \(j\)-th wave. Nonlinear terms that do not have the phase of any of the five waves are the 'non-resonant terms' and are neglected. From \(w_{j}\), we can obtain \(u_{j},v_{j}\), and \(b_{j}\) by using the polarisation relations:
\[\begin{bmatrix}u_{j}\\ v_{j}\\ b_{j}\end{bmatrix}=\begin{bmatrix}U_{j}\\ V_{j}\\ B_{j}\end{bmatrix}w_{j}=\begin{bmatrix}-m_{j}(\omega_{j}k_{j}+\mathrm{i}l_{j}f)/[\omega_{j}(k_{j}^{2}+l_{j}^{2})]\\ -m_{j}(\omega_{j}l_{j}-\mathrm{i}k_{j}f)/[\omega_{j}(k_{j}^{2}+l_{j}^{2})]\\ -\mathrm{i}N^{2}/\omega_{j}\end{bmatrix}w_{j}. \tag{9}\]
Polarisation expressions are also used to evaluate \(\mathrm{NLT}_{j}\), the expressions for which are given in appendix A.
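As an illustration, here is a minimal Python sketch (ours, not from the paper) of the dispersion relation (6) together with a numerical check of the triad conditions:

```python
# Dispersion relation (6) and a check of the triad conditions
# k1 = k2 + k3 and omega1 = omega2 + omega3; wavevectors are (k, l, m)
# triples, and N, f are the buoyancy and Coriolis frequencies.
import numpy as np

def omega(k, l, m, N, f):
    """Internal-wave frequency from the dispersion relation (6)."""
    return np.sqrt((N**2 * (k**2 + l**2) + f**2 * m**2) / (k**2 + l**2 + m**2))

def is_resonant_triad(k1, k2, k3, N, f, tol=1e-6):
    k1, k2, k3 = map(np.asarray, (k1, k2, k3))
    vector_condition = np.allclose(k1, k2 + k3)
    frequency_mismatch = abs(
        omega(*k1, N, f) - omega(*k2, N, f) - omega(*k3, N, f)
    )
    return vector_condition and frequency_mismatch < tol
```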
### Wave-amplitude equations and growth rates

The amplitude evolution of each of the 5 waves can be obtained from (7):
\[\frac{da_{1}}{dt}=\mathcal{M}_{1}a_{2}a_{3}-\mathcal{V}_{1}a_{1},\qquad\frac{da_{2}}{dt}=\mathcal{M}_{2}a_{1}\bar{a}_{3}-\mathcal{V}_{2}a_{2}, \tag{10}\]
\[\frac{da_{5}}{dt}=\mathcal{N}_{5}a_{4}a_{3}-\mathcal{V}_{5}a_{5},\qquad\frac{da_{4}}{dt}=\mathcal{N}_{4}a_{5}\bar{a}_{3}-\mathcal{V}_{4}a_{4}, \tag{11}\]
\[\frac{da_{3}}{dt}=\mathcal{M}_{3}a_{1}\bar{a}_{2}+\mathcal{N}_{3}a_{5}\bar{a}_{4}-\mathcal{V}_{3}a_{3}, \tag{12}\]
where \(\mathcal{V}_{j}=\nu/2\left(f^{2}m_{j}^{2}/\omega_{j}^{2}+m_{j}^{2}+l_{j}^{2}+k_{j}^{2}\right)\). As depicted in figure 1(c), wave-1, -2, and -3 form a triad, whose nonlinear coefficients are given by \(\mathcal{M}_{j}\). Likewise, wave-3, -4, and -5 also form a triad, whose nonlinear coefficients are given by \(\mathcal{N}_{j}\). Expressions for \(\mathcal{M}_{j}\) and \(\mathcal{N}_{j}\) are given in appendix A. Wave-3, therefore, becomes the common daughter wave in two different triads. Moreover, wave-3 can be thought of as the daughter wave in a triad with wave-1 and -5 as the two parent waves. Using the pump wave approximation (McEwan & Plumb, 1977; Young _et al._, 2008), (10)-(12) can be simplified to a set of linear differential equations, which are given below in a compact form:
\[\begin{bmatrix}\frac{d\bar{a}_{2}}{dt}\\ \frac{d\bar{a}_{4}}{dt}\\ \frac{da_{3}}{dt}\end{bmatrix}=\begin{bmatrix}-\mathcal{V}_{2}&0&\bar{\mathcal{M}}_{2}\bar{A}_{1}\\ 0&-\mathcal{V}_{4}&\bar{\mathcal{N}}_{4}\bar{A}_{5}\\ \mathcal{M}_{3}A_{1}&\mathcal{N}_{3}A_{5}&-\mathcal{V}_{3}\end{bmatrix}\begin{bmatrix}\bar{a}_{2}\\ \bar{a}_{4}\\ a_{3}\end{bmatrix}. \tag{13}\]
Note that \(a_{1}(a_{5})\) has been changed to \(A_{1}(A_{5})\) to denote the fact that they are now constants. By assuming \(da_{j}/dt=\sigma a_{j}\), we arrive at the equation
\[(\sigma+\mathcal{V}_{2})(\sigma+\mathcal{V}_{3})(\sigma+\mathcal{V}_{4})-\bar{\mathcal{N}}_{4}\mathcal{N}_{3}|A_{5}|^{2}(\sigma+\mathcal{V}_{2})-\bar{\mathcal{M}}_{2}\mathcal{M}_{3}|A_{1}|^{2}(\sigma+\mathcal{V}_{4})=0, \tag{14}\]
where \(\sigma\) is the growth rate of the system of equations given in (13). A real, positive \(\sigma\) implies the daughter waves can extract energy from the parent wave. For \(\nu=0\) (inviscid flow), the growth rate has a simple expression given by
\[\sigma=\sqrt{\bar{\mathcal{M}}_{2}\mathcal{M}_{3}|A_{1}|^{2}+\bar{\mathcal{N}}_{4}\mathcal{N}_{3}|A_{5}|^{2}}. \tag{15}\]
Note that by setting either \(A_{1}=0\) or \(A_{5}=0\), we arrive at the standard growth rate expression for triads (3-wave systems). Moreover, we can also obtain the condition
\[\sqrt{\bar{\mathcal{M}}_{2}\mathcal{M}_{3}|A_{1}|^{2}+\bar{\mathcal{N}}_{4}\mathcal{N}_{3}|A_{5}|^{2}}\leqslant\sqrt{2}\widehat{\sigma}_{1}\quad\text{or}\quad\sqrt{\bar{\mathcal{M}}_{2}\mathcal{M}_{3}|A_{1}|^{2}+\bar{\mathcal{N}}_{4}\mathcal{N}_{3}|A_{5}|^{2}}\leqslant\sqrt{2}\widehat{\sigma}_{5}, \tag{16}\]
where \(\widehat{\sigma}_{1}(\widehat{\sigma}_{5})\) is the maximum growth rate of all 3-wave systems of parent wave-1(5). If both the parent waves have the same amplitude (\(A_{1}=A_{5}\)), frequency, and wavevector norm, then \(\widehat{\sigma}_{1}=\widehat{\sigma}_{5}\). In such cases, (16) implies that any 5-wave system's growth rate could, in principle, be higher (at most by a factor of \(\sqrt{2}\)) than the maximum growth rate of all 3-wave systems. For all the parent wave combinations considered in this paper, \(A_{1}=A_{5}\) is consistently taken for the analysis.
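For completeness, a minimal numerical sketch (ours) of the growth rate computation: the largest real part among the eigenvalues of the matrix in (13) is a root of (14), and in the inviscid limit it reduces to (15). The coefficients \(\mathcal{M}_{2},\mathcal{M}_{3},\mathcal{N}_{3},\mathcal{N}_{4}\) are assumed precomputed from the expressions in appendix A.

```python
# Growth rate of the pump-wave-approximated system (13): assemble the 3x3
# matrix and take the largest real part of its eigenvalues (a root of (14)).
import numpy as np

def growth_rate(M2, M3, N3, N4, A1, A5, V2=0.0, V3=0.0, V4=0.0):
    H = np.array(
        [[-V2, 0.0, np.conj(M2) * np.conj(A1)],
         [0.0, -V4, np.conj(N4) * np.conj(A5)],
         [M3 * A1, N3 * A5, -V3]],
        dtype=complex,
    )
    return np.linalg.eigvals(H).real.max()

# Sanity check against the inviscid closed form (15): with V2 = V3 = V4 = 0,
# growth_rate() matches sqrt(conj(M2)*M3*|A1|^2 + conj(N4)*N3*|A5|^2).
```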
### 5-wave system identification

For a resonant 5-wave system, all three daughter waves should satisfy the dispersion relation. This leads to 3 constraints, which are given below:
\[\omega_{3}^{2}=\frac{N^{2}(k_{3}^{2}+l_{3}^{2})+f^{2}m_{3}^{2}}{k_{3}^{2}+l_{3}^{2}+m_{3}^{2}}, \tag{17a}\]
\[\omega_{2}^{2}=\frac{N^{2}(k_{2}^{2}+l_{2}^{2})+f^{2}m_{2}^{2}}{k_{2}^{2}+l_{2}^{2}+m_{2}^{2}}, \tag{17b}\]
\[\omega_{4}^{2}=\frac{N^{2}(k_{4}^{2}+l_{4}^{2})+f^{2}m_{4}^{2}}{k_{4}^{2}+l_{4}^{2}+m_{4}^{2}}. \tag{17c}\]
The following triad conditions also add additional constraints:
\[\omega_{2}=\omega_{1}-\omega_{3},\qquad\mathbf{k}_{2}=\mathbf{k}_{1}-\mathbf{k}_{3}, \tag{18a}\]
\[\omega_{4}=\omega_{5}-\omega_{3},\qquad\mathbf{k}_{4}=\mathbf{k}_{5}-\mathbf{k}_{3}, \tag{18b}\]
where \(\mathbf{k}_{j}=(k_{j},l_{j},m_{j})\) is the wavevector of the \(j\)-th wave. Substitution of (18a) in (17b), and of (18b) in (17c), leads to
\[\omega_{3}^{2}=\frac{N^{2}(k_{3}^{2}+l_{3}^{2})+f^{2}m_{3}^{2}}{k_{3}^{2}+l_{3}^{2}+m_{3}^{2}}, \tag{19a}\]
\[(\omega_{1}-\omega_{3})^{2}=\frac{N^{2}((k_{1}-k_{3})^{2}+(l_{1}-l_{3})^{2})+f^{2}(m_{1}-m_{3})^{2}}{(k_{1}-k_{3})^{2}+(l_{1}-l_{3})^{2}+(m_{1}-m_{3})^{2}}, \tag{19b}\]
\[(\omega_{5}-\omega_{3})^{2}=\frac{N^{2}((k_{5}-k_{3})^{2}+(l_{5}-l_{3})^{2})+f^{2}(m_{5}-m_{3})^{2}}{(k_{5}-k_{3})^{2}+(l_{5}-l_{3})^{2}+(m_{5}-m_{3})^{2}}. \tag{19c}\]
Solutions of (19a)-(19c) provide the resonant 5-wave systems, and they are found by varying \((\omega_{3},k_{3},l_{3},m_{3})\). Hereafter, we always assume \(|\mathbf{k}_{1}|=|\mathbf{k}_{5}|\) (however, \(\mathbf{k}_{1}\neq\mathbf{k}_{5}\)) and \(\omega_{1}=\omega_{5}=0.1N\). Such small frequency values appear in many other studies, for example, Nikurashin & Legg (2011); Mathur _et al._ (2014).
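One way to implement this search numerically, sketched below under our own (hypothetical) discretisation choices, is to scan candidate \(\mathbf{k}_{3}\) values, compute \(\omega_{3}\) from (19a), and keep candidates for which the residuals of (19b) and (19c) are small; `omega()` is the dispersion-relation helper from the earlier sketch.

```python
# Brute-force scan for resonant 5-wave systems: the grid resolution and
# tolerance below are illustrative, not the values used in the paper.
import itertools
import numpy as np

def residual(k3vec, k1vec, k5vec, N, f):
    w1, w5 = omega(*np.asarray(k1vec), N, f), omega(*np.asarray(k5vec), N, f)
    w3 = omega(*k3vec, N, f)                    # enforces (19a)
    k2 = np.asarray(k1vec) - k3vec              # from (18a)
    k4 = np.asarray(k5vec) - k3vec              # from (18b)
    r2 = (w1 - w3) - omega(*k2, N, f)           # residual of (19b)
    r4 = (w5 - w3) - omega(*k4, N, f)           # residual of (19c)
    return abs(r2) + abs(r4)

def find_5_wave_systems(k1vec, k5vec, N, f, grid, tol=1e-4):
    hits = []
    for k3 in itertools.product(grid, grid, grid):
        k3vec = np.asarray(k3, dtype=float)
        # skip degenerate candidates that would zero out k2 or k4
        if any(np.allclose(k3vec, v) for v in (0.0, k1vec, k5vec)):
            continue
        if residual(k3vec, k1vec, k5vec, N, f) < tol:
            hits.append(k3)
    return hits
```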
## 3 Results from the reduced-order model

### Parent waves in the same vertical plane

#### 3.1.1 \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(k_{1},0,-m_{1})\)

We first consider the scenario where the two parent waves have the same horizontal wavevector \((k,l)\) but travel in vertically opposite directions, see figure 1(a). For simplicity, the meridional wavenumbers of the parent waves (\(l_{1}\) and \(l_{5}\)) are assumed to be 0. Internal waves propagating in vertically opposite directions are ubiquitous in the oceans. For example, internal wave beams getting reflected from the bottom surface of the ocean, from the air-water interface, or even from the pycnocline will result in scenarios where parent waves travelling in vertically opposite directions meet. For the given set of parent waves, a resonant 5-wave system is possible only when \(\omega_{3}\approxeq\omega_{1}-f\). No other resonant 5-wave systems were found for \(0<f/\omega_{1}<0.5\). Hence, the 5-wave system always consists of (a) two parent waves, each with frequency \(\omega_{1}\) (as per our assumption), (b) a common daughter wave with frequency \(\omega_{1}-f\), and (c) two inertial (frequency \(f\)) daughter waves, which also propagate in vertically opposite directions.

Next, we study the growth rates of the 5-wave system. First, we decide on the viscosity values in a non-dimensionalised form. In this regard, we choose \(|A_{1}|/k_{1}\nu=10^{4}\) and \(|A_{1}|/k_{1}\nu=10^{7}\), which are used throughout the paper. At \(|A_{1}|/k_{1}\nu=10^{7}\), viscous effects are usually negligible; hence \(|A_{1}|/k_{1}\nu=10^{4}\) is also considered to see which 5-wave systems are affected by viscosity. We note in passing that \(|A_{1}|/k_{1}\nu\sim\mathcal{O}(10^{6})\) was used by Bourget _et al._ (2013) to study triads with realistic oceanic parameters. Figure 2(a) shows how the maximum growth rate of the 5-wave system and of the 3-wave systems varies with \(f/\omega_{1}\) for different \(\nu\). The growth rates are evaluated by fixing \(k_{1}\) and \(A_{1}\) as \(f\) is varied. Figures 2(b)-2(c) respectively show the horizontal and the vertical wavenumbers of the daughter waves involved in the 5-wave interaction. Note that the common daughter wave's horizontal wavevector \((k_{1},0)\) is always the same as that of the parent waves. This is expected since the other two daughter waves are inertial waves. Moreover, the common daughter wave can have a positive or negative vertical wavenumber, and both cases have the same growth rate.

For low \(f\) values, the 3-wave system has a higher growth rate, hence it is the dominant instability. This is because the two 3-wave systems that combine to form the 5-wave system always contain inertial daughter waves. Moreover, the growth rate of 3-wave systems containing inertial waves is much smaller than the maximum possible growth rate (Richet _et al._, 2018, figure 8). As a result, the resonant 5-wave system is of little significance at low latitudes. As \(f\) is increased, the 5-wave system's growth rate becomes higher than the maximum growth rate of all 3-wave systems. The transition occurs near \(f/\omega_{1}\approx 0.3\), see figure 2(a). For high values of \(f/\omega_{1}\), 5-wave systems may be faster in locations where an internal wave beam gets reflected from a flat bottom surface or from a nearly flat air-water surface. However, for inclined reflecting surfaces, the results presented here (which are based on the assumption that the two parent waves have the same wavevector norm) may not be valid since inclination results in a significant change in wavevector norm (Phillips, 1977). Finally, we note that the parent wave combination considered in this subsection produces a field that resembles an internal wave mode in a vertically bounded domain. Hence, the predictions made in this section should also hold for modes in a bounded domain. However, in a vertically bounded domain, only a discrete set of vertical wavenumbers is allowed for a particular frequency. As a result, for a resonant 5-wave system to exist, the vertical wavenumbers of the daughter waves should be a part of the discrete vertical wavenumber spectrum.

#### 3.1.2 \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\)

Here we focus on the scenario where the two parent waves have the same vertical wavenumber but travel in horizontally opposite directions, as given in figure 1(b). Moreover, \(l_{1}=l_{5}=0\) is again assumed. For this particular combination of parent wavevectors, resonant 5-wave systems are possible for \(\omega_{3}\in(f,0.53\omega_{1})\). For \(f/\omega_{1}=0.01\), resonant 5-wave systems exist up to \(\omega_{3}\approx 0.53\omega_{1}\). As \(f\) increases, the maximum possible value of \(\omega_{3}\) slowly reduces to \(0.5\omega_{1}\).
We define two branches: 5-wave systems where the common daughter wave has a positive (negative) vertical wavenumber are defined as Branch-1 (Branch-2). Figures 3(a)-3(b) show how the maximum growth rate for each of these two branches varies with \(f\) for two different viscosity values. The maximum growth rate of 3-wave systems is once again plotted so as to provide a clear comparison between 5-wave and 3-wave systems. For lower \(f\) values, resonant 5-wave systems have a lower maximum growth rate than the maximum growth rate of 3-wave systems (\(\sigma/\sigma_{\rm ref}<1\)). However, the 5-wave instability is faster than the 3-wave instability for higher \(f\) values. The transition once again occurs near \(f\approx 0.3\omega_{1}\). All these observations are similar to those in figure 2(a). For high \(f\) values, the maximum growth rate of both branches is almost the same. Viscosity has a non-negligible effect only when \(f\approx\omega_{1}/2\), where the daughter waves have a high vertical wavenumber.

Figures 4(a)-4(c) show how the growth rate of both branches varies with \(\omega_{3}/\omega_{1}\) for three different \(f/\omega_{1}\) values that are greater than 0.3. Figure 4 reveals that growth rates always decrease as \(\omega_{3}\) is increased, indicating the maximum growth rate is at \(\omega_{3}=f\). For \(f/\omega_{1}>0.3\), the common daughter wave is always an inertial wave in the most unstable 5-wave system. Interestingly, as \(\omega_{3}\) is increased from \(f\), the meridional wavenumber of the common daughter wave increases, hence making the instability 3D. Moreover, for the three \(f\) values analysed in figure 4, the zonal wavenumber of the common daughter wave (\(k_{3}\)) is nearly zero for all the 5-wave systems. Note that the maximum growth rate occurs at \(\omega_{3}\approx f\), where \((k_{3},l_{3})\to 0\). As a result, the system's most unstable mode can be studied/simulated by considering a 2D system. The effects of viscosity are more apparent for \(|A_{1}|/k_{1}\nu=10^{4}\), as expected, and Branch-1 is affected by viscous effects more than Branch-2.

For high \(f\) values, inertial waves have been observed to be the daughter waves of a parent internal wave with semidiurnal frequency, see Yi _et al._ (2017); Richet _et al._ (2017, 2018); Chen _et al._ (2019). In the topographic generation of internal waves, internal wave beams intersecting each other is quite common. The locations where internal wave beams intersect can serve as spots where a single inertial wave can extract energy from two different internal wave beams.

Figure 2: (a) Variation of the 5-wave system's (denoted by 5WS) growth rate and the 3-wave systems' (denoted by 3WS) maximum growth rate with \(f\) for \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(k_{1},0,-m_{1})\) for two different viscosity values. \(\sigma_{\rm ref}\) is the maximum growth rate of a 3-wave system at \(f/\omega_{1}=0.01\) and \(|A_{1}|/k_{1}\nu=10^{4}\). The horizontal and vertical wavenumbers of the daughter waves in the 5-wave system are respectively shown in (b) and (c). Note that in (b), \(k_{2}=k_{4}=0\).

### Oblique parent waves

In the oceans, parent waves that are not on the same vertical plane can also propagate amidst each other. Here we study the maximum growth rate for 5-wave systems where the parent waves have a non-zero meridional wavenumber.
The parent wavevectors are given by
\[\mathbf{k}_{1}=(k_{1}\sin{(\theta/2)},k_{1}\cos{(\theta/2)},m_{1}),\qquad\mathbf{k}_{5}=(-k_{1}\sin{(\theta/2)},k_{1}\cos{(\theta/2)},m_{1}), \tag{3.1}\]
where the parameter \(\theta\) is used to vary the angle between the two parent wavevectors in the \((k,l)\) plane. Note that \(\theta=\pi\) leads to the wavevector combination \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\) considered in §3.1.2. Following (3.1), the condition \(|\mathbf{k}_{1}|=|\mathbf{k}_{5}|\) is automatically satisfied. The direction of the parent wavevectors can be changed by varying \(\theta\), and how that impacts the growth rates of 5-wave systems will be explored and analysed.

Figures 5(a)-5(c) show the variation of the maximum growth rate of 3-wave systems and 5-wave systems with \(f\) respectively for three different \(\theta\) values: \(\pi/4\), \(\pi/2\), and \(3\pi/4\). Increasing \(\theta\) results in 5-wave systems being less effective than 3-wave systems at the lower latitudes. For \(\theta=\pi/4\), the 5-wave system is the dominant instability regardless of the latitude. A similar result is observed for \(\theta=\pi/2\); however, the difference between the 5-wave and 3-wave systems is clearly reduced compared to \(\theta=\pi/4\). For \(\theta=3\pi/4\), the 5-wave system is the dominant instability only for \(f/\omega_{1}\gtrapprox 0.25\). Note that regardless of the \(\theta\) value, the 5-wave instability is expected to be faster than the 3-wave instability when \(f/\omega_{1}\gtrapprox 0.3\), considering the results from §3.1.2. For \(\theta=\pi\) and \(f/\omega_{1}\gtrapprox 0.3\), the maximum growth rate for 5-wave systems occurs when \(\omega_{3}=f\). However, for \(\theta=\pi/4\) and \(\pi/2\), in the most unstable 5-wave system, wave-3 is not an inertial wave even for \(f/\omega_{1}=0.45\). Hence, as \(\theta\) is reduced, it is not necessary that the most unstable 5-wave system contains inertial waves. Note that the predictions for the 5-wave system will fail as \(\theta\to 0\) since both parent waves will have the same wavevector.

Figure 5: Variation of maximum growth rate with \(f\) for 5-wave systems and 3-wave systems for an oblique set of parent waves. (a) \(\theta=\pi/4\), (b) \(\theta=\pi/2\), and (c) \(\theta=3\pi/4\).

## 4 Numerical simulations

Here we present results from numerical simulations conducted to validate the predictions from the reduced-order analysis presented in §3, with the primary focus being on §3.1.1 and §3.1.2. Equations (2.1a)-(2.1c) are solved with the open-source pseudo-spectral code Dedalus (Burns _et al._, 2020). For numerical validations, we only consider 2D situations, i.e. \(\partial/\partial y=0\), implying \(l_{j}=0\). The details of the simulations are as follows: we fix the parent waves' horizontal wavenumber at \(k_{1}=1/H\), where \(H=500\)m. We consistently use \(N=10^{-3}\)s\({}^{-1}\) and \(\omega_{1}/N=0.1\). However, \(f/\omega_{1}\) is varied, and hence the vertical wavenumber of the parent waves (\(m_{1}\)) is a function of \(f/\omega_{1}\). The amplitude of the parent waves is chosen such that the maximum zonal velocity (\(u\)) is always \(0.001\)ms\({}^{-1}\). Computational time is variable and depends on the simulation in question. For all simulations, the 4th-order Runge-Kutta method is used as the time-stepping scheme with a time step size of \((2\pi/\omega_{1})/800\) (i.e. \(800\) steps for one time period of the parent wave).
All the fields are expressed using Fourier modes in the horizontal direction, and either \(64\) or \(128\) modes are used per one horizontal wavelength of the parent wave. Moreover, the vertical direction is resolved using Chebyshev polynomials or Fourier modes, and the resolution is varied from a minimum of \(96\) to a maximum of \(512\) grid points per one vertical wavelength of the parent wave. All simulations are initialised with a small-amplitude noise, the spectrum of which is given by
\[\mathcal{R}_{\rm noise}(x,z)=\int_{0}^{k_{\rm noise}}\int_{m_{\rm lowest}}^{m_{\rm noise}}A_{\rm noise}\sin(kx+mz+\phi_{\rm noise}(k,m))\,dm\,dk, \tag{10}\]
where \(\phi_{\rm noise}(k,m)\in[0,2\pi]\) is the random phase part, which is generated using the 'rand' function in Matlab for each \((k,m)\). Unless otherwise specified, \(k_{\rm noise}=48k_{1}\) and \(m_{\rm noise}=48m_{1}\). Moreover, \(m_{\rm lowest}=2\pi/L_{z}\), where \(L_{z}\) is the length of the domain in the \(z\)-direction. Equation (10) is added to the \(b\) or \(v\) field. The noise amplitude \(A_{\rm noise}\) is at least \(10^{-3}\) times smaller than the primary waves' corresponding amplitude. Unless otherwise mentioned, \(\nu=10^{-6}\)m\({}^{2}\)s\({}^{-1}\) is taken.
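A minimal reimplementation sketch (ours; the paper used Matlab's 'rand') of this random-phase noise field, with the integrals in (10) discretised on a coarse grid of our own choosing:

```python
# Discretised version of the noise spectrum (10): superpose plane waves with
# uniformly random phases; n_modes (the quadrature resolution) is our choice.
import numpy as np

def noise_field(x, z, k_noise, m_noise, m_lowest, A_noise, n_modes=64, seed=0):
    rng = np.random.default_rng(seed)
    X, Z = np.meshgrid(x, z, indexing="ij")
    field = np.zeros_like(X, dtype=float)
    for k in np.linspace(0.0, k_noise, n_modes + 1)[1:]:
        for m in np.linspace(m_lowest, m_noise, n_modes):
            phi = rng.uniform(0.0, 2.0 * np.pi)  # random phase phi_noise(k, m)
            field += A_noise * np.sin(k * X + m * Z + phi)
    return field
```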
### \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(k_{1},0,-m_{1})\)

We first focus on the parent wavevector combination \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(k_{1},0,-m_{1})\). As mentioned previously, this combination of wavevectors leads to fields that are very similar to an internal wave mode in a vertically bounded domain. As a result, we also simulate low modes (modes-1 and 2) in a vertically bounded domain to observe whether there is an emergence of the '5-wave instability'. The decay of the parent waves is simulated at specific latitudes where the daughter waves' vertical wavenumbers in the resonant 5-wave system are multiples of \(m_{1}/3\) or \(m_{1}/2\). This choice helps in reducing the computational resources required for the simulations. To estimate the energy in different wavevectors, we simply use the Fast Fourier Transform (FFT) in both the \(x\) and \(z\) directions in simulations where the parent waves are plane waves. In a vertically bounded domain, the FFT is used only in the \(x\) direction, while for the \(z\)-direction, the orthogonal nature of the modes is exploited. As a measure of the energy contained in a wavevector, a non-dimensionalised energy \(\widehat{E}\) is introduced:
\[\widehat{E}(k,0,m,t)=\frac{|\hat{u}(k,0,m,t)|^{2}+|\hat{w}(k,0,m,t)|^{2}+|\hat{v}(k,0,m,t)|^{2}+|\hat{b}(k,0,m,t)|^{2}/N^{2}}{E_{\rm ref}}, \tag{11}\]
where the hat variables \((\hat{u},\hat{w},\hat{v},\hat{b})\) respectively denote the Fourier amplitudes of \((u,w,v,b)\). \(E_{\rm ref}\) serves as the measure of the parent waves' energy at \(t=0\) and is defined as
\[E_{\rm ref}=\left(|\hat{u}(k_{1},0,m_{1})|^{2}+|\hat{w}(k_{1},0,m_{1})|^{2}+|\hat{v}(k_{1},0,m_{1})|^{2}+|\hat{b}(k_{1},0,m_{1})|^{2}/N^{2}\right)\bigg{|}_{t=0}. \tag{12}\]
We simulate a total of 6 cases: 2 cases for parent waves in an unbounded domain (plane waves), and 2 cases each for mode-1 and mode-2 waves in a vertically bounded domain. For mode-1, \(m_{\rm noise}=96m_{1}\) is chosen. For every simulation, a different \(f\) value is used, and hence the resonant 5-wave system is different in each case. Figure 6 shows the exponential growth of daughter waves at 6 different latitudes due to 5-wave interactions. Figures 6(a)-6(e) plot four different wavevectors. The wavevector \((|k_{1}|,0,|m_{1}|)\) contains the energy of both parent waves, while the other three wavevectors indicate the daughter waves. All three daughter waves grow exponentially, which provides clear evidence that this is a 5-wave system. In figure 6(f), two different 5-wave systems emerge, and both of them are plotted. Note that as the \(f\) value increases, the daughter waves' vertical wavenumber also increases in the simulations (see the legend), which is in line with the theoretical predictions given in figure 2(c). In all six simulations, inertial waves are present (\(k=0\)). The growth rate of the daughter waves is calculated by estimating \(d\ln{(\widehat{E})}/dt\).
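A minimal sketch (ours) of that growth-rate estimate: fit a straight line to \(\ln\widehat{E}\) over a window of clean exponential growth. Since \(\widehat{E}\) is quadratic in the wave amplitudes, the slope is twice the amplitude growth rate; the choice of fitting window is an assumption left to the user.

```python
# Estimate d ln(E_hat)/dt by a least-squares line fit on a chosen time window.
import numpy as np

def energy_growth_rate(t, E_hat, t_start, t_end):
    mask = (t >= t_start) & (t <= t_end)
    slope, _intercept = np.polyfit(t[mask], np.log(E_hat[mask]), 1)
    return slope  # amplitude growth rate sigma is slope / 2
```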
The comparison of growth rates from simulations and theory is presented in figure 7, which shows a reasonably good agreement. For all the cases, the average of the three daughter waves' growth rates in a particular 5-wave system is taken. Moreover, figure 7 reveals that the growth rates are well above the maximum growth rate of all 3-wave systems. Note that the 5-wave interactions can happen for standing modes only at specific latitudes because the vertical wavenumbers are discrete, not continuous. However, for plane waves, there is no such constraint. As per the predictions in §3, 5-wave interactions should be faster than the 3-wave interactions provided \(f/\omega_{1}\gtrapprox 0.3\).

It was observed that as \(f/\omega_{1}\to 0.5\), multiple daughter wave combinations grow and extract a considerable amount of energy from the parent waves. This can even be seen in figure 6(f), where two different 5-wave systems emerge and extract a significant amount of energy. As \(f/\omega_{1}\to 0.5\), multiple 5-wave systems can become coupled and grow at a rate which is faster than that of any single 5-wave system (discussed in detail in §4.2). Hence, the growth rates predicted from a single 5-wave interaction will not be accurate when \(f\approx\omega_{1}/2\). As \(f\rightarrow\omega_{1}/2\), the growth rate for a mode-1 wave with zonal velocity amplitude \(0.002\)ms\({}^{-1}\) will approach \(2\sigma_{\rm cl}\) instead of \(\sqrt{2}\sigma_{\rm cl}\), where \(\sigma_{\rm cl}\) is the maximum growth rate for a plane wave with zonal velocity amplitude \(0.001\)ms\({}^{-1}\) at the critical latitude (Young _et al._, 2008). We realise that in Young _et al._ (2008), the mode-1 wave was considered in the presence of a non-constant \(N\). However, their prediction is still expected to hold in the present scenario (constant \(N\)). Our numerical simulations (results not shown here) also show the growth rates of the daughter waves being well above \(\sqrt{2}\sigma_{\rm cl}\) for \(f\approx\omega_{1}/2\).

Figure 6: 5-wave interactions for plane waves, mode-1, and mode-2 at different \(f\) values (i.e. latitudes), plotted in ascending order. \(\widehat{t}\equiv t\omega_{1}/2\pi\).

Figure 7: Comparison between theoretical growth rates and growth rates obtained from the simulations for \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(k_{1},0,-m_{1})\). Red (blue) markers indicate results from the simulations (theory), see legend. The black curve plots the variation of the maximum growth rate of 3-wave systems with \(f\).

### \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\)

We now validate 5-wave interactions for parent waves propagating in horizontally opposite directions. In this regard, we focus on latitudes where the daughter waves' vertical wavenumbers are multiples of \(m_{1}/2\). Figure 8 shows the growth of daughter waves for four different \(f/\omega_{1}\) values. Figures 8(a)-8(b) show energy in three wavevectors growing exponentially. The three wavevectors encompass both Branch-1 and Branch-2 daughter waves' wavevectors, and the simulation results are in line with the theoretical predictions. The green curve (the wave with non-zero horizontal wavenumber) contains the energy of both leftward- and rightward-propagating waves. The growth rates estimated from the simulations are much higher than what is expected for a 3-wave interaction. For example, at \(f=0.298\omega_{1}\), the growth rate of the daughter waves is \(\approx 30\%\) more than the growth rate of the individual 3-wave interactions that combine to form the 5-wave interaction. Figure 8(c) shows only two daughter waves, which are part of the Branch-2 5-wave system. In this case, Branch-1 did not have a growth comparable to Branch-2. Finally, figure 8(d) has three distinct 5-wave systems:

* System-1 (daughter waves): \((k_{1},0,4m_{1})\), \((-k_{1},0,4m_{1})\), \((0,0,-3m_{1})\),
* System-2 (daughter waves): \((k_{1},0,4.5m_{1})\), \((-k_{1},0,4.5m_{1})\), \((0,0,-3.5m_{1})\),
* System-3 (daughter waves): \((k_{1},0,-4.5m_{1})\), \((-k_{1},0,-4.5m_{1})\), \((0,0,5.5m_{1})\).

System-1 is also present for \(f/\omega_{1}=0.476\). This 5-wave system is present at both \(f/\omega_{1}=0.476\) and \(0.48\) because the change in \(f\) is not that significant, and hence the specific interaction is not expected to be detuned significantly. As a result, the system has an exponential growth. Even though the growth rates of System-2 and System-3 are observed to be higher than the growth rate of System-1, System-1 drains the largest amount of energy from the parent waves because the daughter waves in this system have a slightly higher energy at \(t=0\).

Growth rates obtained from the reduced-order model are once again compared with the growth rates obtained from the numerical simulations, see figure 9. When there are multiple branches growing, the average growth rate of the (two) branches is taken since both branches have nearly the same growth rate. For \(f/\omega_{1}=0.48\) in figure 8(d), the average of System-2 and System-3's growth rates is compared with the theoretical growth rate since these are the two resonant Branch-1 and Branch-2 systems at \(f/\omega_{1}=0.48\). It can be seen that the theoretical predictions match reasonably well with the simulations. Moreover, similar to §4.1, the growth rates of 5-wave systems are well above the maximum growth rate of 3-wave systems (shown by the black curve in figure 9) for \(f/\omega_{1}>0.4\).

### Simulations and analysis for \(f\approx\omega_{1}/2\)

In §4.1, we saw that the theoretical growth rates of 5-wave systems are not accurate for \(f\approx\omega_{1}/2\). To test whether the 5-wave systems' growth rate holds near the critical latitude for \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\), we ran simulations for three different \(f/\omega_{1}\) values: \(f/\omega_{1}=0.496\), \(0.498\), and \(0.499\).
Moreover, for each \(f\), we ran three simulations: one with \(\nu=10^{-6}\)m\({}^{2}\)s\({}^{-1}\), one with \(\nu=0.25\times 10^{-6}\)m\({}^{2}\)s\({}^{-1}\), and finally one simulation with hyperviscous terms instead of viscous terms (i.e. by setting \(\nu=0\)). The hyperviscous operator \(-\nu_{H}\Delta^{4}()\) is added to the right-hand side of (2.1a)-(2.1b) with \(\nu_{H}=0.25\times 10^{-6}\)m\({}^{8}\)s\({}^{-1}\). Hyperviscous terms are intended to make the simulation nearly inviscid, and they have been used previously to study PSI (Hazewinkel & Winters, 2011). All simulations are run for 150 time periods of the parent wave. The simulations are stopped before the small-scale daughter waves attain energy comparable to the parent waves. The small-scale waves will break in such cases, and the ensuing turbulence is not resolved and is also not the focus of this study. We are only interested in the growth rate of the daughter waves.

Figure 8: Four different 5-wave interactions for parent waves with wavevectors \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\).

Figure 9: Comparison between theoretical growth rates and growth rates obtained from the simulations for \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\). Red markers indicate results from the simulations. Blue and green markers are predictions from the reduced-order model. The black curve plots the variation of the maximum growth rate of all 3-wave systems with \(f\).

Figure 10 shows the non-dimensionalised growth rates (\(\sigma/\sigma_{\rm cl}\)) of the daughter waves for all nine cases. In figure 10, each row is for a different \(f\) value. Moreover, for each column, \(\nu\) or \(\nu_{H}\) is held constant. For the hyperviscous simulations and the simulations with the lower viscosity, it can be seen that the non-dimensionalised growth rates are well above \(\sqrt{2}\) for all three \(f\) values (second and third columns of figure 10). Daughter waves with \(m=20\)-\(40m_{1}\) have \(\sigma/\sigma_{\rm cl}\approx 1.85\) in the simulations with hyperviscous terms. For each \(f\), simulations with \(\nu=10^{-6}\)m\({}^{2}\)s\({}^{-1}\) have considerably lower growth rates (especially for the higher wavenumbers) compared to the other simulations because of the viscous effects.

We provide the reason for \(\sigma/\sigma_{\rm cl}\) being well above \(\sqrt{2}\) using the reduced-order model. The dispersion relation for the daughter waves can be rewritten as
\[(f+\delta\omega)^{2}=\frac{N^{2}(nk_{1})^{2}+f^{2}m^{2}}{(nk_{1})^{2}+m^{2}}, \tag{4.4}\]
where \(\delta\omega\) is the difference between the wave's frequency (\(f+\delta\omega\)) and the inertial frequency (\(f\)), and \(n\) is some constant (but for our purposes, will primarily be an integer). Note that \(k_{1}\) is the zonal wavenumber of the parent waves, but \(m\) is _not_ the vertical wavenumber of the parent waves. Near the critical latitude, in a wave-wave interaction, any daughter wave's frequency would be approximately equal to the inertial frequency, implying \(\delta\omega\ll f\). Hence (4.4) leads to
\[\frac{\delta\omega}{f}\approx\frac{(N^{2}-f^{2})(nk_{1})^{2}}{2f^{2}[(nk_{1})^{2}+m^{2}]}\ll 1. \tag{4.5}\]
In scenarios where \(N^{2}\gg f^{2}\), this yields
\[m^{2}\gg\frac{N^{2}(nk_{1})^{2}}{2f^{2}}. \tag{4.6}\]
Near the critical latitude, \(2f\approx\omega_{1}\). As a result, the sum of two daughter waves' frequencies would be \(\approx\omega_{1}\) provided their wavenumbers satisfy (4.6).
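As a quick numeric aid (ours, not from the paper), the fractional frequency offset in (4.5) can be evaluated directly to check which \((n,m)\) pairs keep a daughter wave effectively inertial:

```python
# delta_omega / f from (4.5); values << 1 indicate (4.6) is satisfied, i.e.
# the daughter wave with wavenumbers (n*k1, m) is effectively inertial.
def delta_omega_over_f(n, m, k1, N, f):
    return (N**2 - f**2) * (n * k1) ** 2 / (2.0 * f**2 * ((n * k1) ** 2 + m**2))
```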
As a consequence of this special scenario, a chain of coupled triads is possible as shown in figure 11. Every box contains the wavevector of a daughter wave. The absolute value of the horizontal wavenumber is lowest at the center of the chain, and it increases in either direction. However, the vertical wavenumber takes only two values. Note that \(n\) would be an integer considering how the absolute value of the horizontal wavenumber increases in either direction of the central box \((0,0,m)\). Any two boxes that are connected by the same blue line add up to give a parent wave's wavevector. For example, \((2k_{1},0,m)+(-k_{1},0,m_{1}-m)\) gives \((k_{1},0,m_{1})\), which is the wavevector of one of the parent waves. Moreover, \((-k_{1},0,m_{1}-m)+(0,0,m)\) gives \((-k_{1},0,m_{1})\), which is the other parent wave's wavevector. Except for the daughter waves at the ends of the chain, every daughter wave would be forced by both parent waves. Assuming the wavenumbers of the daughter waves in the chain satisfy (4.6), the sum of any two waves' frequencies would be \(\approx\omega_{1}\), thus satisfying all the required triad conditions. For a fixed \(m\), \(\delta\omega\) would increase as \(n\) is increased, which is evident from (4.4). Hence for very large \(n\), the daughter wave's frequency (\(f+\delta\omega\)) cannot be approximated by \(f\) and the sum of two daughter waves' frequencies cannot be approximated by \(\omega_{1}\) simply because \(\delta\omega\) would be large. As a result, the triad conditions would not be satisfied for very large \(n\). Assuming \(\delta\omega\) is negligible up to some \(n\), the wave amplitude equations for the \(2n+1\) daughter waves shown in figure 11 can be written in a compact way as follows: \[\frac{d\mathbf{a}}{dt} =\mathcal{H}\bar{\mathbf{a}} \tag{4.7}\] \[\mathbf{a} =[a_{-n}\;\;a_{1-n}\;\;\ldots\;\;a_{n-1}\;\;a_{n}]^{T}\] (4.8) \[\mathcal{H} =\begin{bmatrix}-\mathcal{V}_{-n}&\mathcal{M}_{(-n,1-n)}A_{1}&0& 0&0\\ \mathcal{M}_{(1-n,-n)}A_{1}&-\mathcal{V}_{1-n}&\mathcal{N}_{(1-n,2-n)}A_{5}&0&0 \\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&0&\mathcal{M}_{(n-1,n-2)}A_{1}&-\mathcal{V}_{n-1}&\mathcal{N}_{(n-1,n)}A_{ 5}\\ 0&0&0&\mathcal{N}_{(n,n-1)}A_{5}&-\mathcal{V}_{n}\end{bmatrix} \tag{4.9}\] where the coefficients \(\mathcal{N}_{(i,j)}\) and \(\mathcal{M}_{(i,j)}\) are given by \[\mathcal{N}_{(i,j)}=\frac{\mathfrak{N}_{(i,5,j)}}{\mathcal{D}_{i}},\;\;\;\; \;\mathcal{M}_{(i,j)}=\frac{\mathfrak{N}_{(i,1,j)}}{\mathcal{D}_{i}}. \tag{4.10}\] The expression for \(\mathfrak{N}_{(i,*,j)}\) is given in Appendix A. Equation (4.7) is an extension of the system given in (2.13) to an arbitrary number of daughter waves. Note that using \(n=1\) in (4.7) would result in equation (2.13). The growth rate for the system given in (4.7) can be found by calculating the eigenvalues of \(\mathcal{H}\). In addition to the \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\) case, we also analyze the theoretical growth rates for oblique parent waves near the critical latitude using (4.7). To this end, we consider four \(\theta\) values: \(\theta=\pi/4\), \(\pi/2\), \(3\pi/4\), and \(\pi\) (see (3.1) for the definition of \(\theta\)). For \(\theta\neq\pi\), the parent waves have a non-zero meridional wavenumber (\(l_{1}\)). In such cases, the meridional wavenumber of all the daughter waves in the chain is simply assumed to be \(l_{1}/2\). 
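A minimal sketch (ours) of the chain model: since \(\mathcal{H}\) in (4.9) is tridiagonal, the growth rate is the largest real part among its eigenvalues. The off-diagonal entries, which are products of the coupling coefficients of appendix A and the parent amplitudes \(A_{1}\), \(A_{5}\) alternating along the chain of figure 11, are assumed precomputed and passed in directly.

```python
# Largest growth rate of the (2n+1)-wave chain (4.7)-(4.9). `upper[i]` and
# `lower[i]` are the precomputed couplings between chain neighbours i and i+1;
# `V` holds the viscous decay rates appearing on the diagonal.
import numpy as np

def chain_growth_rate(upper, lower, V):
    size = len(V)  # 2n + 1 daughter waves
    H = np.diag(-np.asarray(V, dtype=complex))
    for i in range(size - 1):
        H[i, i + 1] = upper[i]
        H[i + 1, i] = lower[i]
    return np.linalg.eigvals(H).real.max()
```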
For all four \(\theta\) values, figure 12 shows the gradual increase of the growth rate as \(n\) increases for two different \(m\) values. The vertical wavenumbers are chosen to be large enough that they satisfy (4.6) up to \(n=7\). For all the \(\theta\) values, \(\sigma/\sigma_{\rm cl}\approx 2\) for the higher \(n\) values, which is what we observed in the simulation results shown in figure 10. Moreover, for \(n=1\), \(\sigma/\sigma_{\rm cl}\approx\sqrt{2}\), which is what we would expect for a 5-wave system with three daughter waves. Interestingly, for an oblique set of parent waves, the results are similar to the 2D case. Hence, 5-wave system growth rates do not apply near the critical latitude for an oblique set of parent waves either. Note that even though high values of \(m\) are used in the reduced-order model, simulations show that the resonance can occur even at \(m=20\)-\(40m_{1}\). As a result, near the critical latitude, regardless of the \(\theta\) value, the two parent waves force daughter waves as if they were a single wave with approximately twice the amplitude.

Figure 10: Growth rate contours (\(\sigma/\sigma_{\rm cl}\)) for the parent waves with wavevectors \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) and \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\) near the critical latitude (\(f/\omega_{1}\approx 0.5\)). \(f/\omega_{1}=0.496\) for Row-1 ((a), (b) and (c)), \(f/\omega_{1}=0.498\) for Row-2 ((d), (e) and (f)), and \(f/\omega_{1}=0.499\) for Row-3 ((g), (h) and (i)). Viscosity/hyperviscosity values used are as follows: \(\nu=10^{-6}\)m\({}^{2}\)s\({}^{-1}\) for Column-1 ((a), (d) and (g)), \(\nu=0.25\times 10^{-6}\)m\({}^{2}\)s\({}^{-1}\) for Column-2 ((b), (e) and (h)), and \(\nu_{H}=0.25\times 10^{-6}\)m\({}^{8}\)s\({}^{-1}\) for Column-3 ((c), (f) and (i)).

Figure 11: A simplified schematic showing how different daughter waves are coupled. Any two wavevectors (boxes) connected by the same blue line can act as a daughter wave combination for the wavevector \(\mathbf{k}_{1}=(k_{1},0,m_{1})\) or \(\mathbf{k}_{5}=(-k_{1},0,m_{1})\).

Figure 12: Variation of maximum growth rate with \(n\) for triad chains near the critical latitude. (a) \(\theta=\pi/4\), (b) \(\theta=\pi/2\), (c) \(\theta=3\pi/4\), and (d) \(\theta=\pi\). Two different \(m\) values are shown for each \(\theta\).

## 5 Conclusions

Wave-wave interactions play a major role in the energy cascade of internal gravity waves. In this paper, we use multiple-scale analysis to study the wave-wave interactions of two plane parent waves co-existing in a region. The main instability mechanism we focus on is the 5-wave system instability, which involves two parent waves and three daughter waves. The 5-wave system is composed of two different triads (3-wave systems) with one daughter wave being a part of both triads (see figure 1(c)). For parent waves with wavevectors \((k_{1},0,m_{1})\) and \((k_{1},0,-m_{1})\), the 5-wave system is only possible when the common daughter wave's frequency is almost equal to \(\omega_{1}-f\) (where \(\omega_{1}\) is the parent wave's frequency). The other two daughter waves are inertial waves that always propagate in vertically opposite directions. The growth rate of the above-mentioned 5-wave system is higher than the maximum growth rate of 3-wave systems for \(f/\omega_{1}\gtrapprox 0.3\).
For parent waves with wavevectors \((k_{1},0,m_{1})\) and \((-k_{1},0,m_{1})\) (parent waves that propagate in horizontally opposite directions), similar to the previous parent wave combination, the maximum growth rate of 5-wave systems is higher than the maximum growth rate of 3-wave systems for \(f/\omega_{1}\gtrapprox 0.3\). For \(f/\omega_{1}\gtrapprox 0.3\), the common daughter wave's frequency is nearly equal to \(f\) in the most unstable 5-wave system. Moreover, as the common daughter wave's frequency is increased from \(f\), the meridional wavenumber increases significantly while the zonal wavenumber of the common daughter wave stays negligible. We also study 5-wave systems for cases where the two parent waves are not confined to the same vertical plane. In such scenarios, the dominance of the 5-wave systems increases as the angle between the horizontal wavevectors of the parent waves (denoted by \(\theta\)) is decreased. Moreover, for any \(\theta\), the 5-wave system's instability is more dominant than the 3-wave system's instability for \(f\gtrapprox 0.3\omega_{1}\).

Numerical simulations are conducted to test the theoretical predictions, and the theoretical growth rate of the 5-wave systems matches reasonably well with the results of the numerical simulations for a wide range of \(f\) values. However, for all the 2D parent wave combinations considered, the growth rates from the simulations do not match the theoretical 5-wave systems' growth rate near the critical latitude, where \(f\approx\omega_{1}/2\). Near the critical latitude, more than two triads become coupled; hence a chain of daughter waves is forced by the two parent waves. By modifying the reduced-order model to account for a chain of daughter waves, the maximum growth rate is shown to be twice the maximum growth rate of all 3-wave systems. Moreover, the reduced-order model shows similar results for parent waves that are not on the same vertical plane. Hence, near the critical latitude, the 5-wave systems' prediction is not expected to hold for oblique parent waves either.
## Appendix A Nonlinear coupling coefficients

The quantities \(\mathfrak{N}_{(j,p,d)}\) and \(\mathfrak{O}_{(j,b,c)}\) are defined so that the nonlinear coefficients can be written in a compact form:
\[\begin{split}\mathfrak{N}_{(j,p,d)}=&-(\omega_{p}-\omega_{d})k_{j}m_{j}\left[U_{p}\bar{U}_{d}k_{j}+U_{p}\bar{V}_{d}l_{p}-V_{p}\bar{U}_{d}l_{d}+U_{p}m_{p}-m_{d}\bar{U}_{d}\right]\\ &-(\omega_{p}-\omega_{d})l_{j}m_{j}\left[V_{p}\bar{V}_{d}l_{j}-U_{p}\bar{V}_{d}k_{d}+V_{p}\bar{U}_{d}k_{p}+V_{p}m_{p}-m_{d}\bar{V}_{d}\right]\\ &+(\omega_{p}-\omega_{d})(l_{j}^{2}+k_{j}^{2})\left[\bar{V}_{d}l_{p}-l_{d}V_{p}+m_{j}+k_{p}\bar{U}_{d}-U_{p}k_{d}\right]\\ &+\mathrm{i}(l_{j}^{2}+k_{j}^{2})\left[\bar{U}_{d}B_{p}k_{p}-U_{p}\bar{B}_{d}k_{d}+\bar{V}_{d}B_{p}l_{p}-V_{p}\bar{B}_{d}l_{d}+B_{p}m_{p}-\bar{B}_{d}m_{d}\right]\\ &+\mathrm{i}fl_{j}m_{j}\left[U_{p}\bar{U}_{d}k_{j}+U_{p}\bar{V}_{d}l_{p}-V_{p}\bar{U}_{d}l_{d}+U_{p}m_{p}-m_{d}\bar{U}_{d}\right]\\ &-\mathrm{i}fk_{j}m_{j}\left[V_{p}\bar{V}_{d}l_{j}-U_{p}\bar{V}_{d}k_{d}+V_{p}\bar{U}_{d}k_{p}+V_{p}m_{p}-m_{d}\bar{V}_{d}\right],\end{split} \tag{A.1}\]
\[\begin{split}\mathfrak{O}_{(j,b,c)}=&-(\omega_{b}+\omega_{c})k_{j}m_{j}\left[U_{b}U_{c}k_{j}+U_{b}V_{c}l_{b}+V_{b}U_{c}l_{c}+U_{b}m_{b}+U_{c}m_{c}\right]\\ &-(\omega_{b}+\omega_{c})l_{j}m_{j}\left[V_{b}V_{c}l_{j}+U_{b}V_{c}k_{c}+V_{b}U_{c}k_{b}+V_{b}m_{b}+m_{c}V_{c}\right]\\ &+(\omega_{b}+\omega_{c})(l_{j}^{2}+k_{j}^{2})\left[V_{c}l_{b}+l_{c}V_{b}+m_{j}+k_{b}U_{c}+U_{b}k_{c}\right]\\ &+\mathrm{i}(l_{j}^{2}+k_{j}^{2})\left[U_{c}B_{b}k_{b}+U_{b}B_{c}k_{c}+V_{c}B_{b}l_{b}+V_{b}B_{c}l_{c}+B_{b}m_{b}+B_{c}m_{c}\right]\\ &+\mathrm{i}fl_{j}m_{j}\left[U_{b}U_{c}k_{j}+U_{b}V_{c}l_{b}+V_{b}U_{c}l_{c}+U_{b}m_{b}+m_{c}U_{c}\right]\\ &-\mathrm{i}fk_{j}m_{j}\left[V_{b}V_{c}l_{j}+U_{b}V_{c}k_{c}+V_{b}U_{c}k_{b}+V_{b}m_{b}+m_{c}V_{c}\right],\end{split} \tag{A.2}\]
where the indices \((j,p,d,b,c)\) are used to denote waves. Using (A.1) and (A.2), the nonlinear terms and coefficients used in the wave amplitude equations can be written as
\[\mathcal{M}_{1}=\frac{\mathfrak{O}_{(1,2,3)}}{\mathcal{D}_{1}},\qquad\mathcal{M}_{2}=\frac{\mathfrak{N}_{(2,1,3)}}{\mathcal{D}_{2}},\qquad\mathcal{M}_{3}=\frac{\mathfrak{N}_{(3,1,2)}}{\mathcal{D}_{3}}, \tag{A.3}\]
\[\mathcal{N}_{5}=\frac{\mathfrak{O}_{(5,3,4)}}{\mathcal{D}_{5}},\qquad\mathcal{N}_{4}=\frac{\mathfrak{N}_{(4,5,3)}}{\mathcal{D}_{4}},\qquad\mathcal{N}_{3}=\frac{\mathfrak{N}_{(3,5,4)}}{\mathcal{D}_{3}}, \tag{A.4}\]
\[\mathrm{NLT}_{1}=\mathcal{M}_{1}\mathcal{D}_{1}a_{2}a_{3},\qquad\mathrm{NLT}_{5}=\mathcal{N}_{5}\mathcal{D}_{5}a_{3}a_{4}, \tag{A.5}\]
\[\mathrm{NLT}_{4}=\mathcal{N}_{4}\mathcal{D}_{4}a_{5}\bar{a}_{3},\qquad\mathrm{NLT}_{3}=\mathcal{N}_{3}\mathcal{D}_{3}a_{5}\bar{a}_{4}+\mathcal{M}_{3}\mathcal{D}_{3}a_{1}\bar{a}_{2},\qquad\mathrm{NLT}_{2}=\mathcal{M}_{2}\mathcal{D}_{2}a_{1}\bar{a}_{3}. \tag{A.6}\]
2306.06553
Hinting Pipeline and Multivariate Regression CNN for Maize Kernel Counting on the Ear
Maize is a highly nutritional cereal widely used for human and animal consumption and also as raw material by the biofuels industries. This highlights the importance of precisely quantifying the corn grain productivity in season, helping the commercialization process, operationalization, and critical decision-making. Considering the manual labor cost of counting maize kernels, we propose in this work a novel preprocessing pipeline named hinting that guides the attention of the model to the center of the corn kernels and enables a deep learning model to deliver better performance, given a picture of one side of the corn ear. Also, we propose a multivariate CNN regressor that outperforms single regression results. Experiments indicated that the proposed approach surpasses the current manual estimates, obtaining MAE of 34.4 and R2 of 0.74 against 35.38 and 0.72 for the manual estimate, respectively.
Felipe Araújo, Igor Gadelha, Rodrigo Tsukahara, Luiz Pita, Filipe Costa, Igor Vaz, Andreza Santos, Guilherme Fôlego
2023-06-11T00:58:38Z
http://arxiv.org/abs/2306.06553v1
# Hinting Pipeline and Multivariate Regression CNN for Maize Kernel Counting on the Ear ###### Abstract Maize is a highly nutritional cereal, widely used for human and animal consumption and also as raw material by the biofuels industries. This highlights the importance of precisely quantifying the corn grain productivity in season, helping the commercialization process, operationalization, and critical decision-making. Considering the manual labor cost of counting maize kernels, we propose in this work a novel preprocessing pipeline named _hinting_ that guides the attention of the model to the center of the corn kernels and enables a deep learning model to deliver better performance, given a picture of one side of the corn ear. Also, we propose a multivariate CNN regressor that outperforms single regression results. Experiments indicated that the proposed approach surpasses the current manual estimates, obtaining \(MAE\) of \(34.4\) and \(R^{2}\) of \(0.74\) against \(35.38\) and \(0.72\) for the manual estimate, respectively. Corn kernel counting, Hinting pipeline, CNN, Multivariate regression model ## I Introduction Maize (_Zea mays_ L.) is considered one of the most important cereals in the world due to its high production potential, chemical composition, and nutritional value. These qualities result in wide use for animal and human food, and drive high-tech industries to produce biofuels [1]. Currently, the process of quantifying maize grain production per unit of area in field conditions is laborious and time-consuming. The development of fast methods that are equivalent in accuracy and precision, based on computer vision algorithms, has already proved effective in some studies [2, 3]. They can significantly improve the sampling process in time and space, also considering the most heterogeneous crops, and the best estimation of the volume of mass and quantity of grains within a given plot. In this sense, in this work, we propose the design and development of a novel preprocessing pipeline named _hinting_, depicted in Figure 1, which guides the attention of the model to the center of the maize kernels and enables a deep learning model to deliver better performance, assuming only pictures of a single side of each ear, which is an important characteristic. We also designed a Multivariate Convolutional Neural Network (CNN) regression model for quantifying kernels of maize. For this task, we created a dataset with pictures of a single side of each ear, considering one ear per image, cultivated in two contrasting edaphoclimatic regions, considering a wide variability of genetic materials used in the South and Southeast regions of Brazil. Thus, the main contributions of this work are: * a dataset with pictures of a wide variability of genetic materials used in the South and Southeast regions of Brazil; * a color- and contour-based image preprocessing pipeline to extract background and generate a maize image dataset; * a multivariate CNN regressor for maize kernel counting on the ear, assuming only a single side of each corn ear; * an image preprocessing pipeline to improve deep learning results, named _hinting_; * the automation of a laborious and time-consuming job achieving better results with less time and effort. This article is organized as follows. Section II presents related works for maize kernel counting. The proposed approach is detailed in Section III. Experiments and their results are described in Sections IV and V, respectively. Finally, we conclude the paper in Section VI.
## II Related Works Previous works adopting digital image analysis and deep learning for maize kernel counting were based on images of maize kernels on plain surfaces, already removed from the ears [2, 4, 5]. Miller et al. [6] retrieved information about the average space of kernels along the cob axis, the number of kernels, and measures of kernel size along its axes. Li et al. [3] proposed a maximum likelihood estimator algorithm to classify normal and damaged maize kernels using images of kernels removed from the cob, handcrafted features, and a line profile-based segmentation algorithm (LPSA) to isolate touching maize kernels. Grift et al. [7] proposed a semi-automated approach to estimate the kernels in a corn ear. The authors built a special box to photograph the ears using controlled lighting, background, and camera conditions. This box can also rotate the ear to place all kernels in front of the camera. The approach segments kernels by applying Otsu thresholding, morphological operations, kernel's pixel area, and kernel's center of mass. Khaki et al. [2] used a Deep Learning approach for detecting and counting maize kernels in a single corn ear image. First, a detection algorithm is performed to find the regions with maize kernels. Then, the authors use a regression CNN model to determine \((x,y)\) coordinates of the center of the kernels from the windows classified as positive by the previous step. Finally, they count the number of kernel centers found on the corn image. An improvement was reported in [8], in which the authors calculated density maps, applied image threshold processing, and used this output to count the number of kernels in a corn ear image. To estimate the number of kernels in the ear, the authors multiply the kernel counting by a constant. Different from the aforementioned works, we propose an approach to estimate the total number of kernels of maize in a semi-controlled environment using only a one-sided picture of the maize. We require a dark background picture with a standard digital camera or smartphone. The estimation is performed using a multivariate CNN regressor, which will be presented in the next section. ## III Proposed Approach This work proposes a new approach for estimating the total number of maize kernels based on a CNN regression model. To fully evaluate the proposed approach, we conducted several experiments to analyze the ability of the method to predict the number of on-ear corn kernels based on a single-side input image. ### _Hinting Pipeline_ The Hinting Pipeline consists of a set of image processing operations that indicates the center of every kernel of maize. This pipeline proceeds according to the following steps, depicted in Figure 1. * **Maize kernel segmentation:** We segment the regions with pixels representing the yellow color, defined by a range in the Hue channel of HSV space. Then, we create a binary mask by selecting the largest connected component to extract the ear from the original image. * **Grayscale and image improvement:** We convert the input image to grayscale and improve its brightness and contrast using Contrast Limited Adaptive Histogram Equalization (CLAHE) [9]. We also remove noise by applying median filtering. * **Thresholding:** We binarize the image through an adaptive threshold and perform a bitwise AND operation between the thresholded image and the binary mask to remove the background.
* **Morphological operations:** We apply morphological operations over the generated binary image to remove some artifacts caused by the previous process and to evidence the contours of the kernels in the maize ear. * **Maize kernel center marking:** Finally, we detect the center coordinates of all contours on the ear in the resulting image of the pipeline and mark these coordinates in the image. ### _Multivariate CNN Regressor_ We developed a CNN model that receives the output of the hinting pipeline for an image of one side of the maize ear and outputs an estimate of the total number of kernels. To this end, we implemented a custom residual CNN architecture [10] to perform a regression task. For this, we implemented the residual block as depicted in Figure 2a. First, a convolutional layer extracts features from the input layer. Then, the extracted features are normalized with batch normalization and processed by the Leaky ReLU non-linear activation function. Finally, the output of the activation function is summed with the input of the residual block, creating an effective residual connection that allows the gradient to flow throughout the residual block. Besides the residual blocks, we also used standard convolutional blocks composed of a convolutional layer followed by batch normalization, Leaky ReLU, and Max Pooling, as depicted in Figure 2b. The convolutional and residual blocks are concatenated to constitute a _combined_ block. We configured our residual architecture with six combined blocks of \(32\), \(64\), \(128\), \(256\), \(512\), and \(1024\) channels. We also applied a global average pooling in the output of the last combined block. Finally, a dense block (Figure 2c) formed by a dense layer followed by batch normalization, Leaky ReLU (factor of \(0.3\)), and Dropout (factor of \(0.2\)), processes the features, and another dense layer with linear activation provides the regression output of the model. We also verified the performance of the proposed approach with additional outputs as a multivariate regression. We also used the image to predict the number of vertical rows in the entire ear and the number of kernels in two vertical rows. Such modification was done on the last dense layer, changing the number of outputs from one to four. The hypothesis is that providing more information to the model will improve the optimization process to boost performance results. Fig. 1: Hinting Pipeline steps: (a) original image; (b) maize kernel segmentation; (c) grayscale conversion and image improvement; (d) thresholding; (e) morphologic operations; (f) maize kernel center marking using the connected components' centroids. The blue Post-it on the figure was not used in this work. ## IV Experimental Setup This section presents details about the experiments we performed in this work. We detail the proposed dataset, the evaluation metrics, and some model training details. ### _Image Data Collection_ Two gatherings were carried out during the 2019/2020 growing season. The first gathering consisted of \(46\) commercial maize hybrids in Castro (Parana, Brazil), \(1050m\) altitude, and Cfb Koppen-Geiger climate classification [11]. The second gathering contained \(17\) commercial hybrids and was carried out in Itabera (Sao Paulo, Brazil), \(700m\) altitude, and Cfa Koppen-Geiger climate classification. Five maize ears of each hybrid were randomly collected between the physiological stage of maturation and the harvest date.
These samples were placed on a dark frame for photographic capture with a \(12\) MP camera embedded in an Android \(8.0\) smartphone. The maize ears were rotated 180 degrees on their long axis, and a new image was acquired, resulting in two photos per ear. To estimate the performance of the proposed approach to unseen data, we divided the whole dataset into three distinct groups without overlap. We split the ears following the proportions of \(60\%\), \(20\%\), and \(20\%\) used to train, validate, and test, respectively. We also employed a stratification procedure to guarantee that we have at least one sample of each hybrid in each of the three subsets. In this manner, the training subset comprises \(189\) maize ears from \(50\) hybrids, leading to \(378\) images comprising the front and back of each maize ear. The validation and test subsets each include \(63\) maize ears from the same \(50\) hybrids, with a total of \(126\) images per group. ### _Evaluation Metrics_ We follow the standard guidelines generally used to evaluate regression models to estimate the behavior of our approach. Therefore, we use the Mean Absolute Error (MAE) and R-squared (\(R^{2}\)) to assess the regression quality in the three distinct subsets of the dataset. ### _Training Details_ Each RGB image of the dataset was preprocessed by cutting the images to centralize the maize ear and resizing them to obtain images with size \(512\times 128\). This way, we maintained the average aspect ratio of a maize ear. Vertical and horizontal random flips are also used as a data augmentation strategy to increase the number of diverse images. The data was grouped in batches of size \(64\), considering a random shuffling for the training dataset. We trained each model for \(100\) epochs, keeping the model from the epoch that achieved the best \(R^{2}\) score in the validation set. We used the Adam optimizer [12], starting with a learning rate of \(0.0001\). We also used a reduce-on-plateau strategy, with a factor of \(0.1\), to guarantee a better convergence of the gradient. The training loss used to update the gradient was the Mean Absolute Error (MAE). The training and testing steps were done using an NVIDIA Tesla T4 GPU with \(16\) GB of RAM. ## V Results and discussion We present the results in three subsections: multivariate regression, hinting pipeline, and manual counting _vs._ CNN regression. To better estimate the global differences between the compared approaches, each CNN model was trained \(30\) times. We summarize the obtained results for the total number of kernels estimation output in Table I. Fig. 3: Salience maps of a sample of our database. We evaluated three cases in our study: Original image, Control (Random Dots), and _Hints_ (Section III-A). Fig. 2: Implemented blocks for custom residual architecture ### _Multivariate Regression_ First, we explore the effects of including additional outputs. We refer to the standard approach, with a single image as input and a single regression variable (total number of kernels) as output, as the _baseline_ regression. Similarly, we denote the regressor with multiple outputs as _multivariate_. In this approach, the model receives a single image and approximates multiple outputs, _i.e._, the total number of kernels, the number of rows, and the number of kernels in two of these rows. Results are presented in Table I.
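The pairwise significance tests reported next can be reproduced with SciPy. A minimal sketch, assuming two length-30 arrays of per-run \(R^{2}\) scores (the values below are illustrative stand-ins, not the paper's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Illustrative stand-ins for the 30 per-run R^2 scores of each model
r2_baseline = rng.normal(0.69, 0.02, 30)
r2_multivariate = rng.normal(0.72, 0.02, 30)

# One-sided test: is the multivariate model's R^2 distribution greater?
u_stat, p_value = mannwhitneyu(r2_multivariate, r2_baseline, alternative='greater')
print(f'U = {u_stat:.0f}, p = {p_value:.4g}')
```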
A Mann-Whitney test indicated a greater \(R^{2}\) score for the proposed regression approach with multiple outputs (\(Mdn=0.72\)) than the baseline regression model (\(Mdn=0.69\)), \(U=189\), \(p<.001\). The multivariate model also presented a lower MAE, i.e., a lower estimated distance from the true value (\(Mdn=35.7\)), compared to the _baseline_ model (\(Mdn=38\)), \(U=130\), \(p<.001\). We argue that modeling additional information correlated to the desired output increases the quality of the gradient throughout the network layers, increasing the efficiency of the training and leading to a better approximation. We perform the following experiments with the multivariate model, given its better results. ### _Hinting pipeline_ To validate our hinting pipeline hypothesis, we trained the model using three different preprocessing techniques and applied the proposed multivariate regression format. We start with a standard preprocessing pipeline as a baseline approach, which is basically the maize kernel segmentation step described in Section III-A. Then, we create a control group with blue dots added randomly to the baseline image. The number of blue dots added is equal to the median value of the total number of kernels from all images in the training dataset (\(240\)). Finally, we employ the full hinting pipeline detailed in Section III-A. Results of this experiment are reported in Table I. The Kruskal-Wallis H-test [14] indicated a significant difference between the groups for the \(R^{2}\) metric, \(H=23.5\), \(p<.001\), and also indicated a significant difference between groups for the MAE metric, \(H=19\), \(p<.001\). Post-hoc analyses using the Mann-Whitney test for pairwise group comparison [13] indicated that the \(R^{2}\) metric of the control group (\(Mdn=0.71\)) performed worse than the models trained using the standard (\(Mdn=0.72\)), \(U=291\), \(p<0.01\) and hint pipeline (\(Mdn=0.74\)), \(U=144\), \(p<.001\). The same behavior was observed for the MAE metric, where the control group achieved the largest prediction distance (\(Mdn=36.9\)) compared to the standard (\(Mdn=35.7\)), \(U=321\), \(p=.028\), and hinting pipelines (\(Mdn=34.4\)), \(U=170\), \(p<.001\). Moreover, the hints played an important role in helping the model better generalize maize kernels' counting. To assess the relevance of the hinting preprocessing procedure for the model decision-making, we perform a qualitative analysis of the corn images from our train set, calculating their salience maps [15] and evaluating them. Figure 3 depicts one sample of corn and its salience map, assuming the aforementioned preprocessing scenarios. This visual investigation reinforces our expectations that the hinting procedure improved the network's attention to the pixels located around the center of kernels, allowing us to achieve better results, increasing the performance of the recognition of corn kernels and contributing to a higher generalization within unseen data represented by the test set. ### _Manual counting vs. regression CNN_ The manual estimation of the total number of kernels uses an approximation rule based on the number of kernels in two rows and the number of rows in a maize ear, given by Equation 1.
\[\text{Total kernels}=\frac{\text{kernels in row 1}+\text{kernels in row 2}}{2}\times\text{number of rows} \tag{1}\] Table II presents a comparison between the proposed approach and the manual approximation obtained by applying the above equation to the test dataset. As the comparison shows, our method is not only faster, requiring less manual labor, but it also gets closer to the real count value than the manual approximation. ## VI Conclusions In this paper, we investigated the use of a multivariate CNN regression model for quantifying kernels of maize, considering a wide variability of genetic materials. The proposed Hinting Pipeline combined with the designed CNN-based approach has demonstrated the potential to solve plant phenotyping issues, in particular counting kernels in maize ears. Results from experiments reveal the feasibility of determining the number of maize kernels per ear from an image taken with a mobile phone. To the best of our knowledge, this is the first work that enhances the performance of a multivariate regression model through a hinting pipeline technique for maize kernel counting. Our findings showed that the hinting pipeline guided the attention of the model to the center of the ear kernels and led the model to achieve significant results related to the computed metrics. Finally, we highlight the need to conduct new studies focusing on the thousand-grain weight of maize. This variable, along with the count of kernels per ear, is the most important in the sensitivity analysis of the corn yield equation. ## Acknowledgments This work was performed as part of the _PlatIAgro_ project, which is conducted by CPQD in partnership with RNP (National Teaching and Research Network) and funded by the Ministry of Science, Technology and Innovation of Brazil.
2307.09549
Dead Man's PLC: Towards Viable Cyber Extortion for Operational Technology
For decades, operational technology (OT) has enjoyed the luxury of being suitably inaccessible so as to experience directly targeted cyber attacks from only the most advanced and well-resourced adversaries. However, security via obscurity cannot last forever, and indeed a shift is happening whereby less advanced adversaries are showing an appetite for targeting OT. With this shift in adversary demographics, there will likely also be a shift in attack goals, from clandestine process degradation and espionage to overt cyber extortion (Cy-X). The consensus from OT cyber security practitioners suggests that, even if encryption-based Cy-X techniques were launched against OT assets, typical recovery practices designed for engineering processes would provide adequate resilience. In response, this paper introduces Dead Man's PLC (DM-PLC), a pragmatic step towards viable OT Cy-X that acknowledges and weaponises the resilience processes typically encountered. Using only existing functionality, DM-PLC considers an entire environment as the entity under ransom, whereby all assets constantly poll one another to ensure the attack remains untampered, treating any deviations as a detonation trigger akin to a Dead Man's switch. A proof of concept of DM-PLC is implemented and evaluated on an academically peer reviewed and industry validated OT testbed to demonstrate its malicious efficacy.
Richard Derbyshire, Benjamin Green, Charl van der Walt, David Hutchison
2023-07-18T18:48:47Z
http://arxiv.org/abs/2307.09549v1
# Dead Man's PLC: Towards Viable Cyber Extortion for Operational Technology ###### Abstract For decades, operational technology (OT) has enjoyed the luxury of being suitably inaccessible so as to experience directly targeted cyber attacks from only the most advanced and well-resourced adversaries. However, security via obscurity cannot last forever, and indeed a shift is happening whereby less advanced adversaries are showing an appetite for targeting OT. With this shift in adversary demographics, there will likely also be a shift in attack goals, from clandestine process degradation and espionage to overt cyber extortion (Cy-X). The consensus from OT cyber security practitioners suggests that, even if encryption-based Cy-X techniques were launched against OT assets, typical recovery practices designed for engineering processes would provide adequate resilience. In response, this paper introduces Dead Man's PLC (DM-PLC), a pragmatic step towards viable OT Cy-X that acknowledges and weaponises the resilience processes typically encountered. Using only existing functionality, DM-PLC considers an entire environment as the entity under ransom, whereby all assets constantly poll one another to ensure the attack remains untampered, treating any deviations as a detonation trigger akin to a Dead Man's switch. A proof of concept of DM-PLC is implemented and evaluated on an academically peer reviewed and industry validated OT testbed to demonstrate its malicious efficacy. ## I Introduction There exists a plethora of industry sectors and critical national infrastructure (CNI), from manufacturing to power generation, that use automated monitoring and control of their constituent physical devices via sensors and actuators. Such capability is achieved through the deployment of operational technology (OT), which senses and manipulates the physical world in real time. OT is conceptually found between the information technology (IT) that runs the enterprise part of an organisation and the physical devices that comprise its operational part, as defined by the Purdue Enterprise Reference Architecture [41]. Unlike IT, the specialist hardware and software of OT is somewhat more invisible as it is typically restricted to specific environments, including factories and power plants. For anyone other than site engineers, notably cyber security practitioners and adversaries, access to OT assets is usually restricted to simulations or testbeds [14]. The result of this barrier to access is a fortunate dearth of cyber attacks targeting OT when compared with attacks that target IT, despite the manufacturing sector being the predominant target for IT attacks in 2022 [3]. Looking at IT-targeted cyber attacks in more detail, it is evident that a significant proportion of them use cyber extortion (Cy-X) tactics [3]. Historically, Cy-X is known for ransomware, which prolifically encrypts data across an IT asset or network of assets, rendering them unusable, and therefore causing a denial of service effect until a ransom is paid for the decryption key. However, there is a shift towards more creative ways by which cyber criminals extort their victims, hence the encompassing term Cy-X. The evolution of IT Cy-X tactics includes not only encrypting the victim's data, but exfiltrating the unencrypted data first for the purpose of threatening to leak it, something which is proving effective in the face of the regulatory pressures of data protection [3].
In the rare occurrences of cyber attacks targeting OT, they have often been conducted by either insider threats or state-sponsored adversaries [6, 23]. However, 2022 saw an increased interest in conducting OT-targeted cyber attacks from less typical adversary types, such as cyber criminals, whose tactical focus is Cy-X [11, 13, 18]. Such endeavours will be lucrative to cyber criminals because of the victims' increased willingness to pay to avoid the complexity and scale of the costs associated with OT outages [17]. Although the delicate and real-time nature of OT means that it faces specific challenges in addressing cyber security risks, typical solutions to engineering issues mean that dedicated OT assets may be resilient to targeted, encryption-based Cy-X attacks like ransomware [39]. More specifically, existing practices of replacing a faulty programmable logic controller (PLC) with a new one, and uploading the correct configuration, would likely prove to be effective against encryption based ransomware in the way that regular backups are used to recover from similar IT-targeted ransomware attacks. This paper introduces Dead Man's PLC (DM-PLC), a pragmatic first step towards a viable Cy-X technique that directly targets OT devices while circumventing the resilience of existing response and recovery tactics. DM-PLC utilises existing functionality within an OT environment to simultaneously do the following: create a covert monitoring network of PLCs and engineering workstations (EWs) that constantly poll one another; monitor for any deviations from the attack's behaviour; and deny configuration access to the victim. Should the victim make an attempt to alter the environment under adversary control or not pay their ransom in time, DM-PLC will activate a trigger akin to a Dead Man's switch, causing all PLCs to set their outputs to an "ON" state, resulting in chaos within the victim's physical environment. DM-PLC brings OT-targeted attacks in line with the modus operandi of the emerging demographic of adversaries in the area, i.e. cyber criminals, and reflects the shift from encryption-based Cy-X techniques to more creative ways to conduct ransom attacks. In doing so, this work highlights the fact that an adversary does not require significant investment, experience, or sophisticated root level access to PLCs to conduct such an OT-targeted cyber attack. Rather, DM-PLC demonstrates that an adversary may hold an entire OT environment to ransom by simply using existing communications and security features against the victim. In summary, the main contributions of this work are: 1. The proposal of a technique to perform a Cy-X attack on OT devices, utilising only existing functionality, and circumventing resilience encountered in current best practice. 2. A practical implementation of DM-PLC is demonstrated as a proof of concept on two PLCs and an EW. 3. DM-PLC is then scaled up and evaluated on an academically peer reviewed and industry validated testbed, identifying its strengths and limitations. 4. Mitigation techniques are proposed for OT asset owners to protect themselves against such attacks in the future. The remainder of this paper is structured as follows. Section II chronicles the current state of the art of OT-targeted Cy-X capability. Section III establishes an understanding of the OT under attack and the preconditions for the attack to take place. Section IV provides a discussion of the conceptual approach to DM-PLC. 
Section V describes a proof of concept implementation on two PLCs and an EW. Section VI presents the evaluation, its results, and ensuing discussion. Section VII proposes the types of controls that may be utilised to mitigate attacks such as DM-PLC. Finally, Section VIII summarises what has been proposed and reflects on the DM-PLC technique in a concluding discussion. ## II Related Work Historically, OT cyber attacks have seldom involved the deployment of malware specifically targeting OT devices, such as PLCs, remote terminal units (RTUs), or human machine interfaces (HMIs) [6, 23], with the notable exceptions of highly complex attacks such as Stuxnet [19]. More recently, OT-specific malware has been discovered, having been deployed to facilitate other advanced attacks, including examples such as CRASHOVERRIDE [7], TRITON [25], Industroyer2 [10], and PIPEDREAM [8]. However, these attacks do not fit the modus operandi of cyber criminals looking to enter the space, who typically pursue financially motivated engagements. Moreover, it is well documented that such attacks would have required a level of process comprehension [15, 16] that would be considered inaccessible to this new, incoming demographic of adversaries. When identifying novel tactics, techniques, and procedures (TTPs) for targeting OT, or the malware used, academic literature is sparse - particularly for examples that are both pragmatic and suitable to cyber criminals looking to profit from their engagements without requiring costly process comprehension. Formby et al. [12] introduce LogicLocker, stating that it is "The first known example of ransomware to target PLCs in industrial control system networks". LogicLocker's approach involves exploiting vulnerabilities in PLCs discovered on Shodan, moving laterally and horizontally within the OT network, locking the affected PLCs, and encrypting the PLCs' configurations, before deploying a "logic bomb". Some practical concerns about this approach include the reliability of discerning PLCs on Shodan (many are honeypots [9, 20]), the noise of using these exploits against the PLCs (particularly within the OT environment), and how some of the internal scanning and lateral movement techniques are achieved without root access to the PLCs. Not dissimilar to LogicLocker is ICS-BROCK, introduced by Zhang et al. [42], which is intended to be stealthy and practical for application in any environment, with any type of PLC. ICS-BROCK's approach follows a similar set of steps to LogicLocker, whereby the malware detects and exploits Windows-based vulnerabilities in the OT network, identifies PLCs via the address resolution protocol (ARP), locks the affected PLCs, deploys a 'logic bomb', and encrypts the engineering workstation. As with LogicLocker, ICS-BROCK will cause significant noise by attempting to exploit common vulnerabilities (MS17-010, MS08-067, CVE-2019-0708). LogicLocker and ICS-BROCK are commendable contributions to the concept of OT-specific Cy-X, but they also leave room for improvement before being considered truly pragmatic in a live engagement. Both put considerable effort into justifying the malware's access to the OT environment. However, there is no shortage of access in modern attacks [11, 13, 18], the real challenge to overcome being the creative use of the OT assets themselves. 
Both methods purport to exploit commonly known vulnerabilities early on in their approaches, before the payloads have been delivered, which increases the risk of detection prior to the point of extortion. Along with the above assumptions, neither approach considers the non-trivial amount of process comprehension required to ensure the success of even a simple OT-targeted attack [15, 16]. Finally, neither approach considers the prospect of response and recovery processes in an operational environment. Zhang et al. [42] state "Something we need to emphasize is that, once the PLC is locked, the operator will not choose to reflash the PLC for it would make ICS stop, stop always means big economic losses." - in fact, this is _exactly_ what the operator would do [39]. DM-PLC, this paper's novel approach for conducting Cy-X against OT assets, learns from LogicLocker and ICS-BROCK, while iterating on the weaknesses discussed above by focusing on employing only existing and expected OT functionality and further using it to counter response and recovery practices typically encountered in an OT environment. ## III Threat Model To effectively introduce DM-PLC, it is first necessary to establish an understanding of the OT under attack as well as the preconditions required for the attack to take place. ### _DM-PLC OT attack surface_ DM-PLC is focused on the existing intercommunication between OT assets, predominantly engineering workstations (EWs) and PLCs. However, it could be extended to additional OT assets such as remote terminal units (RTUs) and human machine interfaces (HMIs) should the adversary so choose. From a technical perspective, EWs are generally Windows-based devices complete with the typical vulnerabilities encountered in enterprise IT assets. The key aspect about an EW is that it is enhanced with industrial configuration software, which allows it to communicate via industrial protocols and configure embedded OT devices such as PLCs. For example, in a Siemens ecosystem, this industrial configuration software would be the Totally Integrated Automation (TIA) Portal [38]. PLCs are embedded devices that can have a variety of architectures. These devices typically have minimal operating systems, such as modified versions of Linux or even bespoke implementations, that are inaccessible to both the operators and potential adversaries. However, at the level which is accessible to the operator, there is a wealth of functionality available such as ICMP, SNMP servers, and HTTPS servers. PLCs are programmed by the industrial configuration software on an EW, using the industrial protocols that are enabled by the software. The program, pushed to the PLC by the EW, controls what the PLC does and how it senses and controls the physical process. It can be written in a variety of languages such as ladder logic and can be entirely bespoke; however, default library functions provided by the PLC vendor are commonly used [15]. ### _DM-PLC attack scenario_ Modern adversaries do not appear to be encountering challenges when navigating to the OT environments of their victims [11, 13, 18], and no value or novelty can be derived from describing that process. Therefore, this paper does not concern itself with initial access or lateral movement prior to deploying DM-PLC. It is instead assumed that the adversary has completed these tactics and is already in position. As such, this is reflected in Figure 1 where the adversary's traversal to the OT environment is depicted in a simplified manner.
The physical and logical layout of an OT environment can vary significantly between organisations, and this variation can be further exacerbated between sectors [40]; for that reason, Figure 1 depicts a high level attack scenario complete with the necessary trajectory required for the adversary, up to and including the deployment of DM-PLC. To begin the deployment, the adversary must have access to an EW in the OT environment. From here, they will typically have access to reconfigure a fleet of PLCs controlling at least one process. ## IV The DM-PLC Approach While operating under the assumed compromise of an EW provides a valuable starting point, the DM-PLC approach must remain flexible to allow for context and vendor agnostic development. EWs typically contain a wealth of information, and more specifically a detailed view of all operational PLCs and their associated configuration [16]. This section describes how such information affords a sufficient level of process comprehension to develop bespoke DM-PLC deployments, tailored to the system under attack. Moreover, as EWs offer direct connectivity to each PLC, they present a trusted conduit through which DM-PLC can be deployed and executed. Taking existing work as both inspiration and lessons to be learned, DM-PLC seeks to further develop the existing notion of OT Cy-X through the employment of legitimate, vendor provided PLC functionality. The key feature of DM-PLC is circumventing the current resilience of best practice response and recovery, such that removing affected PLCs is not an option without consequence for the victim. This also means that DM-PLC must not negatively impact the operational process unless it is tampered with or its ransom timer has expired; immediately experiencing negative consequences may dissuade the victim from paying in lieu of replacing the affected devices. Therefore, the following outlines a target set of high-level DM-PLC requirements: 1. Deployable with minimal pre-requisites from an EW. 2. Runs in parallel to existing operational code. 3. Does not impact existing operational code. 4. Is resilient to tampering/response and recovery processes. 5. Includes tamper detection. 6. Can enact undesirable wide-spread operational impact. 7. Requires a key to relinquish control back to system owners. 8. Can be tested prior to being armed. Fig. 1: DM-PLC attack scenario The following subsections adopt these requirements as a baseline, and provide details on the proposal of DM-PLC from a conceptual perspective. The individual phases are grouped into preparing, deploying, and arming DM-PLC. ### _Preparing DM-PLC_ The first group of phases for DM-PLC involves conducting necessary reconnaissance and enumeration to build up the minimum required process comprehension to conduct the attack. #### IV-A1 Identify and Validate the Current PLC Project As DM-PLC is designed as an extension to existing, trusted PLC code, it is important to first locate the current live codebase, and obtain a better understanding of the overall network architecture (i.e. how many PLCs there are within the environment, with their latest configuration). To do this, PLC vendors typically allow for the collation of multiple PLC configuration objects within the same project [1]. Consolidation of all PLC code provides engineers with a single reference point when diagnosing issues or extending/enhancing operational functionality.
This diagnostic capability is of particular importance when validating project files, allowing an adversary to compare "online" live PLC code with "offline" project files, prior to the addition of DM-PLC to each PLC's codebase [34]. Through the use of this feature, the adversary is able to validate the project file and also the EW's ability to connect directly with each operational PLC. The phase described above gathers a significant amount of the required information for DM-PLC's process comprehension, using only the EW. The process comprehension itself allows the attack to be further crafted such that it does not interfere with existing operational code, while running alongside it, and when needed makes the biggest impact it can given its circumstances. Therefore, this phase contributes to satisfying requirements 1, 2, 3, and 6 via preparatory reconnaissance and enumeration. #### IV-A2 Identify Pre-Existing PLC-PLC Relationships Section IV-A1 established the EW's ability to communicate with operational PLCs via online/offline diagnostic tooling. However, this does not mean each and every PLC will be able to communicate with one another, which is a key requirement to increase the resilience/effectiveness of DM-PLC. Fortunately, as with the online/offline features, vendors have this covered with device and network views [35]. These features provide a network schematic of all PLCs within the project. Reviewing the network schematic, individual device IP address details, and identifying the existence of communication libraries [33] within each PLC's codebase, will allow the adversary to understand existing PLC-PLC relationships, and where new ones could be formed. Through the completion of this phase, the adversary can enhance their level of process comprehension on PLC-PLC communications, a critical requirement when building resilience and tamper proofing into the DM-PLC attack. Furthermore, this is again conducted using only the EW, meaning it contributes to satisfying requirements 1, 4, 5, and 6. #### IV-A3 Identify Core Code Blocks PLC code operates in cycles, and these cycles need to be understood. Depending on the vendor, PLC programmer, and operational requirements, these will differ both in the terminology used and their function. Taking the Siemens ecosystem as an example, OB1 (Organisation Block 1) is the main code block that is being cyclically executed at all times. However, OB1 can be interrupted by other code blocks, e.g. OB30 (Cyclic Interrupt). OB30 can interrupt OB1 at a regular time interval to execute a separate block of code. It is important for the adversary to understand these code blocks and their sequencing before deploying DM-PLC, thus ensuring that DM-PLC is able to operate as expected in armed mode (i.e. waiting for the victim to pay, and ensuring that they are not enacting response and recovery processes) and triggered mode (i.e. should the victim fail to pay the ransom, resulting in undesirable operational impact) [32]. With the identification of code blocks, the adversary completes the final stages of process comprehension necessary to begin deploying DM-PLC in harmony with the existing PLC code, while ready to interrupt its execution, and cause undesirable operational process disruption should it be tampered with, or exceed the specified time limit. This phase, therefore, contributes to satisfying requirements 1-6. ### _Deploying DM-PLC_ In the second group of phases, the functionality of DM-PLC is built and subsequently deployed to the target devices.
#### IV-B1 Introduce PLC-PLC Communications When holding an entire OT network to ransom, it is important to identify any attempt from the victim to regain control. To do this, DM-PLC requires a covert monitoring network in which all devices communicate with each other to ensure their integrity remains intact and under the control of the adversary. In Section IV-A2 pre-existing relationships were identified, which provided a view on the use of vendor provided communications library functions enabling PLC-PLC communications. These same functions are used here to establish new PLC-PLC sessions, allowing for the exchange of status data, providing holistic visibility of all operational devices. Where no existing PLC-PLC communications exist, vendor provided functions would still be used. However, should one library function fail due to firewall restrictions, for example, re-configuration using an alternative may be required. Vendors typically provide a range of communication functions within their libraries, affording adversaries multiple options and avoiding the need for custom code development [31, 33]. Unlike prior OT-targeted Cy-X techniques that intend to disrupt PLCs individually [12, 42], DM-PLC focuses on treating the entire process as the entity under ransom. Therefore, this phase is crucial to DM-PLC, acting as the heart of its resilience against tampering or any form of response and recovery, and laying the foundations for satisfying requirements 4 and 5. #### IV-B2 Introduce Engineering Workstation-PLC Communications As with PLC-PLC communications, DM-PLC's covert monitoring network would not be complete without the EW. Section IV-A1 validated the connectivity an EW has with each operational PLC; here DM-PLC leverages this connectivity to provide each PLC with a view of the EW's state, and vice versa. Each PLC vendor will choose a specific network protocol for EW-PLC configuration management, and as this network protocol is permitted through existing network-based controls (e.g. a firewall), it should be used to establish covert EW-PLC communications. Fortunately, there exists a broad range of open source communications libraries that can be used for this purpose with minimal effort [14]. The selected library must be installed on the EW. Fortunately, they are often very lightweight and will not impact EW performance. This phase is similar to Section IV-B1 in that it is crucial to DM-PLC's resilience to tampering or response and recovery attempts. Therefore, this phase also contributes to satisfying requirements 4 and 5. #### IV-B3 Introduce Device Status Checkers Once PLC-PLC and EW-PLC communications have been established, a polling and status checker system must be implemented. Like any software, PLCs allocate memory to store variable states. Here, each PLC requires a block of memory to store neighbouring device states and an alert state. Starting with the EW, it is required to send a poll to each PLC every second. To keep things simple, this could be a binary state that changes from a 0 to a 1 and back again. This is written to a designated area of memory in each PLC. Each PLC monitors this area of memory for state changes, and if it fails to see a state change it knows something unexpected has happened to the EW (e.g. the victim has disconnected it from the network while enacting response and recovery processes). Each PLC must also be configured to establish a polling system such as this with its neighbouring PLCs.
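To make the polling scheme concrete, the sketch below shows what the EW-side loop could look like, anticipating the Snap7-based implementation shown later in Section V. The IP addresses, data block numbers, and rack/slot values are illustrative assumptions, and the PLC-side monitoring logic would be implemented separately in ladder logic:

```python
import time
import snap7
from snap7.util import set_bool

# Illustrative PLC addresses; DB501 byte 0, bit 0 is assumed to hold the EW poll bit
PLC_IPS = ['172.21.1.101', '172.21.1.102', '172.21.1.103']

clients = []
for ip in PLC_IPS:
    client = snap7.client.Client()
    client.connect(ip, 0, 1)  # rack 0, slot 1
    clients.append(client)

poll = False
while True:
    poll = not poll  # toggle the poll bit once per second
    data = bytearray(1)
    set_bool(data, 0, 0, poll)
    for client in clients:
        client.db_write(501, 0, data)  # a failure here halts all polling
    time.sleep(1)
```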
An additional alert state is also required, especially for scenarios in which some PLCs are isolated from others. For example, if there are three PLCs, and PLC 1 is unable to see PLC 3 directly, it will rely on PLC 2 or the EW. Should PLC 2 fail to send a poll to PLC 3, it will change its alert state to indicate something has gone wrong. Again, to keep things simple, this could be a binary state where 0 is normal, and 1 is alert. PLC 1 would not only be actively polling PLC 2, providing it with its current state, but it would also be monitoring PLC 2's status bit, and upon seeing it change to a 1, would immediately stop any of its own polling, and set its alert state to 1. This would quickly propagate through the covert communications network, with all devices being made aware of the unexpected change in the environment. This phase makes use of the covert PLC-PLC and EW-PLC monitoring networks that have been created to ensure that every asset under ransom has not been tampered with. Therefore, this phase combines with the previous two to satisfy requirements 4 and 5. #### IV-B4 Introduce Code Supporting Operational Process Disruption Section IV-A3 identified core code blocks within each PLC. There are multiple ways in which these could be manipulated based on the level of process comprehension an adversary has [16]. A basic approach could be to disable their execution and switch all PLC outputs to an "ON" state. Introducing a normally closed contact before each core code block would only permit its execution when the alert is in a 0 (normal) state, and conversely prevent its execution upon switching to a 1 (alert) state. A malicious code block is then required to operate in an opposing manner (i.e. a normally open contact based on the alert state would be introduced as a trigger). A review of each PLC's hardware profile within the project file is required to build a malicious code block, through which identification of all output cards and their associated addressing is made possible. The malicious code in this example would simply use these addresses to manipulate physical outputs, and in turn, the operational devices they are connected to (e.g. motors, conveyor belts, and valves). In addition to the alert state being triggered by the victim's attempts to regain control of their devices, it will also be triggered by a timer. The timer duration is set to match the period of time the adversary has given the victim to make a payment. This phase provides DM-PLC with the capability to cause undesirable operational impact should the attack be tampered with or the ransom not be paid in time, which is absolutely necessary for DM-PLC to be a credible Cy-X threat to its victims. Therefore, this phase satisfies requirement 6. #### IV-B5 Prevent Victims from Reversing all Changes Continuing the approach taken to the development of DM-PLC thus far, preventing the victim from reverting the changes applied to their devices should make use of vendor provided functionality - essentially turning the victim's decision not to use these features against them. For example, PLCs are increasingly being provided with password protection capabilities, something taken advantage of by Zhang et al. [42]. In addition, PLC project files also offer a variety of protection features to password protect and encrypt their contents [36]. Finally, the EW should lock out all trusted users and encrypt its disk using conventional ransomware techniques [27].
These, in conjunction with DM-PLC's covert monitoring network, make response and recovery extremely difficult without experiencing undesirable operational impact. Restricting the victim's access to the PLCs and EWs that have been affected during the attack is the final preparatory phase of DM-PLC. This provides an additional measure to prevent any tampering or response and recovery processes, on top of the covert PLC-PLC and EW-PLC monitoring networks. Not only does the password protection on the PLCs and project files add another layer to requirement 4, but they work in conjunction with the traditional encryption-based ransomware techniques used on the EW to provide a mechanism for satisfying requirement 7. ### _Arming DM-PLC_ Once DM-PLC's functionality has been deployed to the affected devices, the final phase is to arm its Dead Man's switches. #### IV-C1 Arm DM-PLC A staged approach to the deployment of DM-PLC is required to ensure it operates as expected. Deploying PLC and EW code in a structured way, as described across Sections IV-B1 to IV-B5, provides a starting point. In addition to this, a normally open contact mapped to an "Enable" variable is required before all new PLC code blocks (i.e. communication and malicious code). This will prevent any alerts from being raised until it is set to a 1 (enable) state, which can be done across all devices simultaneously by the EW once the adversary is ready. Collectively, this strategy of gradual code deployment, testing, and enabling will put DM-PLC in an armed state, with the alert state being a trigger to cause undesirable operational process disruption. Upon receipt of payment, the victim will be provided with a key that unlocks the EW, allowing DM-PLC to be disabled via the enable variable in each PLC, and removing all password protections from their PLCs and their PLC project files. The final phase of DM-PLC is focused on testing and safe deployment such that it is not triggered prematurely during the setup phases. The enable variable doubles up as an easy disarm mechanism once the adversary has relinquished control of the OT environment on full receipt of payment. This phase, therefore, assists the one discussed in Section IV-B5 that satisfies requirement 7, along with fulfilling requirement 8. ## V Implementation Section IV introduced the concept of DM-PLC, defining key characteristics and how they map onto a set of baseline requirements. This section builds on DM-PLC as a conceptual construct and demonstrates how it can be deployed in practice on a real-world system. To best describe a practical implementation of DM-PLC and avoid unnecessary complexity, the minimum infrastructure required to conduct the attack is used: just two PLCs and one EW. The PLCs are a Siemens ET200S and an S7-1200, and the EW is running the Siemens TIA Portal V17 programming agent [38]. To provide consistency with Section IV, the following subsections provide a mirrored structure, populated with practical, applied content. Figures depicting the implementation process are referred to throughout this section for additional clarity. ### _Preparing DM-PLC_ #### V-A1 Identify and Validate the Current PLC Project Upon loading TIA Portal, the adversary will be presented with a list of recently opened projects. If no projects are listed, a basic search of local and remote drives for '.ap17' can be performed.
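Such a search is trivial to script; a minimal sketch (the drive letters are assumptions):

```python
from pathlib import Path

# Recursively search mounted drives for TIA Portal V17 project files
for drive in ['C:/', 'D:/']:
    for project in Path(drive).rglob('*.ap17'):
        print(project)
```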
By default, the TIA Portal GUI is split into three sections, with the first section displaying a tree structure of all associated devices and their configurations. In order to validate the applicability of a project with each PLC, the "Online" function can be used. By right-clicking each PLC and selecting "Go Online", TIA Portal will compare the project codebase with that of the PLC, with a set of green indicators confirming a 100% match (see Figure 2). #### V-A2 Identify Pre-Existing PLC-PLC Relationships The "Devices and Networks" feature in TIA Portal can be used as a quick reference point in the determination of existing PLC-PLC relationships (see Figure 3 depicting PLC_1 and PLC_2 on the same network). A review of each PLC's IP addressing and core code base can also be undertaken for further validation, and to identify which communications protocols are in use. An example of this would be the PUT communication library function provided by Siemens within TIA Portal (see Figure 5). The identification of a function such as this between two PLCs confirms the use of S7-Comm as a permitted network protocol. #### V-A3 Identify Core Code Blocks To keep this proof of concept (PoC) simple, there is only one core code block (a tank control Function Block) on PLC_1, residing in OB1, the main program cycle in Siemens PLCs, as described in Section IV-A3 (see Figure 4). There are no pre-requisites to this code block's execution (nothing to the left of the Function Block in Figure 4), meaning it will execute on every cycle of the PLC's codebase. There exist no additional code blocks that can interrupt OB1's cycle on this PLC. PLC_2 contains the same codebase, with no additional interrupts or functions to consider. Fig. 4: Core Code Block Fig. 3: PLC-PLC Relationships Fig. 2: Online Mode ### _Deploying DM-PLC_ #### V-B1 Introduce PLC-PLC Communications The first step in developing DM-PLC is to establish PLC-PLC communications. To do this, an appropriate communications function must be installed on each PLC. For this PoC, the TIA Portal-provided communications library functions PUT and GET are used. The PUT library function is set up on PLC_1 to send data to PLC_2 (see Figure 5). The GET library function is set up on PLC_1 to retrieve data from PLC_2 (see Figure 6). To ensure communications are flowing between the two devices, testing is required at this stage. Setting up dummy data to send and request gives an adversary confidence in the deployment of any given communications library, prior to its use as a primary method of data exchange within the covert DM-PLC network. #### V-B2 Introduce Engineering Workstation-PLC Communications Once the PUT/GET library functions have been deployed, the EW has proven it is able to communicate via the S7-Comm protocol with each PLC (the protocol TIA Portal adopts when reading/writing configuration parameters); thus its use in embedding the EW within DM-PLC's covert network is a logical next step. The open source library Snap7 [24] provides S7-Comm connectivity via its db_read (retrieve data from a PLC, in the same way as GET) and db_write (write data to a PLC, in the same way as PUT) functions. Listing 1 provides an example snippet of Python code, where Snap7 is being used to read one byte of data from PLC_1, more specifically, Data Block 500 (a block of memory in the PLC), Byte 0. Listing 2 provides another example; however, here Snap7 is being used to send one byte of data to PLC_1, more specifically, the current value of variable "poll", to Data Block 501, Byte 0.
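Listing 2 is reconstructed here as a minimal sketch consistent with the description above, using Snap7's db_write; the toggling of "poll" itself is assumed to happen in the wider DM-PLC process.

```
import snap7
from snap7.util import set_bool

poll = True  # current value of the boolean "poll" variable (illustrative)

client = snap7.client.Client()
client.connect('172.21.1.101', 0, 1)
data = bytearray(1)
set_bool(data, 0, 0, poll)
client.db_write(501, 0, data)  # write to Data Block 501, Byte 0
client.disconnect()
```
Listing 2: Snap7 Write to Data Block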
The ability to read/write to and from data blocks over the network is common practice, and is employed by human machine interfaces (HMIs) that provide system operators with access to operational data and control capabilities [15].

```
import snap7

client = snap7.client.Client()
client.connect('172.21.1.101', 0, 1)
value = client.db_read(500, 0, 1)
client.disconnect()
print(value)
```
Listing 1: Snap7 Read from Data Block

As with PLC-PLC communications, setting up dummy data to send and request to/from each PLC is required, ensuring the communications library is performing as expected. #### V-B3 Introduce Device Status Checkers Having implemented a covert PLC-PLC and EW-PLC network, overlaying a resilient monitoring system to identify any attempt from the victim to recover their system is required. To support this discussion, Figure 7 has been included, acting as a reference point to demonstrate how the described approach can be scaled up as required, based on the size of the victim's environment. Each device in the environment behaves in the same way, regardless of the communication library in use (i.e. PUT/GET on the PLCs, and Snap7 on the EW). Using PLC_2 as a reference point for discussion, Figure 7 depicts data entering and exiting this device. The EW will send its poll, a constantly changing boolean variable, to DB501.DBX0.0. This provides the following two key features: * The EW is able to ascertain if PLC_2 is still online and operational. If PLC_2 has been disconnected from the network, the EW will be unable to send this request, and will subsequently cease all polling actions across the entire environment (i.e., to PLC_1, PLC_2, and PLC_3). Alternatively, as DB501.DBX0.0 has been created by the adversary, should it be removed, or have its access restricted in any way, the EW will again be unable to send its poll, and will cease all polling actions. * PLC_2 is constantly monitoring DB501.DBX0.0 for state changes; should they cease (i.e. remain in a 0 or 1 state for more than a second), it will know that communication with the EW has either been removed by the victim, or the EW has been informed of another issue in the environment (e.g. it has observed an alert on PLC_1, and has ceased all polling actions). PLC_2 will then cease its own polling actions and raise an alert (set DB500.DBX0.0 to 1). Fig. 5: S7-Comm PUT Setup Fig. 6: S7-Comm GET Setup
This alert is used to disable polling across the covert network (see Figure 9). However, its use within DM-PLC extends beyond this, acting as a trigger supporting operational process disruption. As a starting point, all legitimate operational code identified in Section V-A3 must first be disabled when the alert is raised. Figure 11 introduces the alert bit as a prerequisite to the execution of the Tank Control code block (see Figure 4 before the alert prerequisite was introduced). It is also used to activate the operational process disruption code block (see Figure 12).

In order to develop the operational process disruption code block, a review of each PLC's hardware profile must be undertaken. This provides a view of the I/O addressing mapped to physical output cards (i.e. Q2.0-2.3 in Figure 13). Once the output addresses have been confirmed, they can be put to use. For this PoC, all discovered digital outputs are set to an ON (1) state (see Figure 14). However, dependent upon the level of process comprehension an adversary is able to develop, this code block could become far more sophisticated, manipulating outputs to achieve maximum impact [16].

Fig. 7: DM-PLC's covert monitoring network example with 3 PLCs

Fig. 8: PLC_2 Monitoring and Alert Raising

Fig. 9: Disable Poll when Alert is Raised

Fig. 10: Payment Timeout

Fig. 11: Disable Core Code Blocks

With regard to the EW, when DM-PLC is enabled all files and data on the system are encrypted as would be the case in a traditional encryption-based ransomware attack, including the original DM-PLC source code now that the process is running. The DM-PLC process itself is safe from tampering as any deviations from expected functionality will trigger the alert process. If an alert is raised, the EW fails to poll any of the PLCs, or the allotted payment time is reached, the EW will simply shut down.

#### V-B5 Prevent Victims from Reversing all Changes

As discussed in Section IV-B5, OT vendors are increasing their cyber security capabilities. Turning unused capabilities against victims not only delivers critical functionality with minimal effort by the adversary, but is also a lesson in making the most of what the adversary holds against the victim. Figures 15 and 16 provide a password for both the PLC and the TIA Portal project file. Without these two passwords, the victim is unable to read the malicious codebase from each PLC, reload their original trusted codebase to each PLC, or access the current modified version of their TIA Portal project file. From the perspective of the EW, victims are prevented from reverting any changes due to the deployment of encryption-based ransomware targeting all legitimate files and data. As mentioned in Section V-B4, the DM-PLC source code is subject to encryption-based ransomware and any attempts to tamper with the process will trigger the alert process.

### _Arming DM-PLC_

#### V-C1 Arm DM-PLC

Once DM-PLC has been configured across all PLCs and the EW, it needs to be armed. This is executed by the adversary from the EW. For this PoC, an "enable" boolean variable is introduced before the main DM-PLC code block on each PLC (see Figure 17). The DM-PLC process running on the EW, providing it with access to the covert monitoring network (see Section V-B2), is used to switch the enable variable to an ON (1) state in each PLC upon startup. This is performed using the Snap7 db_write function from Listing 2.
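A sketch of this arming step follows. The location of the enable bit (DB502, Byte 0 here) is a hypothetical placeholder, since the PoC does not state the address, and the PLC list mirrors the earlier sketch:

```
import snap7

ENABLE_DB, ENABLE_BYTE = 502, 0  # assumed address of the "enable" variable
for ip in ['172.21.1.101', '172.21.1.102', '172.21.1.103']:
    client = snap7.client.Client()
    client.connect(ip, 0, 1)
    client.db_write(ENABLE_DB, ENABLE_BYTE, bytearray([1]))  # ON (1) arms DM-PLC
    client.disconnect()
```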
Equally, should the victim make their payment on time, this same function is automatically used to switch each enable variable back to an OFF (0) state, deactivating DM-PLC across the entire estate.

Fig. 13: Digital Output Card Addressing

Fig. 14: Turn All Digital Outputs On

Fig. 16: Setting the Project Password

Fig. 17: Enable DM-PLC

## VI Evaluation

While DM-PLC is a viable Cy-X technique for OT in concept, and it was possible to implement it as expected, it cannot be considered practically viable without being tested in an established environment and having its outcomes evaluated. This section, therefore, describes the evaluation method, its results, and finally presents a discussion of those results and potential limitations of DM-PLC.

### _Method_

DM-PLC was evaluated in the sterile conditions of an academically peer reviewed and industry validated OT testbed [14], affording a sufficient level of realism without inducing any unnecessary risk to a live operational environment. The evaluation utilised 3 PLCs and 1 EW, configured as depicted in Figure 7. The PLCs were Siemens ET200S (PLC 1), S7-300 (PLC 2), and S7-1200 (PLC 3), and the EW was running the Siemens TIA Portal V17 programming agent [38]. Overall, the evaluation took 45 minutes to implement, with the majority of the time taken up by understanding the environment and configuring the first PLC. Once the first PLC was configured, it was possible to duplicate its codebase to further PLCs rapidly with minimal edits, which reduces any additional configuration time and expedites the rest of the process. The evaluation aimed to test three scenarios of DM-PLC:

1. A PLC being removed from the network.
2. The DM-PLC ransom timer expiring.
3. The victim entering a code having 'paid' their ransom.

### _Results_

The following presents the results of the 3 DM-PLC evaluation scenarios.

#### VI-B1 Scenario 1

For the first scenario, DM-PLC was deployed and then armed, before removing the network cable from PLC 3. As depicted in Figure 18, once PLC 3's network cable was removed, it was no longer able to receive its polls and its alert bit became inaccessible from experimental monitoring. The final successful poll to PLC 3 was at 24 seconds; the poll at 25 seconds immediately failed from all devices, which all proceeded to set their alert bits to 1 and their monitored sample outputs to "ON".

#### VI-B2 Scenario 2

The second scenario saw a ransom timer expiry being set to 15 seconds during configuration. Figure 19 shows that at 15 seconds, the devices under evaluation interrupted their polling and immediately set their alert bits to 1 and their monitored sample outputs to "ON".

#### VI-B3 Scenario 3

The final scenario tested in the evaluation saw the 'victim' disarm DM-PLC, simulating a situation where they had paid their ransom and been given the deactivation key. DM-PLC acted as expected and successfully disarmed, meaning that polling was interrupted, the alert bits were not set to 1, and no outputs were affected.

Fig. 18: PLC 3 being removed from the network

Fig. 19: 3 PLCs and EW after time expiry

### _Discussion and Limitations_

The three scenarios evaluated with DM-PLC were considered a success in that they provided the expected results. The intended results for scenarios 1 and 2 were for all applicable PLCs and engineering workstations to raise an alert and cause operational impact by setting their outputs to "ON", which was successfully observed. For scenario 3, the intended result was the successful disarmament of DM-PLC, which was also observed.
However, this does not mean that DM-PLC is without its limitations, particularly in its PoC form. The rest of the section will discuss such limitations, and where appropriate, provide justification or potential mitigation to them.

#### VI-C1 Dwell time

Setting up DM-PLC requires gathering information to form an understanding of the environment and then configuration of the PLCs in that environment to deploy it, all of which takes time. While this may seem like one of DM-PLC's major limitations, the median dwell time for an IT cyber extortion attack was reported to be 9 days by Mandiant [22], which is likely to be sufficient time to prepare and deploy DM-PLC.

#### VI-C2 Scalability

The PoC for DM-PLC was conducted on 2 PLCs and its evaluation was conducted on just 3. It is understandable that this may look to be a paltry number of devices to test upon when compared to large-scale environments. However, once the preparation is complete and the initial PLC is configured, copying and pasting code blocks and further minimal editing expedites the process considerably. Furthermore, not every device within the OT environment has to be in the covert monitoring network; it may be sufficient to include only a subset of devices and simply apply the password protection to the rest, as once the consequences are enacted they will likely cascade. Another issue of scalability is the concern of whether DM-PLC is even possible on a large fleet of PLCs. Depending on the PLC and number of communications processors, the number of simultaneous connections varies, but can potentially be as high as 92 [30]. It is not inconceivable, therefore, that DM-PLC could be scaled to a significant subset, if not all PLCs, even in a large OT environment.

#### VI-C3 Transitory network issues

Due to the PoC setting its poll timer to be 1 second, it could be interpreted that DM-PLC is volatile and any transitory network issues could cause the Dead Man's switch to trigger. However, the poll does not necessarily have to be 1 second; it could be set to a more forgiving interval. Alternatively, a deadband could be introduced such that once polling stops, DM-PLC ensures there are multiple consecutive missed polls before triggering.

#### VI-C4 Optimal damage

In this PoC of DM-PLC, the undesirable operational process disruption simply sets all outputs to an "ON" state, which may not be the most impactful method. Should this be a real attack, the adversary could conduct further process comprehension to identify how to maximise DM-PLC's impact (e.g., turn a selection of pumps on, and open just one valve).

#### VI-C5 Safe shutdowns

One way a victim may recover from DM-PLC in its current PoC state is by conducting a safe shutdown of their OT environment. However, in the event of a real attack, minimal additional process comprehension could identify the signal to look out for and incorporate this as another trigger for DM-PLC's alert state.

#### VI-C6 PoC weaknesses

Two final potential limitations of DM-PLC are found in the deliberate simplicity of the PoC. More specifically, the polling and enable/disable (arming/disarming) of DM-PLC are very simple binary functions. If DM-PLC were to be used for a real attack, with resilience against recovery, these would ideally be more complex than just binary states.

## VII Mitigation

The implications of a successful DM-PLC deployment are significant. However, there are a number of steps organisations can take to better defend themselves.
The following subsections discuss relevant mitigation techniques that could be employed at both device and network levels. These techniques not only act as preventative measures, but also provide alerting capabilities. Enhancing notifications to cyber security monitoring teams, and enabling the enactment of response and recovery practices, forms a critical requirement. This is especially true where offensive techniques, such as DM-PLC, evolve over time, rendering preventative measures ineffective.

### _Network_

Figure 1 depicts the baseline attack scenario applied to the deployment of DM-PLC. While the area under consideration as part of this work excludes inter-connectivity between conventional IT environments and OT systems, it is important to note the criticality of their existence. Stepping up the DM-PLC kill-chain, appropriate management of remote access to EWs must be prioritised. Concepts within existing standards and guidelines, including Zones and Conduits [26], can be used to better understand and classify operational assets and their place within a given infrastructure. At a technical level, next generation security appliances, such as those provided by Check Point [28], offer enhanced industrial protocol recognition. This affords end-users the ability to apply highly granular active rule-sets to control the flow of network traffic. For example, the Siemens S7Comm protocol used as part of the DM-PLC PoC has a number of functions including upload, download, read, and write. Check Point is able to permit/deny traffic based on the function in use, its source, and destination. In addition to enhanced industrial protocol recognition, rule-sets can be applied to cover given time windows, e.g., in/out of standard working hours where a system is under constant manual user control. The notion of active network based controls within OT environments is often met with concerns over the prevention of legitimate network flows. Therefore, passive network traffic analysis may be more appropriate. Taking Claroty CTD [2] as an example, like Check Point's next generation appliances, it too has enhanced industrial protocol recognition. Again, using Siemens S7Comm as an example, Claroty CTD is able to fully interpret data transactions within this protocol. However, unlike the Check Point active example, Claroty can operate without the need for rule-sets. Instead, through the application of machine learning techniques, Claroty builds a baseline of normal behavior by monitoring traffic flows over a given period of time, then raises alerts when it identifies deviations from this baseline. The covert network deployed as part of DM-PLC would be a prime example of such deviations.

### _Engineering Workstation_

Although EWs exist in an industrial setting, it is important to create a clear distinction between them and neighbouring devices, such as Supervisory Control and Data Acquisition (SCADA) systems, or PLCs, that provide a more critical role in the day to day management of operational processes. Therefore, conventional desktop-based system hardening techniques can be more aggressively applied. This can cover any number of categories, including the management of patches, services, applications, users, EDR, encryption, etc. When considering OT-specific software operating on EWs, additional security features are becoming more prevalent. The most common of which is PLC project file encryption/password protection.
As discussed in Section V, the DM-PLC PoC utilises Siemens TIA Portal project encryption as a method to restrict legitimate user access. In addition, software such as this requires a licence to function, which may reside on a vendor provided USB stick. If removed, the software will not operate. Removal of the licence while the system is not in use would further prevent unauthorised access. Finally, engineering software also includes a change log, which could be frequently reviewed for unexpected modifications.

### _PLCs_

Due to the limited computational resources found in PLCs, embedded security features are often limited; however, they are increasing. For example, in the DM-PLC PoC, the Siemens password protection feature is applied, protecting against unauthorised configuration uploads/downloads. Dependent upon the PLC in question, features such as this can be further expanded to limit connectivity between other devices, such as HMIs. It is possible to restrict HMIs to read-level permissions only, meaning that they cannot change setpoints, or alter the PLC's overarching configuration. Therefore, should a HMI be compromised, the impact it could cause is limited. With some PLCs, applying protections to specific blocks of code is also possible, such as know-how protection [37] in the Siemens ecosystem. This prevents an unauthorised person from accessing the underlying code within a code-block, meaning it cannot be read, or, more importantly, modified (a key requirement of DM-PLC). While not directly operating on the PLC itself, PLC-focused forensic tooling is available. A recent open-source tool created by Microsoft [29] could be configured to run autonomously, actively connecting to PLCs and comparing their current configurations to project files. Any deviation in the live PLC configuration over the baseline project, such as those applied in DM-PLC, could be flagged for review by security personnel.

## VIII Conclusion

This work has introduced DM-PLC, a new approach to Cy-X targeted specifically against OT, which is in line with the modus operandi of an emerging adversary demographic in the area, viz. cyber criminals [11]. As a counter to existing research that has focused solely on encryption-based ransomware, something that is trivially dealt with by existing response and recovery practices [39], DM-PLC follows existing trends in traditional Cy-X whereby adversaries are moving away from encryption-based ransomware tactics [3]. DM-PLC creates a covert monitoring network of PLCs and EWs that constantly poll one another, and should any asset under adversary control deviate from the attack or the payment timeout window expire, an alert will be triggered akin to a Dead Man's switch, which will propagate throughout the covert monitoring network and turn all outputs to an "ON" state. This is achieved by using only existing legitimate functionality, demonstrating that adversaries do not need to develop complex exploits against OT, therefore significantly reducing the potential cost of an OT cyber attack [4, 5]. The approach presented in this paper should clearly serve as a warning to OT owners and operators that such an attack is feasible and needs special attention to defend against it. Finally, further work in the area should focus on OT being secure by design such that existing functionality cannot be weaponised [21, 29], rather than perpetuating existing practices of relying on re-appropriated IT security concepts.
2304.08307
Predicting dynamic, motion-related changes in B0 field in the brain at a 7 T MRI using a subject-specific fine-tuned U-net
Subject movement during the magnetic resonance examination is inevitable and causes not only image artefacts but also deteriorates the homogeneity of the main magnetic field (B0), which is a prerequisite for high quality data. Thus, characterization of changes to B0, e.g. induced by patient movement, is important for MR applications that are prone to B0 inhomogeneities. We propose a deep learning based method to predict such changes within the brain from the change of the head position to facilitate retrospective or even real-time correction. A 3D U-net was trained on in vivo brain 7T MRI data. The input consisted of B0 maps and anatomical images at an initial position, and anatomical images at a different head position (obtained by applying a rigid-body transformation on the initial anatomical image). The output consisted of B0 maps at the new head positions. We further fine-tuned the network weights to each subject by measuring a limited number of head positions of the given subject, and trained the U-net with these data. Our approach was compared to established dynamic B0 field mapping via interleaved navigators, which suffer from limited spatial resolution and the need for undesirable sequence modifications. Qualitative and quantitative comparison showed similar performance between an interleaved navigator-equivalent method and proposed method. We therefore conclude that it is feasible to predict B0 maps from rigid subject movement and, when combined with external tracking hardware, this information could be used to improve the quality of magnetic resonance acquisitions without the use of navigators.
Stanislav Motyka, Paul Weiser, Beata Bachrata, Lukas Hingerl, Bernhard Strasser, Gilbert Hangel, Eva Niess, Dario Goranovic, Fabian Niess, Maxim Zaitsev, Simon Daniel Robinson, Georg Langs, Siegfried Trattnig, Wolfgang Bogner
2023-04-17T14:23:09Z
http://arxiv.org/abs/2304.08307v1
# Predicting dynamic, motion-related changes in B\({}_{0}\) field in the brain at a 7 T MRI using a subject-specific fine-tuned U-net

###### Abstract

Subject movement during the magnetic resonance examination is inevitable and causes not only image artefacts but also deteriorates the homogeneity of the main magnetic field (\(B_{0}\)), which is a prerequisite for high quality data. Thus, characterization of changes to \(B_{0}\), e.g. induced by patient movement, is important for MR applications that are prone to \(B_{0}\) inhomogeneities. We propose a deep learning based method to predict such changes within the brain from the change of the head position to facilitate retrospective or even real-time correction. A 3D U-net was trained on _in vivo_ brain 7 T MRI data. The input consisted of \(B_{0}\) maps and anatomical images at an initial position, and anatomical images at a different head position (obtained by applying a rigid-body transformation on the initial anatomical image). The output consisted of \(B_{0}\) maps at the new head positions. We further fine-tuned the network weights to each subject by measuring a limited number of head positions of the given subject, and trained the U-net with these data. Our approach was compared to established dynamic \(B_{0}\) field mapping via interleaved navigators, which suffer from limited spatial resolution and the need for undesirable sequence modifications. Qualitative and quantitative comparison showed similar performance between an interleaved navigator-equivalent method and the proposed method. We therefore conclude that it is feasible to predict \(B_{0}\) maps from rigid subject movement and, when combined with external tracking hardware, this information could be used to improve the quality of magnetic resonance acquisitions without the use of navigators.

\(B_{0}\) inhomogeneities, U-net, patient movement, artificial neural network, deep learning, Magnetic resonance imaging, motion correction

## I Introduction

All _in vivo_ Magnetic Resonance Imaging (MRI) examinations are sensitive to subject motion. Those requiring prolonged measurement times are particularly susceptible [1, 2]. A change in the subject's position causes motion artefacts and decreases the homogeneity of the static magnetic field (\(B_{0}\)) [3, 4]. Changes in \(B_{0}\) are increasingly pronounced at ultra-high-field MR scanners (\(B_{0}\geq 7\,\mathrm{T}\)) [5]. A spatially homogeneous--or at least temporally stable--\(B_{0}\) field is a prerequisite for several MRI methods. For instance, in MR spectroscopy (MRS), intra-voxel \(B_{0}\) inhomogeneities and temporal frequency changes degrade the spectral resolution, which translates into reduced chemical specificity [6]. In MRS imaging, they aggravate artefacts arising from extracranial lipid and unsuppressed water signals [7]. In fast imaging, \(B_{0}\) inhomogeneities cause nonlinear image distortions (e.g., for echo planar imaging) or image blurring (e.g., for spiral acquisitions) [8]. For chemical exchange saturation transfer (CEST), \(B_{0}\) inhomogeneities induce frequency offsets [9] which cause systematic errors in quantification. Long MRI sequences or those with many repetitions are even more vulnerable to subject motion. To tackle those issues, several MR-based and external tracking methods have been proposed, which provide information about the change of the patient position, and some of them are capable of mapping the \(B_{0}\) distribution.
MR-based tracking methods consist of short MR sequences (termed navigators), for example based on fast gradient echo scans with echo-planar imaging (EPI) read-out [10]. Navigators are temporally interleaved with the main (parent) sequence that requires correction [11]. Simple navigators can monitor subject position, and more advanced volumetric navigators (vNavs) can even map changes of the \(B_{0}\) field over time [12, 13]. However, for vNavs, the acquisition alone can be as long as \(500\,\mathrm{ms}\)[14], and they can thus not be easily inserted into the majority of sequences, especially not those with short TR (frequently under \(10\,\mathrm{ms}\)) [10]. Simpler navigators are easier to implement, but they can only measure global frequency drift and cannot capture the spatial distribution of \(B_{0}\) changes [3]. Finally, self-navigation allows motion to be monitored, e.g. based on repeated resampling of the k-space center via the parent sequence. Self-navigation does not require additional scans, but reduces the SNR efficiency of the sequence and has limited or no ability to characterize \(B_{0}\) field changes depending on the contrast of the main sequence [15]. External tracking methods use additional hardware. Their advantage over navigators is that motion (detected for example by optical tracking [16, 17]) or changes of the \(B_{0}\) field (detected for example by NMR probes [18]) are acquired independently from the MR scanner and are thus compatible with every sequence. However, optical or similar tracking systems do not provide information about \(B_{0}\) field changes. NMR probes can track the \(B_{0}\) field, but only outside the subject, and translating this to accurate \(B_{0}\) estimates inside the body is challenging or needs to be combined with conventional \(B_{0}\) mapping [19]. While optical tracking is already established as a clinically approved commercial product, an NMR probe system is a highly specialized and costly piece of equipment that is not generally supplied as part of an MRI system. Thus, an approach that combines the benefits of external motion tracking (i.e., independence from the parent sequence, highly accurate tracking with high temporal resolution) with those of internal navigators (i.e., accurate dynamic volumetric \(B_{0}\) mapping), without their disadvantages, is highly desirable. Ideally, it would allow improved real-time (or retrospective) correction of both motion and \(B_{0}\) instabilities. In recent years, deep learning methods have proved to be successful in uncovering hidden patterns in image data, which can be leveraged to solve complex problems, provided that sufficient training data are available [20]. In MRI, problems such as image reconstruction [21], segmentation [22] and many others [23] can potentially be addressed by deep learning-based methods. In this study, we propose a neural network (NN) approach to predict changes of the \(B_{0}\) field within the brain from observed changes of the head position and orientation. A U-net is used to predict a \(B_{0}\) map from the following input: (i) anatomical MRI at the initial position, (ii) initial \(B_{0}\) map, and (iii) head pose change at a certain time point described via six degrees of freedom. The \(B_{0}\) maps are predicted for each known head position/time point. A general set of weights of the U-net is estimated using the data of 11 volunteers.
These weights are then fine-tuned for each volunteer using the acquired \(B_{0}\)-maps of six head positions for that volunteer to include subject-specific information and thus improve the \(B_{0}\)-prediction. The whole proposed method would therefore include the measurement of an anatomical MRI sequence, and the \(B_{0}\)-sequence of six head positions to refine the general network weights for the specific subject in a short training (\(\sim 1\,\text{min}\)) while the subject is in the scanner. These weights are then used to predict the \(B_{0}\)-changes caused by motion, which can be used to correct the data.

## II Methods

### _Experimental data_

All measurements were carried out on a \(7\,\mathrm{T}\) Magnetom+ MR Scanner (Siemens Healthineers, Erlangen, Germany) with a 32-channel head coil (Nova Medical, Wilmington, MA). 15 healthy volunteers (11 males and 4 females) were included in this study. The study was approved by the Ethics Committee of the Medical University of Vienna and written informed consent was obtained from all volunteers. For each volunteer, an MP2RAGE image [24] was acquired as anatomical reference with nominal resolution of \(1.1\times 1.1\times 1.1\,\mathrm{mm}\), \(\text{FOV}=220\times 220\times 220\,\mathrm{mm}\), TE/TR = \(3.28/5000\,\mathrm{ms}\), \(\text{TI}_{1}=700\,\mathrm{ms}\), \(\text{TI}_{2}=2700\,\mathrm{ms}\), GRAPPA factor \(=4\), TA = \(4:57\,\mathrm{mins}\). The \(B_{0}\) maps were acquired at 30 random head positions per volunteer. All volunteers were asked to change their head positions randomly, to cover the possible range within the head coil. The first head position was identical to that for the MP2RAGE scans. At each head position, two sequences for \(B_{0}\) mapping were run: (i) 2D multi-echo gradient echo (GRE) sequence with nominal resolution of \(1.9\times 1.9\,\mathrm{mm}\), \(\text{FOV}=240\times 240\,\mathrm{mm}\), GRAPPA factor \(=4\), \(80\) slices with \(2\,\mathrm{mm}\) thickness, TR = \(1410\,\mathrm{ms}\), \(\text{TE}_{1-5}=3/6/9/12/15\,\mathrm{ms}\), Flip angle \(=55^{\circ}\), \(\text{TA}=59\,\mathrm{s}\); (ii) 3D dual-echo echo planar imaging (EPI) sequence with nominal resolution of \(8.0\times 8.0\times 8.0\,\mathrm{mm}\), \(\text{FOV}=256\times 256\,\mathrm{mm}\), \(32\) slices, EPI factor \(=16\), \(\text{TR}=9.0\,\mathrm{ms}\), \(\text{TE}_{1-2}=3.8/4.8\,\mathrm{ms}\), Flip angle \(=2^{\circ}\), \(\text{TA}=0.6\,\mathrm{s}\).

### _Experimental data for physics-driven augmentation_

In a separate experiment, measurements with a spherical phantom were used to map the 1st and the 2nd order spherical harmonics of the shimming system of the \(7\,\mathrm{T}\) Magnetom+ MR Scanner via the same multi-echo GRE sequence described above. \(B_{0}\) shimming was performed using the standard automatic shim procedure and the initial \(B_{0}\) map was measured. The current amplitudes for each spherical harmonic term were manually altered four times from the initial \(B_{0}\) shim setting in a linear fashion (\(-100\,\mu\mathrm{T}/\mathrm{m}^{\mathrm{n}}\), \(-50\,\mu\mathrm{T}/\mathrm{m}^{\mathrm{n}}\), \(50\,\mu\mathrm{T}/\mathrm{m}^{\mathrm{n}}\) and \(100\,\mu\mathrm{T}/\mathrm{m}^{\mathrm{n}}\), where \(n\) is the order of the spherical harmonic). After each modification, another \(B_{0}\) map was acquired. These data were later used for data augmentation in the neural network training.

Fig. 1: 3D U-net architecture used in the study. The input to the network has three features: (i) \(B_{0}\) map of the initial position, (ii) anatomical reference of the initial position, and (iii) anatomical reference of a new position. The output has one feature: \(B_{0}\) map of the new position.

### _Pre-processing of experimental data_

For each head position, \(B_{0}\) maps were calculated from the GRE sequence and the EPI-based sequence. GRE-based
### _Pre-processing of experimental data_ For each head position, \(B_{0}\) maps were calculated from the GRE sequence and the EPI-based sequence. GRE-based Fig. 1: 3D U-net architecture used in the study. The input to the network has three features: (i) \(B_{0}\) map of the initial position, (ii) Anatomical reference of the initial position, and (iii) Anatomical reference of a new position. The output has one feature: \(B_{0}\) map of the new position. maps were calculated from the magnitude and phase images coil combined by ASPIRE [25], and phase unwrapped using ROMEO [26]. \(B_{0}\) maps from the 2TE-EPI sequence were calculated as Hermitian inner product [27]. GRE-based \(B_{0}\) maps were acquired in high spatial resolution with multiple echo time, thus served as the gold standard method to estimate \(B_{0}\) inhomogeneity. For the neural network training, GRE-based \(B_{0}\) maps were considered the ground truth. Low spatial resolution EPI-based \(B_{0}\) maps are equivalent to the dual-echo navigators, which can be used to estimated \(B_{0}\) inhomogeneity in the dead time of the parent sequence [12]. MP2RAGE data were transformed to different head positions using Flirt from the FSL toolbox [28] by applying transformation matrices from the co-registration of the first position to the other positions using the magnitudes of the first echoes. MP2RAGE datasets were also used to calculate brain masks (BET, FSL toolbox [28]), which were transformed in the same way as described above. Spherical harmonics of the the first and the second order of the shimming system (i.e.X, Y, Z, XY, ZY, Z2, ZX, X2-Y2) [5] were characterized using the five \(B_{0}\) maps with different shim current amplitudes. The measured \(B_{0}\) field associated with each shim term was fitted with the respective analytical spherical harmonic function [29] by a nonlinear curve fitting solver in Matlab [30]. The training dataset consisted of the 319 instances from 11 volunteers. Each instance contained the input: (i) anatomical MRI (i.e., T1-weighted MP2RAGE) at the initial position, (ii) \(B_{0}\) map at initial position, and (iii) the same anatomical MRI, but after applying the 6DoF transformation to the new position. The output consisted of the \(B_{0}\) map at the new position. Augmentation of the training dataset was performed during each epoch of the network training by adding the same, randomly-scaled spherical harmonic \(B_{0}\) fields to both input and output \(B_{0}\) maps of one instance. The spherical harmonics are normally used for \(B_{0}\) shimming of the volume-of-interest. Thus, that data augmentation is physically meaningful since the changes of the \(B_{0}\)-maps with motion should not depend Fig. 2: Comparison of \(B_{0}\) maps of four approaches. On the very left, the anatomical reference is depicted of one volunteer at the initial and the new head position. For both positions three slices from three planes are plotted. The second column consists of \(B_{0}\) maps at the initial position and the residua to the ground truth \(B_{0}\) map of the new position. The next four columns depict \(B_{0}\) maps of three approaches: (i) EPI-based approach (EPI), (ii) predicted \(B_{0}\) map from not fine-tuned NN (PR\({}_{0}\)), (iii) predicted \(B_{0}\) map from fine-tuned NN (PR\({}_{\text{FT}}\)); and the ground truth \(B_{0}\) maps (GT) plus their residua to the ground truth \(B_{0}\) map at the new position. 
on the \(B_{0}\) shimming, and by adding spherical harmonic \(B_{0}\) fields, the acquisition of the same volunteer under different shimming conditions is simulated. The test dataset consisted of the data of 4 volunteers. The pre-processing was similar to that performed on the training dataset.

### _Architecture and training of neural network_

All calculations were performed on a DGX station equipped with Tesla V100 GPU cards (Nvidia, Santa Clara, CA, US). The PyTorch DL framework [31] was used. The U-net architecture was used [32] because of its ability to extract features from the input data at different spatial resolutions, which are later used in the decoder part of the network to form a prediction. The network had 4 levels, with the encoder part at each level containing two 3D convolution layers, each followed by a leakyReLU activation. The spatial resolution was decreased with a max-pooling layer by a factor of two. The bottom of the U-net consists of two 3D convolution and one 3D transposed convolution layers, each followed by a leakyReLU activation. The decoder part, at each level, consists of two 3D transposed convolutions, each followed by a leakyReLU. The spatial resolution was increased by trilinear interpolation by a factor of 2. All convolutional layers had a kernel size of 5 in all three spatial dimensions and convolutions were performed with a stride of one. The skip connection was implemented as a concatenation of features from the encoder part to the features of the same spatial resolution in the decoder part. At the end, a 3D convolution was performed with a kernel size of one. The architecture is depicted in Figure 1. The training was performed for 2000 epochs with a mini-batch size of 10. The Adam optimizer was used with a learning rate of 1e-5 and weight decay of 1e-7. For each epoch during the training, the order of the training dataset was randomly permuted and each instance was augmented by the randomly scaled spherical harmonics. The mean-squared error between the unwrapped \(B_{0}\) map and the prediction formed the loss function.

### _Fine-tuning to specific subject_

The U-net trained on the training dataset was fine-tuned to each subject with a very short training (50 epochs). The first six head positions from all volunteers in the test dataset were separated for the fine-tuning training and 23 head positions were kept for evaluation. The Adam optimizer was used with a learning rate of 1e-6 and weight decay of 1e-7.

### _Evaluation_

The accuracy of the subject-specific, fine-tuned U-net (PR\({}_{\text{FT}}\)) was compared to three approaches: (i) no-correction (NC), for which the \(B_{0}\) map was not updated and the map of the initial head position was compared directly; (ii) prediction of the NN which was not fine-tuned (PR\({}_{0}\)); (iii) EPI-based approach (EPI), in which the \(B_{0}\) maps were measured at the new position with a navigator-like sequence setup. The \(B_{0}\) maps of all four approaches were compared against the ground truth data (GRE-based \(B_{0}\) maps) and residua maps were calculated. The \(B_{0}\) maps and the residua maps were compared qualitatively. The quantitative analysis was performed using the absolute values of residua maps within the brain mask. For each head position in the testing dataset, the median and interquartile range of the difference to the ground truth were calculated. Boxplots of these values were created, which summarize each approach's overall performance on the test dataset.
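To make the architecture and training configuration above concrete, here is a minimal PyTorch sketch of a single encoder level and the optimizer/loss setup. It follows the textual description (kernel size 5, leakyReLU, max-pooling, Adam with learning rate 1e-5 and weight decay 1e-7, MSE loss), but the channel counts are illustrative and this is not the authors' released code:

```
import torch
import torch.nn as nn

class EncoderLevel(nn.Module):
    """One U-net encoder level: two 3D convolutions (kernel 5, stride 1),
    each followed by LeakyReLU, then 2x max-pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.Conv3d(out_ch, out_ch, kernel_size=5, padding=2),
            nn.LeakyReLU(),
        )
        self.pool = nn.MaxPool3d(2)

    def forward(self, x):
        features = self.block(x)  # kept for the skip connection (concatenation)
        return self.pool(features), features

# Training configuration as described in the text.
model = EncoderLevel(3, 16)  # 3 input features: initial B0 map + two anatomical images
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=1e-7)
loss_fn = nn.MSELoss()
```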
The fine-tuning of the NN for a specific subject was analyzed in terms of the required minimum number of head positions used for fine-tuning training and the number of epochs. Fine-tuning was tested with 3, 4, 5, and 6 head positions, and in each case the fine-tuning was performed for 50 epochs. The number of epochs was tested with 6 head positions; 5, 10, 20, 35, 50, 75, 100, 150 and 200 epochs were tested. The quantitative analysis was run over the residua maps of each fine-tuning test in the same fashion as described above.

### _Reproducible research_

The code is available on Github [_link to the repository will be available_].

## III Results

### _Accuracy of network prediction_

The qualitative comparison of \(B_{0}\) maps of the 4 approaches and their residua to the ground truth \(B_{0}\) maps is depicted along with the anatomical references in Figure 2. The results are presented in three orthogonal planes. In the axial and coronal planes, residua maps between the initial and the new position show a clear left-right gradient of the error, which is caused by patient movement. In the sagittal plane, a high amplitude hotspot of error is visible in the frontal lobe. The gradient as well as the hotspot in the frontal lobe are not visible for the EPI approach and PR\({}_{\text{FT}}\). The residua maps of the PR\({}_{0}\) contain a slow spatial gradient in all three orthogonal planes. Quantitative comparison of the overall performance of the four approaches is depicted in Figure 3. The medians and the IQRs of the absolute values of residua maps are compared. The EPI approach as well as PR\({}_{\text{FT}}\) yield lower absolute residua compared to the NC approach. For the NC approach, the median of the medians of the absolute residua was \(7.59\,\mathrm{Hz}\), while for the EPI approach it was significantly lower, \(3.48\,\mathrm{Hz}\) (p-value \(\ll 0.0001\)), as well as for PR\({}_{\text{FT}}\), \(3.45\,\mathrm{Hz}\) (p-value \(\ll 0.0001\)). There was no significant difference between the EPI approach and the PR\({}_{\text{FT}}\) (p-value \(=0.69\)). For the NC approach, the median of the IQRs of the absolute residua was \(10.94\,\mathrm{Hz}\), which was significantly higher compared to the other three approaches: \(8.37\,\mathrm{Hz}\) (p-value \(=2.21e-5\)) for PR\({}_{0}\); \(4.82\,\mathrm{Hz}\) (p-value \(\ll 0.0001\)) for the EPI approach; \(4.48\,\mathrm{Hz}\) (p-value \(\ll 0.0001\)) for PR\({}_{\text{FT}}\). The PR\({}_{\text{FT}}\) results are significantly lower than those of PR\({}_{0}\) (p-value \(\ll 0.0001\)). There was no significant difference between PR\({}_{\text{FT}}\) and the EPI approach (p-value \(=0.57\)). A quantitative comparison of methods for one volunteer is depicted in Figure 4. 23 head positions are evaluated. Boxplots of absolute residua maps for each method at each head position are plotted. For the NC approach and the PR\({}_{0}\), the medians of absolute residua are above \(5\,\mathrm{Hz}\) in all cases. For the EPI approach and the PR\({}_{\text{FT}}\), the medians in all cases are below 5 Hz. Moreover, the average of the upper quartiles is 5.58 Hz for the EPI approach and 5.19 Hz for the PR\({}_{\text{FT}}\).

### _Analysis of fine-tuning procedure_

Quantitative results of the fine-tuning evaluation are depicted in Figure 5 in terms of the number of epochs and in Figure 6 for the number of brain volumes used for the fine-tuning. The effect of using a different number of epochs for the fine-tuning was evaluated in a range between 5 and 200 epochs.
In Figure 5, section A, a decreasing trend of the median of the absolute median residua can be observed in the range between 5 and 50 epochs. In the range between 50 and 200 epochs, no difference in the median values was observed. The IQR metric followed the same trend, as shown in Figure 5, section B. Only in the range between 5 and 50 epochs of fine-tuning training were the values decreasing. For fine-tuning which lasted longer than 50 epochs, there were no differences compared to the case of 50 epochs of fine-tuning. The number of volumes used for the fine-tuning training was compared in a range of 3 to 6 brain volumes. There were no major differences for the median of the absolute median residua, depicted in Figure 6, section A, nor for the IQRs of the absolute median residua, depicted in Figure 6, section B.

## IV Discussion

This work presents the proof-of-principle investigation of a method to predict motion-induced \(B_{0}\) changes via a NN from available rigid body head motion logs (e.g., obtained from external motion tracking), an initial \(B_{0}\) map and an initial anatomical image. The prediction of the \(B_{0}\) maps was carried out with a U-net trained with the experimentally acquired and augmented data of 11 volunteers. For each volunteer in the test dataset, the network was fine-tuned with a small number of subject-specific data and a limited number of epochs. The performance of the network was compared with three other approaches using a test dataset of four volunteers. \(B_{0}\) (as well as \(B_{1}\)) inhomogeneities can cause severe artifacts in reconstructed images. Several other methods were therefore proposed in the past to estimate field inhomogeneities and correct for them [33, 34, 35, 36, 37]. Such calibration methods typically require a specific calibration scan at the beginning of each MRI acquisition protocol. The results are then used to set the parameters of the following sequences. In case of subject movement, the \(B_{0}\) information is not updated, which is analogous to the no-correction approach. Repeated \(B_{0}\) mapping throughout the acquisition protocol is therefore necessary to account for any temporal instability (e.g., patient movement related \(B_{0}\) changes). vNavs--typically interleaved with the main (parent) sequence--are able to provide dynamic \(B_{0}\) estimates, but would often increase the total scan time and lower the SNR-per-unit-time efficiency in an unacceptable way [13]. In some specific cases (e.g., fMRI), the \(B_{0}\) maps can be calculated from the phase of EPI images [38], but such approaches are not very flexible and often require making certain compromises in the parent sequence [39]. In contrast, our NN approach is capable of predicting the change of the \(B_{0}\) field with high temporal resolution, without measuring extra data during the subsequent sequences of the particular volunteer's MRI acquisitions. If motion information can be acquired externally, e.g. by optical tracking or directly from the MR data, no sequence parameter changes are necessary for our proposed NN method, which relies only on an accurate knowledge of the transformation matrices describing the rigid body motion. The fine-tuning procedure depends on a limited number of subject-specific data, which can be acquired at the beginning of the MR investigation protocol. In recent years, deep learning methods have been applied in the reconstruction of MRI data [23], mainly assuming that the acquired k-space data are artefact-free.
Fig. 3: Comparison of the medians (section A) and the IQRs (section B) of the residua maps between four tested approaches and the ground truth. The approaches are: no-correction (NC), prediction by the non-fine-tuned NN (PR\({}_{0}\)), EPI-based \(B_{0}\) mapping (EPI), and prediction by the fine-tuned NN (PR\({}_{\text{FT}}\)).

Fig. 4: Quantitative results of one volunteer. The boxplots of the absolute \(B_{0}\) residua to the ground truth for 4 tested approaches at 23 head positions are presented. No-correction (NC), prediction by the non-fine-tuned NN (PR\({}_{0}\)), EPI-based \(B_{0}\) mapping (EPI) and prediction by the fine-tuned NN (PR\({}_{\text{FT}}\)) are compared. The fine-tuning was performed with 6 brain volumes in 50 epochs.

Fig. 5: Quantitative comparison of the number of epochs for fine-tuning training. The range of epochs was between 5 and 200. The results of fine-tuning are plotted alongside the other three methods: no-correction (NC), EPI-based \(B_{0}\) mapping (EPI), and prediction by the non-fine-tuned NN (PR\({}_{0}\)). Section A: Comparison of the medians of absolute residua maps. Section B: Comparison of the IQRs of the absolute residua maps.

Fig. 6: Quantitative comparison of the number of volumes for fine-tuning training. The range of used volumes was between 3 and 6. The results of fine-tuning are plotted alongside the other three methods: no-correction (NC), EPI-based \(B_{0}\) mapping (EPI), and prediction by the non-fine-tuned NN (PR\({}_{0}\)). Section A: Comparison of the medians of absolute residua maps. Section B: Comparison of the IQRs of the absolute residua maps.

A small number of methods have been proposed which reduce or remove some artifacts arising from \(B_{0}\) inhomogeneities, but this is, to the best of our knowledge, the first which directly predicts \(B_{0}\) inhomogeneity maps. In the context of distortions in EPI images caused by \(B_{0}\) inhomogeneities, deep-learning-based methods have been proposed to directly predict distortion-free images [40, 41]. Another deep-learning based approach was designed to compensate for the artifact due to \(B_{0}\) fluctuations arising from respiration in multi-slice GRE by predicting the phase error term from the corrupted images [42]. In contrast, our approach is less direct, but more flexible. We have used a subject-specific fine-tuned NN to predict \(B_{0}\) maps. In general, this PR\({}_{\text{FT}}\) approach outperformed PR\({}_{0}\) (i.e., without fine-tuning) and the NC approach. The results of PR\({}_{\text{FT}}\) are similar to those obtained with the EPI approach. The NC method yielded the highest medians of \(B_{0}\) residua as well as IQRs. The \(B_{0}\) residua maps show the left-right gradient of error in the axial and the coronal plane. The sagittal plane frequently shows error hotspots in the frontal lobe. Those effects are in agreement with a previously published analysis [43]. The PR\({}_{0}\) approach yielded slightly better results compared to the NC approach. The quantitative comparison showed improvement in the medians and the IQRs of \(B_{0}\) residua maps. From the investigation of methods per single head position, the PR\({}_{0}\) had similar performance for each position, no matter how severe the error of the NC approach was. The residua maps of the PR\({}_{0}\) show a reverse gradient of the error compared to the NC approach. However, the \(B_{0}\) maps themselves have similar features compared to the PR\({}_{\text{FT}}\). The main difference is in their magnitude. PR\({}_{\text{FT}}\) results were similar to those with the EPI approach in several investigations.
The \(B_{0}\) maps of both methods are comparable to the ground truth. Their \(B_{0}\) map residua did not contain the left-right gradient and the frontal lobe hotspot, which are typical \(B_{0}\) inhomogeneities originating from subject movement. Quantitative analysis showed the same results for the median and IQRs of the absolute \(B_{0}\) map residua in the overall comparison of the methods, as well as in the comparison of methods per head volume. The EPI approach and the fine-tuning required acquisition of additional subject-specific data. However, while the EPI approach acquires data during the whole scan, the data for fine-tuning are only acquired at the beginning of the MR protocol as a prescan. Once the network is fine-tuned for a specific subject, only the tracking of movement is required. These motion logs could be acquired via several internal or external methods [10]. The one most independent from the MR acquisitions, and hence the most versatile, is optical tracking, in which information about the movement is sampled at a very high temporal rate of up to \(80\,\mathrm{Hz}\)[17]. Another approach with minimal interference to the MR acquisition is to use navigators based on highly-undersampled k-space [44], which take only \(2.3\,\mathrm{ms}\) to detect the rigid-body movement. Acquiring subject-specific additional data (e.g., as a prescan) for the MRI reconstruction is common practice for conventional methods. For example, the parallel imaging method GRAPPA requires ACS lines to reconstruct the missing k-space points [45]. Similarly, SENSE requires measured coil sensitivity profiles to disentangle aliased MRI images [46]. Subject-specific NNs were also proposed by Akcakaya et al. [47] to perform MRI reconstructions. Our investigation of the subject-specific NN fine-tuning suggested that the amount of additional training data can be as little as three volumes. The number of head positions used for training was investigated, and no significant differences were shown in the range of 3 to 6 datasets. However, there were no special instructions for the fine-tuning data. Further optimization could, thus, lead to improved results. The fine-tuning should be performed for a sufficient time; however, after some point the improvement saturates. In our case, 50 epochs (which took approximately 1 minute without any specific optimization) were sufficient for the fine-tuning training with the 6 head positions.

### _Limitations and Outlook_

The paper presents a proof-of-principle and there are many details which can be improved. The training dataset was created from only 11 volunteers. For each volunteer, only \(B_{0}\) maps at 30 positions were acquired, and although the distribution of test and training data was similar, more data on a much more diverse group of subjects (e.g., different head sizes) will be necessary to achieve optimal results. The training dataset was augmented by a physics-driven concept. However, the number of head positions remained unchanged. The augmentation, thus, simulates only the differences in the \(B_{0}\) shimming of the volume-of-interest. Other augmentation approaches and tests of generalizability should also be considered. The subject-specific fine-tuning was performed with six volumes in 50 epochs. No special instructions were given; this could be improved by a tailored process of fine-tuning sampling for a given coil.
Future research should involve a combination with external motion tracking hardware and evaluating the benefits for \(B_{0}\)- and motion-sensitive MRI sequences. Optical tracking could provide updates of the patient position with a temporal resolution of up to 85 Hz. The proposed method requires calibration sampling for the fine-tuning and a short training. After that, the input to the NN consists of anatomical images, the initial \(B_{0}\) map from the beginning of the measurement protocol, and information about the rigid movement. The prediction of the \(B_{0}\) map is completely independent from the MR scanner. Thus, the valuable information about the change of the \(B_{0}\) map due to subject movement, which is usually available only from lengthy volumetric navigators, could be predicted with the same temporal resolution as is available from rigid-body motion logs. In the future, these predicted \(B_{0}\) maps could even be used for real-time updating of both the MRI volume and the \(B_{0}\) shim parameters during the acquisition of the MR data. This can ultimately lead to significantly improved data quality for a range of \(B_{0}\)-sensitive MRI methods.

## V Conclusion

This paper presents the proof-of-principle implementation of a new deep learning-based and subject-specific approach to predicting the change of the \(B_{0}\) maps due to patient movement. Results were compared to the ground truth and the established EPI approach, which is equivalent to using lengthy volumetric navigators. Our results suggest that the prediction of \(B_{0}\) maps is feasible and highly accurate. In combination with external tracking, a considerable improvement in data quality of \(B_{0}\)-sensitive MRI methods could be expected.
2303.15497
Direct searches for general dark matter-electron interactions with graphene detectors: Part I. Electronic structure calculations
We develop a formalism to describe electron ejections from graphene-like targets by dark matter (DM) scattering for general forms of scalar and spin 1/2 DM-electron interactions and compare their applicability and accuracy within the density functional theory (DFT) and tight binding (TB) approaches. This formalism allows for accurate prediction of the daily modulation signal expected from DM in upcoming direct detection experiments employing graphene sheets as the target material. A key result is that the physics of the graphene sheet and that of the DM and the ejected electron factorise, allowing for the rate of ejections from all forms of DM to be obtained with a single graphene response function. We perform a comparison between the TB and DFT approaches to modeling the initial state electronic wavefunction within this framework, with DFT emerging as the more self-consistent and reliable choice due to the challenges in the embedding of an appropriate atomic contribution into the TB approach.
Riccardo Catena, Timon Emken, Marek Matas, Nicola A. Spaldin, Einar Urdshals
2023-03-27T18:00:00Z
http://arxiv.org/abs/2303.15497v1
# Direct searches for general dark matter-electron interactions with graphene detectors: Part I. Electronic structure calculations

###### Abstract

We develop a formalism to describe electron ejections from graphene-like targets by dark matter (DM) scattering for general forms of scalar and spin 1/2 DM-electron interactions and compare their applicability and accuracy within the density functional theory (DFT) and tight binding (TB) approaches. This formalism allows for accurate prediction of the daily modulation signal expected from DM in upcoming direct detection experiments employing graphene sheets as the target material. A key result is that the physics of the graphene sheet and that of the DM and the ejected electron factorise, allowing for the rate of ejections from all forms of DM to be obtained with a single graphene response function. We perform a comparison between the TB and DFT approaches to modeling the initial state electronic wavefunction within this framework, with DFT emerging as the more self-consistent and reliable choice due to the challenges in the embedding of an appropriate atomic contribution into the TB approach.

## I Introduction

Dark Matter (DM) plays a key role in simultaneously explaining otherwise anomalous physical phenomena that occur on extremely different astronomical length scales [1]. For example, it provides the initial density fluctuations that trigger the formation of cosmic structures and generate the anisotropy pattern observed in the cosmic microwave background temperature and polarisation maps [2]. It bends the light emitted by distant astrophysical sources, giving rise to spectacular gravitational lensing events [3] and, furthermore, it provides the mass required to support the flat rotation curves of spiral galaxies [4]. Despite these remarkable observations, we still do not know what DM is made of. The leading hypothesis in astroparticle physics is that DM is made of unidentified, yet-to-be-discovered particles [1]. While this simple assumption can collectively explain all phenomena listed above, the hypothetical particles forming our universe's DM component have so far escaped detection. This dichotomy between solid gravitational evidence and lack of microscopic description makes the search for the "DM particle" a top priority. A prominent class of experiments searching for the DM particle relies on the direct detection technique [5; 6]. This technique searches for rare interactions between DM particles from the Milky Way and detector materials located deep underground in low background environments. As far as the DM-material interaction is concerned, DM direct detection experiments have until recently focused on the search for nuclear recoil events induced by the scattering of Weakly Interacting Massive Particles (WIMPs) in crystals, or liquid noble gases [7]. Consequently, direct detection experiments have so far only probed DM particles of mass above about 1 GeV, as lighter particles would not be able to cause an observable nuclear recoil. However, the lack of detection of WIMPs has recently motivated the exploration of alternative experimental approaches that are better suited to probe DM particles of sub-GeV mass [8]. It is in this context that DM direct detection experiments sensitive to DM-induced electronic transitions or electron ejections in materials play a central role.
Materials that have been proposed to search for sub-GeV DM particles via DM-electron interactions include liquid argon [9] and xenon [10; 11; 12], semiconductor crystals [13; 14; 15; 16; 17; 18; 19; 20; 21; 22], 3D Dirac materials [23; 24], graphene [25; 26] and carbon nanotubes [27; 28; 29; 30], to name a few. In this context, anisotropic media, particularly materials with anisotropic Fermi velocities such as graphene and carbon nanotubes, are interesting, as the associated rate of DM-induced electron ejections exhibits an enhanced daily modulation. This enhancement is caused by the structural anisotropy of the target material in combination with its relative orientation to the DM wind. Given that such a daily modulation is not present in typical experimental backgrounds, it would thus be a smoking gun for a DM signal. A proposed experiment to search for DM-induced electron ejections from graphene sheets or arrays of carbon nanotubes, which is currently in the conceptual design stage, is the Princeton Tritium Observatory for Light, Early-Universe, Massive-Neutrino Yield, or PTOLEMY [31; 32; 33]. PTOLEMY's experimental design employs a large-area surface-deposition tritium target coupled to a graphene substrate to detect the cosmic neutrino background via the observation of single electrons produced in the neutrino absorption by tritium atoms [31]. The coupling between tritium and graphene reduces the energy dispersion of the final state electrons by about an order of magnitude compared to the case of molecular tritium [32]. A small electron energy dispersion allows for better discrimination between electrons produced by neutrino absorption and electrons populating the tail of the tritium \(\beta\)-decay spectrum [32]. For this experimental setup to work, it is crucial to experimentally validate the use of graphene as a substrate by accurately measuring the electron-graphene interaction properties. In this intermediate stage of the PTOLEMY experimental program, when the tritium target and graphene substrate are still decoupled, PTOLEMY can also operate as a directional MeV-scale DM detector. Specifically, two experimental configurations have been proposed. In the first one, a sample of stacked graphene sheets is considered (PTOLEMY-G\({}^{3}\)) [25]. Once an electron is ejected from one of the graphene sheets, it drifts in an external electric field until it reaches a calorimeter at the edge of the detector volume. This configuration allows for a full reconstruction of the final state electron kinematics. In a second experimental configuration (PTOLEMY-CNT) [27; 28; 29; 30], an array of single- or multi-wall metallic carbon nanotubes is positioned in vacuum. When an electron is ejected from one of the nanotubes, it is driven by an electric field to the detection region and recorded by a single-electron sensor. The idea of adopting graphene sheets and carbon nanotubes as targets for directional, light DM detection has recently been further developed by the "Graphene-FET" and "dark-PMT" projects, respectively [34]. The possibility of using graphene or carbon nanotubes as directional detectors sensitive to DM-induced electron ejections motivates an accurate and comprehensive modelling of DM scattering by electrons bound in this class of anisotropic materials. 
So far, the rate of DM-induced electron ejections from graphene sheets [25] and from carbon nanotubes [27; 28; 29; 30] has been computed assuming that the amplitude for DM-electron interactions depends on the momentum transfer only, and that the DM-electron interaction is spin-independent. This is a rather restrictive assumption, which can easily be violated, e.g. in models where DM has a non-negligible magnetic or anapole moment [35]. Furthermore, current estimates rely on the tight-binding approximation and have not been validated against first-principles calculations. The purpose of this work is to extend and improve the formalism currently used to model the scattering of DM particles by electrons bound to graphene sheets. First, we extend the current formalism to virtually arbitrary DM-electron interactions by using the non-relativistic effective theory framework we developed in [35; 36], and recently applied by the XENON group in an analysis of the electron recoil data reported in [37]. Second, we improve the existing formalism by performing state-of-the-art density functional theory (DFT) calculations in order to accurately model the electronic properties of graphene. We expect that the formalism and findings we present here will be useful in the design of the PTOLEMY detector, as well as for the development of the Graphene-FET and dark-PMT projects. However, the relevance of our formalism goes beyond its application to these experimental concepts, as it can also be straightforwardly used to study the ejection of electrons in other experimental settings, where the final state is a free electron that can be described by a plane wave. We leave this exploration for future work. This paper is the first of a two-part series studying DM-electron scattering in graphene targets. In this paper (Paper I), we lay the theoretical foundations. In the companion Paper II, we will focus on more explicit experimental setups and sensitivity studies [38]. In addition, the software tools Darphene and QEdark-EFT, developed for TB and DFT calculations, respectively, are publicly available [39; 40]. This article is organized as follows. In Sec. II we introduce our general formalism for modeling the ejection of electrons by the scattering of DM particles in two- and three-dimensional periodic systems. In Sec. III, we describe the detailed electronic structure calculations we performed for graphene, both within the tight-binding approximation and within DFT. We apply these results to study the daily modulation of the DM-induced electron ejection rate for a hypothetical graphene detector in Sec. IV and conclude in Sec. V. We complement this work with appendices where we provide analytic formulae that are useful for evaluating our general electron ejection rates. ## II Rate of electron ejection caused by general dark matter-electron interactions In this section, we derive an expression for the rate of electron ejection caused by general DM-electron interactions in periodic systems. In Sec. III, we will perform the detailed electronic structure calculations that will enable us in Sec. IV to apply this general expression to the specific, experimentally relevant case of graphene. ### General formalism We are interested in processes in which a DM particle \(\chi\) of mass \(m_{\chi}\), initial velocity in the detector rest frame \(\mathbf{v}\), and momentum \(\mathbf{p}=m_{\chi}\mathbf{v}\) is scattered by an electron in initial state \(|\mathbf{e}_{1}\rangle\). 
During the interaction, the DM particle transfers momentum \(\mathbf{q}=\mathbf{p}-\mathbf{p}^{\prime}\) to the electron, where \(\mathbf{p}^{\prime}\) is the final DM momentum, and causes an electronic transition from \(|\mathbf{e}_{1}\rangle\) to the final state \(|\mathbf{e}_{2}\rangle\). In the notation of [35; 41], the rate \(R_{1\to 2}\) for this transition is \[R_{1\to 2} =\frac{n_{\chi}}{16m_{\chi}^{2}m_{e}^{2}}\int\frac{\mathrm{d}^{3}\mathbf{q}}{(2\pi)^{3}}\int\mathrm{d}^{3}\mathbf{v}\,f_{\chi}(\mathbf{v})\] \[\times(2\pi)\delta(E_{f}-E_{i})\overline{\left|\mathcal{M}_{1\to 2}\right|^{2}}\,, \tag{1}\] where \(m_{e}\) is the electron mass, \(n_{\chi}=\rho_{\chi}/m_{\chi}\) is the local DM number density, \(\rho_{\chi}=0.4\,\mathrm{GeV}\ \mathrm{cm}^{-3}\) is the local DM mass density, and \(f_{\chi}(\mathbf{v})\) is the local DM velocity distribution boosted to the detector rest frame. For \(f_{\chi}(\mathbf{v})\), we assume a truncated Maxwell-Boltzmann distribution, as in the so-called Standard Halo Model (SHM) [42]. Specifically, \[f_{\chi}(\mathbf{v}) =\frac{1}{N_{\mathrm{esc}}\pi^{3/2}v_{0}^{3}}\exp\left[-\frac{(\mathbf{v}+\mathbf{v}_{e})^{2}}{v_{0}^{2}}\right]\] \[\times\Theta\left(v_{\mathrm{esc}}-|\mathbf{v}+\mathbf{v}_{e}|\right)\,, \tag{2}\] and we take \(v_{0}=|\mathbf{v}_{0}|=238\ \mathrm{km}\ \mathrm{s}^{-1}\) [42] for the local standard of rest speed, and \(v_{\mathrm{esc}}=544\ \mathrm{km}\ \mathrm{s}^{-1}\) [42] for the galactic escape speed. Following [24], we express the Earth's velocity with respect to the galactic centre, \(\mathbf{v}_{e}\), in a coordinate system with \(z\)-axis in the \(\mathbf{v}_{0}+\mathbf{v}_{\odot}\) direction, where \(\mathbf{v}_{\odot}\) is the Sun's peculiar velocity and \(v_{e}=|\mathbf{v}_{0}+\mathbf{v}_{\odot}|\simeq 250.5\ \mathrm{km}\ \mathrm{s}^{-1}\) [42], \[\mathbf{v}_{e}=v_{e}\left(\begin{array}{c}\sin\alpha_{e}\sin\beta\\ \sin\alpha_{e}\cos\alpha_{e}(\cos\beta-1)\\ \cos^{2}\alpha_{e}+\sin^{2}\alpha_{e}\cos\beta\end{array}\right)\,,\] where \(\alpha_{e}=42^{\circ}\), \(\beta=2\pi\,t/\mathrm{day}\), and \(t\) is the time variable. Finally, we also introduced the normalization constant, \[N_{\mathrm{esc}}\equiv\mathrm{erf}(v_{\mathrm{esc}}/v_{0})-\frac{2}{\sqrt{\pi}}\,\frac{v_{\mathrm{esc}}}{v_{0}}\exp\left(-\frac{v_{\mathrm{esc}}^{2}}{v_{0}^{2}}\right)\,. \tag{3}\] The total initial (final) energy \(E_{i}\) (\(E_{f}\)) in Eq. (1) is the sum of the DM and electronic energies, \[E_{i}=\frac{|\mathbf{p}|^{2}}{2m_{\chi}}+E_{1}\,,\quad E_{f}=\frac{|\mathbf{p}-\mathbf{q}|^{2}}{2m_{\chi}}+E_{2}\,, \tag{4}\] where \(E_{1}\) (\(E_{2}\)) is the energy eigenvalue of the electronic state \(|\mathbf{e}_{1}\rangle\) (\(|\mathbf{e}_{2}\rangle\)). We denote the corresponding wave functions by \(\psi_{1}\) and \(\psi_{2}\), and their associated Fourier transforms by \(\widetilde{\psi}_{1}\) and \(\widetilde{\psi}_{2}\), respectively. These electron wave functions enter the electron transition amplitude \(\mathcal{M}_{1\to 2}\), defined as in Eq. (14) of [35] by the integral, \[\mathcal{M}_{1\to 2}=\int\frac{\mathrm{d}^{3}\boldsymbol{\ell}}{(2\pi)^{3}}\,\widetilde{\psi}_{2}^{*}(\boldsymbol{\ell}+\mathbf{q})\mathcal{M}(\boldsymbol{\ell},\mathbf{p},\mathbf{q})\widetilde{\psi}_{1}(\boldsymbol{\ell})\,, \tag{5}\] where \(\mathcal{M}(\boldsymbol{\ell},\mathbf{p},\mathbf{q})\) is the free electron scattering amplitude, and \(\boldsymbol{\ell}\) the initial state electron momentum. 
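For concreteness, the halo-model ingredients entering the velocity integral of Eq. (1) can be assembled in a few lines of code. The following is a minimal sketch (our own function names, not any collaboration code) of the boosted, truncated Maxwell-Boltzmann distribution of Eqs. (2)-(3) and the time-dependent Earth velocity, with all speeds in km/s:

```python
# Minimal sketch of the SHM velocity distribution of Eqs. (2)-(3) and the
# Earth-velocity vector used to boost it. Velocities in km/s; t in days.
import numpy as np
from scipy.special import erf

V0, VESC, VE = 238.0, 544.0, 250.5        # values quoted in the text
ALPHA_E = np.deg2rad(42.0)

# Normalisation constant N_esc of Eq. (3)
N_ESC = erf(VESC / V0) - 2.0 / np.sqrt(np.pi) * (VESC / V0) * np.exp(-(VESC / V0) ** 2)

def earth_velocity(t_days):
    """Earth velocity v_e(t) in the coordinate system with z-axis along v_0 + v_sun."""
    beta = 2.0 * np.pi * t_days
    sa, ca = np.sin(ALPHA_E), np.cos(ALPHA_E)
    return VE * np.array([
        sa * np.sin(beta),
        sa * ca * (np.cos(beta) - 1.0),
        ca**2 + sa**2 * np.cos(beta),
    ])

def f_chi(v, t_days=0.0):
    """Boosted, truncated Maxwell-Boltzmann distribution f_chi(v), Eq. (2)."""
    u = v + earth_velocity(t_days)
    if np.linalg.norm(u) > VESC:          # the Theta-function truncation
        return 0.0
    return np.exp(-(np.linalg.norm(u) / V0) ** 2) / (N_ESC * np.pi**1.5 * V0**3)

# Example: the distribution evaluated for DM at rest in the detector frame
print(f_chi(np.zeros(3), t_days=0.25))
```

The daily modulation discussed later in this paper enters precisely through the \(t\)-dependence of `earth_velocity`.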
Here, we use momentum conservation to eliminate explicit dependence on the final state electron momentum from \(\mathcal{M}\). Furthermore, since the scattering of Milky Way DM particles by free electrons is expected to be non-relativistic, we use the Galilean invariance of \(\mathcal{M}\) to write \(\mathcal{M}(\boldsymbol{\ell},\mathbf{p},\mathbf{q})=\mathcal{M}(\mathbf{q},\mathbf{v}_{\mathrm{el}}^{\perp})\), where \(\mathbf{v}_{\mathrm{el}}^{\perp}=\mathbf{v}-\mathbf{q}/(2\mu_{\chi e})-\boldsymbol{\ell}/m_{e}\) and \(\mu_{\chi e}\) is the DM-electron reduced mass. Finally, we expand \(\mathcal{M}\) at linear order in \(\boldsymbol{\ell}/m_{e}\), and write it as follows [35] \[\mathcal{M}(\mathbf{q},\mathbf{v}_{\mathrm{el}}^{\perp})\approx\left.\mathcal{M}(\mathbf{q},\mathbf{v}_{\mathrm{el}}^{\perp})\right|_{\boldsymbol{\ell}=\mathbf{0}}+\boldsymbol{\ell}\cdot\left.\nabla_{\boldsymbol{\ell}}\mathcal{M}(\mathbf{q},\mathbf{v}_{\mathrm{el}}^{\perp})\right|_{\boldsymbol{\ell}=\mathbf{0}}\,. \tag{6}\] This expansion allows us to express the transition amplitude as \[\mathcal{M}_{1\to 2} =\left.\mathcal{M}(\mathbf{q},\mathbf{v}_{\mathrm{el}}^{\perp})\right|_{\boldsymbol{\ell}=\mathbf{0}}f_{1\to 2}(\mathbf{q})\] \[+m_{e}\left.\nabla_{\boldsymbol{\ell}}\mathcal{M}(\mathbf{q},\mathbf{v}_{\mathrm{el}}^{\perp})\right|_{\boldsymbol{\ell}=\mathbf{0}}\cdot\mathbf{f}_{1\to 2}(\mathbf{q})\,, \tag{7}\] where we introduce the scalar and vectorial overlap integrals, \[f_{1\to 2}(\mathbf{q}) \equiv\int\mathrm{d}^{3}\mathbf{x}\,\psi_{2}^{*}(\mathbf{x})\,e^{i\mathbf{q}\cdot\mathbf{x}}\,\psi_{1}\left(\mathbf{x}\right)\,, \tag{8}\] \[\mathbf{f}_{1\to 2}(\mathbf{q}) \equiv\int\mathrm{d}^{3}\mathbf{x}\,\psi_{2}^{*}(\mathbf{x})\,e^{i\mathbf{q}\cdot\mathbf{x}}\,\frac{i\nabla}{m_{e}}\psi_{1}\left(\mathbf{x}\right)\,. \tag{9}\] In order to evaluate the expressions above, we need to specify the initial and final electron wave functions. We begin by specifying the final-state electron wave function for the case in which the electron is ejected by the DM particle. In this case, the state \(|\mathbf{e}_{2}\rangle\) asymptotically approaches a free particle of momentum \(\mathbf{k}^{\prime}\). Consequently, the wave function \(\psi_{2}(\mathbf{x})\) can be approximated by the plane wave \[\psi_{2}(\mathbf{x})\rightarrow\psi_{\mathbf{k}^{\prime}}(\mathbf{x})=\frac{1}{\sqrt{V}}e^{i\mathbf{k}^{\prime}\cdot\mathbf{x}}\,, \tag{10}\] which is normalised to one over a finite volume \(V\). For electrons initially bound in graphene, this plane wave approximation has been validated by comparing results from angular-resolved photoemission spectroscopy (ARPES) measurements with simulated photoemission intensity maps, for which excellent agreement was found [43]. Within this plane-wave assumption, we can express the scalar and vectorial overlap integrals in Eq. (8) and Eq. (9) in terms of the Fourier transform of the initial state electron wave function \[f_{1\to 2} =\frac{1}{\sqrt{V}}\widetilde{\psi}_{1}(\mathbf{k}^{\prime}-\mathbf{q})\,, \tag{11}\] \[\mathbf{f}_{1\to 2} =\frac{1}{\sqrt{V}}\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\,\,\widetilde{\psi}_{1}(\mathbf{k}^{\prime}-\mathbf{q})\,. \tag{12}\] Also, using a plane wave as a final state, we find that the square of the transition amplitude in Eq. 
(1) can be written as \[\overline{\left|\mathcal{M}_{1\to 2}\right|^{2}} =\left\{\overline{\left|\mathcal{M}\right|^{2}}+2\;\overline{\text{Re}\left[\mathcal{M}\nabla_{\boldsymbol{\ell}}\mathcal{M}\cdot(\mathbf{q}-\mathbf{k}^{\prime})\right]}+\overline{\left|\nabla_{\boldsymbol{\ell}}\mathcal{M}\cdot(\mathbf{q}-\mathbf{k}^{\prime})\right|^{2}}\right\}\times\frac{1}{V}\left|\widetilde{\psi}_{1}(\mathbf{k}^{\prime}-\mathbf{q})\right|^{2}\] \[\equiv\underbrace{R_{\text{free}}(\mathbf{k}^{\prime},\mathbf{q},\mathbf{v})}_{\text{free electrons}}\times\frac{1}{V}\underbrace{\left|\widetilde{\psi}_{1}(\mathbf{k}^{\prime}-\mathbf{q})\right|^{2}}_{\text{material properties}}\;, \tag{13}\] where we introduced the free particle response function \(R_{\text{free}}(\mathbf{k}^{\prime},\mathbf{q},\mathbf{v})\), for which we give a general expression in Appendix A. In order to understand the physical meaning of \(R_{\text{free}}(\mathbf{k}^{\prime},\mathbf{q},\mathbf{v})\), it is instructive to take the limit of a free initial state electron in Eq. (13), and hence replace \(\psi_{1}(\mathbf{x})\) with a plane wave of linear momentum \(\boldsymbol{\ell}\). In this limit, one finds \[\overline{\left|\mathcal{M}_{1\to 2}\right|^{2}}=R_{\text{free}}(\mathbf{k}^{\prime},\mathbf{q},\mathbf{v})\times(2\pi)^{3}\delta^{3}(\mathbf{k}^{\prime}-\boldsymbol{\ell}-\mathbf{q})\,, \tag{14}\] where all information, besides momentum conservation, is contained in \(R_{\text{free}}\). This shows that the second factor in Eq. (13) contributes non-trivially to the squared transition amplitude only when the initial state electron is bound within a material. In this latter case, it encodes all relevant material properties via the Fourier transform of the initial state electron wave function. As we will see in the next section, this factorization allows us to express the rate of DM-induced electron ejection from materials in terms of a single material response function. This is in contrast with our previous findings for the cases of atomic ionizations [35] and excitations in crystals [41], where up to five material response functions were required to evaluate the rate of DM-induced electronic transitions between filled valence and empty conduction bands. One should also note that the results reported in [35; 41] neglect the directional information of the event rate and assume a simplified treatment of the velocity integral in the transition-rate formula. By performing this integral exactly, as we do here via Monte Carlo integration (see Sec. IV), up to five scalar and two vectorial material response functions are in general expected to contribute to the DM-induced electronic transition rate [41]. ### Effective theory expansion of the scattering amplitude In order to evaluate our general electron ejection formulae, Eqs. (1) and (13), we need to specify the coefficients, \(\mathcal{M}(\mathbf{q},\mathbf{v}_{\text{el}}^{\perp})_{\boldsymbol{\ell}=\mathbf{0}}\) and \(\nabla_{\boldsymbol{\ell}}\mathcal{M}(\mathbf{q},\mathbf{v}_{\text{el}}^{\perp})_{\boldsymbol{\ell}=\mathbf{0}}\), in the non-relativistic expansion of the scattering amplitude \(\mathcal{M}\) in Eq. (7). From these coefficients, one can in turn obtain an explicit expression for the free-particle response function \(R_{\text{free}}\), as shown in App. A. In this work, we calculate these coefficients using effective theory methods. 
Specifically, we extract them from the non-relativistic effective theory of spin 0 and spin 1/2 DM-electron interactions [35], within which the scattering amplitude can be written as \[\mathcal{M}(\mathbf{q},\mathbf{v}_{\text{el}}^{\perp})=\sum_{i}c_{i}\;F_{\text{DM},i}(q)\left\langle\mathcal{O}_{i}\right\rangle. \tag{15}\] Here \(c_{i}\) is the dimensionless effective coupling corresponding to the \(i\)-th operator, \(\mathcal{O}_{i}\), in Tab. 1, angle brackets denote an expectation value between DM-electron spin states, and \(F_{\text{DM},i}(q)\) is the DM form factor that encapsulates the \(q\)-dependence of the amplitude not captured by the operator \(\mathcal{O}_{i}\) itself.1 Footnote 1: If the non-relativistic amplitude \(\mathcal{M}(q)\) contains a given operator \(\mathcal{O}_{i}\) within two terms with distinct \(q\)-dependencies, i.e. two different DM form factors, one can still use our formalism by replacing \(c_{i}F_{\text{DM},i}(q)\) with the sum \(c_{i}^{(1)}F_{\text{DM},i}^{(1)}(q)+c_{i}^{(2)}F_{\text{DM},i}^{(2)}(q)\) for that particular operator. Possible forms of the DM form factor include \[F_{\text{DM},i}(q)=\begin{cases}1&\text{for short-range interactions},\\ \left(\frac{q_{\text{ref}}}{q}\right)^{2}&\text{for long-range interactions},\\ \left(\frac{q_{\text{ref}}^{2}+m_{\phi}^{2}}{q^{2}+m_{\phi}^{2}}\right)&\text{for a massive mediator }\phi\,,\end{cases} \tag{16}\] where we introduced an arbitrary reference momentum transfer \(q_{\text{ref}}\). In the context of sub-GeV DM searches, \(q_{\text{ref}}\) is usually set to \(\alpha m_{e}\), where \(\alpha\) is the fine-structure constant, since this is the typical momentum of an electron in the outer atomic orbitals. Eq. (15) gives the most general expression for the non-relativistic amplitude for DM-electron scattering that is compatible with momentum conservation and Galilean invariance. The formalism we develop in this work, and in particular the free-particle response function used in the numerical calculations, relies on the expansion in Eq. (15). ### Benchmark particle physics models With Eq. (15), we have a general parametrization of the non-relativistic scattering amplitude that virtually any fundamental DM particle model can be mapped onto. In the numerical applications presented in Sec. IV, we focus on four benchmark models, each of them corresponding to a different linear combination of operators in Tab. 1. These models - briefly reviewed below - provide interesting examples of DM-electron interactions, and demonstrate both the generality of the effective theory expansion in Eq. (15) and how the mapping from fundamental to effective coupling constants works in practice. #### ii.1.1 Dark photon model Our first benchmark model has guided the direct search for sub-GeV DM particles over the past few years, and is referred to as the dark photon model. In this framework, the Standard Model (SM) Lagrangian is extended by a new \(U(1)\) gauge group with gauge coupling \(g_{D}\), and by a massive dark photon \(A^{\prime}\) [13; 44; 45; 46; 47; 48]. The DM-ordinary matter interaction portal opens via a kinetic mixing between ordinary and dark photons in the interaction Lagrangian, i.e. \(\varepsilon F_{\mu\nu}F^{\prime\mu\nu}\), where \(F_{\mu\nu}(F^{\prime\mu\nu})\) is the field strength tensor of the ordinary photon (massive dark photon). 
The Lagrangian of the dark sector in this model is given by \[\mathscr{L}_{D} =\overline{\chi}(i\gamma^{\mu}D_{\mu}-m_{\chi})\chi-\frac{1}{4}F^{\prime}_{\mu\nu}F^{\prime\mu\nu}\] \[+\frac{1}{2}m_{A^{\prime}}^{2}A^{\prime}_{\mu}A^{\prime\mu}-\frac{\varepsilon}{2}F_{\mu\nu}F^{\prime\mu\nu}\,, \tag{17}\] with the covariant derivative defined as \[D_{\mu}\chi=\partial_{\mu}\chi-ig_{D}A^{\prime}_{\mu}\chi\,, \tag{18}\] where \(g_{D}\) is the gauge coupling corresponding to the dark \(U(1)\) gauge group. In our general framework, the DM-electron scattering amplitude in the dark photon model can be mapped onto the operator \(\mathcal{O}_{1}\) if one relates the coupling constants \(g_{D}\) and \(\varepsilon\) to the effective coupling \(c_{1}\) as follows, \[c_{1}=\frac{4m_{\chi}m_{e}g_{D}\varepsilon e}{q_{\text{ref}}^{2}+m_{A^{\prime}}^{2}}\,,\] (19a) with \[F_{\text{DM},1}(q)=\frac{q_{\text{ref}}^{2}+m_{A^{\prime}}^{2}}{q^{2}+m_{A^{\prime}}^{2}}\,. \tag{19b}\] #### ii.1.2 Electric dipole interactions As another less trivial example that illustrates the wide applicability of our framework, we next consider the case of electric dipole DM-electron interactions induced by the interaction Lagrangian \[\mathscr{L}_{\text{int}}=\frac{g}{\Lambda}\,i\overline{\chi}\sigma^{\mu\nu}\gamma^{5}\chi\,F_{\mu\nu}\,. \tag{20}\] In [35], we showed that the scattering amplitude of this model can be mapped to the operator \(\mathcal{O}_{11}\) via \[c_{11}=\frac{16em_{\chi}m_{e}^{2}}{q_{\text{ref}}^{2}}\frac{g}{\Lambda}\quad\text{ with }F_{\text{DM},11}(q)=\left(\frac{q_{\text{ref}}}{q}\right)^{2}\,. \tag{21}\] #### ii.1.3 Magnetic dipole interactions Similarly, one can assume an interaction portal via magnetic dipole interactions between DM and electrons by the following interaction term in the Lagrangian, \[\mathscr{L}_{\text{int}}=\frac{g}{\Lambda}\,\overline{\chi}\sigma^{\mu\nu}\chi\,F_{\mu\nu}\,. \tag{22}\] As shown in [35], the corresponding scattering amplitude can be identified with a linear combination of four of the operators in Tab. 1, with non-zero effective couplings given by \[c_{1} =4em_{e}\frac{g}{\Lambda}\,, \text{with }F_{\text{DM},1}(q) =1\,, \tag{23a}\] \[c_{4} =16em_{\chi}\frac{g}{\Lambda}\,, \text{with }F_{\text{DM},4}(q) =1\,,\] (23b) \[c_{5} =\frac{16em_{e}^{2}m_{\chi}}{q_{\text{ref}}^{2}}\frac{g}{\Lambda}\,, \text{with }F_{\text{DM},5}(q) =\left(\frac{q_{\text{ref}}}{q}\right)^{2}\,,\] (23c) \[c_{6} =-\frac{16em_{e}^{2}m_{\chi}}{q_{\text{ref}}^{2}}\frac{g}{\Lambda}\,, \text{with }F_{\text{DM},6}(q) =\left(\frac{q_{\text{ref}}}{q}\right)^{2}\,. \tag{23d}\] As one can see, in this case the amplitude \(\mathcal{M}\) is a linear combination of "short-range" and "long-range" contributions. #### ii.1.4 Anapole interactions Finally, we also study anapole interactions, defined by the interaction Lagrangian \[\mathscr{L}_{\text{int}}=\frac{g}{2\Lambda^{2}}\,\overline{\chi}\gamma^{\mu}\gamma^{5}\chi\,\partial^{\nu}F_{\mu\nu}\,. 
\tag{24}\] \begin{table} \begin{tabular}{l l} \(\mathcal{O}_{1}=\mathds{1}_{\chi e}\) & \(\mathcal{O}_{9}=i\mathbf{S}_{\chi}\cdot\left(\mathbf{S}_{e}\times\frac{\mathbf{q}}{m_{e}}\right)\) \\ \(\mathcal{O}_{3}=i\mathbf{S}_{e}\cdot\left(\frac{\mathbf{q}}{m_{e}}\times\mathbf{v}_{\text{el}}^{\perp}\right)\) & \(\mathcal{O}_{10}=i\mathbf{S}_{e}\cdot\frac{\mathbf{q}}{m_{e}}\) \\ \(\mathcal{O}_{4}=\mathbf{S}_{\chi}\cdot\mathbf{S}_{e}\) & \(\mathcal{O}_{11}=i\mathbf{S}_{\chi}\cdot\frac{\mathbf{q}}{m_{e}}\) \\ \(\mathcal{O}_{5}=i\mathbf{S}_{\chi}\cdot\left(\frac{\mathbf{q}}{m_{e}}\times\mathbf{v}_{\text{el}}^{\perp}\right)\) & \(\mathcal{O}_{12}=\mathbf{S}_{\chi}\cdot\left(\mathbf{S}_{e}\times\mathbf{v}_{\text{el}}^{\perp}\right)\) \\ \(\mathcal{O}_{6}=\left(\mathbf{S}_{\chi}\cdot\frac{\mathbf{q}}{m_{e}}\right)\left(\mathbf{S}_{e}\cdot\frac{\mathbf{q}}{m_{e}}\right)\) & \(\mathcal{O}_{13}=i\left(\mathbf{S}_{\chi}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)\left(\mathbf{S}_{e}\cdot\frac{\mathbf{q}}{m_{e}}\right)\) \\ \(\mathcal{O}_{7}=\mathbf{S}_{e}\cdot\mathbf{v}_{\text{el}}^{\perp}\) & \(\mathcal{O}_{14}=i\left(\mathbf{S}_{\chi}\cdot\frac{\mathbf{q}}{m_{e}}\right)\left(\mathbf{S}_{e}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)\) \\ \(\mathcal{O}_{8}=\mathbf{S}_{\chi}\cdot\mathbf{v}_{\text{el}}^{\perp}\) & \(\mathcal{O}_{15}=i\mathcal{O}_{11}\left[\left(\mathbf{S}_{e}\times\mathbf{v}_{\text{el}}^{\perp}\right)\cdot\frac{\mathbf{q}}{m_{e}}\right]\) \\ \end{tabular} \end{table} Table 1: Interaction operators defining the non-relativistic effective theory of spin 0 and 1/2 DM-electron interactions [35]. \(\mathbf{S}_{e}\) (\(\mathbf{S}_{\chi}\)) is the electron (DM) spin, \(\mathbf{v}_{\text{el}}^{\perp}=\mathbf{v}-\boldsymbol{\ell}/m_{e}-\mathbf{q}/(2\mu_{\chi e})\) is the transverse relative velocity, where \(\mu_{\chi e}\) is the DM-electron reduced mass, and \(\mathds{1}_{\chi e}\) is the identity in the DM-electron spin space. In the case of elastic scattering, \(\mathbf{v}_{\text{el}}^{\perp}\cdot\mathbf{q}=0\), which explains the notation. Just as before, we compare the scattering amplitude with the effective operators and find a correspondence to \(\mathcal{O}_{8}\) and \(\mathcal{O}_{9}\) with the effective couplings \[c_{8} =8em_{e}m_{\chi}\frac{g}{\Lambda^{2}}\,, \text{with }F_{\text{DM},8}(q) =1\,, \tag{25a}\] \[c_{9} =-8em_{e}m_{\chi}\frac{g}{\Lambda^{2}}\,, \text{with }F_{\text{DM},9}(q) =1\,. \tag{25b}\] ### Application to periodic systems We continue the development of our formalism by specifying the wave function for the initial state electrons, which we assume to be bound within a crystal consisting of periodically repeating atoms. In this case, the electron eigenfunction is characterized by an energy band, \(i\), and a lattice momentum, \(\mathbf{k}\), \[\psi_{1}(\mathbf{x})\to\psi_{i\mathbf{k}}(\mathbf{x})\,, \tag{26}\] and has the form of a Bloch state, in which \[\psi_{i\mathbf{k}}(\mathbf{x}+\mathbf{a})=e^{i\mathbf{a}\cdot\mathbf{k}}\,\psi_{i\mathbf{k}}(\mathbf{x})\,, \tag{27}\] with \(\mathbf{a}\) being an arbitrary lattice vector. We refer to App. 3 for further details on Bloch's theorem and Bloch states. In this work, we consider two different bases to form the Bloch states of the initial state electrons. For our tight-binding approximation calculations (Sec. II.4.1) we build the wavefunctions from linear combinations of atomic orbitals, and for our DFT calculations (Sec. II.4.2) we use linear combinations of plane waves. 
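Before turning to these basis expansions, the benchmark-model mappings of Eqs. (19), (21), (23) and (25), together with the form factors of Eq. (16), can be summarized in executable form. Below is a minimal Python sketch, not the Darphene or QEdark-EFT implementation; the function names are our own, and we work in natural units with energies in eV and \(e=\sqrt{4\pi\alpha}\):

```python
# Sketch of the DM form factors, Eq. (16), and the model-to-effective-coupling
# dictionaries of Eqs. (19), (21), (23) and (25). Each function returns
# {operator index: (c_i, form-factor tag)}.
import numpy as np

ALPHA = 1.0 / 137.036
E_CHG = np.sqrt(4.0 * np.pi * ALPHA)   # electron charge, natural units
ME = 0.511e6                           # electron mass in eV
QREF = ALPHA * ME                      # reference momentum q_ref = alpha m_e

def F_DM(q, kind, m_phi=None):
    """DM form factors of Eq. (16)."""
    if kind == "short":
        return 1.0
    if kind == "long":
        return (QREF / q) ** 2
    if kind == "mediator":
        return (QREF**2 + m_phi**2) / (q**2 + m_phi**2)
    raise ValueError(kind)

def dark_photon(m_chi, g_D, eps, m_Ap):
    """Eq. (19): kinetic-mixing dark photon of mass m_Ap."""
    return {1: (4 * m_chi * ME * g_D * eps * E_CHG / (QREF**2 + m_Ap**2), "mediator")}

def electric_dipole(m_chi, g_over_L):
    """Eq. (21): electric dipole interaction, coupling g/Lambda."""
    return {11: (16 * E_CHG * m_chi * ME**2 / QREF**2 * g_over_L, "long")}

def magnetic_dipole(m_chi, g_over_L):
    """Eqs. (23a)-(23d): a mix of short- and long-range pieces."""
    return {1: (4 * E_CHG * ME * g_over_L, "short"),
            4: (16 * E_CHG * m_chi * g_over_L, "short"),
            5: (16 * E_CHG * ME**2 * m_chi / QREF**2 * g_over_L, "long"),
            6: (-16 * E_CHG * ME**2 * m_chi / QREF**2 * g_over_L, "long")}

def anapole(m_chi, g_over_L2):
    """Eqs. (25a)-(25b): anapole interaction, coupling g/Lambda^2."""
    c = 8 * E_CHG * ME * m_chi * g_over_L2
    return {8: (c, "short"), 9: (-c, "short")}

# Example: a 100 MeV DM particle with a 30 MeV dark photon mediator
print(dark_photon(m_chi=100e6, g_D=0.1, eps=1e-4, m_Ap=30e6))
```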
Assuming a Bloch state of crystal momentum \(\mathbf{k}\) for the initial electron wave function and a plane wave of linear momentum \(\mathbf{k}^{\prime}\) for the final state electron, we can combine Eq. (13) with Eq. (1) to calculate the transition rate \(R_{i\mathbf{k}\to\mathbf{k}^{\prime}}\), that is the rate of electron ejections by DM scattering when the initial electron is in energy band \(i\) with crystal momentum \(\mathbf{k}\) and the outgoing electron has linear momentum \(\mathbf{k}^{\prime}\). The energy difference in Eq. (1) now reads \[E_{f}-E_{i}=\frac{k^{\prime 2}}{2m_{e}}+\frac{q^{2}}{2m_{\chi}}-\mathbf{v} \cdot\mathbf{q}+\Phi-E_{i}(\mathbf{k})\,, \tag{28}\] where \(E_{i}(\mathbf{k})\) is the energy of an electron in energy band \(i\) with wavevector \(\mathbf{k}\), which is negative for bound electrons. \(\Phi\) is the (positive) work function, which corresponds to the energy difference between the highest occupied electronic state and the zero-energy unbound free-electron plane-wave state; in this work we take the measured value for graphene of \(\Phi=4.3\,\mathrm{eV}\)[25, 43]. Adding the contributions from all occupied initial states yields \[R_{\text{any}\to\mathbf{k}^{\prime}}\equiv 2\sum_{i}\int_{\text{BZ}}\frac{N_{ \text{cell}}V_{\text{cell}}\text{d}^{3}\mathbf{k}}{(2\pi)^{3}}R_{i\mathbf{k} \to\mathbf{k}^{\prime}}\,, \tag{29}\] where the factor of 2 accounts for the double occupation of each electronic state due to spin degeneracy. We then sum the contributions from all final plane-wave states to obtain the total rate of electron ejections by DM scattering, \[R\equiv\int\frac{V\text{d}^{3}\mathbf{k}^{\prime}}{(2\pi)^{3}}R_{\text{any} \to\mathbf{k}^{\prime}}\,. \tag{30}\] Finally, we express the total ejection rate, \(R\), as follows \[R=\frac{n_{\chi}N_{\text{cell}}}{32\pi^{2}m_{\chi}^{2}m_{e}^{2}}\int\text{d}^{ 3}\mathbf{k}^{\prime}\int\text{d}\,E_{e}\int\text{d}^{3}\mathbf{q}\int\text{d }^{3}\mathbf{v}\,f_{\chi}(\mathbf{v})\delta\left(\Delta E_{e}+\frac{q^{2}}{2m _{\chi}}-\mathbf{v}\cdot\mathbf{q}\right)R_{\text{free}}(\mathbf{k}^{\prime}, \mathbf{q},\mathbf{v})\;W(\mathbf{k}^{\prime}-\mathbf{q},E_{e})\,, \tag{31}\] where we define the electron's energy change as \(\Delta E_{e}\equiv\frac{k^{\prime 2}}{2m_{e}}+\Phi-E_{e}\), and introduce the material-specific response function \[W\left(\mathbf{\ell},E_{e}\right) =\frac{V_{\text{cell}}}{(2\pi)^{3}}\sum_{i}\int_{\text{BZ}}\frac {\text{d}^{3}\mathbf{k}}{(2\pi)^{3}}\delta\left(E_{e}-E_{i}(\mathbf{k})\right)\] \[\times\left|\widetilde{\psi}_{i\mathbf{k}}(\mathbf{\ell})\right|^{2}\,. \tag{32}\] As implied by the above \(\delta\)-function, \(E_{e}\) takes the value of the initial state electron energy where \(E_{e}=0\,\mathrm{eV}\) for the highest occupied state and \(E_{e}<0\,\mathrm{eV}\) for other bound states. Eq. (31) shows that both free-electron physics and material properties contribute to the total rate of electron ejections by DM scattering in a factorisable way, and that this factorisation involves a single material response function. The response function is normalized to \[\int\text{d}E_{e}\text{d}^{3}\mathbf{\ell}\,W(\mathbf{\ell},E_{e})=N_{\text{bands}}\,, \tag{33}\] where \(N_{\text{bands}}\) is the number of occupied initial-state electronic bands and we use the normalization of \(\widetilde{\psi}_{i\mathbf{k}}(\mathbf{\ell})\) (see Eq. (23)) and \(V_{\text{cell}}V_{\text{1BZ}}=(2\pi)^{3}\). 
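As a brief aside on how the energy-conserving \(\delta\)-function in Eq. (31) is handled numerically: integrating it over the angle between \(\mathbf{v}\) and \(\mathbf{q}\) fixes \(\cos\theta\) and produces a Jacobian \(1/(vq)\), from which the standard minimum DM speed \(v_{\rm min}=\Delta E_{e}/q+q/(2m_{\chi})\) follows. A small illustrative sketch of these kinematic constraints (our own notation; natural units with speeds in units of \(c\)):

```python
# Kinematics enforced by delta(dE + q^2/(2 m_chi) - v.q) in Eq. (31):
# the delta fixes the angle between v and q, with Jacobian 1/(v q), and a
# solution exists only above the minimum DM speed v_min = dE/q + q/(2 m_chi).
def v_min(dE, q, m_chi):
    """Minimum DM speed able to transfer momentum q and deposit energy dE."""
    return dE / q + q / (2.0 * m_chi)

def cos_theta_star(dE, q, m_chi, v):
    """cos(theta) between v and q fixed by the delta (None if forbidden)."""
    c = (dE + q**2 / (2.0 * m_chi)) / (v * q)
    return c if abs(c) <= 1.0 else None

# Example: m_chi = 10 MeV, q = 5 keV, dE = 6 eV, v = 1e-3 c (~300 km/s)
m_chi, q, dE, v = 10e6, 5e3, 6.0, 1e-3
print(v_min(dE, q, m_chi))              # ~1.45e-3: this (q, dE) needs v > 1e-3
print(cos_theta_star(dE, q, m_chi, v))  # None: kinematically forbidden here
```

In the Monte Carlo evaluation mentioned in Sec. IV, such forbidden configurations simply contribute zero weight.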
In the next two subsections we express the response function \(W\) by writing \(\psi_{i\mathbf{k}}\) as a linear combination of atomic orbitals and plane waves. Atomic orbitals are often employed within the tight-binding approximation (see Sec. III.1), whereas plane waves are a standard basis in DFT electronic structure calculations (see Sec. III.2). #### iii.1.1 Atomic orbital basis Let us now derive a compact expression for the response function \(W\) by expanding the initial state electron wave function, \(\psi_{i\mathbf{k}}(\mathbf{x})\), in a basis of atomic orbitals, \[\psi_{i\mathbf{k}}(\mathbf{x})=\mathcal{N}_{\mathbf{k}}\sum_{j=1}^{n}C_{ij}(\mathbf{k})\Phi_{j\mathbf{k}}(\mathbf{x})\,, \tag{34a}\] where \(j\) runs over the \(n\) atomic orbitals present in each unit cell, and \(\Phi_{j\mathbf{k}}(\mathbf{x})\) are the Bloch states corresponding to each atomic orbital, \[\Phi_{j\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{r=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{jr}}\varphi_{j}(\mathbf{x}-\mathbf{R}_{jr})\,. \tag{34b}\] Here \(\varphi_{j}\) is an atomic wave function on an atom at position \(\mathbf{R}_{jr}\), \(N_{\mathrm{cell}}\) is the number of unit cells and \(\mathcal{N}_{\mathbf{k}}\) is a normalisation constant defined in App. C.1. Within the tight-binding approximation (introduced below in Sec. III.1), the coefficients \(C_{ij}\) in Eq. (34) are computed by solving the secular equation (Eq. 12) with band energies extracted from measurements, or calculated using a more sophisticated technique such as DFT. To evaluate the material response function defined in Eq. (32), \(W(\boldsymbol{\ell},E_{e})\), we need to calculate the square of the Fourier transform of \(\psi_{i\mathbf{k}}(\mathbf{x})\). Denoting by \(\widetilde{\varphi}_{j}\) the Fourier transform of \(\varphi_{j}\), for the Fourier transform of \(\psi_{i\mathbf{k}}\) we find \[\widetilde{\psi}_{i\mathbf{k}}(\boldsymbol{\ell})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{N_{\mathrm{cell}}}}\sum_{j=1}^{n}C_{ij}(\mathbf{k})\widetilde{\varphi}_{j}(\boldsymbol{\ell})\sum_{r=1}^{N_{\mathrm{cell}}}e^{i(\mathbf{k}-\boldsymbol{\ell})\cdot\mathbf{R}_{jr}}\,. \tag{35}\] The lattice sum can be evaluated by using Eq. (13) of [49], \[\sum_{r=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{jr}}=\frac{1}{V_{\mathrm{cell}}}\sum_{\mathbf{G}}(2\pi)^{3}\delta^{(3)}\left(\mathbf{k}+\mathbf{G}\right)\,e^{i\mathbf{k}\cdot\boldsymbol{\delta}_{j}}\,, \tag{36}\] where \(\boldsymbol{\delta}_{j}\) is the location of the atom hosting the \(j\)-th orbital in the unit cell that contains the origin of coordinates. For a given \(j\), \(\boldsymbol{\delta}_{j}=0\) if there exists an \(\overline{r}\in\{1,\ldots,N_{\mathrm{cell}}\}\) such that \(\mathbf{R}_{j\overline{r}}=0\), and \(\boldsymbol{\delta}_{j}\neq 0\) otherwise. In Eq. (36), the sum runs over the reciprocal lattice vectors \(\mathbf{G}\). For each \(j\) and \(r\), they satisfy \(\mathbf{G}\cdot(\mathbf{R}_{jr}-\boldsymbol{\delta}_{j})=2\pi m\), where \(m\in\mathds{Z}\). Another useful identity to evaluate the squared modulus of \(\widetilde{\psi}_{i\mathbf{k}}\) is \[(2\pi)^{3}\Big{|}\sum_{\mathbf{G}}\delta^{(3)}\left(\mathbf{k}+\mathbf{G}\right)\Big{|}^{2}=N_{\mathrm{cell}}V_{\mathrm{cell}}\sum_{\mathbf{G}}\delta^{(3)}\left(\mathbf{k}+\mathbf{G}\right)\,, \tag{37}\] where \(N_{\mathrm{cell}}V_{\mathrm{cell}}=(2\pi)^{3}\delta^{(3)}(0)\). Using Eqs. 
(36) and (35), we obtain \[|\widetilde{\psi}_{i\mathbf{k}}(\mathbf{\ell})|^{2} =\frac{\mathcal{N}_{\mathbf{k}}^{2}}{V_{\mathrm{cell}}}\Big{|}\sum _{j=1}^{n}C_{ij}(\mathbf{k})\widetilde{\varphi}_{j}(\mathbf{\ell})e^{i(\mathbf{k}- \mathbf{\ell})\cdot\mathbf{\delta}_{j}}\Big{|}^{2}\] \[\times\sum_{\mathbf{G}}(2\pi)^{3}\delta^{(3)}\left(\mathbf{k}- \mathbf{\ell}+\mathbf{G}\right)\,, \tag{38}\] and by combining Eq. (38) with Eq. (32), we obtain the following expression for the response function \(W\): \[W\left(\mathbf{\ell},E_{e}\right)=\frac{\mathcal{N}_{\mathbf{k}}^{2}}{(2\pi)^{3}} \sum_{i}\Big{|}\sum_{j=1}^{n}C_{ij}(\mathbf{k})\widetilde{\varphi}_{j}(\mathbf{ \ell})e^{-i\mathbf{G}^{*}\cdot\mathbf{\delta}_{j}}\Big{|}^{2}\delta\left(E_{e}-E_ {i}(\mathbf{k})\right)\Big{|}_{\mathbf{k}=\mathbf{\ell}-\mathbf{G}^{*}}\,. \tag{39}\] Here, we performed the integral over the lattice momentum \(\mathbf{k}\), such that the \(\delta\) function fixes it to \(\mathbf{k}=\mathbf{\ell}-\mathbf{G}^{*}\), where \(\mathbf{G}^{*}\) is the unique reciprocal lattice vector that ensures that \(\mathbf{k}\) lies within the first Brillouin zone. The other terms of the sum over the vectors \(\mathbf{G}\) do not contribute. To evaluate Eq. (39), one needs to specify the wave functions \(\varphi_{j}\), which we will do in Sec. III.1. #### iii.1.2 Plane wave basis Let us now express the response function \(W\) by using plane waves to write the electron wave function \(\psi_{i\mathbf{k}}\left(\mathbf{x}\right)\) as \[\psi_{i\mathbf{k}}\left(\mathbf{x}\right)= \frac{1}{\sqrt{V}}\sum_{\mathbf{G}}u_{i}\left(\mathbf{k}+\mathbf{G }\right)e^{i(\mathbf{k}+\mathbf{G})\cdot\mathbf{x}}\,, \tag{40}\] where \(\sum_{\mathbf{G}}|u_{i}\left(\mathbf{k}+\mathbf{G}\right)|^{2}=1\) for all \(i\) and \(\mathbf{k}\). From Eq. (40), we find \[\widetilde{\psi}_{i\mathbf{k}}(\mathbf{\ell})=\frac{(2\pi)^{3}}{\sqrt{V}}\sum_{ \mathbf{G}}u_{i}(\mathbf{k}+\mathbf{G})\delta^{(3)}(\mathbf{k}+\mathbf{G}-\mathbf{ \ell})\,, \tag{41}\] and \[\left|\widetilde{\psi}_{i\mathbf{k}}(\mathbf{\ell})\right|^{2}= (2\pi)^{3}\sum_{\mathbf{G}}|u_{i}(\mathbf{k}+\mathbf{G})|^{2} \delta^{(3)}(\mathbf{k}+\mathbf{G}-\mathbf{\ell})\,, \tag{42}\] where we used \(V=(2\pi)^{3}\delta^{(3)}(0)\). This result can be directly inserted into Eq. (32), which leads to the response function \[W\left(\mathbf{\ell},E_{e}\right)=V_{\mathrm{cell}}\sum_{i}\int_{\mathrm{BZ}}\frac{ \mathrm{d}^{3}\mathbf{k}}{(2\pi)^{3}}\delta\left(E_{e}-E_{i}(\mathbf{k})\right)\] \[\times\sum_{\mathbf{G}}|u_{i}(\mathbf{k}+\mathbf{G})|^{2}\delta^{(3)}( \mathbf{k}+\mathbf{G}-\boldsymbol{\ell})\,. \tag{43}\] In Sec. III.2.3, we extract the \(u_{i}(\mathbf{k}+\mathbf{G})\) coefficients and the band structure \(E_{i}(\mathbf{k})\) from state-of-the-art DFT calculations. ## III Electronic structure calculations for electron ejections in graphene detectors The equations derived in the previous section refer to three-dimensional periodic systems. We are now interested in applying them to the specific case of graphene, which is a single-layer material that is periodic in two dimensions. 
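Before specializing to two dimensions, a quick numerical sanity check of the plane-wave Bloch construction of Eq. (40) may be instructive: one can verify the Bloch property of Eq. (27) directly on a toy one-dimensional state built from made-up coefficients (illustration only; the lattice constant is the graphene value quoted in Sec. III.2.3):

```python
# Toy verification of Bloch's theorem, Eq. (27), for a 1D wavefunction built
# from the plane-wave expansion of Eq. (40) with random coefficients u(k+G).
import numpy as np

a = 2.46                         # lattice constant (angstrom)
b = 2 * np.pi / a                # reciprocal lattice constant
k = 0.3 * b                      # a crystal momentum in the first Brillouin zone
G = b * np.arange(-3, 4)         # a few reciprocal lattice vectors
rng = np.random.default_rng(1)
u = rng.normal(size=G.size) + 1j * rng.normal(size=G.size)
u /= np.linalg.norm(u)           # enforce sum_G |u(k+G)|^2 = 1, as in Eq. (40)

def psi(x):
    """Bloch state psi_k(x) = sum_G u(k+G) exp(i (k+G) x), volume factor dropped."""
    return np.sum(u * np.exp(1j * (k + G) * x))

x = 0.7
print(np.isclose(psi(x + a), np.exp(1j * k * a) * psi(x)))   # True
```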
The "dimensional reduction" can be performed straightforwardly by means of the replacement specified below, \[V_{\rm cell}\int_{\rm BZ}\frac{\rm d^{3}\mathbf{k}}{(2\pi)^{3}} \longrightarrow A_{\rm cell}\int_{\rm BZ}\frac{\rm d^{2}\mathbf{k}}{(2\pi)^{2 }}\,, \tag{44}\] where \(A_{\rm cell}\) is the two-dimensional unit cell of graphene, while \(\mathbf{k}\) in the right-(left-)hand-side is a two-dimensional (three-dimensional) lattice vector in the first Brillouin zone. With a general formalism for electron ejections by DM scattering in graphene detectors in place, we can now focus on the evaluation of the predicted electron ejection rate. This crucially depends on the response function \(W\), which is in turn a function of the initial state electron wave functions. As a result, numerical evaluation of the predicted electron ejection rate requires detailed electronic structure calculations for graphene. In the following, we perform such electronic structure calculations using two methods: the tight binding approximation, and DFT. From this analysis, DFT will emerge as our recommended framework for electronic structure calculations for DM-electron scattering in graphene-based DM detectors. ### Tight binding To obtain the graphene response function in the tight binding (TB) approximation, we need to evaluate Eq. (39). The missing ingredient at this point are the coefficients \(C_{ij}(\mathbf{k})\) that yield the contribution of atomic orbital \(j\) in band \(i\) to the response function. We separate the \(\pi\)- and \(\sigma\)-electrons and write the response function as \[W(\boldsymbol{\ell},E_{e})=W_{\pi}(\boldsymbol{\ell},E_{e})+\sum_{i=1}^{3}W_{ \sigma_{i}}(\boldsymbol{\ell},E_{e})\,. \tag{45}\] In the TB approximation, the coefficients are found as the eigenvectors of the generalized eigenvalue problem in Eq. (11). For a detailed review of the TB approximation in general and for the specific case of graphene, we refer to App. 1 and 2 respectively. #### iii.1.1 \(\pi\)-electrons In case of the \(\pi\)-electrons in graphene, this eigenvalue problem can be solved analytically, as described in App. 2. Therein, the full wavefunction of the \(\pi\)-electrons are derived in position and momentum space, which can be found in Eq. (12) and (13). The eigenvalues \(E_{\pi}(\mathbf{k})\) and eigenvectors \(\mathbf{C}_{\pi}\), required for the response function, are given by Eqs. (12) and (13) respectively. Therefore, the \(\pi\)-electron contribution to the response function can be written out explicitly. It can be shown to simplify to \[W_{\pi}(\boldsymbol{\ell},E_{e}) =\frac{\mathcal{N}_{\mathbf{k}}^{2}}{(2\pi)^{3}}\delta(E_{e}-E_{ \pi}(\mathbf{k}))|\widetilde{\varphi}_{2p_{s}}(\boldsymbol{\ell})|^{2}\] \[\times(1+\cos(\varphi_{\mathbf{k}}-\delta\cdot\mathbf{G}^{*})) \bigg{|}_{\mathbf{k}=\boldsymbol{\ell}-\mathbf{G}^{*}}\,. \tag{46}\] Here, the phase \(\varphi_{\mathbf{k}}\) and the normalization constant \(\mathcal{N}_{\mathbf{k}}\) are given by Eqs. (13) and (14) respectively. #### iii.1.2 \(\sigma\)-electrons While the procedure for the \(\sigma\)-electrons is conceptionally identical, the fact that their wavefunctions involve combinations of three atomic orbitals at two atomic sites means that the generalized eigenvalue problem of Eq. (11) involves \(6\times 6\) matrices, which can no longer be solved analytically. Instead, we rely on numerical procedures where we use the Eigen library [50]. 
The six-dimensional matrices involved, \(\boldsymbol{\mathcal{H}}\) and \(\boldsymbol{\mathcal{S}}\), are listed in Eq. (13) of App. 2. Using the numerical procedures of the Eigen library, we obtain the eigenvalues or band energies \(E_{\sigma_{i}}(\mathbf{k})\) as well as the eigenvectors \(\mathbf{C}_{\sigma_{i}}\).2 Finally, this allows us to evaluate the \(\sigma\)-contribution to the response function, Footnote 2: This is a good time to point out the dependence of the normalization constant \(\mathcal{N}_{\mathbf{k}}\) on the norm of the eigenvectors \(\mathbf{C}\), which is ambiguous. The numerical eigenvalue routine of Eigen that we use for the \(\sigma\)-electrons (namely GeneralizedSelfAdjointEigenSolver) solves the problem \(A\mathbf{x}=\lambda B\mathbf{x}\) such that \(\mathbf{x}^{*}B\mathbf{x}=1\). In that case, \(\mathcal{N}_{\mathbf{k}}\) is trivially equal to one, as can be seen from Eq. (13). However, for the analytic solution of the \(\pi\)-electrons, we had chosen normalized eigenvectors. In that case, \(\mathcal{N}_{\mathbf{k}}\neq 1\), but is instead given by Eq. (13). \[W_{\sigma_{i}}(\boldsymbol{\ell},E_{e}) =\frac{\mathcal{N}_{\mathbf{k}}^{2}}{(2\pi)^{3}}\delta(E_{e}-E_{\sigma_{i}}(\mathbf{k}))\] \[\times\Big{|}\widetilde{\varphi}_{2s}(\boldsymbol{\ell})\left(C_{\sigma_{i}1}+C_{\sigma_{i}4}e^{-i\boldsymbol{\delta}\cdot\mathbf{G}^{*}}\right)\] \[+\widetilde{\varphi}_{2p_{x}}(\boldsymbol{\ell})\left(C_{\sigma_{i}2}+C_{\sigma_{i}5}e^{-i\boldsymbol{\delta}\cdot\mathbf{G}^{*}}\right)\] \[+\widetilde{\varphi}_{2p_{y}}(\boldsymbol{\ell})\left(C_{\sigma_{i}3}+C_{\sigma_{i}6}e^{-i\boldsymbol{\delta}\cdot\mathbf{G}^{*}}\right)\Big{|}^{2}\,\bigg{|}_{\mathbf{k}=\boldsymbol{\ell}-\mathbf{G}^{*}}\,. \tag{47}\] By adding up the four distributions given by Eqs. (46) and (47), we obtain the final TB estimate of the graphene response function. Finally, we point out that our TB treatment of the electrons in graphene differs from a previous treatment by Hochberg et al. [25], in particular with regard to the Bloch wavefunctions given by Eq. (34). A second crucial difference is our choice for the atomic wavefunctions, which we discuss next. We present a detailed comparison in App. E. #### iii.1.3 Atomic wavefunctions In order to evaluate the TB estimates of the graphene response function given in Eqs. (46) and (47), we need to specify the atomic orbitals \(\varphi_{i}(\mathbf{x})\) (or rather their Fourier transforms \(\widetilde{\varphi}_{i}(\boldsymbol{\ell})\)) for the electrons in carbon. We expand the atomic wavefunction into a radial and an angular component, \(\varphi_{nlm}(\mathbf{x})=R_{nl}(r)Y_{l}^{m}(\hat{\mathbf{x}})\), where \(r=|\mathbf{x}|\) and \(Y_{l}^{m}(\hat{\mathbf{x}})\) are spherical harmonics. Following [51], we describe the radial part of the atomic orbitals of carbon electrons as linear combinations of Slater-type orbitals (STOs), \[R_{nl}(r)=\sum_{j}C_{nlj}R_{\text{STO}}(r,Z_{lj},n_{lj})\,, \tag{48a}\] with \[R_{\text{STO}}(r,Z,n)\equiv a_{0}^{-3/2}\frac{(2Z)^{n+1/2}}{\sqrt{(2n)!}}\left(\frac{r}{a_{0}}\right)^{n-1}e^{-\frac{Zr}{a_{0}}}\,. \tag{48b}\] We include a more detailed description of these wavefunctions, including the values of the different coefficients and the expressions for the Fourier transforms, in App. D.2. This choice differs from the previous approach by Hochberg et al. [25], who use re-scaled hydrogenic wavefunctions to approximate the electron wavefunctions in carbon atoms. We comment on this in App. E. 
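The STO building block of Eq. (48b) is straightforward to implement and check numerically. A short sketch with \(a_{0}=1\) (lengths in Bohr radii); the expansion coefficients and exponents of Eq. (48a) are the tabulated values of App. D.2 and are not reproduced here, so the `coeffs` argument below is left generic:

```python
# Slater-type orbitals of Eq. (48b) and the expansion of Eq. (48a).
import numpy as np
from math import factorial

def R_STO(r, Z, n, a0=1.0):
    """Radial Slater-type orbital, normalised so that int r^2 R^2 dr = 1."""
    norm = a0**(-1.5) * (2.0 * Z) ** (n + 0.5) / np.sqrt(factorial(2 * n))
    return norm * (r / a0) ** (n - 1) * np.exp(-Z * r / a0)

def R_nl(r, coeffs):
    """R_nl(r) = sum_j C_nlj R_STO(r, Z_lj, n_lj), Eq. (48a).
    `coeffs` is a list of (C, Z, n) tuples taken from tabulated values."""
    return sum(C * R_STO(r, Z, n) for C, Z, n in coeffs)

# Normalisation check of a single STO (Z and n values are arbitrary here)
r = np.linspace(1e-6, 40.0, 200001)
print(np.trapz(r**2 * R_STO(r, Z=1.57, n=2) ** 2, r))  # ~1.0
```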
#### iii.1.4 Limitations As described in greater detail in App. C, we can reproduce the measured band structure of graphene by adjusting the overlap and transfer parameters of the TB approximation. However, the overlap parameter in particular, e.g. the parameter \(s\) in Eq. (107), is given by the overlap integrals of atomic wavefunctions at neighboring atomic sites. For a given choice of atomic wavefunctions, it is therefore possible to compute \(s\) independently of the band structures. This gives rise to the issue of self-consistency of this approach. In [25], the authors use hydrogenic wavefunctions to approximate the atomic orbitals of carbon. As described in detail in App. D.1, they ensure consistency between the two independent values of the overlap parameters by re-scaling the effective charge factor \(Z_{\text{eff}}\). However, the resulting wavefunctions do not resemble the atomic wavefunctions of carbon, as we describe in App. D, which is why we chose to use Roothaan-Hartree-Fock (RHF) wavefunctions instead. While it is possible to perform a similar re-scaling of the RHF wavefunctions to establish consistency with the overlap parameters listed in Tab. 2, this generally modifies the wavefunctions to the extent that they no longer describe electrons in carbon atoms. It therefore appears to be a limitation of the TB approximation that the use of realistic atomic wavefunctions cannot be reconciled with the overlap parameters that reproduce the material's band structure in a fully self-consistent manner. This issue is not a characteristic of our specific treatment but rather a general feature of the TB approximation itself, reflecting the phenomenological nature of this approximation. ### Density Functional Theory Having carefully described the features and limitations of the tight-binding approach, we now report on our DFT electronic structure calculations. We start with a brief review of the main assumptions underlying DFT in Sec. III.2.1. In Sec. III.2.2, we provide a general argument supporting DFT as a framework for electronic structure calculations in the case of DM-induced electron ejections by graphene targets. In Sec. III.2.3, we describe the details of our specific DFT implementation in a modified version of the QEdark-EFT code. #### iii.2.1 Assumptions Density functional theory (DFT) [52; 53] (for a review see [54]) is a widely used method for calculating the ground-state electronic properties of materials. It allows for explicit treatment of the chemistry and crystal structure without the need for empirically determined input parameters, and provides well-tested and computationally affordable approximations for the many-body electron-electron interactions. In addition, it has been implemented in numerous publicly or commercially available computer codes that are convenient to use. The theory is based on the Hohenberg-Kohn theorem [52], which states that, for electrons in an external potential (in this case provided by the charged atomic nuclei), the total energy is a unique functional of the electron density, with the ground-state density being the one that minimizes the value of this functional. The electronic ground state charge density can therefore be obtained variationally. In practice, the charge density, \(n_{e}(\mathbf{x})\), is written as a sum over the so-called Kohn-Sham wave functions, \(\psi_{i}(\mathbf{x})\) [53], of a fictitious auxiliary system in which the electrons are non-interacting. 
This mapping enables convenient computational solution of the many-body Schrödinger equation at the expense of an inexact description of the quantum mechanical exchange and correlation terms. These have been obtained numerically using Quantum Monte Carlo for the homogeneous interacting electron gas [55], and a number of well-tested approximations exist that are appropriate for different material systems. An additional widely used and well-established approximation divides the electrons into valence electrons, which are treated explicitly within the DFT calculation, and low-energy core electrons, which are combined with the nuclei in the external potential. This pseudopotential approximation drastically reduces the computational expense and is chemically well founded, since the core electrons are not involved in chemical bonding and are only minimally modified in the solid. The choice of pseudopotential, and the numerical and implementational details of the DFT calculation for graphene performed here, are discussed further in Sec. III.2.3. We note that, while the Kohn-Sham wave functions and energies do not formally correspond to true single-electron wave functions and energies (except for the highest occupied level, which provides the ionization energy), in practice their dispersion is usually in remarkable agreement with measured photoelectron spectra. The Kohn-Sham band structure is therefore often treated as a proxy for an effective single-particle band structure in a periodic solid. Since the Hohenberg-Kohn theorem describes only the ground-state electron density, however, this is particularly ill-founded for unoccupied conduction-band states. The methodology and results we present here, however, do not rely on any physical interpretation of the Kohn-Sham wavefunctions, since our final electron states are unbound plane-wave states, and, as we show next, our response function is primarily determined by the ground-state charge density rather than the ground-state wavefunctions. #### iii.2.2 Motivations An important observation we can draw from our general electron ejection rate formula is that the graphene response function \(W\) is directly related to the ground state electron momentum density, \(\rho_{e}\), defined as the Fourier transform of the electron charge density. Indeed, an explicit second quantisation calculation allows us to write \(\rho_{e}\) as \[\rho_{e}(\boldsymbol{\ell}) = \sum_{ii^{\prime}}\sum_{\mathbf{G}}\int_{\mathrm{BZ}}\frac{\mathrm{d}^{2}\mathbf{k}}{(2\pi)^{2}}\,n_{ii^{\prime}}(\mathbf{k})\,u_{i}^{*}(\mathbf{k}+\mathbf{G})u_{i^{\prime}}(\mathbf{k}+\mathbf{G}) \tag{49}\] \[\times (2\pi)^{2}\delta^{(2)}(\mathbf{k}+\mathbf{G}-\boldsymbol{\ell})\,,\] where \[n_{ii^{\prime}}(\mathbf{k})=\langle a_{i\mathbf{k}}^{\dagger}a_{i^{\prime}\mathbf{k}}\rangle \tag{50}\] is the mean ground state occupation number density, while \(a_{i\mathbf{k}}^{\dagger}\) and \(a_{i^{\prime}\mathbf{k}}\) are second quantisation creation and annihilation operators associated with the \(\psi_{i\mathbf{k}}\) and \(\psi_{i^{\prime}\mathbf{k}}\) Bloch states, respectively.3 Footnote 3: Eq. 
(49) for the momentum density \(\rho_{e}\) can be derived from the definition \[\rho_{e}(\boldsymbol{\ell})=\int\mathrm{d}^{3}\mathbf{r}\int\mathrm{d}^{3}\mathbf{r}^{\prime}e^{-i(\mathbf{r}^{\prime}-\mathbf{r})\cdot\boldsymbol{\ell}}\,\langle\Psi^{\dagger}(\mathbf{r})\Psi(\mathbf{r}^{\prime})\rangle\,,\] where \[\Psi(\mathbf{r})=\frac{1}{\sqrt{V}}\sum_{i}\int_{\mathrm{BZ}}\frac{V\mathrm{d}^{2}\mathbf{k}}{(2\pi)^{2}}\,\psi_{i\mathbf{k}}(\mathbf{r})\,a_{i\mathbf{k}}\,.\]

Figure 1: The partially integrated graphene response function evaluated with TB (left) and DFT (right) as a function of \(\ell_{x}\) and \(\ell_{y}\). In the left panel, we also show the contributions of each of the electron bands. We set \(\ell_{z}=91\) eV, such that the vector \(\boldsymbol{\ell}\) lies almost in the plane of the graphene sheet. The stripe-like structure of the DFT response is an artifact of the grid sampling in the reciprocal space and is integrated out when observables are evaluated.

In Eq. (49), the \(i=i^{\prime}\) diagonal term is directly proportional to the response function \(W\). The \(i\neq i^{\prime}\) off-diagonal term describes band mixing effects arising from electron-electron interactions across different bands. While in general \(n_{ii^{\prime}}(\mathbf{k})\neq 0\) for \(i\neq i^{\prime}\), off-diagonal contributions to \(\rho_{e}\) are expected to be subleading in the case of graphene, where electron-electron interaction and correlation effects induce variations in the band energies \(E_{i}(\mathbf{k})\) of at most a few %, see for example Fig. 1 in [56]. 
This is an important observation because it implies that our DFT predictions for \(W\) are only marginally affected by the lack of a clear interpretation for the individual Kohn-Sham states, which, in principle, is one of the limitations of a DFT approach. These states contribute to \(W\) mainly through one very specific combination, namely, the Fourier transform of the electron charge density, which, by construction, is self-consistently computed in DFT. We find this observation a solid argument in favour of DFT as a theoretical framework for computing the graphene response function \(W\). This conclusion is also corroborated by the good agreement found between the measured and DFT-calculated graphene Compton profile [57], which is the longitudinal projection of the electron momentum density, and thus closely related to \(W\). #### iv.2.3 Numerical implementation For the numerical evaluation of the graphene responses in the DFT framework, the QuantumEspresso v.6.4.1 code [58; 59; 60] was used and interfaced with QEdark-EFT [40], an extension of the previously established QEdark [16] package. Since QuantumEspresso uses a plane-wave basis with periodic boundary conditions, we simulated the graphene sheet as a system containing sheets separated by a large but finite distance, \(L_{z}\). For the self-consistent calculations, we used the C.pbe-n-kjpaw_ps1.1.0.0.UPF pseudopotential provided with the QuantumEspresso package, which includes the \(2s^{2}\) and \(2p^{2}\) electrons in the valence configuration and treats the \(1s^{2}\) electrons as core. The minimal suggested energy cutoff for the plane-wave expansion for this pseudopotential is 40 Ry for the wave function and 326 Ry for the charge density. We chose much larger values--2000 Ry for the wave function cutoff and 16000 Ry for the charge density cutoff--since for the case of dark-matter-induced excitations, we are interested in the high-momentum tails of the electronic wave functions that are usually unimportant for low-energy solid state physics applications. We used the PBE exchange and correlation functional [61] with the experimentally measured lattice constant of \(a=2.46\) Å, and sampled reciprocal space with a \(16\times 16\times 1\) Monkhorst-Pack \(k\)-point grid, which is sufficient to capture the linearly dispersing Dirac cones at the Fermi level (see Fig. 5). Since carbon is a light atom, relativistic effects are minimal and we did not include spin-orbit coupling.

Figure 2: \(W(\boldsymbol{\ell},E_{e})\) integrated between \(E_{\rm min}<0\) and \(0\) and over the azimuthal angle of \(\boldsymbol{\ell}\), \(\phi\), for TB (left) and DFT (right). The first row is for \(E_{\rm min}=-5\,\)eV, including only electrons accessible to low-mass DM. The second row has \(E_{\rm min}=-20\,\)eV and also includes electrons accessible to heavier DM. As in Fig. 1, the distorting effect of the finite sampling of the reciprocal space can be seen for the case of DFT. When evaluating the final state observables, these distortions are washed out.

#### iv.1.4 Discretization As a result of the widely spaced periodically repeating graphene sheets required by the periodic boundary conditions of our DFT code, Eq. 
#### Discretization

As a result of the widely spaced, periodically repeating graphene sheets required by the periodic boundary conditions of our DFT code, Eq. (43) is discretised, and the expression evaluated by QEdark-EFT becomes

\[W\left((\ell_{x})_{n},(\ell_{y})_{m},(\ell_{z})_{o},(E_{B})_{l}\right)=\sum_{\mathbf{k},\mathbf{G},i}\frac{\omega_{\mathbf{k}}\left|u_{i}\left(\mathbf{k}+\mathbf{G}\right)\right|^{2}}{2\delta_{\ell}^{3}\delta_{E}}\,\Theta\left(1-\frac{\left|\left(E_{B}\right)_{l}-E_{i\mathbf{k}}\right|}{\frac{1}{2}\delta_{E}}\right)\Theta\left(1-\frac{\left|k_{z}+G_{z}-(\ell_{z})_{o}\right|}{\frac{1}{2}\delta_{\ell}}\right)\]
\[\times\,\Theta\left(1-\frac{\left|k_{y}+G_{y}-(\ell_{y})_{m}\right|}{\frac{1}{2}\delta_{\ell}}\right)\Theta\left(1-\frac{\left|k_{x}+G_{x}-(\ell_{x})_{n}\right|}{\frac{1}{2}\delta_{\ell}}\right)\,, \tag{51}\]

where \(\delta_{E}\) and \(\delta_{\ell}\) are the bin sizes in energy and momentum, respectively, and \(\mathbf{\ell}=\mathbf{k}^{\prime}-\mathbf{q}\). Subscripts \(n\), \(m\), \(o\) and \(l\) denote the index of the corresponding momentum and energy bin, \(\omega_{\mathbf{k}}\) are the weights of the reciprocal lattice \(k\)-points and, following the conventions of QuantumEspresso, \(\sum_{\mathbf{k}}\omega_{\mathbf{k}}=2\). The sum over \(i\) runs over the 4 valence bands, and the sum over \(\mathbf{G}\) is truncated by

\[\frac{\left|\mathbf{k}+\mathbf{G}\right|^{2}}{2m_{e}}\leq E_{\rm cut} \tag{52}\]

with \(E_{\rm cut}=27.2\) keV.

Figure 3: Graphene response function and its dependence on the initial state momentum, integrated over various regions of initial state energy and all directions. This plot shows how much different electron energies (accessible to different DM candidate masses) contribute to the momentum distribution of the target for TB (left) and DFT (right) simulations.

### Comparison of response functions

In Figs. 1-4, we present different ways to visualize the graphene response function obtained with the TB and DFT methods. In these figures, we integrate the response function defined in Eq. (32) over all energies \(E_{e}\), or over an interval in \(E_{e}\), to obtain a function of the momentum \(\mathbf{\ell}\) only.

In Fig. 1, we depict the response function as a function of \(\ell_{x}\) and \(\ell_{y}\) with \(\ell_{z}\approx 0\). Both for TB and DFT, we find the characteristic hexagonal shape of the first Brillouin zone of graphene. In the case of TB, we also depict the individual contributions of the \(\pi\) and \(\sigma\) electrons. Note that the distortions of the DFT version originate in the finite grid sampling in reciprocal space and are washed out once we integrate over \(\mathbf{\ell}\) to obtain an observable.

In Fig. 2, we show the response function with respect to the angle between the initial state electron momentum and the orientation of the graphene sheet. The response function is integrated over all electron energies, as well as only over electrons within \(5\,\mathrm{eV}\) of the Fermi level, in order to understand the patterns observed in daily modulation plots for dark matter candidates of various masses. We can see that DM particles carrying lower kinetic energies will preferably interact with electrons whose momenta point in the direction \(\theta\sim 40^{\circ}\), whereas DM candidates with higher energy will be able to access electrons with momenta pointing in all directions.

In Fig. 3, we show the contribution of various electron binding energies to the integrated response function for both TB and DFT. Both approaches show a similar dependence of the momentum contribution, integrated over all directions, for the selected energy intervals.
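In practice, the sum in Eq. (51) is a weighted four-dimensional histogram over the Kohn-Sham states returned by the DFT calculation. The following minimal sketch illustrates such an accumulation loop; the container types holding the \(k\)-point weights, band energies and plane-wave coefficients are hypothetical stand-ins for the actual QEdark-EFT internals, not an excerpt of that code.

```cpp
#include <cmath>
#include <vector>

// Hypothetical containers for the DFT output: for each Kohn-Sham state, the
// k-point weight w_k, the band energy E_ik, and for every reciprocal lattice
// vector G the momentum k+G and the coefficient |u_i(k+G)|^2.
struct PlaneWaveComponent { double lx, ly, lz, u2; };
struct KohnShamState { double weight, energy; std::vector<PlaneWaveComponent> pw; };

// Accumulate Eq. (51): W is a flattened 4D histogram over (l_x, l_y, l_z, E_B).
void AccumulateW(const std::vector<KohnShamState>& states,
                 std::vector<double>& W, int N_l, int N_E,
                 double l_min, double delta_l, double E_min, double delta_E) {
  for (const auto& s : states)
    for (const auto& c : s.pw) {
      // The Theta functions of Eq. (51) select exactly one bin per component.
      int n = static_cast<int>(std::floor((c.lx - l_min) / delta_l));
      int m = static_cast<int>(std::floor((c.ly - l_min) / delta_l));
      int o = static_cast<int>(std::floor((c.lz - l_min) / delta_l));
      int l = static_cast<int>(std::floor((s.energy - E_min) / delta_E));
      if (n < 0 || m < 0 || o < 0 || l < 0 || n >= N_l || m >= N_l || o >= N_l || l >= N_E)
        continue;  // component falls outside the tabulated range
      // Weight of Eq. (51): w_k |u_i(k+G)|^2 / (2 delta_l^3 delta_E).
      W[((n * N_l + m) * N_l + o) * N_E + l] +=
          s.weight * c.u2 / (2.0 * std::pow(delta_l, 3) * delta_E);
    }
}
```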
In order to facilitate a quantitative comparison of the two approaches, we show the response function as a function of the momentum \(\ell^{\perp}\) (perpendicular to the sheet) and \(\ell^{\parallel}\) (in-sheet momenta) in Fig. 4, using a log-scale on the \(y\)-axis. Furthermore, the lower panel depicts the ratio of the \(W\) functions obtained by DFT and TB. We can see that for low momenta, \(\ell\lesssim 15\) keV, both response functions lie within a factor of 2 of each other. However, for larger momenta, the TB approximation predicts significantly larger values.

Overall, Figs. 1-4 demonstrate that TB and DFT predict the same qualitative features of the graphene response function. A more quantitative comparison reveals relative deviations between the two approaches that typically do not exceed a factor of 2. As mentioned above, the exception is the case of large momenta, where we found larger deviations between the two methods. The response function at large momenta can be sensitive to the electron density close to the atomic nuclei, where the electron orbitals resemble the atomic orbitals the closest. This makes a qualitative comparison or assessment at large momenta difficult, as these contributions are smoothed out in the DFT approach.

Figure 4: Comparison of the partially integrated response function as a function of the initial electron momentum, \(\ell\), between DFT (blue) and TB (red). The solid lines illustrate the response function for momenta perpendicular to the graphene sheet. The dashed lines show the dependence of \(W\) on momenta parallel to the graphene sheet, where we average over the in-plane directions. The lower panel shows the ratio. As indicated by the gray band, for momenta below \(\sim 15\) keV, the two predictions are within a factor of 2. Above, the TB approximation predicts a significantly higher response function.

Figure 5: Graphene band structure as calculated from TB and DFT. In the case of DFT, the Dirac cone crossing at the K-symmetry points was used to determine a precise value for the Fermi energy of the graphene sheet. We include the valence bands and the conduction band for the \(\pi\)-electrons. Further conduction bands are not shown here, but can be found in Fig. 10 for TB.

## IV Case study: daily modulation of the electron ejection rate in a graphene detector

After our comparative study of the electronic structure calculations of graphene targets in Sec. III, we now present and compare the expected electron ejection rates that we obtain using both TB and DFT. We focus on a hypothetical experimental setting where the electron ejected by an incoming DM particle is recorded independently of the direction of ejection. We refer to a companion paper (from now onward, Paper II [38]) for detailed sensitivity studies of different settings for graphene-based DM detectors that are currently in a research and development stage.

Fig. 6 (left) shows a comparison of the time-averaged TB and DFT rates as a function of the DM particle mass for DM-electron interactions described by the operators \(\mathcal{O}_{1}\) (both contact and long range interactions) and \(\mathcal{O}_{3}\) (contact only) in Tab. 1. In the case of \(\mathcal{O}_{1}\), we find that the electron ejection rates predicted by TB and DFT differ by less than a factor of 2 for most DM masses, and by up to a factor of 3 at very low masses. Generally, we find that DFT predicts higher rates at low masses, whereas TB predicts higher rates at high masses.
In the case of \(\mathcal{O}_{3}\) contact type interactions, the quantitative comparison for low masses (\(m_{\chi}\lesssim 20\) MeV) is similar to \(\mathcal{O}_{1}\). We find larger deviations for heavier masses, however, with the TB approach predicting an \(\mathcal{O}(10)\) larger rate at \(m_{\chi}=100\) MeV. For this particular operator, large momentum transfers are favoured, and these become kinematically more accessible for larger masses. The difference in rate therefore originates in the graphene response function at large momentum \(\mathbf{\ell}\), where TB predicts a higher response than DFT, as seen in Fig. 4. The structure of \(\mathcal{O}_{3}\) therefore suppresses the contribution of the response function at low momentum, where we have better agreement between the two approaches. Here, the different treatment of the electron density close to the atomic nuclei plays an important role and obstructs a qualitative comparison. While this is the case of largest deviation between TB and DFT that we present, one should also note that \(\mathcal{O}_{3}\) is an extreme but instructive case that does not arise from relativistic DM models at leading order [36].

Again comparing the DFT and TB computational frameworks, Fig. 6 (right) shows the expected daily modulation in the rate of DM-induced electron ejections for three values of the DM particle mass, and for the interaction operator \(\mathcal{O}_{1}\). In all cases, the expected electron ejection rate is divided by the corresponding time-averaged rate, in order to cancel out the \(\mathcal{O}(1)\) differences between the DFT and TB rates reported in the left panel of Fig. 6. From Fig. 6 (right), we conclude that, irrespective of the chosen computational framework, the largest relative rate is found when the direction of the DM wind is perpendicular to the graphene sheet. This is due to the fact that the electrons are more spatially constrained in the out-of-plane direction, giving rise to higher momentum contributions to the electron wavefunction and thus increasing the total interaction rate (as discussed further in Paper II). The strong daily modulation we find for the rate of DM-induced electron ejections demonstrates that graphene is well suited for establishing a daily modulation signal characteristic of the directionality of the DM wind.

Figure 6: Calculated total time-averaged rate (left) and daily modulation curves (right) of DM-induced electron ejections from graphene obtained with DFT (blue) and TB (red). On the left, we show the rate for \(\mathcal{O}_{1}\) contact type interaction (solid), \(\mathcal{O}_{1}\) long range type interaction (dashed) and \(\mathcal{O}_{3}\) contact type interaction (dash-dotted). The upper panel to the left gives the total rates as a function of DM mass, and the lower panel gives the ratio of the DFT to TB rates. On the right, we show the daily modulation curves for \(\mathcal{O}_{1}\) contact type interaction, where the solid, dashed and dash-dotted curves are for DM masses of \(5\) MeV, \(10\) MeV and \(100\) MeV, respectively.

In Fig. 7, we show the daily modulation pattern of electron ejections from graphene for various DM masses and interaction types, now focusing on DFT as a computational framework, as the corresponding results from TB are qualitatively similar. The daily modulation pattern is similar for most of the DM masses and interactions, with a maximum at around time=0h (when the DM wind is perpendicular to the graphene sheet) and a minimum at around time=12h (as in Fig. 6 (right)).
However, for \(m_{\chi}=2\,\)MeV, the maximum is shifted to two peaks, around time=4h and time=20h. This is a consequence of the 2 MeV DM particle only being able to eject electrons with \(E_{e}\) close to 0, corresponding to the top two panels in Fig. 2. The location of the peaks around time=4h and time=20h is due to the DM wind aligning with the peak at \(\theta\sim 30^{\circ}\) and \(\ell\sim 4\,\)keV.

## V Summary and Conclusion

In this paper, we have investigated two solid-state-physics approaches to modeling DM interactions with graphene-like targets, TB and DFT. Below, we summarise the main features of the two methods and the arguments that led us to identify DFT as the preferred framework for modeling the scattering of DM particles by electrons bound in graphene.

Both DFT and TB capture the main characteristics of the band structure of graphene, such as the Dirac cone at the K-symmetry point of the Brillouin zone and the valence band energy distributions (Fig. 5), and predict a response function that reflects the symmetry of the reciprocal space underlying the graphene lattice (Fig. 1). The two computational frameworks also predict a qualitatively similar daily modulation pattern in the total rate of electron ejections. However, the two methods generally predict electron ejection rates that differ by an \(\mathcal{O}(1)\) factor, with TB (DFT) predicting higher rates for high (low) DM masses.

The two approaches employ a different set of approximations that limit their predictivity and region of validity. In order to effectively perform DFT calculations, one usually chooses a radial cutoff to the pseudo-potential and smoothens the core-electron wavefunctions close to the atomic nucleus. This approximates the electronic structure below that cutoff and suppresses some of the high-momentum contributions to the electronic wavefunction. The total event rate is therefore suppressed when higher energy excitations (of tens of eV) are considered [62]. On the other hand, DFT is able to provide a self-consistent calculation of the ground-state electron density, which is the underlying quantity of interest for the DM interactions considered in this work. Indeed, an important result of this work is our demonstration that, in cases where the outgoing electron can be treated as a plane wave, the DM and electronic contributions to the DM-induced electron ejection rate factorize. The five crystal response functions identified previously in our work [41] then simplify into a single crystal response that is directly proportional to the "diagonal part" of the Fourier transform of the ground-state electron density.

The TB approach has the advantage of easier implementability and computational affordability. In its usual low-energy applications, the detailed form of the atomic basis wavefunctions is not explicitly considered. However, since for the case of DM-electron scattering we are interested in the explicit form of the electronic wavefunctions, one needs to embed the atomic wavefunctions into this framework. This proves problematic, since these atomic wavefunctions are required to satisfy the overlap integrals of TB that reproduce the experimentally observed band structure. The TB approach assumes that a wavefunction satisfying these relations exists, but does not allow us to calculate its form directly. One possible approach for modeling it is to use hydrogenic wavefunctions and adjust their parameters such that they satisfy the imposed overlap integrals (as employed in [25]).
These modified hydrogenic wavefunctions, however, differ significantly from those of real carbon atoms bound within the graphene lattice, which limits their predictive power and makes the atomic contribution to the total electronic momenta unreliable. Another approach is to use the Roothaan-Hartree-Fock wavefunctions fit to describe unbound carbon atoms instead of hydrogenic wavefunctions as the atomic basis. While this atomic basis does describe the individual carbon atoms better than the hydrogenic wavefunctions, it does not satisfy the overlap integrals given by the structure of the graphene lattice, creating an inconsistency in the implementation of the theory. In order to satisfy these relations, one would have to significantly distort the shape of the atomic orbitals, spoiling the original fit describing the carbon atom.

The problem underlying the form of the atomic orbitals within TB, together with the robust predictive power of the ground-state electron density in DFT, has led us to recommend DFT as the framework of choice for modeling graphene-like DM detectors. We will further expand on this topic and use DFT to obtain predictions for various possible detector setups and DM candidates in the associated Paper II. The research software Darphene and an updated version of QEdark-EFT, used to obtain the TB and DFT results respectively, will be made publicly available [39, 40].

Figure 7: Daily modulation pattern for graphene sheets obtained with DFT. The colors in each panel correspond to the interaction types indicated in the legend, with the top left, top center, top right, bottom left, bottom center and bottom right panels corresponding to DM masses of 2 MeV, 5 MeV, 10 MeV, 20 MeV, 50 MeV and 100 MeV, respectively. Note that the \(y\)-axis differs between the top and bottom plots. Note also that for DM masses around 10 MeV, the modulation pattern is similar for all the considered interactions, indicating that the expected result is largely model-independent.

###### Acknowledgements.

The authors thank Yonatan Kahn for valuable discussions and for sharing their code. R.C. and T.E. acknowledge support from the Knut and Alice Wallenberg project grant Light Dark Matter (Dnr. KAW 2019.0080). Furthermore, R.C. acknowledges support from individual research grants from the Swedish Research Council, Dnr. 2018-05029 and Dnr. 2022-04299. T.E. was also supported by the Knut and Alice Wallenberg Foundation (PI, Jan Conrad). N.A.S. and M.M. were supported by the ETH Zurich and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, Grant Agreement No. 810451. T.E. thanks the Theoretical Subatomic Physics group at Chalmers University of Technology for its hospitality. The research presented in this paper made use of the following software packages, libraries, and tools: Arb [63], boost [64], Eigen [50], libphysica [65], obscura [66, 67], QuantumEspresso [58, 59, 60], WebPlotDigitizer [68], and Wolfram Mathematica [69]. Part of the computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre (NSC).

## Appendix A Expanded matrix element

In this appendix, we explicitly give the free particle response function \(R_{\text{free}}\) from Eq. (13).
To avoid making the expressions too large, we split \(R_{\text{free}}\) into three separate terms,

\[R_{\text{free}}=\overline{\left|\mathcal{M}\right|^{2}}+2m_{e}\overline{\Re\left[\mathcal{M}\left(\nabla_{\mathbf{p}_{1}}\mathcal{M}^{*}\right)_{\mathbf{p}_{1}=0}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right]}+m_{e}^{2}\overline{\left|\left(\nabla_{\mathbf{p}_{1}}\mathcal{M}\right)_{\mathbf{p}_{1}=0}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right|^{2}}\,, \tag{12}\]

where \(\mathbf{q}\) is the momentum transfer, \(m_{e}\) is the electron mass, \(\mathbf{k}^{\prime}\) is the final state electron momentum, and \(\mathbf{\ell}\) is the initial state electron momentum. The individual terms can then be expressed as

\[\overline{\left|\mathcal{M}\right|^{2}}=c_{1}^{2}+\frac{c_{3}^{2}}{4}\left(\frac{\mathbf{q}}{m_{e}}\times\mathbf{v}_{\text{el}}^{\perp}\right)^{2}+\frac{c_{7}^{2}}{4}\left(\mathbf{v}_{\text{el}}^{\perp}\right)^{2}+\frac{c_{10}^{2}}{4}\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}+\frac{j_{\chi}(j_{\chi}+1)}{12}\Bigg\{3c_{4}^{2}+\left(4c_{5}^{2}-2c_{12}c_{15}\right)\left(\frac{\mathbf{q}}{m_{e}}\times\mathbf{v}_{\text{el}}^{\perp}\right)^{2}\]
\[+c_{6}^{2}\left(\frac{\mathbf{q}}{m_{e}}\right)^{4}+\left(4c_{8}^{2}+2c_{12}^{2}\right)\left(\mathbf{v}_{\text{el}}^{\perp}\right)^{2}+\left(2c_{9}^{2}+4c_{11}^{2}+2c_{4}c_{6}\right)\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}+\left(c_{13}^{2}+c_{14}^{2}\right)\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\left(\mathbf{v}_{\text{el}}^{\perp}\right)^{2}\]
\[+c_{15}^{2}\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\left(\frac{\mathbf{q}}{m_{e}}\times\mathbf{v}_{\text{el}}^{\perp}\right)^{2}+2c_{13}c_{14}\left(\frac{\mathbf{q}}{m_{e}}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)^{2}\Bigg\}\,, \tag{13}\]

where \(\mathbf{v}_{\text{el}}^{\perp}=\mathbf{v}-\frac{\mathbf{q}}{2\mu_{\chi e}}-\frac{\mathbf{\ell}}{m_{e}}\), with \(\mathbf{v}\) being the DM initial velocity in the detector rest frame and \(\mu_{\chi e}\) the DM-electron reduced mass. The \(c_{i}\) are the effective couplings, and \(j_{\chi}\) is the DM spin, which we typically set to \(1/2\).
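As a quick consistency check of these expressions, consider the special case in which only \(c_{1}\) is non-vanishing, i.e. the standard spin-independent interaction \(\mathcal{O}_{1}\). Then \(\mathcal{M}=c_{1}\) carries no dependence on the initial electron momentum \(\mathbf{p}_{1}\), so \(\nabla_{\mathbf{p}_{1}}\mathcal{M}=0\), the second and third terms of Eq. (12) vanish identically, and

\[R_{\text{free}}=\overline{\left|\mathcal{M}\right|^{2}}=c_{1}^{2}\,,\]

which, via the identification in Eq. (103) of App. E, reproduces the familiar parametrisation of the rate in terms of the reference cross section \(\overline{\sigma}_{e}\) and the DM form factor \(F_{\text{DM}}\). For general couplings, the remaining two terms of Eq. (12) read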
\[2m_{e}\overline{\Re\left[\mathcal{M}\left(\nabla_{\mathbf{p}_{1}}\mathcal{M}^{*}\right)_{\mathbf{p}_{1}=0}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right]}=\Bigg[\frac{c_{3}^{2}}{2}\left(\left(\frac{\mathbf{q}}{m_{e}}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)\frac{\mathbf{q}}{m_{e}}-\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\mathbf{v}_{\text{el}}^{\perp}\right)-\frac{c_{7}^{2}}{2}\mathbf{v}_{\text{el}}^{\perp}\Bigg]\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\]
\[+\frac{j_{\chi}(j_{\chi}+1)}{6}\Bigg\{\bigg[\left(4c_{5}^{2}+c_{15}^{2}\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\right)\left(\left(\frac{\mathbf{q}}{m_{e}}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)\frac{\mathbf{q}}{m_{e}}-\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\mathbf{v}_{\text{el}}^{\perp}\right)-\left(4c_{8}^{2}+2c_{12}^{2}+(c_{13}^{2}+c_{14}^{2})\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\right)\mathbf{v}_{\text{el}}^{\perp}\bigg]\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\]
\[-2c_{12}c_{15}\left(\left(\frac{\mathbf{q}}{m_{e}}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)\frac{\mathbf{q}}{m_{e}}-\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\mathbf{v}_{\text{el}}^{\perp}\right)\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}-2c_{13}c_{14}\left(\frac{\mathbf{q}}{m_{e}}\cdot\mathbf{v}_{\text{el}}^{\perp}\right)\frac{\mathbf{q}}{m_{e}}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\Bigg\}\,, \tag{47}\]

and

\[m_{e}^{2}\overline{\left|\left(\nabla_{\mathbf{p}_{1}}\mathcal{M}\right)_{\mathbf{p}_{1}=0}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right|^{2}}=\left(\frac{c_{3}^{2}}{4}\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}+\frac{c_{7}^{2}}{4}\right)\left(\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right)^{2}-\frac{c_{3}^{2}}{4}\left(\frac{\mathbf{q}}{m_{e}}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right)^{2}\]
\[+\frac{j_{\chi}(j_{\chi}+1)}{12}\Bigg\{\left(\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right)^{2}\left[\left(4c_{5}^{2}+c_{13}^{2}+c_{14}^{2}-2c_{12}c_{15}\right)\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}+4c_{8}^{2}+2c_{12}^{2}+c_{15}^{2}\left(\frac{\mathbf{q}}{m_{e}}\right)^{4}\right]\]
\[+\left(\frac{\mathbf{q}}{m_{e}}\cdot\frac{\mathbf{q}-\mathbf{k}^{\prime}}{m_{e}}\right)^{2}\left[-4c_{5}^{2}-c_{15}^{2}\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}+2c_{12}c_{15}+2c_{13}c_{14}\right]\Bigg\}\,. \tag{48}\]

Furthermore, to rewrite the above equations one can use the following relations,

\[\left(\frac{\mathbf{q}}{m_{e}}\times\mathbf{v}_{\mathrm{el}}^{\perp}\right)^{2}=\left(\frac{\mathbf{q}}{m_{e}}\right)^{2}\left(\mathbf{v}_{\mathrm{el}}^{\perp}\right)^{2}-\left(\frac{\mathbf{q}}{m_{e}}\cdot\mathbf{v}_{\mathrm{el}}^{\perp}\right)^{2}\,, \tag{49}\]
\[\left(\mathbf{v}_{\mathrm{el}}^{\perp}\right)^{2}\big|_{\mathbf{\ell}=0}=\mathbf{v}^{2}+\frac{\mathbf{q}^{2}}{4\mu_{\chi e}^{2}}\frac{m_{\chi}-m_{e}}{m_{e}+m_{\chi}}-\frac{\Delta E_{e}}{\mu_{\chi e}}\,, \tag{50}\]
\[\left(\mathbf{v}_{\mathrm{el}}^{\perp}\cdot\mathbf{q}\right)\big|_{\mathbf{\ell}=0}=\Delta E_{e}-\frac{\mathbf{q}^{2}}{2m_{e}}\,, \tag{51}\]
\[\mathbf{v}_{\mathrm{el}}^{\perp}\big|_{\mathbf{\ell}=0}=\mathbf{v}-\frac{\mathbf{q}}{2\mu_{\chi e}}\,, \tag{52}\]

where \(\Delta E_{e}\) is the energy transferred to the target electron.

## Appendix B The lattice structure of graphene

Graphene is a two-dimensional hexagonal lattice of carbon atoms, as illustrated in the left panel of Fig. 8. The distance between two neighboring carbon atoms is

\[a_{\mathrm{CC}}=1.42\,\mathrm{\AA}\,. \tag{53}\]
The two lattice vectors, which can also be seen in the left panel of Fig. 8, are

\[\mathbf{a}_{1}=a\begin{pmatrix}\frac{\sqrt{3}}{2}\\ \frac{1}{2}\\ 0\end{pmatrix}\,,\quad\mathbf{a}_{2}=a\begin{pmatrix}\frac{\sqrt{3}}{2}\\ -\frac{1}{2}\\ 0\end{pmatrix}\,, \tag{54a}\]

such that

\[a\equiv|\mathbf{a}_{1}|=|\mathbf{a}_{2}|=\sqrt{3}\,a_{\mathrm{CC}}\approx 2.46\,\mathrm{\AA}\,. \tag{54b}\]

The same figure also shows the vectors \(\mathbf{N}_{i}\), pointing to a carbon atom's three closest neighbor atoms,

\[\mathbf{N}_{1}=a_{\mathrm{CC}}\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}=\begin{pmatrix}\frac{a}{\sqrt{3}}\\ 0\\ 0\end{pmatrix}\,, \tag{55a}\]
\[\mathbf{N}_{2}=a_{\mathrm{CC}}\begin{pmatrix}-\frac{1}{2}\\ \frac{\sqrt{3}}{2}\\ 0\end{pmatrix}=\begin{pmatrix}-\frac{a}{2\sqrt{3}}\\ \frac{a}{2}\\ 0\end{pmatrix}\,, \tag{55b}\]
\[\mathbf{N}_{3}=a_{\mathrm{CC}}\begin{pmatrix}-\frac{1}{2}\\ -\frac{\sqrt{3}}{2}\\ 0\end{pmatrix}=\begin{pmatrix}-\frac{a}{2\sqrt{3}}\\ -\frac{a}{2}\\ 0\end{pmatrix}\,. \tag{55c}\]

The lattice vectors \(\mathbf{b}_{i}\) of the reciprocal lattice, illustrated in the right panel of Fig. 8, are

\[\mathbf{b}_{1}=b\begin{pmatrix}\frac{1}{2}\\ \frac{\sqrt{3}}{2}\\ 0\end{pmatrix}\,,\quad\mathbf{b}_{2}=b\begin{pmatrix}\frac{1}{2}\\ -\frac{\sqrt{3}}{2}\\ 0\end{pmatrix}\,, \tag{56a}\]

with

\[b\equiv|\mathbf{b}_{1}|=|\mathbf{b}_{2}|=\frac{4\pi}{\sqrt{3}a}\,. \tag{56b}\]

The figure also shows the three high-symmetry points in the first Brillouin zone (BZ) (shaded in blue) in \(\mathbf{k}\)-space,

\[\mathbf{\Gamma}=\begin{pmatrix}0\\ 0\\ 0\end{pmatrix}\,,\;\mathbf{M}=\begin{pmatrix}\frac{2\pi}{3a_{\mathrm{CC}}}\\ 0\\ 0\end{pmatrix}=\begin{pmatrix}\frac{2\pi}{\sqrt{3}a}\\ 0\\ 0\end{pmatrix}\,, \tag{57a}\]
\[\mathbf{K}=\begin{pmatrix}\frac{2\pi}{3a_{\mathrm{CC}}}\\ \frac{2\pi}{3\sqrt{3}a_{\mathrm{CC}}}\\ 0\end{pmatrix}=\begin{pmatrix}\frac{2\pi}{\sqrt{3}a}\\ \frac{2\pi}{3a}\\ 0\end{pmatrix}\,. \tag{57b}\]
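These geometric relations are straightforward to verify numerically. The short illustrative program below (using the Eigen library [50], which is also employed for the \(\sigma\)-bands in App. C) constructs \(\mathbf{a}_{i}\) and \(\mathbf{b}_{i}\) from Eqs. (54a) and (56a) and checks the duality relation \(\mathbf{a}_{i}\cdot\mathbf{b}_{j}=2\pi\delta_{ij}\); it is not part of the published codes.

```cpp
#include <Eigen/Dense>
#include <cassert>
#include <cmath>
#include <iostream>

int main() {
  const double PI  = std::acos(-1.0);
  const double aCC = 1.42;                        // carbon-carbon distance [Angstrom], Eq. (53)
  const double a   = std::sqrt(3.0) * aCC;        // lattice constant, Eq. (54b)
  const double b   = 4.0 * PI / (std::sqrt(3.0) * a);  // Eq. (56b)

  // Direct lattice vectors, Eq. (54a), and reciprocal lattice vectors, Eq. (56a).
  Eigen::Vector3d a1(a * std::sqrt(3.0) / 2.0,  a / 2.0, 0.0);
  Eigen::Vector3d a2(a * std::sqrt(3.0) / 2.0, -a / 2.0, 0.0);
  Eigen::Vector3d b1(b / 2.0,  b * std::sqrt(3.0) / 2.0, 0.0);
  Eigen::Vector3d b2(b / 2.0, -b * std::sqrt(3.0) / 2.0, 0.0);

  // Duality relation: a_i . b_j = 2 pi delta_ij.
  assert(std::abs(a1.dot(b1) - 2.0 * PI) < 1e-12);
  assert(std::abs(a2.dot(b2) - 2.0 * PI) < 1e-12);
  assert(std::abs(a1.dot(b2)) < 1e-12);
  assert(std::abs(a2.dot(b1)) < 1e-12);

  // High-symmetry points of Eqs. (57a) and (57b).
  Eigen::Vector3d M(2.0 * PI / (std::sqrt(3.0) * a), 0.0, 0.0);
  Eigen::Vector3d K(2.0 * PI / (std::sqrt(3.0) * a), 2.0 * PI / (3.0 * a), 0.0);
  std::cout << "M = " << M.transpose() << "\nK = " << K.transpose() << std::endl;
  return 0;
}
```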
Note that, even if the isolated atomic wave functions \(\varphi_{j}(\mathbf{x})\) are normalized, the overlaps between neighboring wave functions generally render the Bloch functions as non-normalized. The actual electron wave functions of the material are linear combinations of the Bloch functions, mixing the different atomic orbitals (but not different lattice momenta), \[\Psi_{i\mathbf{k}}(\mathbf{x})=\mathcal{N}_{\mathbf{k}}\sum_{j=1}^{n}C_{ij}( \mathbf{k})\Phi_{j\mathbf{k}}(\mathbf{x})\,, \tag{13}\] where the constant \(\mathcal{N}_{\mathbf{k}}\) ensures that the wave function \(\Psi_{i\mathbf{k}}(\mathbf{x})\) is normalized. Using Schrodinger's equation, the energy values of these \(n\) states are \[E_{i}(\mathbf{k}) =\frac{\langle\Psi_{i}|\mathcal{H}|\Psi_{i}\rangle}{\langle\Psi _{i}|\Psi_{i}\rangle} \tag{14a}\] \[=\frac{\int\mathrm{d}^{3}\mathbf{x}\;\Psi_{i\mathbf{k}}^{*}( \mathbf{x})\;\mathcal{H}\;\Psi_{i\mathbf{k}}(\mathbf{x})}{\int\mathrm{d}^{3} \mathbf{x}\;\Psi_{i\mathbf{k}}^{*}(\mathbf{x})\Psi_{i\mathbf{k}}(\mathbf{x})} \tag{14b}\] Substituting Eq. (13), the energy eigenvalues can be expressed as \[E_{i}(\mathbf{k}) =\frac{\sum_{j=1}^{n}\sum_{j^{\prime}=1}^{n}C_{ij}^{*}(\mathbf{k} )C_{ij^{\prime}}(\mathbf{k})\,\langle\Phi_{j}|\mathcal{H}|\Phi_{j^{\prime}} \rangle}{\sum_{j=1}^{n}\sum_{j^{\prime}=1}^{n}C_{ij}^{*}(\mathbf{k})C_{ij^{ \prime}}(\mathbf{k})\,\langle\Phi_{j}|\Phi_{j^{\prime}}\rangle} \tag{15a}\] \[\equiv\frac{\mathbf{C}_{i}^{\dagger}(\mathbf{k})\cdot\mathbf{ \mathcal{H}}(\mathbf{k})\cdot\mathbf{C}_{i}(\mathbf{k})}{\mathbf{C}_{i}^{ \dagger}(\mathbf{k})\cdot\mathbf{\mathcal{S}}(\mathbf{k})\cdot\mathbf{C}_{i}( \mathbf{k})}\,. \tag{15b}\] Here, we defined the coefficient vectors \[\mathbf{C}_{i}(\mathbf{k})=\begin{pmatrix}C_{i1}(\mathbf{k})\\ \vdots\\ C_{in}(\mathbf{k})\end{pmatrix}\,, \tag{16}\] w Figure 8: The honeycomb lattice of graphene (left) and the reciprocal lattice (right), together with their respective lattice vectors \(\mathbf{a}_{i}\) and \(\mathbf{b}_{i}\). The red (blue) shaded area is the unit cell (Brillouin zone) of the (reciprocal) lattice. as well as the transfer integral matrix \(\mathbf{\mathcal{H}}(\mathbf{k})\), and the overlap integral matrix \(\mathbf{\mathcal{S}}(\mathbf{k})\). Their components are defined as \[\left[\mathbf{\mathcal{H}}(\mathbf{k})\right]_{ij} =\langle\Phi_{i}|\mathcal{H}|\Phi_{j}\rangle\, \tag{100}\] \[\left[\mathbf{\mathcal{S}}(\mathbf{k})\right]_{ij} =\langle\Phi_{i}|\Phi_{j}\rangle. \tag{101}\] As already seen in Eq. (100), the normalization of \(\Psi_{i\mathbf{k}}(\mathbf{x})\) can be written in terms of the coefficient vector \(\mathbf{C}_{i}(\mathbf{k})\) and the overlap matrix \(\mathbf{\mathcal{S}}(\mathbf{k})\), \[\langle\Psi_{i}|\Psi_{i}\rangle=\mathcal{N}_{\mathbf{k}}^{2}\ \mathbf{C}_{i}^{\dagger}(\mathbf{k})\cdot\mathbf{\mathcal{S}}(\mathbf{k})\cdot \mathbf{C}_{i}(\mathbf{k})\] \[\Rightarrow\mathcal{N}_{\mathbf{k}}=\left[\mathbf{C}_{i}^{ \dagger}(\mathbf{k})\cdot\mathbf{\mathcal{S}}(\mathbf{k})\cdot\mathbf{C}_{i}( \mathbf{k})\right]^{-1/2}\,. \tag{102}\] The entries of the coefficient vector \(\mathbf{C}_{i}(\mathbf{k})\) are obtained using the variational principle by minimizing the energy eigenvalues \(E_{i}(\mathbf{k})\), i.e. \[\frac{\partial E_{i}(\mathbf{k})}{\partial C_{ij}^{*}}=0\,. \tag{103}\] This equation is equivalent to the following general eigenvalue problem, \[\left[\mathbf{\mathcal{H}}-E_{i}(\mathbf{k})\mathbf{\mathcal{S}}\right]\cdot\mathbf{ C}_{i}(\mathbf{k})=0\,. 
Non-vanishing eigenvectors can only be found if the _secular equation_ applies,

\[\det\left[\mathbf{\mathcal{H}}-E_{i}(\mathbf{k})\mathbf{\mathcal{S}}\right]=0\,. \tag{105}\]

For a fixed \(\mathbf{k}\), this polynomial of degree \(n\) can be solved for the \(n\) eigenvalues, i.e. the energy dispersion \(E_{i}(\mathbf{k})\). Furthermore, the eigenvectors determine the electron wave functions \(\Psi_{i\mathbf{k}}(\mathbf{x})\). Finally, the entries of the transfer integral matrix \(\mathbf{\mathcal{H}}(\mathbf{k})\) and the overlap integral matrix \(\mathbf{\mathcal{S}}(\mathbf{k})\) are often fixed to specific values, which ensure that the correct band structure of the material of interest is reproduced. These values are obtained either experimentally or from first-principles calculations.

### Tight-binding approximation for graphene

We will apply the results of the previous section to the case of graphene. As illustrated in Fig. 8, the unit cell of graphene consists of two carbon atoms, denoted \(A\) and \(B\). The relevant atomic orbitals of the carbon atoms on these locations are \(2s\), \(2p_{x}\), \(2p_{y}\), and \(2p_{z}\) (the \(1s\) orbitals form a low energy core state and do not contribute to the valence band levels). In graphene, the first three orbitals combine or hybridize and form the so-called \(\sigma\)-bands, and the \(2p_{z}\) orbitals hybridize to form the \(\pi\)-bands. As a result, there are \(n=2(6)\) \(\pi(\sigma)\)-bonding atomic orbitals in each unit cell, and Eq. (105) constitutes a two-(six-)dimensional eigenvalue problem for the \(\pi(\sigma)\) band. As we will describe in detail, the energy dispersion and wave function for the \(\pi\) electrons can be expressed analytically, whereas the \(\sigma\) electrons require numerical methods to solve the six-dimensional secular equation.

#### The \(\mathbf{\pi}\)-electrons

The \(\pi\)-electrons are a hybridization of the atomic \(2p_{z}\) orbitals of carbon. In order to compute the corresponding energy bands by solving Eq. (105), the main step is to evaluate the transfer integral and overlap integral matrices \(\mathbf{\mathcal{H}}(\mathbf{k})\) and \(\mathbf{\mathcal{S}}(\mathbf{k})\), which we defined in Eqs. (100) and (101). In this case, they are \(2\times 2\) matrices. Starting with the former, we substitute Eq. (12) into Eq. (100). The diagonal entries read

\[\mathbf{\mathcal{H}}_{AA}=\langle\Phi_{A}|\mathcal{H}|\Phi_{A}\rangle=\frac{1}{N_{\text{cell}}}\sum_{k,k^{\prime}=1}^{N_{\text{cell}}}e^{i\mathbf{k}\cdot(\mathbf{R}_{k}-\mathbf{R}_{k^{\prime}})}\left\langle\varphi_{A}(\mathbf{R}_{k^{\prime}})|\mathcal{H}|\varphi_{A}(\mathbf{R}_{k})\right\rangle\,. \tag{106}\]

If we only take into account the terms for which \(\mathbf{R}_{k}=\mathbf{R}_{k^{\prime}}\) and neglect sub-dominant contributions with \(\mathbf{R}_{k}\neq\mathbf{R}_{k^{\prime}}\), we find

\[\mathbf{\mathcal{H}}_{AA}\approx\varepsilon_{2p}\,,\quad\text{with }\varepsilon_{2p}\equiv\langle\varphi_{A,k}|\mathcal{H}|\varphi_{A,k}\rangle\,. \tag{107}\]

By analogy, \(\mathbf{\mathcal{H}}_{BB}=\varepsilon_{2p}\). For the off-diagonal components, we can make a similar (nearest-neighbor) approximation,

\[\mathbf{\mathcal{H}}_{AB}=\langle\Phi_{A}|\mathcal{H}|\Phi_{B}\rangle=\frac{1}{N_{\text{cell}}}\sum_{k,k^{\prime}=1}^{N_{\text{cell}}}e^{i\mathbf{k}\cdot(\mathbf{R}_{k}-\mathbf{R}_{k^{\prime}})}\left\langle\varphi_{A}(\mathbf{R}_{k^{\prime}})|\mathcal{H}|\varphi_{B}(\mathbf{R}_{k})\right\rangle\,. \tag{108}\]
Next, we only involve the contributions of the three nearest neighbors of each atom \(A\),

\[\approx\frac{1}{N_{\text{cell}}}\sum_{k^{\prime}=1}^{N_{\text{cell}}}\sum_{k=1}^{3}e^{i\mathbf{k}\cdot\left((\mathbf{R}_{k^{\prime}}+\mathbf{N}_{k})-\mathbf{R}_{k^{\prime}}\right)}\langle\varphi_{A}(\mathbf{R}_{k^{\prime}})|\mathcal{H}|\varphi_{B}(\mathbf{R}_{k^{\prime}}+\mathbf{N}_{k})\rangle\,, \tag{109}\]

where the three vectors \(\mathbf{N}_{k}\) are given by Eq. (55). We define the parameter \(t\equiv\langle\varphi_{A}(\mathbf{R}_{k^{\prime}})|\mathcal{H}|\varphi_{B}(\mathbf{R}_{k^{\prime}}+\mathbf{N}_{k})\rangle\), which is identical for all \(k\) due to the rotational symmetry of the \(2p_{z}\) wave function. This leaves us with

\[\mathbf{\mathcal{H}}_{AB}=t\times\sum_{k=1}^{3}e^{i\mathbf{k}\cdot\mathbf{N}_{k}}\,. \tag{110}\]

The other off-diagonal entry is simply \(\mathbf{\mathcal{H}}_{BA}=\mathbf{\mathcal{H}}_{AB}^{*}\). In summary, the full transfer integral matrix for the \(\pi\)-electrons in the nearest-neighbor approximation is given by

\[\mathbf{\mathcal{H}}\approx\begin{pmatrix}\varepsilon_{2p}&tf(\mathbf{k})\\ tf(\mathbf{k})^{*}&\varepsilon_{2p}\end{pmatrix}\,. \tag{111}\]

The function \(f(\mathbf{k})\) is defined as

\[f(\mathbf{k})\equiv\sum_{k=1}^{3}e^{i\mathbf{k}\cdot\mathbf{N}_{k}}\,. \tag{109}\]

The square of this function can be evaluated as

\[|f(\mathbf{k})|^{2}=3+2\sum_{k=1}^{3}\cos(\mathbf{a}_{k}\cdot\mathbf{k})\,,\text{ with }\mathbf{a}_{3}\equiv\mathbf{a}_{2}-\mathbf{a}_{1}\,. \tag{110}\]

For the overlap matrix, the previous steps can essentially be repeated to find

\[\mathbf{\mathcal{S}}\approx\begin{pmatrix}1&sf(\mathbf{k})\\ sf(\mathbf{k})^{*}&1\end{pmatrix}\,, \tag{111}\]

where \(s\equiv\langle\varphi_{A}(\mathbf{R}_{j})|\varphi_{B}(\mathbf{R}_{j}+\mathbf{N}_{k})\rangle\).

**Energy bands of the \(\pi\)-electrons.** Given the explicit form of the two matrices, we can show that the corresponding eigenvalues solving the secular equation in Eq. (105) are given by

\[E_{\pi}(\mathbf{k})=\frac{\varepsilon_{2p}\pm t|f(\mathbf{k})|}{1\pm s|f(\mathbf{k})|}\,. \tag{112}\]

This energy dispersion is visualized in Fig. 9 for \(\mathbf{k}\in\text{BZ}\), as well as along the path between the high-symmetry points given by Eq. (57), which is also indicated in the figure. In order to reproduce the band structure of graphene to a good degree, we used the parameters \(\varepsilon_{2p}=0\) (by convention), \(s=0.129\) and \(t=-3.033\) eV [70]. Since \(t\) is negative, the '+' solution is the lower-energy bonding or valence \(\pi\)-band, whereas the '-' solution is the anti-bonding or conduction \(\pi^{*}\)-band. The bands are degenerate at the high-symmetry point \(\mathbf{K}\).

Figure 9: Energy bands of the \(\pi\)-electrons in graphene evaluated in the tight-binding approximation, see Eq. (112).

Next, we turn our attention towards the \(\mathbf{C}_{i}(\mathbf{k})\) coefficients. The normalized eigenvectors corresponding to the eigenvalues of Eq. (112) are obtained by solving Eq. (104). For the \(\pi\)-electrons we find

\[\mathbf{C}_{\pi}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \pm e^{i\varphi_{\mathbf{k}}}\end{pmatrix}\,, \tag{113a}\]
\[\text{with }\varphi_{\mathbf{k}}=-\arctan\frac{\text{Im}f(\mathbf{k})}{\text{Re}f(\mathbf{k})}\,. \tag{113b}\]
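Eq. (112) is simple enough to check numerically against the statements above. The following illustrative sketch (not part of the published codes) evaluates \(f(\mathbf{k})\) and \(E_{\pi}(\mathbf{k})\) at the high-symmetry points; with \(\varepsilon_{2p}=0\), \(t=-3.033\) eV and \(s=0.129\), it gives \(E_{\pi}\approx-6.56\) eV at \(\mathbf{\Gamma}\), \(\approx-2.69\) eV at \(\mathbf{M}\), and the expected degeneracy \(E_{\pi}=0\) at \(\mathbf{K}\), where \(|f(\mathbf{K})|=0\).

```cpp
#include <cmath>
#include <complex>
#include <cstdio>

// pi-band dispersion of Eq. (112) with the parameters quoted from [70].
double E_pi(double kx, double ky, int sign) {
  const double aCC = 1.42, a = std::sqrt(3.0) * aCC;  // Angstrom
  const double eps2p = 0.0, t = -3.033, s = 0.129;    // eV, dimensionless
  // Nearest-neighbor vectors N_k of Eq. (55).
  const double N[3][2] = {{a / std::sqrt(3.0), 0.0},
                          {-a / (2.0 * std::sqrt(3.0)), a / 2.0},
                          {-a / (2.0 * std::sqrt(3.0)), -a / 2.0}};
  std::complex<double> f = 0.0;  // f(k) = sum_k exp(i k . N_k)
  for (const auto& n : N)
    f += std::exp(std::complex<double>(0.0, kx * n[0] + ky * n[1]));
  const double fk = std::abs(f);
  return (eps2p + sign * t * fk) / (1.0 + sign * s * fk);  // Eq. (112)
}

int main() {
  const double PI = std::acos(-1.0), a = std::sqrt(3.0) * 1.42;
  const double Mx = 2.0 * PI / (std::sqrt(3.0) * a);
  std::printf("Gamma: %.3f eV\n", E_pi(0.0, 0.0, +1));                  // -6.560
  std::printf("M:     %.3f eV\n", E_pi(Mx, 0.0, +1));                   // -2.687
  std::printf("K:     %.3f eV\n", E_pi(Mx, 2.0 * PI / (3.0 * a), +1));  // 0 (degenerate)
  return 0;
}
```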
With these coefficients at hand, the \(\pi\)-electron wave functions can be written as

\[\Psi_{\pi\mathbf{k}}(\mathbf{x})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{2N_{\text{cell}}}}\times\sum_{k=1}^{N_{\text{cell}}}\left[e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{A}}\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k}^{A})+e^{i\varphi_{\mathbf{k}}+i\mathbf{k}\cdot\mathbf{R}_{k}^{B}}\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k}^{B})\right]. \tag{114}\]

As opposed to the treatment in [25], we do not perform a nearest-neighbor approximation on this level, as it is not well-defined here. Instead, it is necessary to sum over all \(N_{\text{cell}}\) unit cells. However, the nearest-neighbor approximation can be applied when computing the norm of \(\Psi_{\pi\mathbf{k}}(\mathbf{x})\),

\[\langle\Psi_{\pi}|\Psi_{\pi}\rangle=\frac{\mathcal{N}_{\mathbf{k}}^{2}}{2N_{\text{cell}}}\sum_{k,k^{\prime}=1}^{N_{\text{cell}}}\Bigg[e^{i\mathbf{k}\cdot(\mathbf{R}_{k^{\prime}}^{A}-\mathbf{R}_{k}^{A})}\big\langle\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k}^{A})\big|\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k^{\prime}}^{A})\big\rangle\]
\[+e^{i\mathbf{k}\cdot(\mathbf{R}_{k^{\prime}}^{B}-\mathbf{R}_{k}^{A})}\big\langle\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k}^{A})\big|\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k^{\prime}}^{B})\big\rangle\,e^{i\varphi_{\mathbf{k}}}\]
\[+e^{-i\mathbf{k}\cdot(\mathbf{R}_{k}^{B}-\mathbf{R}_{k^{\prime}}^{A})}\big\langle\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k}^{B})\big|\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k^{\prime}}^{A})\big\rangle\,e^{-i\varphi_{\mathbf{k}}}\]
\[+e^{-i\mathbf{k}\cdot(\mathbf{R}_{k}^{B}-\mathbf{R}_{k^{\prime}}^{B})}\big\langle\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k}^{B})\big|\varphi_{2p_{z}}(\mathbf{x}-\mathbf{R}_{k^{\prime}}^{B})\big\rangle\Bigg]\,. \tag{101}\]

Next, we perform the sum over \(k^{\prime}\) using the nearest-neighbor approximation (but also taking next-to-nearest neighbors into account). In addition to the neighboring contributions, the first and last lines contain terms with \(\mathbf{R}_{k}=\mathbf{R}_{k^{\prime}}\), each contributing with \(N_{\mathrm{cell}}\) to the final sum. Hence, we find

\[\langle\Psi_{\pi}|\Psi_{\pi}\rangle\approx\frac{\mathcal{N}_{\mathbf{k}}^{2}}{2N_{\mathrm{cell}}}\Bigg\{2N_{\mathrm{cell}}+\sum_{k^{\prime}=1}^{N_{\mathrm{cell}}}\sum_{k=1}^{3}\Bigg[e^{i\mathbf{k}\cdot(\mathbf{R}_{k^{\prime}}^{A}+\mathbf{a}_{k}-\mathbf{R}_{k^{\prime}}^{A})}s^{\prime}+e^{i\mathbf{k}\cdot(\mathbf{R}_{k^{\prime}}^{A}+\mathbf{N}_{k}-\mathbf{R}_{k^{\prime}}^{A})}se^{i\varphi_{\mathbf{k}}}\]
\[+e^{-i\mathbf{k}\cdot(\mathbf{R}_{k^{\prime}}^{A}+\mathbf{N}_{k}-\mathbf{R}_{k^{\prime}}^{A})}se^{-i\varphi_{\mathbf{k}}}+e^{-i\mathbf{k}\cdot(\mathbf{R}_{k^{\prime}}^{B}+\mathbf{a}_{k}-\mathbf{R}_{k^{\prime}}^{B})}s^{\prime}\Bigg]\Bigg\} \tag{102}\]
\[=\mathcal{N}_{\mathbf{k}}^{2}\bigg[1+\sum_{k=1}^{3}\left(s\cos(\mathbf{k}\cdot\mathbf{N}_{k}+\varphi_{\mathbf{k}})+s^{\prime}\cos(\mathbf{k}\cdot\mathbf{a}_{k})\right)\bigg]\,. \tag{103}\]

Here, \(s^{\prime}\) denotes the overlap integral of the atomic orbitals at next-to-nearest neighboring sites. We find that the two leading terms are consistent with the general expression for the norm,
\[\langle\Psi_{\pi}|\Psi_{\pi}\rangle\approx\mathbf{C}_{\pi}^{\dagger}(\mathbf{k})\cdot\mathbf{\mathcal{S}}(\mathbf{k})\cdot\mathbf{C}_{\pi}(\mathbf{k})=1+s\sum_{k=1}^{3}\cos(\mathbf{k}\cdot\mathbf{N}_{k}+\varphi_{\mathbf{k}})\,, \tag{104}\]

and hence

\[\mathcal{N}_{\mathbf{k}}=\left[1+s\sum_{k=1}^{3}\cos(\mathbf{k}\cdot\mathbf{N}_{k}+\varphi_{\mathbf{k}})\right]^{-1/2}\,. \tag{105}\]

Next, we shift our attention from position space to momentum space. The Fourier-transformed Bloch wave functions are

\[\widetilde{\Phi}_{j\mathbf{k}}(\mathbf{\ell})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}}\widetilde{\varphi}_{j}(\mathbf{\ell})\,, \tag{106}\]

where \(\mathbf{\ell}\) is the conjugate momentum to \(\mathbf{x}\). Therefore, the \(\pi\)-electrons' wave functions in momentum space read

\[\widetilde{\Psi}_{\pi\mathbf{k}}(\mathbf{\ell})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{2N_{\mathrm{cell}}}}\widetilde{\varphi}_{2p_{z}}(\mathbf{\ell})\sum_{k=1}^{N_{\mathrm{cell}}}\left[e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{A}}+e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{B}+i\varphi_{\mathbf{k}}}\right]. \tag{107}\]

Using \(\mathbf{R}_{k}^{B}=\mathbf{R}_{k}^{A}+\mathbf{N}_{1}\equiv\mathbf{R}_{k}^{A}+\mathbf{\delta}\), we can write this as

\[\widetilde{\Psi}_{\pi\mathbf{k}}(\mathbf{\ell})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{2N_{\mathrm{cell}}}}\widetilde{\varphi}_{2p_{z}}(\mathbf{\ell})\left(1+e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{N}_{1}+i\varphi_{\mathbf{k}}}\right)\sum_{k=1}^{N_{\mathrm{cell}}}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{A}}\,. \tag{108}\]

For the evaluation of the exponential sum, we can follow the steps outlined in Sec. II.4.1 and use the identity in Eq. (36). In the end, we find

\[\widetilde{\Psi}_{\pi\mathbf{k}}(\mathbf{\ell})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{2N_{\mathrm{cell}}}}\widetilde{\varphi}_{2p_{z}}(\mathbf{\ell})\,A_{\mathrm{uc}}^{-1}\sum_{\mathbf{G}}\,(2\pi)^{2}\delta^{(2)}(\mathbf{\ell}^{\|}+\mathbf{k}-\mathbf{G})\times\Big\{1+e^{i[\varphi_{\mathbf{k}}+(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}]}\Big\}\,. \tag{109}\]

#### The \(\mathbf{\sigma}\)-electrons

The \(\sigma\)-electrons are in a superposition of the carbon atoms' \(2s\), \(2p_{x}\), and \(2p_{y}\) orbitals. Hence, the transfer and overlap matrices are \(6\times 6\) matrices, which we can express in terms of four \(3\times 3\) sub-matrices,

\[\mathbf{\mathcal{S}}\approx\begin{pmatrix}\mathbf{\mathcal{S}}_{AA}&\mathbf{\mathcal{S}}_{AB}\\ \mathbf{\mathcal{S}}_{AB}^{\dagger}&\mathbf{\mathcal{S}}_{BB}\end{pmatrix}\,, \tag{110a}\]
\[\mathbf{\mathcal{H}}\approx\begin{pmatrix}\mathbf{\mathcal{H}}_{AA}&\mathbf{\mathcal{H}}_{AB}\\ \mathbf{\mathcal{H}}_{AB}^{\dagger}&\mathbf{\mathcal{H}}_{BB}\end{pmatrix}\,. \tag{110b}\]

The diagonal sub-matrices are themselves diagonal,

\[\mathbf{\mathcal{S}}_{AA}=\mathbf{\mathcal{S}}_{BB}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\,, \tag{110c}\]
\[\mathbf{\mathcal{H}}_{AA}=\mathbf{\mathcal{H}}_{BB}=\begin{pmatrix}\varepsilon_{2s}&0&0\\ 0&\varepsilon_{2p}&0\\ 0&0&\varepsilon_{2p}\end{pmatrix}\,, \tag{110d}\]

with \(\varepsilon_{2s}=-8.87\) eV and \(\varepsilon_{2p}=0\).
The off-diagonal matrices are given by

\[\mathbf{\mathcal{S}}_{AB}=\begin{pmatrix}\mathcal{S}_{ss}&\mathcal{S}_{sp_{x}}&\mathcal{S}_{sp_{y}}\\ -\mathcal{S}_{sp_{x}}&\mathcal{S}_{p_{x}p_{x}}&\mathcal{S}_{p_{x}p_{y}}\\ -\mathcal{S}_{sp_{y}}&\mathcal{S}_{p_{x}p_{y}}&\mathcal{S}_{p_{y}p_{y}}\end{pmatrix}\,, \tag{111e}\]

with entries such as

\[\mathcal{S}_{ss}=S_{ss}\left(e^{ik_{1}a_{\mathrm{CC}}}+2e^{-ik_{1}a_{\mathrm{CC}}/2}\cos\left(\frac{\sqrt{3}k_{2}a_{\mathrm{CC}}}{2}\right)\right)\,; \tag{112}\]

the remaining entries, as well as those of \(\mathbf{\mathcal{H}}_{AB}\), are built analogously from the parameters listed in Tab. 2.

The energy bands of the \(\sigma\)-electrons are obtained by solving the secular equation, i.e. Eq. (105). Since we are dealing with \(6\times 6\) matrices, we use the numerical functionality of the Eigen library [50]. The resulting energy bands for the \(\sigma\)-electrons are depicted in Fig. 10. The same Eigen function that computes the eigenvalues of the matrices in Eq. (110), i.e. the energy bands of the \(\sigma\)-electrons, by solving the secular equation also yields the six-dimensional eigenvectors \(\mathbf{C}_{\sigma_{i}}(\mathbf{k})\), and therefore the wave functions according to Eq. (13),

\[\Psi_{\sigma_{i}\mathbf{k}}(\mathbf{x})=\mathcal{N}_{\mathbf{k}}\sum_{j=1}^{6}C_{\sigma_{i}j}(\mathbf{k})\Phi_{j\mathbf{k}}(\mathbf{x})\,,\quad(i=1,2,3)\,. \tag{112a}\]

The six Bloch wave functions are given by

\[\Phi_{1\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{A}}\varphi_{2s}(\mathbf{x}-\mathbf{R}_{k}^{A})\,, \tag{112b}\]
\[\Phi_{2\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{A}}\varphi_{2p_{x}}(\mathbf{x}-\mathbf{R}_{k}^{A})\,, \tag{112c}\]
\[\Phi_{3\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{A}}\varphi_{2p_{y}}(\mathbf{x}-\mathbf{R}_{k}^{A})\,, \tag{112d}\]
\[\Phi_{4\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{B}}\varphi_{2s}(\mathbf{x}-\mathbf{R}_{k}^{B})\,, \tag{112e}\]
\[\Phi_{5\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{B}}\varphi_{2p_{x}}(\mathbf{x}-\mathbf{R}_{k}^{B})\,, \tag{112f}\]
\[\Phi_{6\mathbf{k}}(\mathbf{x})=\frac{1}{\sqrt{N_{\mathrm{cell}}}}\sum_{k=1}^{N_{\mathrm{cell}}}e^{i\mathbf{k}\cdot\mathbf{R}_{k}^{B}}\varphi_{2p_{y}}(\mathbf{x}-\mathbf{R}_{k}^{B})\,. \tag{112g}\]

\begin{table}
\begin{tabular}{|l l|l l|} \hline \(\mathcal{S}\) & value & \(\mathcal{H}\) & value [eV] \\ \hline \hline \(s\) & 0.129 & \(t\) & -3.033 \\ \(s^{\prime}\) & 0.0087 & \(\varepsilon_{2s}\) & -8.868 \\ & & \(\varepsilon_{2p}\) & 0.0 \\ \(S_{ss}\) & 0.212 & \(H_{ss}\) & -6.769 \\ \(S_{sp}\) & 0.16 & \(H_{sp}\) & -5.580 \\ \(S_{\sigma}\) & 0.146 & \(H_{\sigma}\) & -5.037 \\ \(S_{\pi}\) & 0.129 & \(H_{\pi}\) & -3.033 \\ \hline \end{tabular}
\end{table}
Table 2: Parameters of the overlap and transfer matrices for the \(\sigma\)-electrons.

Figure 10: Energy bands of the \(\pi\) and \(\sigma\)-electrons obtained using the tight-binding approximation. Solid lines show the valence bands, dashed lines are the conduction bands.
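For concreteness, the generalized eigenvalue problem of Eq. (104) for the \(\sigma\)-bands can be solved with Eigen's GeneralizedSelfAdjointEigenSolver. The sketch below is illustrative rather than an excerpt of the published codes; the two functions filling the off-diagonal blocks from Eqs. (111e) and (112) are hypothetical placeholders assumed to exist.

```cpp
#include <Eigen/Dense>

// Hypothetical fillers: build the 3x3 off-diagonal blocks S_AB(k) and H_AB(k)
// from Eq. (111e), entries like Eq. (112), and the parameters of Tab. 2.
Eigen::Matrix3cd SigmaOverlapAB(const Eigen::Vector2d& k);
Eigen::Matrix3cd SigmaTransferAB(const Eigen::Vector2d& k);

// Solve [H - E S] C = 0 for the six sigma bands at lattice momentum k.
Eigen::VectorXd SigmaBands(const Eigen::Vector2d& k, Eigen::MatrixXcd& C) {
  const double eps2s = -8.868, eps2p = 0.0;  // [eV], Tab. 2

  Eigen::Matrix3cd SAA = Eigen::Matrix3cd::Identity();  // Eq. (110c)
  Eigen::Matrix3cd HAA = Eigen::Matrix3cd::Zero();      // Eq. (110d)
  HAA(0, 0) = eps2s; HAA(1, 1) = eps2p; HAA(2, 2) = eps2p;
  Eigen::Matrix3cd SAB = SigmaOverlapAB(k);
  Eigen::Matrix3cd HAB = SigmaTransferAB(k);

  Eigen::MatrixXcd S(6, 6), H(6, 6);  // block structure of Eqs. (110a,b)
  S << SAA, SAB, SAB.adjoint(), SAA;
  H << HAA, HAB, HAB.adjoint(), HAA;

  // Generalized self-adjoint solver: H c = E S c, with S positive definite.
  Eigen::GeneralizedSelfAdjointEigenSolver<Eigen::MatrixXcd> solver(H, S);
  C = solver.eigenvectors();    // columns are the C_sigma_i(k) of Eq. (112a)
  return solver.eigenvalues();  // six sigma band energies E_i(k), ascending
}
```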
Finally, the Fourier transform of the wave function is given by

\[\widetilde{\Psi}_{\sigma_{i}\mathbf{k}}(\mathbf{\ell})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{N_{\text{cell}}}}\sum_{k=1}^{N_{\text{cell}}}\Bigg\{\widetilde{\varphi}_{2s}(\mathbf{\ell})\bigg[C_{\sigma_{i}1}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{A}}+C_{\sigma_{i}4}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{B}}\bigg]+\widetilde{\varphi}_{2p_{x}}(\mathbf{\ell})\bigg[C_{\sigma_{i}2}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{A}}+C_{\sigma_{i}5}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{B}}\bigg]\]
\[+\widetilde{\varphi}_{2p_{y}}(\mathbf{\ell})\bigg[C_{\sigma_{i}3}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{A}}+C_{\sigma_{i}6}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{B}}\bigg]\Bigg\} \tag{126}\]
\[=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{N_{\text{cell}}}}\sum_{k=1}^{N_{\text{cell}}}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{R}_{k}^{A}}\Bigg\{\widetilde{\varphi}_{2s}(\mathbf{\ell})\bigg[C_{\sigma_{i}1}+C_{\sigma_{i}4}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}}\bigg]+\widetilde{\varphi}_{2p_{x}}(\mathbf{\ell})\bigg[C_{\sigma_{i}2}+C_{\sigma_{i}5}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}}\bigg]+\widetilde{\varphi}_{2p_{y}}(\mathbf{\ell})\bigg[C_{\sigma_{i}3}+C_{\sigma_{i}6}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}}\bigg]\Bigg\}\,. \tag{127}\]

Here, we again used \(\mathbf{R}_{k}^{B}=\mathbf{R}_{k}^{A}+\mathbf{\delta}\). Just like in the case of the \(\pi\)-electrons, we use Eq. (36) to express the exponential sum in terms of a sum over the reciprocal lattice vectors \(\mathbf{G}\),

\[\widetilde{\Psi}_{\sigma_{i}\mathbf{k}}(\mathbf{\ell})=\frac{\mathcal{N}_{\mathbf{k}}}{\sqrt{N_{\text{cell}}}}A_{\text{uc}}^{-1}\sum_{\mathbf{G}}(2\pi)^{2}\delta^{(2)}(\mathbf{\ell}^{\parallel}+\mathbf{k}-\mathbf{G})\Bigg\{\widetilde{\varphi}_{2s}(\mathbf{\ell})\bigg[C_{\sigma_{i}1}+C_{\sigma_{i}4}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}}\bigg]+\widetilde{\varphi}_{2p_{x}}(\mathbf{\ell})\bigg[C_{\sigma_{i}2}+C_{\sigma_{i}5}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}}\bigg]+\widetilde{\varphi}_{2p_{y}}(\mathbf{\ell})\bigg[C_{\sigma_{i}3}+C_{\sigma_{i}6}e^{i(\mathbf{\ell}+\mathbf{k})\cdot\mathbf{\delta}}\bigg]\Bigg\}\,. \tag{128}\]

## Appendix D Atomic wavefunctions

The evaluation of the graphene response function requires a specific form of the atomic wavefunctions of carbon, \(\varphi_{i}(\mathbf{x})\), and of their Fourier transforms \(\widetilde{\varphi}_{i}(\mathbf{\ell})\). We present results for two particular choices. As proposed by Hochberg et al. [25], we start by approximating the wavefunctions of carbon with hydrogenic wave functions with a re-scaled \(Z_{\text{eff}}\) factor. We improve upon this choice by using Roothaan-Hartree-Fock (RHF) wavefunctions for the ground states of carbon [51]. In this appendix, we summarize the explicit wave functions both in position and momentum space and present a comparison.

The wave function of the atomic state \((n,\ell,m)\) in position space is given by

\[\varphi_{nlm}(\mathbf{x})=R_{nl}(r)Y_{l}^{m}(\hat{\mathbf{x}})\,, \tag{129}\]

where \(Y_{l}^{m}(\hat{\mathbf{x}})\) are spherical harmonics, and \(R_{nl}(r)\) is the radial component of the wave function.
Regardless of the choice of form for the radial component, the corresponding Fourier-transformed wave function in momentum space \(\widetilde{\varphi}_{i}(\mathbf{\ell})\) for a given position space wave function \(\varphi_{i}(\mathbf{x})\) is defined as

\[\widetilde{\varphi}_{i}(\mathbf{\ell})=\int\mathrm{d}^{3}\mathbf{x}\,\varphi_{i}(\mathbf{x})e^{-i\mathbf{\ell}\cdot\mathbf{x}}\,, \tag{130}\]

which fixes the normalization of the wave function to

\[\int\mathrm{d}^{3}\mathbf{x}\;|\varphi_{i}(\mathbf{x})|^{2}=1\,,\quad\int\frac{\mathrm{d}^{3}\mathbf{\ell}}{(2\pi)^{3}}\;|\widetilde{\varphi}_{i}(\mathbf{\ell})|^{2}=1\,. \tag{131}\]

For the evaluation of the graphene response function using the TB approximation, the required wavefunctions are those of the atomic orbitals \(2s\) and \(2p\) in the environment of a carbon atom. Additionally for the \(2p\) orbitals, the crystal structure of graphene gives rise to the \(2p_{x}\), \(2p_{y}\), and \(2p_{z}\) orbitals. The corresponding wavefunctions are given by

\[\varphi_{2p_{i}}(\mathbf{x})=R_{2p}(r)Y_{i}(\hat{\mathbf{x}})\,,\;\text{with} \tag{132a}\]
\[Y_{i}(\hat{\mathbf{x}})\equiv\sqrt{\frac{3}{4\pi}}\frac{x_{i}}{r}\,. \tag{132b}\]

Similarly, the relevant momentum wave functions can be written as

\[\widetilde{\varphi}_{nlm}(\mathbf{\ell})=\chi_{nl}(\ell)Y_{l}^{m}(\hat{\mathbf{\ell}})\,, \tag{133}\]

and hence for the \(2p_{i}\) states we find

\[\widetilde{\varphi}_{2p_{i}}(\mathbf{\ell})=\chi_{2p}(\ell)Y_{i}(\hat{\mathbf{\ell}})\,. \tag{134}\]

### Hydrogenic wave functions

We list the hydrogenic wavefunctions proposed in [25] to approximate the ground state wave functions of carbon atoms. In position space, the wave functions are given by

\[\varphi_{2s}(\mathbf{x})=\sqrt{\frac{(Z_{\text{eff}}^{2s})^{3}}{56\pi a_{0}^{3}}}\left(1-\frac{Z_{\text{eff}}^{2s}r}{a_{0}}\right)e^{-Z_{\text{eff}}^{2s}r/(2a_{0})}\,, \tag{135a}\]
\[\varphi_{2p_{x}}(\mathbf{x})=\sqrt{\frac{(Z_{\text{eff}}^{2p_{x/y}})^{5}}{32\pi a_{0}^{3}}}\frac{r}{a_{0}}e^{-Z_{\text{eff}}^{2p_{x/y}}r/(2a_{0})}\sin\theta\cos\varphi\,, \tag{135b}\]
\[\varphi_{2p_{y}}(\mathbf{x})=\sqrt{\frac{(Z_{\text{eff}}^{2p_{x/y}})^{5}}{32\pi a_{0}^{3}}}\frac{r}{a_{0}}e^{-Z_{\text{eff}}^{2p_{x/y}}r/(2a_{0})}\sin\theta\sin\varphi\,, \tag{135c}\]
\[\varphi_{2p_{z}}(\mathbf{x})=\sqrt{\frac{(Z_{\text{eff}}^{2p_{z}})^{5}}{32\pi a_{0}^{3}}}\frac{r}{a_{0}}e^{-Z_{\text{eff}}^{2p_{z}}r/(2a_{0})}\cos\theta\,. \tag{135d}\]
The momentum space wave functions required to describe the electrons in graphene can be approximated as \[\widetilde{\varphi}_{2s}(\mathbf{\ell})=\sqrt{8\pi}\left(Z_{\text{eff}}^{2s} \right)^{5/2}a_{0}^{3/2}\frac{a_{0}^{2}|\mathbf{\ell}|^{2}-\left(Z_{\text{eff}}^{2 s}/2\right)^{2}}{(a_{0}^{2}|\mathbf{\ell}|^{2}+\left(Z_{\text{eff}}^{2s}/2\right)^{2}) ^{3}}\,, \tag{106a}\] \[\widetilde{\varphi}_{2p_{x}}(\mathbf{\ell})\approx\sqrt{8\pi}\left(Z_{\text{eff }}^{2p_{z/y}}\right)^{7/2}a_{0}^{3/2}\frac{a_{0}\ell_{x}}{\left(a_{0}^{2}|\mathbf{ \ell}|^{2}+(Z_{\text{eff}}^{2p_{z/y}}/2)^{2}\right)^{3}}\,,\] (106b) \[\widetilde{\varphi}_{2p_{y}}(\mathbf{\ell})\approx\sqrt{8\pi}\left(Z_{\text{eff }}^{2p_{z/y}}\right)^{7/2}a_{0}^{3/2}\frac{a_{0}\ell_{y}}{\left(a_{0}^{2}|\mathbf{ \ell}|^{2}+(Z_{\text{eff}}^{2p_{z/y}}/2)^{2}\right)^{3}}\,,\] (106c) \[\widetilde{\varphi}_{2p_{x}}(\mathbf{\ell})\approx\sqrt{8\pi}\left(Z_{\text{eff }}^{2p_{z}}\right)^{7/2}a_{0}^{3/2}\frac{a_{0}\ell_{z}}{\left(a_{0}^{2}|\mathbf{ \ell}|^{2}+(Z_{\text{eff}}^{2p_{z}}/2)^{2}\right)^{3}}\,, \tag{106d}\] where only the expression for \(\widetilde{\varphi}_{2s}(\mathbf{\ell})\) is exact. ### Roothaan-Hartree-Fock wavefunctions Instead of re-scaled hydrogenic wavefunctions, we recommend using Roothaan-Hartree-Fock (RHF) wave functions that can be found in [51]. The radial part of the RHF wavefunction is given in Eq. (48) as a linear combination of Slater-type orbitals (STOs). We repeat the expression here for convenience. \[R_{nl}(r)=\sum_{j}C_{nlj}R_{\text{STO}}(r,Z_{lj},n_{lj})\,.\] (107a) An STO is defined as \[R_{\text{STO}}(r,Z,n)\equiv a_{0}^{-3/2}\frac{(2Z)^{n+1/2}}{\sqrt{(2n)!}}\left( \frac{r}{a_{0}}\right)^{n-1}e^{-\frac{Z_{r}}{a_{0}}}\,.\] (107b) Finally, the RHF coefficients \[C_{nlj}\] as well as the parameters \[n_{nl}\] and \[Z_{nl}\] for carbon are tabulated in [51], and summarized in Tab. 3 for convenience. Moving on to momentum space, the wave function is obtained via Eq. (104). Using the plane-wave expansion Figure 11: Comparison of the hydrogenic and the RHF wavefunctions for the \(2s\) and \(2p\) orbitals of carbon. The left (right) panel shows the radial wavefunction \(R_{nl}(r)\) (\(\chi_{nl}(\ell)\)) in position (momentum) space. Note that the values of \(Z_{\text{eff}}\) for the hydrogenic wavefunctions were tuned to reproduce the overlap integrals of graphene, see Eq. (105). \begin{table} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{**2s**} \\ \hline \(n_{lj}\) & \(Z_{lj}\) & \(C_{nlj}\) \\ \hline \hline 1 & 8.4936 & -0.071727 \\ 1 & 4.8788 & 0.438307 \\ 3 & 15.466 & -0.000383 \\ 2 & 7.05 & -0.091194 \\ 2 & 2.264 & -0.393105 \\ 2 & 1.4747 & -0.579121 \\ 2 & 1.1639 & -0.126067 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{**2p**} \\ \hline \(n_{lj}\) & \(Z_{lj}\) & \(C_{nlj}\) \\ \hline \hline 2 & 7.05 & 0.006977 \\ 2 & 3.2275 & 0.070877 \\ 2 & 2.1908 & 0.230802 \\ 2 & 1.4413 & 0.411931 \\ 2 & 1.0242 & 0.350701 \\ \hline \end{tabular} \end{table} Table 3: Coefficients of RHF wavefunctions as defined in Eq. (48) for the \(2s\) (left) and \(2p\) (right) orbital of carbon. Values taken from [51]. of the exponential, we can write the radial part of the wave function in Eq. (46) as the spherical Bessel transform of \(R_{nl}(r)\), \[\chi_{nl}(\ell)=4\pi i^{l}\int\mathrm{d}r\;r^{2}R_{nl}(r)j_{l}(\ell r)\,, \tag{47}\] where \(j_{l}(x)\) is the spherical Bessel function. Evaluating this expression for the RHF wavefunction given in Eq. 
Moving on to momentum space, the wave function is obtained via Eq. (130). Using the plane-wave expansion of the exponential, we can write the radial part of the wave function in Eq. (133) as the spherical Bessel transform of \(R_{nl}(r)\),

\[\chi_{nl}(\ell)=4\pi i^{l}\int\mathrm{d}r\;r^{2}R_{nl}(r)j_{l}(\ell r)\,, \tag{47}\]

where \(j_{l}(x)\) is the spherical Bessel function. Evaluating this expression for the RHF wavefunction given in Eq. (48) yields

\[\chi_{nl}(\ell)=\sum_{j}C_{nlj}\left[\frac{2\pi a_{0}}{Z_{lj}}\right]^{3/2}2^{n_{lj}-l}\left[\frac{ia_{0}\ell}{Z_{lj}}\right]^{l}\frac{(n_{lj}+l+1)!}{\sqrt{(2n_{lj})!}}\frac{{}_{2}F_{1}\left(\frac{1}{2}(2+l+n_{lj}),\frac{1}{2}(3+l+n_{lj}),\frac{3}{2}+l,-\left(\frac{a_{0}\ell}{Z_{lj}}\right)^{2}\right)}{\Gamma\left(\frac{3}{2}+l\right)}\,, \tag{48}\]

where \({}_{2}F_{1}\left(a,b,c,z\right)\) is the hypergeometric function. Using these expressions, we can evaluate all relevant atomic orbitals for the graphene response function by applying Eq. (134).

When we compare the hydrogenic to the RHF wavefunctions in Fig. 11, we find the hydrogenic wavefunctions to be a poor match to the RHF wavefunctions, which are accepted as a good description of the atomic carbon ground state wave functions. We therefore conclude that realistic atomic carbon wavefunctions should be used. This conclusion becomes even more robust when comparing the graphene response functions computed with TB and DFT.

## Appendix E Comparison of our TB treatment to Hochberg et al. (2017)

The first study of graphene targets for sub-GeV DM searches was published by Hochberg et al. [25]. Therein, the authors chose a semi-analytic approach to describe the electron wavefunctions in graphene based on the tight-binding (TB) approximation. In reproducing their results, we noticed a number of deviations from our TB treatment of the electron wavefunctions in graphene.

* The modeling of the Bloch wavefunctions in the above-mentioned work does not satisfy Bloch's theorem given in Eq. (11). This also gives rise to a different normalization factor \(\mathcal{N}_{\mathbf{k}}\).
* While the hydrogenic wavefunctions proposed in [25] give overlap integrals in agreement with the TB theory, they differ significantly from the more accurate RHF atomic wavefunctions of carbon atoms, as shown in App. D.
* After a careful evaluation of the formula for the DM-induced electron ejection rate, we find an extra factor of 1/2 coming from normalizing to the number of unit cells instead of the number of carbon atoms in the system.

In this appendix, we review how we improved the TB treatment of graphene wavefunctions and how the updated electron ejection rate compares to the one presented by Hochberg et al.

Footnote 4: We note that we were able to reproduce the signal energy spectra reported in [25] by accounting for all differences.

### Bloch states and their normalization

The "Bloch states" proposed in [25] are given by

\[\Phi_{A\mathbf{k}}(\mathbf{x})=\varphi_{A}(\mathbf{x})\,, \tag{49a}\]
\[\Phi_{B\mathbf{k}}(\mathbf{x})=\sum_{k=1}^{3}e^{i\mathbf{k}\cdot\mathbf{N}_{k}}\varphi_{B}(\mathbf{x}-\mathbf{N}_{k})\,. \tag{49b}\]

Compared to the expression of Eq. (12), these states, while capturing the nearest-neighbour approximation, do not formally correspond to Bloch wavefunctions. Consequently, they do not satisfy Bloch's theorem or rigorously describe a periodic system, see Eq. (11). One consequence of this choice of Bloch states is a deviating normalization factor \(\mathcal{N}_{\mathbf{k}}\). As an example, using the Bloch states by Hochberg et al., it is possible to compute the _exact_ normalization factor for the \(\pi\)-electrons,

\[\mathcal{N}_{\mathbf{\ell}}=\left[2+s\sum_{k=1}^{3}\cos(\mathbf{\ell}\cdot\mathbf{N}_{k}+\varphi_{\mathbf{\ell}})+s^{\prime}\sum_{k=1}^{3}\cos(\mathbf{\ell}\cdot\mathbf{a}_{k})\right]^{-1/2}\,. \tag{50}\]
\tag{50}\] While this factor correctly normalizes the electron wavefunctions involving the Bloch states of Eq. (49), it deviates from our respective expression for the \(\pi\)-electrons in Eq. (46). Furthermore, our normalization constant is consistent with Eq. (47), i.e. the general expression for the normalization constant for Bloch wavefunctions as given by Eq. (49). Similar arguments hold for the \(\sigma\)-electrons.

### Atomic wavefunctions

In [25], Hochberg et al. propose to describe the atomic wavefunctions of carbon by using hydrogenic wavefunctions with a re-scaled \(Z_{\text{eff}}\) factor. The re-scaling ensures that the overlap integrals of the wavefunctions are consistent with the TB parameters that reproduce the band structure of graphene listed in Tab. 2. For completeness, we list these wavefunctions in App. D.1. In contrast, we model the carbon wavefunctions using Roothaan-Hartree-Fock (RHF) wavefunctions [51], which we summarize in App. D.2. By comparison, we find that the re-scaled hydrogenic wavefunctions are a poor approximation for the required ground state wavefunctions of atomic electrons in carbon, as can be seen in Fig. 11. While the RHF wavefunctions are a better description of carbon electrons, it should be noted that their overlap integrals are not consistent with the overlap parameters for graphene in Tab. 2. A re-scaling of the RHF wavefunctions similar to the approach by Hochberg et al. is possible, but spoils the accurate description of electrons in atomic carbon. This tension, however, seems to be a general feature of evaluating electron wavefunctions in the TB approximation.

### Electron ejection rate

In [25], the total rate of DM-induced electron ejections in graphene is given as \[R=2\frac{\rho_{\chi}}{m_{\chi}}N_{C}A_{uc}\sum_{i}\int\frac{\mathrm{d}^{2}\boldsymbol{\ell}}{(2\pi)^{2}}\int\mathrm{d}^{3}\mathbf{v}g(\mathbf{v})v\sigma_{i}(\boldsymbol{\ell})\,, \tag{101}\] where \[v\sigma_{i}(\boldsymbol{\ell})=\frac{\overline{\sigma}_{e}}{\mu^{2}}\int\frac{\mathrm{d}^{3}\mathbf{k}_{f}}{(2\pi)^{3}}\frac{\mathrm{d}^{3}\mathbf{q}}{4\pi}\left|F_{\text{DM}}(q)\right|^{2}\left|\widetilde{\Psi}_{i}\left(\boldsymbol{\ell},\mathbf{q}-\mathbf{k}_{f}\right)\right|^{2}\times\delta\left(\frac{k_{f}^{2}}{2m_{e}}-E_{i}(\boldsymbol{\ell})+\Phi+\frac{q^{2}}{2m_{\chi}}-\mathbf{q}\cdot\mathbf{v}\right)\,. \tag{102}\] Here, we use the notation of Hochberg et al. This needs to be compared to our Eqs. (31) and (32), where we use the replacement Eq. (44) for two-dimensional targets, and further identify \[R_{\text{free}}\rightarrow\frac{16\pi m_{e}^{2}m_{\chi}^{2}}{\mu_{e\chi}^{2}}\overline{\sigma}_{e}|F_{\text{DM}}(q)|^{2} \tag{103}\] to facilitate the comparison. We find agreement between our expressions with one exception. Instead of the number of carbon atoms \(N_{C}\), we find that the electron ejection rate is proportional to the number of unit cells \(N_{\text{cell}}\). Hence our expressions for the electron ejection rates differ by a factor of 2. In Fig. 12, we compare the energy spectrum for a DM particle of 100 MeV mass obtained with the TB approach presented by Hochberg et al. to the improved version presented in this work. Note that the interaction model used by Hochberg et al. corresponds to \(\mathcal{O}_{1}\) interactions in our general framework. We find that our predicted spectrum is about one order of magnitude lower than the one presented in [25].
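For readers who want to reproduce the wavefunction comparisons entering Figs. 11 and 12, the following minimal Python sketch evaluates the RHF momentum-space wavefunction \(\chi_{nl}(\ell)\) of Eq. (48) from the Table 3 coefficients. It works in units where \(a_{0}=1\); all function and variable names are our own illustrative choices, not part of [51].

```python
import numpy as np
from math import factorial, pi
from scipy.special import gamma, hyp2f1

# RHF expansion parameters (n_lj, Z_lj, C_nlj) for carbon, from Table 3.
RHF = {
    "2s": [(1, 8.4936, -0.071727), (1, 4.8788, 0.438307), (3, 15.466, -0.000383),
           (2, 7.05, -0.091194), (2, 2.264, -0.393105), (2, 1.4747, -0.579121),
           (2, 1.1639, -0.126067)],
    "2p": [(2, 7.05, 0.006977), (2, 3.2275, 0.070877), (2, 2.1908, 0.230802),
           (2, 1.4413, 0.411931), (2, 1.0242, 0.350701)],
}

def chi(ell, l, orbital):
    """Momentum-space radial wavefunction chi_nl(ell) of Eq. (48), a0 = 1."""
    result = 0j
    for n_j, Z_j, C_j in RHF[orbital]:
        pref = (2 * pi / Z_j) ** 1.5 * 2 ** (n_j - l) * (1j * ell / Z_j) ** l
        pref *= factorial(n_j + l + 1) / np.sqrt(factorial(2 * n_j))
        f21 = hyp2f1(0.5 * (2 + l + n_j), 0.5 * (3 + l + n_j),
                     1.5 + l, -(ell / Z_j) ** 2)
        result += C_j * pref * f21 / gamma(1.5 + l)
    return result

# Example: |chi_2s| and |chi_2p| at ell = 1/a0 (cf. the right panel of Fig. 11).
print(abs(chi(1.0, 0, "2s")), abs(chi(1.0, 1, "2p")))
```

Scanning `ell` over a grid and plotting the absolute values should reproduce the momentum-space curves of Fig. 11 under the stated assumptions.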
2305.09330
Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation
In the current landscape of ever-increasing levels of digitalization, we are facing major challenges pertaining to scalability. Recommender systems have become irreplaceable both for helping users navigate the increasing amounts of data and, conversely, aiding providers in marketing products to interested users. The growing awareness of discrimination in machine learning methods has recently motivated both academia and industry to research how fairness can be ensured in recommender systems. For recommender systems, such issues are well exemplified by occupation recommendation, where biases in historical data may lead to recommender systems relating one gender to lower wages or to the propagation of stereotypes. In particular, consumer-side fairness, which focuses on mitigating discrimination experienced by users of recommender systems, has seen a vast number of diverse approaches for addressing different types of discrimination. The nature of said discrimination depends on the setting and the applied fairness interpretation, of which there are many variations. This survey serves as a systematic overview and discussion of the current research on consumer-side fairness in recommender systems. To that end, a novel taxonomy based on high-level fairness interpretation is proposed and used to categorize the research and their proposed fairness evaluation metrics. Finally, we highlight some suggestions for the future direction of the field.
Bjørnar Vassøy, Helge Langseth
2023-05-16T10:07:41Z
http://arxiv.org/abs/2305.09330v1
# Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation

###### Abstract

In the current landscape of ever-increasing levels of digitalization, we are facing major challenges pertaining to scalability. Recommender systems have become irreplaceable both for helping users navigate the increasing amounts of data and, conversely, aiding providers in marketing products to interested users. The growing awareness of discrimination in machine learning methods has recently motivated both academia and industry to research how fairness can be ensured in recommender systems. For recommender systems, such issues are well exemplified by occupation recommendation, where biases in historical data may lead to recommender systems relating one gender to lower wages or to the propagation of stereotypes. In particular, consumer-side fairness, which focuses on mitigating discrimination experienced by users of recommender systems, has seen a vast number of diverse approaches for addressing different types of discrimination. The nature of said discrimination depends on the setting and the applied fairness interpretation, of which there are many variations. This survey serves as a systematic overview and discussion of the current research on consumer-side fairness in recommender systems. To that end, a novel taxonomy based on high-level fairness interpretation is proposed and used to categorize the research and their proposed fairness evaluation metrics. Finally, we highlight some suggestions for the future direction of the field.

## 1 Introduction

Recommender systems have become integral parts of modern digital society. An exponential increase in data poses significant challenges to users and consumers, who cannot feasibly sift through everything to find what they are looking for. Recommender systems help mitigate these challenges by capturing their users' preferences and presenting them with prioritized options. Thus, recommender systems have seen widespread application in e-commerce, multimedia platforms, and social networks. Their strategic relevance in the industry has led to a high degree of cooperation between the industry and academia in further developing the field. In recent years, the notion of fairness in machine learning has steadily gained attention. High-profile cases have succeeded in bringing the topic to the general public's attention, like the analysis performed by ProPublica suggesting the presence of racial bias in the COMPAS system used for predicting the likelihood of recidivism of inmates (ProPublica, 2016). Subsequently, fairness challenges have also been identified for recommender systems, and the works of Burke (2017) formalized the presence of multi-stakeholder fairness dynamics mirroring the multi-stakeholder nature of recommender systems. Provider stakeholders may take issue if their products are disproportionately less exposed than similar popular products. Career seekers may feel discriminated against if they are predominantly recommended careers that are stereotypically and historically associated with their gender. An increased focus on fairness in recommender systems is not only ethically beneficial for society as a whole but also helps the actors applying them in satisfying an increasingly fairness-aware user base and retaining good relations and cooperation with providers.
While provider-side fairness research has a dominant subgroup in research pursuing popularity bias, which is the notion of disproportional amounts of attention given to popular items, consumer-side fairness research has a greater focus on group-based fairness relating to demographic information of the users, i.e., making sure that users are not discriminated against based on aspects such as race, gender, or age. Despite the focus on a specific high-level fairness setting, consumer-side fairness in recommender systems displays a high degree of variation in approaches. The approaches for introducing fairness awareness take place in all parts of the recommender system pipeline, span most established and upcoming model architectures, and are designed for respecting various fairness interpretations. Some models opt for adjusting recommendations post hoc, others modify the input data directly, while yet others explicitly model the fairness-awareness. Fairness has been incorporated through penalizing discrimination during optimization, altering user representations to be more neutral, probabilistically modelling the influence of sensitive attributes, or re-ranking unaltered recommendations, all while adhering to different definitions of what discrimination and fairness entail. There is also variation in the application setting of these approaches; most adhere to the regular asymmetric setting where users and items make up fundamentally different concepts, while others consider reciprocal settings where users are recommended to other users, as in matchmaking. Yet another dynamic is considered in two-sided settings that seek to achieve both consumer- and provider-side fairness concurrently. Despite the great variety, the breadth of consumer-side fairness approaches has yet to be covered in detail by any existing surveys. We further argue for this claim in Section 2.5, where we discuss relevant surveys. In this survey, we have systematically surveyed the existing literature that proposes and evaluates approaches considering consumer-side fairness. Critical aspects of the qualified literature are discussed, compared, and categorized, leading to the proposal of a taxonomy that highlights fairness interpretation and how it has been incorporated into the approaches. Further, we provide a comprehensive overview of metrics used to evaluate the fairness of the approaches and some thoughts on the field's future directions. Our key contributions are:

1. Propose a taxonomy for categorizing consumer-side fairness approaches in recommender systems based on how the fairness is incorporated and high-level conceptual fairness definitions.
2. Provide a comprehensive overview, categorization, and comparison of available consumer-side fairness approaches in recommender systems and their proposed fairness evaluation.

The remaining sections of this survey include a background section on fairness definitions, terminology, related concepts, and related works; methodology covering the literature selection process and the proposed taxonomy; a detailed discussion and comparison of the identified literature; analysis of applied fairness metrics and datasets; and a final discussion of our thoughts on the future directions of the topic.

## 2 Background

As a primer to this survey's core content and discussion, we introduce key established fairness concepts and terms that appear frequently or are subject to ambiguity.
The background also covers a discussion of recommender systems concepts related to consumer-side fairness and a look into existing surveys on fairness in recommender systems and how this survey differs.

### Terminology

The following definitions have been added to mitigate confusion stemming from mixing similar terms or different interpretations of specific terms. A low degree of consensus, especially within fairness-focused research, has led to multiple different terms being used for the same concept, while other words like _preference_ are contextually ambiguous.

**Rating:** In rating-based recommender systems, we are interested in the rating given by a specific user to a specific item; this is contrasted with ranking-based recommender systems. Ratings can be discrete or continuous and typically have a set range, e.g., between 1 and 5.

**Ranking:** Ordering of items or entities according to users' (perceived) preference.

**IR Ranking:** The field of Information Retrieval comprises an array of different approaches for retrieving information from data storage. We consider intent to be the key factor separating IR Ranking and recommender systems: recommender systems seek to suggest novel, but relevant, information to their users, while IR Ranking seeks to retrieve the most relevant information. Furthermore, IR Ranking approaches often involve a query and are rarely personalized.

**Top-\(k\):** The top \(k\) ranked items, where \(k\) is an integer indicating the number of items that are of interest. \(k\) is usually quite small, often in the range of 5-20, as user attention is a limiting factor.

**Preference (score):** Continuous measure of user preference used to produce rankings. Score/value/measure may be omitted in the text if the context allows it.

**Ranking-based recommender systems:** Recommender systems that learn to rank in order to present the user with the top list of suggested items. Usually applies classification-based optimization.

**Rating-based recommender systems:** Recommender systems that attempt to match ratings given to items by users, and predict new ratings given by users to unrated items. Usually applies regression-based optimization.

**Sensitive attribute:** Unifying term used to describe demographic attributes that are used to segment users into different groups for which fairness considerations are applied. Similar concepts, both symmetric and asymmetric, have been referred to as _demographic_, _protected_, _minority_, _marginalized_ and _private_ in the selected studies. _Sensitive attribute_ is found to be sufficient for explaining most approaches, but more thorough explanations are provided in cases where asymmetry or special dynamics of a sensitive attribute take a more nuanced role.

**Sensitive group:** A group of users that share a specific instance of a sensitive attribute, e.g., all male users in a setting where gender is considered a sensitive attribute.

### Recommender System Definition

Recommender systems comprise many varied approaches designed for varied settings and present no clear singular definition. We will focus on personalized recommender systems, i.e., those that seek to accommodate different individuals with customized recommendations based on their individual preferences. As alluded to in Section 2.1, this survey will distinguish between rating-based and ranking-based recommender systems.
When applying the notation presented in Table 1, both flavours attempt to capture how a set of entities \(\mathcal{U}\) will value another set of entities \(\mathcal{V}\) on an individual level. \(\mathcal{U}\) are typically exemplified as _users_ and \(\mathcal{V}\) as _items_, and the overall goal is to _recommend_ novel items to the users. For rating-based recommender systems, the objective is to predict individual ratings given by a user \(u\) to an item \(v\), \(r_{uv}\), i.e., \(\hat{r}_{uv}=r_{uv}\). Ranking-based recommender systems instead take the approach of capturing the general preferences of the users and using this to present the same users with selections of items predicted to be of the users' liking. The resulting objective is analogous to the rating-based objective, \(\hat{y}_{uv}=y_{uv}\), but does present slightly different challenges owing to the non-continuous nature of ranking. Both rating-based and ranking-based recommender systems may use rating data, but ranking-based recommender systems can more easily make use of data of a more implicit nature, e.g., interaction events.

\begin{table} \begin{tabular}{l l} \hline \hline **Symbol** & **Description** \\ \hline \(\mathcal{U}\) & Set of all users \\ \(\mathcal{V}\) & Set of all items/recommendable entities \\ \(\mathcal{S}\) & Set of all possible sensitive attribute configurations \\ \(r\) & Rating in rating-based recommender systems. Doubles as preference in mixed usage \\ pref & Intermediate measure of preference used in ranking-based recommender systems \\ \(y\) & Binary indicator for the presence of a recommendation in ranking-based recommender systems \\ \(\hat{\cdot}\) & Modifier, indicates predicted output as opposed to ground truth \\ \(u,v,s\) & Indicate relation with specific users, items or sensitive values respectively \\ Rec & Set of (top-\(k\)) recommendations \\ Util(\(\cdot\)) & Open-ended/arbitrary utility function \\ P(\(\cdot\)) & Probability \\ \(\widehat{\mathbb{E}}[\cdot]\) & Arithmetic Mean \\ \hline \hline \end{tabular} \end{table} Table 1: Notation

Recommender systems are implemented using a plethora of different models and methods like Neighbourhood-based Collaborative Filtering (Nikolakopoulos et al, 2022), Matrix Factorization (Koren et al, 2022), various types of Deep Neural Networks (Zhang et al, 2022), Autoencoders (Li and She, 2017), Reinforcement Learning (Afsar et al, 2021), Graph-based models (Wang et al, 2021), and various Probabilistic models. Detailed background theories of various models have been left out to avoid significant inflation of the survey's length. However, as this is a comprehensive survey focusing on tangible model proposals, some technical details will be discussed. Readers are encouraged to consult auxiliary sources, like the provided references, when needed.

### Formal Fairness Definitions

Several formal fairness definitions have been proposed for classification settings. While some of these can be trivially adapted to the recommendation setting, others are more challenging. One such challenge relates to adaptations of definitions based on confusion matrix statistics, as the interpretations and implications of these statistics may differ between classification and recommender systems. Confusion matrices are not computable nor relevant for rating-based recommender systems. Conversely, the confusion matrices of ranking-based recommender systems are heavily influenced by the fixed number of recommendations and the number of correct recommendations, which usually vary by user. Furthermore, the implications of some definitions may be enhanced in scenarios where a positive label is deemed a positive outcome for a stakeholder, even if it was a False Positive. An example of this could be an applicant applying for a loan; the applicant will be happy if the application is accepted regardless of whether it was the correct verdict according to bank policies. In consumer-side recommendation settings, this is rarely true, in which case a False Positive in a top-\(k\) recommendation setting will simply be the presence of an item that the user does not care for among the top recommendations. A selection of fairness definitions is covered here, along with accompanying descriptions of recommender system-specific adaptations. The reader is encouraged to consult Gajane (2017), Caton and Haas (2020), and Li et al (2022c) for a more in-depth discussion of formal fairness definitions in both machine learning and recommender systems.

#### 2.3.1 Fairness Through Unawareness

One of the more naive fairness definitions is achieved by simply omitting explicit sensitive attributes in the modelling. This definition is widely disregarded as it fails to consider implicit biases present in other attributes and is therefore not sufficient for mitigating discrimination (Gajane, 2017).

#### 2.3.2 Statistical Parity

Statistical parity in classification requires that each group has an equal probability of being assigned a positive label, \[\mathrm{P}(\hat{y}=1|s=s_{1})=\mathrm{P}(\hat{y}=1|s=s_{2})=...\] where \(\mathrm{P}(\cdot)\) represents probability, \(\hat{y}\) is the predicted label, and \(s\) is a sensitive attribute. Statistical Parity is adaptable to recommender systems by replacing the notion of classification labels with ratings or discrete recommended items, but its evaluation may be trickier.
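To make this adaptation concrete, the following sketch shows one way a Statistical Parity gap could be audited over top-\(k\) lists. It is a minimal illustration with hypothetical data layouts (dictionaries of per-user top-\(k\) sets and group labels), not an implementation from any of the surveyed works.

```python
def statistical_parity_gap(topk, group, item):
    """Rate at which `item` appears in each sensitive group's top-k lists,
    plus the max-min gap across groups (0 = perfect parity for this item)."""
    rates = {}
    for g in set(group.values()):
        users = [u for u in group if group[u] == g]
        rates[g] = sum(item in topk[u] for u in users) / len(users)
    return rates, max(rates.values()) - min(rates.values())

# Toy usage: two groups, one item recommended only to group "a".
topk = {1: {"x"}, 2: {"x"}, 3: {"y"}, 4: {"y"}}
group = {1: "a", 2: "a", 3: "b", 4: "b"}
print(statistical_parity_gap(topk, group, "x"))  # ({'a': 1.0, 'b': 0.0}, 1.0)
```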
#### 2.3.3 Equal Opportunity

Equal opportunity in classification requires that the true positive rate of different sensitive groups is equal, \[\mathrm{P}(\hat{y}=1|y=1,s=s_{1})=\mathrm{P}(\hat{y}=1|y=1,s=s_{2})=...\]

#### 2.3.4 Equalized Odds

The Equalized Odds definition is stricter than Equal Opportunity in also requiring that the false positive rates of the different sensitive groups are equal, \[\mathrm{P}(\hat{y}=1|y=1,s=s_{1})=\mathrm{P}(\hat{y}=1|y=1,s=s_{2})=...\] \[\&\ \mathrm{P}(\hat{y}=1|y=0,s=s_{1})=\mathrm{P}(\hat{y}=1|y=0,s=s_{2})=...\] As previously mentioned, false positives may benefit some stakeholders in certain scenarios. However, false positives may also be the most detrimental type of error in other scenarios. Thus, the decision to pursue Equalized Odds instead of Equal Opportunity may be motivated by a wish to balance either a boon or a bane.
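Continuing the sketch above, the group-level confusion-matrix statistics underlying both definitions can be estimated for top-\(k\) lists as follows (again with illustrative, hypothetical names; `relevant` holds each user's ground-truth liked items as a set, and `catalog` is the full item set):

```python
def per_group_rates(topk, relevant, catalog, group):
    """Group-level TPR and FPR over top-k lists: Equal Opportunity compares
    the TPRs across groups, Equalized Odds additionally compares the FPRs."""
    rates = {}
    for g in set(group.values()):
        users = [u for u in group if group[u] == g]
        tp = sum(len(topk[u] & relevant[u]) for u in users)
        fp = sum(len(topk[u] - relevant[u]) for u in users)
        pos = sum(len(relevant[u]) for u in users)
        neg = sum(len(catalog - relevant[u]) for u in users)
        rates[g] = {"TPR": tp / pos, "FPR": fp / neg}
    return rates
```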
### Related Recommender System Concepts

Many flavours of recommender systems have arisen to cover different needs as they appeared or were made known. It is not uncommon that the concepts considered in these different flavours partly overlap, share underlying issues, or share similar mitigation strategies. A number of the most relevant recommender system flavours, when compared with consumer-side fairness, are listed in this section, focusing on similarities and dissimilarities. The intention of this section is two-fold: The first is to highlight related topics that may be of interest to readers and that may help put consumer-side fairness into a broader context. The second motive is to highlight dissimilarities that disqualify certain research from being covered by the scope of this survey, despite occasionally adopting fairness terminology.

#### 2.4.1 Provider-side fairness

As the name entails, provider-side fairness takes a polar opposite view to consumer-side fairness. A significant part of the research focuses on mitigating popularity bias, which occurs when popular items are given disproportional amounts of exposure by the recommender system. However, the broadness of the definition also covers research that is more similar to many consumer-side fairness approaches in considering fairness for groupings based on sensitive information from a provider perspective.

#### 2.4.2 Cold-start, Long-tail and Diversity

Cold-start, long-tail, and diversity in recommender systems all make out similar concepts with partly overlapping causes and mitigation approaches: **Cold-start** specifically focuses on the scenario of providing good recommendations for new users or new items, facing the challenge of comparatively little data for the new entity. **Long-tail** recommender system approaches more generally attempt to improve the recommendation of items that have few interactions compared to the more popular items, or the analogous user-centric alternative in improving the recommendations for inactive users. Approaches that optimize for **Diversity** attempt to diversify the top-\(k\) recommendations given to users, motivated by popularity bias issues and an effort to enhance the user experience. While the definitions differ in generality and perspective, they are sometimes used interchangeably in the literature, especially cold-start and long-tail. Provider-centric approaches of all three fields share similarities, or directly overlap, with popularity bias mitigation approaches proposed as provider-side fairness-focused recommenders. Similarly, user-centric approaches that seek to balance the performance of _all individual users_ may overlap with consumer-side fairness approaches and are included in this survey, given that they satisfy all acceptance criteria. The last point typically boils down to whether a fairness perspective is applied, i.e., posed as individual fairness, along with fairness evaluation. However, as a general strategy, research that seeks to balance the utility of user groups based on _number of user interactions_ has been excluded because it fits squarely within the mature fields of cold-start or long-tail recommender systems, and is better represented when compared with such methods.

#### 2.4.3 IR Ranking Fairness

IR Ranking is typically not personalized, i.e., the produced rankings are not affected by user-specific interaction with the system. Subsequently, IR Ranking fairness objectives usually have a provider-side point of view, e.g., balancing the exposure achieved by similar items or the representation of different item groups given non-personalized queries. The work of Pitoura et al (2022) provides an overview of fairness in the IR Ranking setting.
#### 2.4.4 Group Recommendation

Group recommendation approaches seek to recommend a set of items to a collection of users, e.g., recommending a travel destination for a group of friends with different preferences. This field frequently applies the term _fairness_ when explaining its motive of balancing the consideration of the various users in the groups, and most consumer-side fairness concepts covered in this survey are applicable within the groups or aggregated over groupings. However, because the field is specialized for a specific recommender system scenario and has received significant attention both before and after the notion of fairness gained traction, these approaches are not included in this survey.

#### 2.4.5 Privacy

Privacy in recommender systems covers many approaches that seek to protect privacy in different stages of the recommender system pipeline. For instance, _federated learning_ can be applied to mitigate the issues of having a centralized model that may be breached and mined for sensitive information (Li et al, 2020). _Differential privacy_ has been applied to provide protection guarantees for the sensitive information of individual users (Friedman and Schuster, 2010). Some privacy approaches seek to completely remove the information of specific attributes within user representations or data, which overlaps with a class of fairness approaches that do the same with the intention of not having the attributes influence the recommendation.

### Related Work

There has been a recent surge in proposed surveys of fairness in recommender systems. Pitoura et al (2022) survey fairness in both IR ranking and recommender systems, while Deldjoo et al (2023); Wang et al (2022); Li et al (2022c) focus on recommender systems. Pitoura et al (2022) seek to serve as an overview of fairness in both IR ranking and recommender systems, which makes it the broadest and most high-level survey among the ones considered relevant. They propose using multiple established concepts as a basis for their categorization, e.g., individual/group fairness and provider/consumer side fairness, as well as novel classifications of fairness incorporation methods. Because of the broad scope and since it is the oldest survey considered relevant, only two of the studies covered in this survey were covered in Pitoura et al (2022). Deldjoo et al (2023); Wang et al (2022); Li et al (2022c) were all first made publicly available within a few months of each other in 2022 and all consider a broad scope comprising all types of fairness in recommender systems. This scope is wider than the one applied in this survey by covering provider-side fairness and group recommendation. Additionally, all three surveys also cover research that is theoretical in nature or performs analysis using established approaches and datasets, i.e., not necessarily proposing new models or methods. Deldjoo et al (2023) investigate the current state of fairness in recommender systems, and focus on charting how the available research is distributed with respect to different high-level concepts. They additionally propose a single-level categorization of the fairness metrics considered in the research they cover. Li et al (2022c) has a more conceptual take and provides a thorough introduction to fairness theory and fairness in other machine learning fields before addressing their identified research through multiple different taxonomies based on binary concepts.
Some of the fairness concepts they use to categorize the research are well established, like group/individual and consumer/provider, while others have not previously been focused on, like short-term/long-term and black box/explainable. Wang et al (2022) propose a hierarchical taxonomy of fairness definitions, which is then used to categorize the optimization and fairness metrics applied in their identified studies. Our work differs from previous surveys by specializing in tangible solutions proposed for consumer-side fairness in recommender systems. The specialization allows for a complete overview of the available literature and a higher focus on technical aspects to enhance comparisons. We also categorize our identified research using a new taxonomy centred on high-level notions of fairness interpretations and incorporation methods, which has a purposely high-level and general definition to be applicable and extendable to new fairness interpretations and incorporation methods. The completeness of our survey is exemplified by Table 2, which indicates that when adjusting for time, the largest coverage overlap with the broader surveys only comprises 18 out of the 43 articles we identified in the same time interval.

\begin{table} \begin{tabular}{l l l l l} \hline \hline & **Adj. Coverage** & **\% Adj. Coverage** & **Tot. Coverage** & **\% Tot. Coverage** \\ Deldjoo et al (2023) & 14/43 & 33\% & 14/47 & 30\% \\ Li et al (2022c) & 18/43 & 42\% & 18/47 & 38\% \\ Wang et al (2022) & 16/41 & 39\% & 16/47 & 34\% \\ \hline \hline \end{tabular} \end{table} Table 2: Coverage in the most relevant surveys of the articles identified in this survey. Raw counts and percentages are presented, both adjusted and unadjusted for the publish date of the last considered article in each survey.

## 3 Methodology

The methodology of this survey covers the systematic selection process applied for identifying and screening relevant studies, followed by the definition and justification of applied taxonomies, as well as descriptions of how the taxonomies are used to categorize and structure further discussion.

### Selection Process

The selection process comprised the definition of concise acceptance criteria, identification of relevant publication indexes, query definition, two rounds of article screenings, and final in-depth reading of the remaining candidate articles. This section presents the acceptance criteria, details the queries and how they were defined, and presents a full overview of the number of studies involved in each step of the selection process.

#### 3.1.1 Acceptance criteria

Five acceptance criteria have been defined in line with our goals of examining the existing literature of tangible models considering consumer-side fairness in recommender systems:

1. The study considers _recommender systems_, see Section 2.2.
2. The study considers consumer-side fairness, either explicitly or through a multi-stakeholder focus. **Note**, Group recommendation and Long tail/Cold-start recommender systems are excluded, see Section 2.4.
3. The study is published in a peer-reviewed conference or journal.
4. The study proposes a novel model or method.
5. The study evaluates the fairness of the proposed model or method.

#### 3.1.2 Query Definition and Overview

The keywords were kept general to avoid filtering out potentially relevant research.
The search queries were chronologically bound by 2016-01-01 and 2022-10-01, where the lower bound was set based on prior knowledge of the topic and preliminary querying for validation purposes. The topic started gaining noticeable traction in 2017, but the early adopters had three publications before this (Kamishima et al, 2012, 2013, 2016). The first two articles do not appear to have inspired other researchers, but since 2016, there has been a gradual increase in the number of articles each year. The chronological bound was combined with the keyword combination "recommend*" and "fairness", and both keywords had to be matched in the title, the abstract, or both. "Recommend" was given a wildcard suffix matcher to match both "recommender" and "recommendation". A similar wildcard, "fair*", was used instead of "fairness" in the DBLP index to compensate for not being able to match within the abstracts. Observations in known research and research found through preliminary querying confirmed that all articles that matched "fair" in the title also matched "fairness" when considering the abstract. The wildcard was only used in title-only queries since it significantly increased the number of false positives when matching in both title and abstract. Fairness is becoming a well-established concept within the ML community, and most, if not all, research uses the full term at least once before potentially switching over to the shorthand "fair". The full selection process is detailed in a PRISMA flow diagram (Page et al, 2021) in Figure 1.

Figure 1: A PRISMA flow diagram illustrating the full selection process.

### Taxonomy

While there have been previous attempts at proposing novel taxonomies for categorizing fairness approaches in recommender systems based on which Fairness Interpretation is pursued, we argue that there are alternative taxonomies that offer additional insight and value. The most recent taxonomy is proposed by Wang et al (2022), who first propose splitting between process and outcome focus, then two alternatives for splitting outcome-focused fairness on target and concept. One challenge when applying this to consumer-side fairness research is that many of the named _concept-based_ fairness categories do not occur that often, and the vast majority of identified research would be classified as either optimizing and evaluating for _Process Fairness_ or _Consistency Fairness_. We also argue that there may be value in further separating different high-level Fairness Interpretations, e.g., _Consistency Fairness_ may consider distance notions that only compare the distribution of predictions given to different groups, but it can also consider distance notions that measure differences in how the predictions of the same groups match the targets. We propose a new taxonomy centred on which high-level Fairness Interpretation is considered when optimizing and evaluating models. Besides resulting in a balanced segmentation of the identified research, the taxonomy separates key differences in mentality when approaching fairness, some of which fundamentally conflict with each other. To further structure and analyze the research, we propose applying two other, more established, concepts which detail how/when the fairness consideration is incorporated and which type of recommender model is applied, respectively.
#### 3.2.1 Fairness Interpretation Taxonomy

While several fairness definitions from the fields of law and psychology have been formally defined for machine learning, see Section 2.3, they cannot trivially be applied for categorizing the studies considered in this survey. The formal definitions are occasionally implemented as metrics, but since they mostly consider the model's outcome, it is challenging to define how they should be adhered to during optimization. Another challenge is that some of these definitions are conceptually similar and only differ in minute details. We instead propose categorizing the Fairness Interpretation on a higher and more conceptual level, while remaining compatible with the more low-level formal definitions. For instance, Equality of Opportunity and Equalized Odds share a high-level objective of balancing utility measures evaluated for different sensitive groups that consider both the predictions and how they match the targets. Two identified interpretations have further been assigned sub-categories for finer distinctions between similar concepts. The full taxonomy is illustrated in Figure 2, and the different interpretations are further described in the following sections and illustrated in Figure 3.

Recommendation Parity: Recommendation Parity methods consider the distribution of ratings, rankings or preference scores. The Recommendation Parity-based approaches and metrics are strictly applied for group fairness views and are optimized when the recommendation distributions given to different sensitive groups are similar. The strong focus on recommendation distribution, while completely disregarding differences in the achieved utility of different sensitive groups, makes Recommendation Parity a highly contrasting Fairness Interpretation to Utility-Based fairness and various Custom Fairness Interpretations. Recommendation Parity for consumer side fairness can be further split based on the level at which the parity is optimized or measured. Some optimize and measure parity at a **global** level, while others consider parity in the rating of individual **items** or **item groups**. The former is less constricting, as sensitive groups may disagree on individual items as long as the disagreements cancel out globally, i.e., if one group is more fond of an item than another, perfect global parity is regained if an identical reversed profile exists for another item. Local-level parity requires different sensitive groups also to rate/prefer individual items/item groups similarly.

Neutral Representation: Adversarial and orthogonality approaches both consider the case where the model is oblivious to the sensitive attributes of the users to be _fair_, and achieve this by making sure the intermediate representations of users do not reflect any of their sensitive information. A special case of this, achievable through modelling latent variables independent of the sensitive variables in causal models, is defined as Counterfactual Fairness by Kusner et al (2017). Counterfactual Fairness is considered an individual form of fairness, in the sense that changing the sensitive attributes in individual cases should not affect the outcome. In the more general case, the indiscriminate and global removal of sensitive information based on correlations between sensitive attributes and representations arguably resembles Recommendation Parity more, as recommender systems using sensitive-neutral representations should inherently lead to sensitive-neutral recommendations.
However, given the unique perspective of focusing on representations rather than the recommendations, the characteristic optimizations, the representation-centric evaluation, and the overall prevalence of such approaches, a dedicated interpretation category for neutral user representations is still deemed warranted. There are no conceptual sub-categories of this Fairness Interpretation, but four main strategies for neutralizing representations have been identified: _Adversarial_, _Orthogonality_, _Fairness-aware sampling_ and _Probabilistic_. Succinctly, adversarial approaches train classification models to discriminate sensitive attributes from user representations. These models are trained in parallel with the main model(s), and their insights are used to inform the main model how it should be updated to make it harder for the adversarial model to accurately discern sensitive attributes. Proposition 2 from Goodfellow et al (2014) states that given a generator model and an adversarial model with sufficient capacity, the generator can be trained to generate data adhering to the original data distribution. By replacing the considered task with the task of distinguishing a binary sensitive attribute given representations, a similar proposition can be made for adversarial approaches for producing neutral representations, i.e., given sufficient capacities, neutral representations are achievable.

Figure 2: The proposed taxonomy based on Fairness Interpretation.

Orthogonality approaches utilize the notion of representation space in considering the representations to be vectors in a high-level semantic space. They identify sensitive dimensions in said space and apply different methods to ensure that the representations are orthogonal to these dimensions. Thus, the representations will ideally not contain any intrinsic sensitive information themselves. Fairness-aware sampling and probabilistic modelling have also been used to reduce the amount of sensitive information in representations of different flavours. Fairness-aware sampling approaches alter sampling done when training representations to be more diverse with respect to sensitive attributes. Probabilistic approaches for producing neutral representations typically explicitly provide or model sensitive information during training, which disincentivizes the rest of the model from modelling the same sensitive information.

Figure 3: Diagram that illustrates the high-level differences between the three non-Custom Fairness Interpretations in a scenario where the sensitive groups \(s_{1}\) and \(s_{2}\) display different preferences and the base recommender performs better for \(s_{1}\). The preferences and recommendations given to the groups are illustrated as probability distributions, while model representations are projected into two-dimensional scatterplots. The Recommendation Parity interpretation idealizes when the recommendation distributions overlap, while a Utility-Based interpretation requires that the respective recommendation distributions match and mismatch the "true" distributions equally. The Neutral Representation interpretation is optimized to move from the case where representations of different groups can be separated into distinct clusters to the case where the clusters overlap or are indistinguishable.

Utility-Based Fairness: Utility-Based fairness is a comprehensive class focusing on differences in utility aspects of the recommendations given to different sensitive _groups_ or different _individuals_.
That is, the distribution of recommendations of different individuals/groups is free to differ significantly as long as it does not affect the utility of various entities in a way deemed unfair relative to the utility of other entities. The one key requirement that separates this interpretation from the other interpretations is that the utility functions considered **must be tied to the ground truth targets** in some way. That is, correctly recommending an item will directly affect the utility value. Numerous variations of optimization terms and metrics fall under this interpretation, as the notion of fairness may depend on the utility measure and the scenario. The observed variations are based on equal recommendation metric scores, equal utility variance, the loss of individual utility in two-sided fairness settings, etc.

Custom: The final interpretation encompasses measures of adherence to custom fairness definitions and is a collection of optimizations that do not fit in any other interpretation category, e.g., parity with respect to derived attributes or balancing of custom utility measures that are independent of the ground-truth targets.

#### 3.2.2 Fairness Incorporation

The notions of pre-, in- and post-processing methods for injecting additional specialized considerations in approaches are well established both within the field of recommender system fairness and machine learning as a whole (Caton and Haas, 2020; Mehrabi et al, 2021; Deldjoo et al, 2023), and categorize whether the methods take place before, within, or after the application of the model they enhance. To structure the identified research, we propose extending this taxonomy with an additional level to better represent the observed variety of approaches applied to incorporate fairness awareness. The second level contains a single sub-category each for pre- and post-processing approaches, but we propose four sub-categories to cover the diversity of processing methods. The full overview is illustrated in Figure 4, and each proposed sub-category has been given a brief description in this section.

Data Augmentation: The only sub-category of pre-processing methods is Data Augmentation, which covers all methods that inject fairness consideration by augmenting the model's input data.

Loss enhancement: Loss enhancement methods encourage fairness consideration through additional terms in the loss used for optimizing the model; a minimal sketch of such a term is given after this list. Positive aspects of loss enhancement methods are that they can be applied to many model types, are flexible in definition, and can significantly change predictions through minimal changes to an approach. However, extra loss terms do not inherently improve the modelling capacity of a model but may introduce more complex dynamics that would benefit from more modelling capacity or changes to the model architecture.

Probabilistic: The probabilistic fairness approaches apply probabilistic concepts to encourage independence of recommendation and sensitive features, apply soft constraints, or filter out sensitive information. Unlike the other in-processing sub-categories, probabilistic fairness approaches are not easily achieved through a smaller extension to an arbitrary model. This variation of Fairness Incorporation usually requires that the applied model is probabilistic in nature itself, at least partially.

Algorithmic: Algorithmic approaches incorporate fairness by changing smaller aspects of an existing algorithm or through one-time processing, e.g., through selective sampling or removal of sensitive projections in representation space.

Adversarial: Adversarial approaches apply adversarial models to detect sensitive information in intermediate representations, which reveals how the main model can be updated to better filter out that information.

Re-Ranking: Re-ranking approaches re-rank the recommendations of one or more base recommender systems according to new or changed objectives, e.g., introducing fairness objectives that are optimized along with regular recommendation utility.
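The loss-enhancement sketch referenced in the list above is given here. It adds a group utility-gap penalty to a plain mean-squared-error objective; this is a generic template of our own construction rather than the loss of any specific surveyed approach.

```python
import numpy as np

def enhanced_loss(sq_errors, groups, lam=1.0):
    """Base MSE plus a loss-enhancement term penalizing the gap between the
    mean squared errors of two sensitive groups. `sq_errors` holds per-rating
    squared errors; `groups` holds the rating owner's group label (0 or 1)."""
    base = sq_errors.mean()
    gap = abs(sq_errors[groups == 0].mean() - sq_errors[groups == 1].mean())
    return base + lam * gap

# Toy usage: group 1 is modelled worse, so the enhanced loss exceeds the MSE.
err = np.array([0.1, 0.2, 0.9, 1.1])
grp = np.array([0, 0, 1, 1])
print(enhanced_loss(err, grp))  # 0.575 + 1.0 * |0.15 - 1.0| = 1.425
```

Parity- and variance-based terms slot into the same template by swapping out the penalty.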
#### 3.2.3 Model Types

A third categorization system is used to categorize approaches by which type of recommender system model architecture they fall under. The model type can affect how fairness awareness can be incorporated and influence the general recommendation task. Comparing approaches based on the same model type is also made easier by shared implementation details and premises. Several model groups have been defined based on prevalence and shared concepts, and they are listed by an acronym and a description in the following list.

* **CF:** Neighbourhood-based Collaborative filtering.
* **MF:** Matrix-Factorization.
* **NCF:** Neural Collaborative Filtering, taken to mean neural network-based collaborative filtering methods that more specialized model groups do not cover.
* **Graph:** Various Graph-based models and methods. Graph Neural Networks, Graph Convolutional Networks, Graph Embeddings etc.
* **AE:** (Variational) Auto Encoders.
* **Probabilistic:** Various Probabilistic models and methods. Probabilistic Soft Logic, Latent models, Bayesian Networks etc.
* **Classification:** Various Classification methods. Random Forest, Gradient Boosting, Naive Bayes etc.
* **Bandit:** Contextual Bandit.

Figure 4: Fairness Incorporation categories.

#### 3.2.4 Structuring of Main Discussion

The three different categorizations will all be used when discussing and comparing the identified approaches. Three sections are reserved for pre-, in- and post-processing Fairness Incorporation approaches, and their content is structured by the corresponding Fairness Incorporation sub-categories and both Fairness Interpretation and model types. The Fairness Interpretation taxonomy is used in an overview and for a focused comparison of fairness optimization. In contrast, model types are used to structure a more general technical discussion to highlight comparable implementational choices.

### Full Model Overview

This section presents a preliminary analysis and overview of all identified research according to model type and Fairness Incorporation method. The motive is to put the topic into the broader context of general recommender systems and to provide an overview of all covered research. A full overview is found in Table 3. Note that the same article may fall under multiple types of Fairness Incorporation and model types, since the proposed approach may apply multiple types of Fairness Incorporation strategies and be applied on multiple base models. Also note that while many incorporation methods can be applied to multiple different model types, especially pre- and post-processing methods, only observed combinations are covered. The fact that a method is adaptable for other model types does not guarantee that undocumented combinations will achieve similar results or improvements. Furthermore, the current trends of the field are better reflected when keeping to the combinations that have been actively researched.
#### 3.3.1 Model Analysis

Some clear trends can be observed in the full table. The field has experienced rapid growth, with most research taking place in the most recent years. Pure loss enhancement approaches saw a lot of attention among the early adopters, especially when used together with matrix factorization models, but are currently rarely used as the sole Fairness Incorporation method. Re-ranking methods saw a similar burst of attention in 2020 and 2021. Still, they did not dominate the field, unlike early loss enhancement approaches, as multiple other directions were researched simultaneously. Adversarial approaches were slow to appear but have since become popular while seemingly being used with a more varied selection of base recommender system types. Probabilistic and algorithmic approaches are the smallest in-processing groups but are characterized by being fairly evenly distributed across time and being applied with specific types of recommender systems. There also appears to be a recent trend of applying multiple Fairness Incorporation strategies instead of relying solely on a single strategy.

## 4 Pre-Processing Methods

While numerous studies consider the effect of data augmentation, we only found three papers that pass all acceptance criteria. In particular, several candidates were rejected for not proposing formalized approaches or not presenting an evaluation of the achieved fairness. Pre-processing methods comprise the smallest Fairness Incorporation main category.

### Fairness Optimization

#### 4.1.1 Utility-Based Fairness

Rastegarpanah et al (2019); Fang et al (2022) propose exploiting collaborative filtering dynamics by training new synthetic users that will influence the recommendations of the real users. When training synthetic users, Rastegarpanah et al (2019) enhance the loss by adding terms for penalizing both the group-level and the individual-level variance of mean squared errors. In contrast, Fang et al (2022) utilize similar loss terms based on the utility metrics proposed by Yao and Huang (2017), see Section 7.3.1, and also global Recommendation Parity.

#### 4.1.2 Custom

The fairness optimization proposed by Slokom et al (2021) shares similarities with in-processing approaches optimizing for Neutral Representations but alters the input data to remove correlation between the user profiles and the sensitive attributes of the users, instead of altering the intermediate user representations. The approach achieves this by adding items that are indicative of the other sensitive groups, identified using auxiliary models, to the user profiles, and they also explore removing items at random or based on how indicative they are of the actual sensitive attributes of the user.

### Architecture and Method

The three selected papers all propose pre-processing methods that can be applied to a wide variety of model types, but all have used matrix factorization as one of their base recommender models. Rastegarpanah et al (2019) propose a method for training supplementary data that can influence polarization and improve individual and group fairness. The key insight is that introducing additional users will affect the recommendations of the original users. This insight is exploited by adding a few synthetic users with accompanying data and allowing gradients from loss terms designed to influence the polarization and fairness to flow back to these. Further, they propose two computationally cheap heuristics. The fairness optimization explicitly idealizes equal utility of individual users and groups.
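A minimal sketch of this synthetic-user idea is shown below. The actual approach backpropagates gradients through the factorization model, whereas this illustration uses (slow) finite differences and user-supplied `fit_predict` and `fairness` callables, so it should be read as a conceptual outline under those assumptions only.

```python
import numpy as np

def tune_synthetic_users(R, n_syn, fit_predict, fairness,
                         step=0.05, iters=20, eps=0.01):
    """Append n_syn synthetic user rows to the rating matrix R and tune their
    (dense) ratings by finite-difference descent on a fairness objective
    evaluated on the real users' predictions only. `fit_predict(R_aug)` refits
    the recommender and returns predictions; `fairness(pred)` is a scalar."""
    n_real = R.shape[0]
    X = np.full((n_syn, R.shape[1]), R[R > 0].mean())  # start at the global mean
    for _ in range(iters):
        base = fairness(fit_predict(np.vstack([R, X]))[:n_real])
        grad = np.zeros_like(X)
        for idx in np.ndindex(*X.shape):               # one refit per entry:
            X_eps = X.copy()                           # slow, purely illustrative
            X_eps[idx] += eps
            new = fairness(fit_predict(np.vstack([R, X_eps]))[:n_real])
            grad[idx] = (new - base) / eps
        X -= step * grad
    return X
```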
Fang et al (2022) apply the same base approach but focus on optimizing multiple fairness objectives more efficiently and smoothly by projecting the gradients of different objectives onto each other if they conflict. The fairness objectives fall under both Utility-Based fairness and Recommendation Parity. Slokom et al (2021) modify the data of existing users through an extension of the approach proposed by Weinsberg et al (2012) instead of training new ones. An auxiliary logistic regression model is trained to tell how indicative items are of the gender of the users that like them. This information is used to select items to be added or removed from user data to make the data less indicative of gender. The addition process specifically intersects lists of indicative items with recommendations from a user-based collaborative filtering model to motivate the addition of relevant items.

\begin{table} \begin{tabular}{l l l} \hline \hline & & **Data Augmentation** \\ \hline **Recommendation Parity** & **Global** & Fang et al (2022) \\ & **Local** & \\ \hline **Neutral Representation** & & \\ \hline **Utility-Based** & **Group** & Rastegarpanah et al (2019) \\ & & Fang et al (2022) \\ & **Individual** & Rastegarpanah et al (2019) \\ \hline **Custom** & & Slokom et al (2021) \\ \hline \hline \end{tabular} \end{table} Table 4: Overview of the identified pre-processing approaches structured by the Fairness Interpretation and Fairness Incorporation of their optimization. Approaches that consider multiple Fairness Interpretations are listed in multiple rows.

## 5 In-Processing Methods

In-processing methods are the most represented among the main categories, and their dominance has been constant since the birth of the field. They are characterized by being the most specialized approaches, as the base models themselves are adapted and changed.

### Fairness Optimization

#### 5.1.1 Recommendation Parity

Optimization of Recommendation Parity fairness, i.e., the statistical parity of recommendations given to different sensitive groups, is mainly found among the in-processing methods and was popular during the field's early years.

Global Recommendation Parity: Kamishima et al (2013); Kamishima and Akaho (2017) propose adding loss terms for matching the mean rating and preference of different sensitive groups, while Dickens et al (2020) devise a probabilistic soft logic rule of similar design for the same goal. More comprehensive approaches for matching global recommendation distributions beyond the first moment are proposed by Kamishima et al (2012, 2016, 2018). Kamishima et al (2012, 2018) introduce different loss terms for minimizing the mutual information of the ratings and the sensitive groups in matrix factorization. In a slightly different approach, Kamishima et al (2016) apply a latent factor model where the rating variable is considered independent of the sensitive group variable and optimize their model using the Expectation Maximization algorithm.

Local Recommendation Parity: In the case of local Recommendation Parity, all relevant research we have found only considers the first moment when matching the recommendations of different sensitive groups. Kamishima et al (2013) propose adding a loss term that penalizes the squared difference of item ratings between different sensitive groups as an alternative to the already mentioned global version. Similarly, Islam et al (2021) apply the same idea but opt for an absolute difference instead of a squared difference. The probabilistic soft logic approach proposed in Farnadi et al (2018) defines rules for encouraging both item-group and item-level parity.
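A minimal sketch of such an item-level parity term, with the squared variant corresponding to the former idea and the absolute variant to the latter, might look as follows (`pred` is a hypothetical users × items matrix of predicted ratings and `groups` a binary group vector; the names are ours):

```python
import numpy as np

def item_parity_penalty(pred, groups, squared=True):
    """Sum over items of the (squared or absolute) difference between the two
    sensitive groups' mean predicted rating for that item."""
    diff = pred[groups == 0].mean(axis=0) - pred[groups == 1].mean(axis=0)
    return np.sum(diff ** 2) if squared else np.sum(np.abs(diff))
```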
#### 5.1.2 Neutral Representation

The objective of Neutral Representation fairness cannot be achieved without altering the model; thus, it is only pursued by in-processing approaches. Optimization of this Fairness Interpretation can be achieved by applying different strategies for filtering out intrinsic sensitive information in representations within the model. The following paragraphs are structured by the technique applied to achieve neutral representations, see also Fig. 4.

Adversarial: The approaches proposed by Resheff et al (2019); Wu et al (2021); Xu et al (2021); Borges and Stefanidis (2022); Rus et al (2022) all apply adversarial models directly on model representations. Resheff et al (2019) pass the latent user factors of their matrix factorization approach to their adversarial model, while Wu et al (2021) do the same with one of the multiple user representations they train in a composite NCF model. Xu et al (2021) feed their adversarial model a linear combination of the user representation in a base recommender model and a representation they base on an auxiliary knowledge graph for modelling sensitive user attributes. Rus et al (2022) propose a neural classification model and apply an adversarial model on a hidden layer in said model. Finally, Borges and Stefanidis (2022) apply an adversarial model to discriminate the latent representation in their variational autoencoder-based model. A slightly more intricate scheme is proposed by Wei and He (2022), who concatenate the observed ratings to the representations that are fed to the adversarial model, which they argue will improve the neutrality of the representations and also make the representations independent with respect to the sensitive attribute conditioned on the observed ratings. They further add a second adversarial model, which is fed predicted ratings along with corresponding observed values and item embeddings. Bose and Hamilton (2019); Li et al (2021b) argue for letting users dynamically decide which sensitive attributes they are comfortable with the model using. To support this, both propose training optional filters for filtering out different types or combinations of sensitive information from user representations in graph- and matrix-factorization models. The filters are trained using adversarial models. A similar approach is proposed by Wu et al (2022b), who train _adaptors_ (Houlsby et al, 2019) within the _transformers_ (Vaswani et al, 2017) that make up their model. The adaptors dynamically filter out different combinations of sensitive attributes based on user- and task-based settings in a sequential recommendation setting. Wu et al (2021b); Liu et al (2022a,b,c) all consider graph neural network methods and the construction of higher-order graph representations by accumulating neighbouring representations in the recommendation graph. The approaches apply adversarial models to discourage the encoding of sensitive attributes in the user- and item-level representations, which also mitigates the accumulation of sensitive information in the higher-order neighbourhood representations. Liu et al (2022c) further supplement the adversarial discrimination loss with a loss term on the covariance of the predicted attribute and the actual sensitive attribute. Liu et al (2022a) instead design and utilize self-supervising loss terms to enhance the representations and mitigate imbalance issues caused by imbalanced sensitive attributes.
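The common core of these adversarial schemes can be illustrated in a few lines: a simple adversary (here a logistic probe, a simplification of our own) is trained to predict a binary sensitive attribute from the user embeddings, and the embeddings then take a reversed-gradient step. Real implementations use deeper adversaries and interleave such rounds with the recommendation loss.

```python
import numpy as np

def adversarial_round(E, s, w, lr_adv=0.1, lr_emb=0.1, lam=1.0):
    """One adversarial round on user embeddings E (n x d): the logistic
    adversary w learns to predict the binary sensitive attribute s, then the
    embeddings take a reversed-gradient step that worsens the adversary."""
    p = 1.0 / (1.0 + np.exp(-E @ w))               # adversary's predictions
    n = len(s)
    w = w - lr_adv * E.T @ (p - s) / n             # adversary descends its loss
    E = E + lr_emb * lam * np.outer(p - s, w) / n  # embeddings ascend it
    return E, w
```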
Liu et al (2022a) instead design and utilize self-supervised loss terms to enhance the representations and mitigate imbalance issues caused by imbalanced sensitive attributes.

Orthogonality: Orthogonality-based approaches apply additional loss terms or explicit removal of sensitive projections to make representations orthogonal to explicit or implicit sensitive dimensions in the representation space. Wu et al (2021a) model two separate user representations: one for inferring sensitive information and one for providing neutral representations. They devise a loss term that encourages the two representations to be orthogonal and further encourage the neutrality of the second representation through an adversarial approach. A more explicit approach is pursued by Islam et al (2019, 2021), where a post hoc step infers sensitive dimensions in the representation space by taking the difference between the mean representations of the sensitive groups. The projections of the sensitive dimension onto each representation are then explicitly subtracted. In the case of Islam et al (2021), the orthogonality processing supplements the Recommendation Parity loss term (see Section 5.1.1).

Sampling-Based Representation Training: Rahman et al (2019); Li et al (2022b) both adjust the sampling strategy used when training representations. Rahman et al (2019) propose to balance the sampling of the next user according to sensitive groups when training graph representations using random walks. In contrast, Li et al (2022b) adjust the probability of sampling the triplets needed for training knowledge graph representations in a manner that balances out the correlation of sensitive groups and items across all users.

Probabilistic Approaches: The models by Buyl and Bie (2020); Li et al (2022a) are fitted using a prior that is informed of sensitive attributes to allow the rest of the model to focus on other aspects. When the model is used, the sensitive prior is replaced by one oblivious to the sensitive attributes. The intention is to produce fair recommendations along with neutral representations. Frisch et al (2021) explicitly model a variable for representing the contribution of the sensitive attributes instead of using a sensitive prior. Ideally, this sensitive variable can soak up all sensitive information, leaving the rest of the model neutral. When recommending, the model drops the parts that are dependent on the sensitive attribute.

#### 5.1.3 Utility-Based Fairness

Utility-Based fairness optimization attempts to balance utility measures that involve ground-truth considerations on a group or individual level. While only group-level optimizations are found among the in-processing methods, there is still significant variation in the considered approaches. Yao and Huang (2017) propose four Utility-Based fairness metrics for recommender systems, then adapt and apply each metric as a loss term in a matrix factorization approach. One of the metrics is similarly adapted by Dickens et al (2020) as a probabilistic soft logic rule. Numerous variations of straightforward loss terms based on Group Utility-Based Fairness Interpretations are proposed for different models: the contextual bandit approach proposed by Huang et al (2021) penalizes differences in cumulative mean rewards of different sensitive groups. Liu et al (2022c) and Borges and Stefanidis (2022) both supplement adversarial approaches with Utility-Based fairness loss enhancement.
Liu et al (2022c) penalize the absolute differences in pairwise recommendation loss of different sensitive groups, while Borges and Stefanidis (2022) penalize differences between the reconstruction loss of a protected group and that of the average user in their variational autoencoder model. Finally, Yao and Huang (2021) train personalized regularization weights based on the loss of a specific sensitive group to force the matrix factorization model to focus more on that group's achieved utility.

Wan et al (2020) consider a unique recommender system setting where users and items are segmented into market segments based on sensitive groups and item groups, and argue that the utility within the market segments should be similar. The proposed model applies loss terms that penalize error variation between user groups, item groups, and market segments. The authors also explore a market segment-level parity alternative by penalizing the variances of predicted ratings instead of errors.

Li et al (2021a) propose a less direct way of encouraging the model to value the utility of non-mainstream user groups more by adding decoder components to their representations and corresponding loss terms for reconstructing the inputs. The intention is to provide the model with a stronger incentive for properly encoding all users and items, which in turn may mitigate issues with favouring the utility of mainstream user groups at the expense of everyone else. A similar goal is pursued by Liu et al (2022a), who devise a set of auxiliary goals for encouraging their model to produce richer representations of all users.

For the reciprocal setting, Zheng et al (2018) propose considering both the utility of the user that receives the recommendation and the utility of the recommended users themselves. On a global level, this scheme balances the utility of two user groups based on the user's role in individual recommendations, i.e., reference users and users being recommended to reference users.

#### 5.1.4 Custom

Bobadilla et al (2021) utilize empirical trends in the input data to design a set of indexes that represent users' and items' intrinsic _sensitive value_. They further design a loss term to penalize recommending items to users if the index values differ significantly. Loss enhancement is also applied in the neighbourhood-based collaborative filtering model proposed by Burke et al (2018) to balance the contribution of peers of different sensitive groups when recommending. Specifically, the added loss term penalizes the absolute difference of the model-specific user-to-user weights of different sensitive groups.

### Architecture and Method

#### 5.2.1 Loss enhancement

Matrix Factorization: The early works of Kamishima et al are the earliest identified research that satisfies all acceptance criteria in this survey. The four publications (Kamishima et al, 2012, 2013, 2018; Kamishima and Akaho, 2017) all propose matrix factorization models where the fairness aspects are modelled through different, but related, loss terms. They all share the overall goal and fairness objective of ensuring statistical parity of recommendations. Additionally, all train different sets of parameters for the different sensitive groups they consider. In the first iteration, Kamishima et al (2012) propose a loss term that is an approximation of the mutual information of the rating and the sensitive attributes.
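To make the mutual-information objective concrete, the sketch below estimates \(I(\hat{R};S)\) empirically by bucketing ratings. Note that this is an evaluation-style estimate in the spirit of the approximation described above, not the differentiable loss term of the cited work; all data are illustrative.

```python
import numpy as np

def rating_sensitive_mi(ratings, group, bins=5):
    """Empirical mutual information I(R;S) between bucketed ratings and
    a sensitive attribute: sum_{r,s} p(r,s) * log(p(r,s) / (p(r)p(s)))."""
    edges = np.histogram_bin_edges(ratings, bins)
    r_bin = np.digitize(ratings, edges[1:-1])  # bucket index per rating
    mi = 0.0
    for s in np.unique(group):
        p_s = np.mean(group == s)
        for r in np.unique(r_bin):
            p_rs = np.mean((r_bin == r) & (group == s))
            if p_rs > 0:
                mi += p_rs * np.log(p_rs / (np.mean(r_bin == r) * p_s))
    return mi

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
ratings = rng.normal(3.5, 1.0, 1000) + 0.3 * group  # group-dependent shift
print(rating_sensitive_mi(ratings, group))
```

A value near zero indicates that the (bucketed) ratings carry little information about the sensitive attribute.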
Next, Kamishima et al (2013) introduce an efficiency improvement with alternative loss terms that penalize differing ratings per sensitive group, averaged over all items or individually. The paper by Kamishima and Akaho (2017) considers similar loss terms, but for an implicit-feedback recommender system using ranking-based evaluation. It is noted that the approach has little effect on the ranking order of each user. Finally, Kamishima et al (2018) return to rating-based recommender systems, introducing two new methods that match the first and second moments of the rating distributions for ensuring statistical parity. Both methods approximate the rating distributions given the sensitive attribute with normal distributions, and then penalize the Bhattacharyya distance (Bhattacharyya, 1943) and the mutual information, respectively.

Another early contribution was by Yao and Huang (2017), who argue for fairness definitions based on matching utility rather than Recommendation Parity. They propose four new Utility-Based _unfairness_ metrics that measure imbalances in how well the system recommends for different sensitive groups. Further, they devise loss terms based on these metrics and an additional parity-based metric to compare how well models trained with the different loss terms fare when evaluated using all metrics.

Zheng et al (2018) is concerned with recommending matches after speed-dating, which is a reciprocal recommendation setting in the sense that consumers are recommended to other consumers. The model predicts users' impressions of speed-dates with different partners and considers a Custom utility metric based on the similarity of a user's expectation of a partner and their impression of the speed-date partner. The utility metric is also used in the added loss term, which is designed to maximize the utility of both users in each potential match. The motivation is to balance the utility achieved by the users receiving recommendations and the utility achieved by the users being recommended to them. Considering the utility of both involved users may also improve the overall success of this specific application, as mutual interest is ideal in a matchmaking scenario.

The approach proposed by Wan et al (2020) is designed to address retail marketing bias by better balancing the achieved utility in different market segments. In particular, they define market segments based on sensitive user groups and attributes of the models used in marketing different items, e.g., one segment may consist of male users and items marketed only using female models. The proposed approach attempts to achieve similar utility within each distinct market segment by penalizing error variance between the different segments and other groupings. An alternative configuration is also considered where the model instead penalizes predicted rating variance, resulting in a Recommendation Parity Fairness Interpretation.

The last identified loss enhancement-based matrix factorization approach was proposed by Yao and Huang (2021). The key idea of the model is to improve the utility of disadvantaged users through personalized regularization. This is achieved through a multi-step process that alternates between training for the general recommendation task while keeping the personalized regularization weights static, and updating those same weights based on the recommendation loss of the disadvantaged group.
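The loss-enhancement pattern shared by the matrix factorization approaches above can be sketched compactly: a plain factorization objective extended with a first-moment parity term. This PyTorch sketch is a hedged illustration of the pattern, not any single cited method; the weight, dimensions, and data are illustrative.

```python
import torch

n_users, n_items, dim = 50, 40, 8
P = torch.randn(n_users, dim, requires_grad=True)  # user factors
Q = torch.randn(n_items, dim, requires_grad=True)  # item factors

u = torch.randint(0, n_users, (256,))   # sampled (user, item, rating, group)
v = torch.randint(0, n_items, (256,))
r = torch.rand(256) * 4 + 1             # observed ratings in [1, 5]
s = torch.randint(0, 2, (256,))         # binary sensitive attribute

pred = (P[u] * Q[v]).sum(-1)
rec_loss = ((pred - r) ** 2).mean()
# enhancement: match the mean predicted rating of the two groups
parity = (pred[s == 0].mean() - pred[s == 1].mean()) ** 2
(rec_loss + 0.5 * parity).backward()    # 0.5 is an illustrative weight
```

The cited works differ mainly in what the added term measures (means, full distributions, mutual information) and in how the trade-off weight is chosen.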
Neighbourhood-based Collaborative Filtering: Burke et al (2018) propose enhancing the loss of a user-based collaborative filtering approach to encourage users' neighbourhoods of peers to be better balanced with respect to the considered sensitive attributes. To this end, they devise a loss term that penalizes cases where the coefficients used for weighting the influence of peers are skewed towards a specific group, e.g., when the sum of male peer coefficients is greater than that of female peers.

Neural Collaborative Filtering: Bobadilla et al (2021) are unorthodox in terms of fairness definition and approach. They give each item a value based on the distribution of, e.g., the gender of users who like it, thus representing how _gendered_ the item is. The items are then used in reverse to measure how gendered each _user_ is based on the items they like. The authors go on to penalize recommending items with a gendered value far from the user's. Li et al (2021a) aim to improve the utility of collaborative filtering models for users that are not mainstream. Their approach involves a factorization step with user and item representations, where the representations double as the latent representations in two autoencoders. The autoencoders are added to encourage the model to properly encode all input information in the latent representations, and not neglect information only relevant to a subset of the users.

Bandit: The only identified bandit approach was proposed by Huang et al (2021), and is a contextual bandit method that penalizes differences in cumulative mean rewards of different sensitive groups. The authors construct a synthetic dataset for video recommendations and define a reward function that, for instance, rewards cases where the gender of the user matches that of the video speaker.

#### 5.2.2 Probabilistic

Graph: Buyl and Bie (2020) consider link prediction in social graphs and apply a probabilistic model for training graph representations based on the work by Kang et al (2019). They encode prior knowledge of the relations in the network using a prior term, which frees up the representations from encoding the same knowledge. Buyl and Bie (2020) leverage this by designing priors that contain sensitive information to be used during training but replaced in the final recommendations. Li et al (2022a) further adapt the approach for peer recommendation in online learning and introduce changes to the negative sampling used during training.

Probabilistic Model: Farnadi et al (2018) and Dickens et al (2020) both apply models based on Probabilistic Soft Logic (PSL) (Bach et al, 2017) for fairness-aware recommendation. PSL allows building probabilistic models from human-interpretable first-order logic rules. Both models apply a base set of logical rules for the general recommendation task, e.g.,

\[\text{SIMILAR\_USER}(u_{1},u_{2})\land\text{RATING}(u_{1},i)\implies\text{RATING}(u_{2},i),\]
\[\text{SIMILAR\_ITEM}(i_{1},i_{2})\land\text{RATING}(u,i_{1})\implies\text{RATING}(u,i_{2}).\]

Farnadi et al (2018) extend the model with fairness rules based on parity, e.g., one sensitive group's rating of an item implies the rating of a different sensitive group and vice versa. Dickens et al (2020) consider coarser parity-based rules and add others for encouraging equal utility for the different sensitive groups. Further, they allow modellers to adjust the importance of different fairness terms.
They also discuss using the model together with an arbitrary black-box model to inject fairness and interpretability, which can be thought of as a form of re-ranking.

In the work by Kamishima et al (2016), two different graphical models are proposed for modelling ratings independently of sensitive group membership. The models are realized as latent class models and optimized through the Expectation-Maximization algorithm. Frisch et al (2021) propose using a latent block model for clustering users and items, then model the rating mean based on the associated cluster mean, individual variables for the item- and user-specific influences, and finally an item-specific variable that is controlled by the user's sensitive attribute. This final variable models the sensitive information and is only used during training, similar to how informed priors are used in Buyl and Bie (2020). The model is optimized using variational inference.

#### 5.2.3 Algorithmic

Neural Collaborative Filtering: Islam et al (2019) explicitly subtract sensitive projections from user representations in a neural collaborative filtering model. They consider both the scenario with a single binary sensitive attribute and the scenario with multiple ones, e.g., male/female and young/senior. Some of the same authors (Islam et al, 2021) propose a more intricate approach where they utilize transfer learning to pre-train user representations and neural network weights in a non-sensitive recommendation setting. The user representations are then processed in the same way as in Islam et al (2019) to be used in a sensitive recommendation setting. The non-sensitive settings considered are film recommendation and social media action recommendation, while the sensitive settings are occupation and college major recommendation, respectively. A parity-based loss term is applied in addition to the user representation processing to incorporate fairness in the sensitive settings. Li et al (2022b) propose a fairness-aware sequential recommender system in which an integral part is training item representations that capture the contextual information of the items and their relations. The authors use fairness-aware sampling when training said representations. Specifically, the sampling probability is set to adjust for any empirical skewness in how an item is preferred by different sensitive groups.

Graph: The approach by Rahman et al (2019) is designed to be used in reciprocal settings and is tested on recommending peers in social networks while considering sensitive groups based on gender and race. The base representation algorithm performs random walks over the graph by sampling the next user among the current user's peers, i.e., the users the current user has a relation to in the observed data. Their fairness view is introduced by first sampling the peer's sensitive attribute uniformly, then sampling as usual from the qualified peers only. Xu et al (2021) work with knowledge graph-based recommender systems. They propose training user representations of an auxiliary graph for representing sensitive attributes and their hidden relationships through a multi-layered neural network. This user representation is combined with that of the original recommender system in a linear transformation and then factorized with the item representation from the original recommender system. Additionally, an adversarial network is trained to classify sensitive attributes from the compound user representations and is used to filter out said information.
The purpose of the auxiliary graph representation is stated to be improving the modelling of multiple sensitive attributes and their interactions.

#### 5.2.4 Adversarial

Matrix Factorization: Resheff et al (2019) apply an adversarial gradient-based model to remove information like gender and age from the latent user factors. The authors list both privacy and fairness aspects as motivations for adversarial filtering. Li et al (2021b) adopt the approach proposed by Bose and Hamilton (2019) for multiple recommender-system-specific models, as opposed to the more general link-prediction setting considered in the original work. The approach is applied using four different models, covering matrix factorization and neural collaborative filtering. They further extend the approach by proposing a secondary option of training single filters for combinations of sensitive attributes, which is compared to the main approach of training one filter for each attribute and taking the mean of the filtered representations to apply combinations.

Graph: Bose and Hamilton (2019) propose to filter combinations of sensitive attributes from graph representations dynamically and consider both link-prediction and recommender system applications using different setups. They train one filter per sensitive attribute and combine filters by aggregating the representations processed by the filters. Each sensitive attribute is further assigned an adversarial model for removing the traces of said sensitive attribute. A binary mask is sampled during training to simulate different users who want to restrict the use of different combinations of sensitive attributes. This mask is used in practice to activate the respective filters. Wu et al (2021b) assume a graph-based perspective and that pre-trained user and item representations are provided. They suggest training filter functions for filtering out sensitive information from both representations and using these to build higher-order neighbourhood representations iteratively. For instance, the first-order neighbourhood representation of a user is based on the filtered representations of the items the user has liked or interacted with, the second-order neighbourhood contains the first-order neighbourhood representations of the same items, and so on. A multi-layered neural network is used to simultaneously process the first- and second-order neighbourhood representations into the final network-level representation, with the motivation of capturing higher-order sensitive information and how the different levels relate. Adversarial models are applied to both the filtered initial user representations and the final network-level user representations. Liu et al (2022a,b,c) also apply neighbourhood representations, along with adversarial models for removing sensitive information from the base representations. However, they differ from Wu et al (2021b) in considering end-to-end systems where the base representations are trained as part of the same model, and in using the highest-order representations explicitly as the final representations. The three papers themselves differ in how they construct higher-order neighbourhood representations. Liu et al (2022a,c) reduce the contribution of higher-order representations by dividing by a function that increases linearly with the order.
Liu et al (2022b) construct higher-order representations by passing the previous-order representations through a neural network and also explicitly consider the representations of neighbours that are increasingly further removed in the graph. The approaches further differ in their application of additional fairness optimization. Liu et al (2022c) propose two new loss terms: one for penalizing the covariance of the actual sensitive attribute and the one outputted by the adversarial model, and another for penalizing differences in pairwise losses of different sensitive groups. The former further enhances the neutrality of the representations, while the latter has an equality-of-utility motivation. Liu et al (2022a) propose to enhance the base representations by designing and applying a set of loss terms that encourage the representation of more complex information, to mitigate the poor representation of underrepresented sensitive groups in the dataset.

Neural Collaborative Filtering: A neural collaborative filtering model for fairness-aware news recommendation is proposed by Wu et al (2021a). The key idea is to contain all sensitive information in a component that can be disregarded when the model is applied, similar to the priors in Buyl and Bie (2020). Two separate user representations are trained: one is used for classifying the sensitive attribute and is intended to aggregate sensitive information, while the other is designed for housing everything else and is coupled with an adversarial model to remove sensitive information. The sum of both user representations is used for recommendation during training, while a loss term encourages the two representations to be orthogonal. Only the neutral representation is used once the model is finished training. Rus et al (2022) propose to improve fairness in a job recommender system by filtering out gender information in representations of user resumes. To this end, they first train new word embeddings on resumes and job application texts within a proprietary dataset. The trained word embeddings are then used to encode the resume texts, and the encoded texts serve as inputs to their proposed neural recommender model. An adversarial model is applied to filter out gender information at a specific layer in the multi-layered neural network model. The authors also explore a simple alternative where they instead replace gendered words with neutral words before training the word embeddings. However, the adversarial approach is shown to outperform this alternative. Wu et al (2022b) follow Bose and Hamilton (2019); Li et al (2021b) in letting the users decide which sensitive attributes can be considered when generating recommendations. However, while the preceding research trains multiple filters for different sensitive attributes that are dynamically plugged into the recommender system pipeline when activated, the filtering components in Wu et al (2022b) are static parts of the model that dynamically change behaviour based on personalized prompts concatenated with the input. The filtering components are based on the _adaptors_ proposed by Houlsby et al (2019) and are trained along with different discriminators while keeping the remaining model parameters frozen. The framework proposed by Wei and He (2022) considers multiple Fairness Interpretations simultaneously.
The framework consists of two main loops: an inner loop in which users are initialized with the latest parameters suggested by a meta-model and then optimized for different tasks, and an outer loop where the results of the inner loop are used to update the parameters of the meta-model to produce better user initializations in the next cycle. The framework applies two different adversarial models, which both attempt to detect sensitive attributes: the first one is fed user representations based on trained context representations and the users' observed ratings, while the second considers the predicted ratings, the corresponding observed ratings, and item representations.

Variational Autoencoder: Borges and Stefanidis (2022) use a variational autoencoder (VAE) as their main model. The VAE is considered a collaborative recommender, where the decoded latent representation is interpreted as an encoding from which recommendations are extracted. The VAE is extended with an adversarial model for training the model to produce neutral latent representations, and with a loss term encouraging the model to be equally good at reconstructing the inputs of a specific _protected_ sensitive group as at reconstructing the inputs of all users on average.

## 6 Post-Processing Methods

Post-processing methods share one of the main benefits of pre-processing methods in being flexible with respect to which recommender system model is used. Additionally, post-processing methods do not affect the raw data, but they are arguably the least flexible approaches when it comes to modelling since they are constrained by the provided recommendations and the data used to train the model. Post-processing methods are not as popular as in-processing methods but have received more attention than pre-processing methods.

### Fairness Optimization

#### 6.1.1 Recommendation Parity

Both of the post-processing techniques for Recommendation Parity included in the survey use the _global_ perspective. Ashokan and Haas (2021) propose using results from the training data to better align the rating distributions of the two sensitive groups in a binary setting. To this end, they add the mean rating difference of the two sensitive groups observed during training to the predicted ratings of one of the groups when using the model for a new recommendation. The approach in Dickens et al (2020) discussed in Section 5.1.1 is also applicable as a re-ranker of a base recommender model. Thus, their proposed probabilistic soft logic rule comprises a second identified strategy for optimizing Global Recommendation Parity in post-processing methods.

#### 6.1.2 Utility-Based Fairness

Group: The _re-rating_ approach proposed by Ashokan and Haas (2021) considers a scheme where the per-item average rating error for each sensitive group, as observed in the training data, is added to the individual predicted ratings of the users. This method is analogous to the previously covered parity scheme proposed in the same research. The only identified approach that optimizes group Utility-Based fairness in a two-sided fairness setting is proposed by Wu et al (2022a). Through applying the Frank-Wolfe algorithm (Frank and Wolfe, 1956), the authors optimize for consumer-side fairness by minimizing the variance of a differentiable definition of NDCG, along with the general recommendation and provider-side fairness objectives.
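The re-rating scheme of Ashokan and Haas (2021) described above admits a compact sketch: per-(item, group) corrections are estimated on training data and added to new predictions. The following NumPy sketch is a hedged illustration; names and data are illustrative.

```python
import numpy as np

def fit_corrections(pred, true, item, group):
    """Per-(item, group) mean rating error observed on training data."""
    corr = {}
    for v in np.unique(item):
        for s in np.unique(group):
            mask = (item == v) & (group == s)
            if mask.any():
                corr[(v, s)] = (true[mask] - pred[mask]).mean()
    return corr

def re_rate(pred, item, group, corr):
    """Add the stored correction to new predictions (utility variant);
    the parity variant instead stores group rating differences."""
    return np.array([p + corr.get((v, s), 0.0)
                     for p, v, s in zip(pred, item, group)])

rng = np.random.default_rng(2)
item, group = rng.integers(0, 5, 200), rng.integers(0, 2, 200)
true = rng.uniform(1, 5, 200)
pred = true + 0.4 * group + rng.normal(0, 0.3, 200)  # biased predictor
corr = fit_corrections(pred, true, item, group)
print(re_rate(pred[:3], item[:3], group[:3], corr))
```

Because the correction is computed outside the model, the same post-processing step can be reused across base recommenders.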
Individual: A multi-stakeholder approach is proposed by Wu et al (2021c), in which the consumer-side fairness objective is to fairly distribute among the users the loss of utility incurred by the producer-side exposure considerations. They devise a two-step approach, where the first step attempts to identify and assign highly preferred items that have yet to reach their maximum exposure in the recommendation lists of the users. Each user is assigned one item at a time in a manner that evens out the benefit of choosing first. The second step fills in the free recommendation slots with the items that still require exposure per the provider-side objective.

#### 6.1.3 Custom

Patro et al (2020a,b); Biswas et al (2021) all consider multi-stakeholder settings where the consumer-side objective is to distribute the loss of utility among users fairly. However, unlike Wu et al (2021c), they propose applying a utility definition that is not tied to the ground truth. Their shared utility measure is purely based on the preference values outputted by the original recommender system and produces values indicating how far from _optimal_ the new recommendations are deemed. Patro et al (2020a) and Biswas et al (2021) both propose two-step approaches similar to that of Wu et al (2021c) but opt for guaranteeing the producer-side objective in the first step by allocating items in need of exposure in turn to users while attempting to prioritize items preferred by the user. The second step fills the remaining slots with the users' most preferred items. Finally, multi-sided fairness in a dynamic setting is explored by Patro et al (2020b), who attempt to retain both provider- and consumer-side fairness when facing different incremental changes to the recommendation, e.g., a gradual transition to a new base recommender model. Individual fairness is preserved by introducing lower-bound user utility constraints in the proposed integer linear programming model.

\begin{table}
\begin{tabular}{l l l}
\hline\hline
 & & \textbf{Re-ranking} \\
\hline
\textbf{Recommendation Parity} & \textbf{Global} & Dickens et al (2020) \\
 & & Ashokan and Haas (2021) \\
 & \textbf{Local} & \\
\hline
\textbf{Neutral Representation} & & \\
\hline
\textbf{Utility-Based} & \textbf{Group} & Ashokan and Haas (2021) \\
 & & Wu et al (2022a) \\
 & \textbf{Individual} & Wu et al (2021c) \\
\hline
\textbf{Custom} & & Edizel et al (2020) \\
 & & Paraschakis and Nilsson (2020) \\
 & & Patro et al (2020a) \\
 & & Patro et al (2020b) \\
 & & Biswas et al (2021) \\
 & & Do et al (2021) \\
\hline\hline
\end{tabular}
\end{table}
Table 6: Overview of the identified post-processing approaches structured by the Fairness Interpretation and Fairness Incorporation of their optimization. Approaches that consider multiple Fairness Interpretations are listed in multiple rows.

A similar setting is considered by Do et al (2021), whose approach also optimizes for two-sided fairness and applies a custom user utility tied to the preference scores outputted by the base recommender system. The approach aims to maximize the custom utility of worse-off users and items simultaneously in both regular and reciprocal recommendation settings. Edizel et al (2020) focus on providing users recommendations that are uncorrelated with their sensitive attributes. The goal shares many parallels with in-processing approaches that optimize for intermediate representations that are uncorrelated with the sensitive attributes, but it operates on sets of recommendations due to its post-processing nature.
The key idea is to allow users to inherit the recommendations of similar or arbitrary users belonging to different sensitive groups, thus muddying the correlation between the sensitive attributes and both recommendation lists and individual recommendations. Another re-ranking approach is that of Paraschakis and Nilsson (2020), which considers an individual fairness definition based on calibration by user preferences. Specifically, they consider a matchmaking setting where users specify how important it is for them to date within their race or religion. The problem is optimized through dynamic programming.

### Architecture and Method

#### 6.2.1 Neighbourhood-Based Collaborative Filtering

The approach proposed by Ashokan and Haas (2021) is technically a re-_rating_ approach since they consider rating-prediction-based recommender systems, but it shares many similarities with re-ranking approaches. The key idea is to attempt to correct the predictions for members of different sensitive groups by adding the average prediction error for each item and sensitive group, as observed in the training set, to new predictions. A second parity-based option instead adds the average difference of the rating predictions given to different sensitive groups for each item.

#### 6.2.2 Matrix Factorization

Edizel et al (2020) focus on ensuring the generated top-\(k\) recommendations are uncorrelated with sensitive groups by making users inherit the recommendations of other users belonging to a different sensitive group. While parts of the approach are enhanced when the same top-\(k\) recommendations are recommended to multiple users by the base recommender system, the approach also works when all top-\(k\) recommendations are unique, which is not unlikely for a large \(k\) and a large item catalogue. Two different schemes for recommendation inheritance are evaluated: random and similarity-based. The former is shown to be more effective at reducing how indicative the recommended set is of the user's sensitive group, but it also reduces the utility more quickly.

The works produced by Patro et al (2020a,b); Biswas et al (2021) have an overlapping set of authors, and all consider the same two-sided recommendation setting. In Patro et al (2020a), the primary goal is to satisfy minimum requirements for provider-side exposure, subject to a secondary goal of distributing the loss of utility among the users fairly. They propose a two-step approach where items are first distributed among the users by allocating one item per user in turn to satisfy the items' minimum exposure criteria. Each user is given the best remaining item according to their predicted preferences. Secondly, the remaining recommendation list of each user is filled based on the original recommendation. The approach is proven to guarantee recommendations that are _envy-free up to one item_ (EF1) (Budish, 2011). The subsequent work by Biswas et al (2021) improves the model by identifying and removing _envy-circles_, which are directed cycles in a graph representation of user envy, without affecting the EF1 guarantee. Patro et al (2020b) look into incrementally updating the recommendations according to new data or major changes while retaining user- and provider-side fairness. They consider three scenarios: switching the underlying recommender system, incorporating new data, and considering additional features in relevance estimation.
For this approach, user-side fairness is the primary goal, and it is enforced through constraints in the proposed integer linear programming model. Similarly, Do et al (2021) also optimize consumer- and provider-side fairness with respect to custom utility measures based on the base recommender's predictions and ranking exposure. Their key insight is to treat the utility of each user and item as an objective, but to order the objectives by performance for Pareto efficiency comparisons, e.g., the utilities of users \(u_{1}\) and \(u_{3}\) are compared if they achieve the worst utilities in the two compared solutions. This objective formulation and ordering render Pareto efficiency equivalent to Lorenz efficiency, meaning that the utility of the worse-off users and items is maximized, and the model is optimized using the Frank-Wolfe algorithm.

Wu et al (2021c) consider a two-sided setting similar to the work above but allow providers to have more than one item each. Further, the provider exposure is not considered uniform across recommendation ranks but higher in better positions on the recommended lists. For each recommendation position, the approach iterates through the users in an order based on the current recommendation utility and attempts to fill the position with a high-ranking item from the user's original recommendation, subject to maximum restrictions on provider exposure. Unfilled positions are then filled with items needed to meet exposure requirements. Another two-sided approach is found in Wu et al (2022a), which differs from the aforementioned two-sided approaches by relying more heavily on established optimization models. The consumer-side fairness objective is set to minimize the variance of smooth, i.e., differentiable, NDCG. Correspondingly, the provider-side fairness objective considers the variance of exposure achieved by different items. The setup is applied by considering the base recommendations of multiple models, including two matrix factorization models and one neural collaborative filtering model, and produces multiple Pareto-optimal solutions. The final solution is selected among these by favouring the solution that yields the best outcome for the worst-off objective. Thus, unlike Patro et al (2020a); Wu et al (2021c), whose methods prioritized provider- and consumer-side fairness, respectively, through the order in which the objectives were considered, this approach does not explicitly favour one objective.

#### 6.2.3 Classification methods

A return to matchmaking recommender systems is found in Paraschakis and Nilsson (2020), which details a calibration-based fairness approach. The approach considers the case where users define their preference for dating within their race or religion on a percentage scale, and the fairness objective is defined as making sure that the recommended list of each user has a racial or religious composition close to their preference. One proposed way to model the fair recommendation problem is to frame it as a Knapsack problem and find optimal solutions using dynamic programming. They also propose an alternative Tabu-search-based approach, which scales better but does not guarantee an optimal solution. The fairness evaluation is based on the same calibration definition applied throughout the approach.

## 7 Metrics

A plethora of different metrics have been proposed and applied in the identified research.
Contributing factors to this great number of metrics are varying fairness definitions and the fact that different recommender system settings pose different requirements, e.g., rating vs ranking, binary vs multivalent sensitive attributes, and user-item vs user-user recommender systems. Another significant factor is how recently the topic has become relevant and the subsequent lack of consensus regarding evaluation. The metrics have all been structured according to the fairness categories they adhere to, and liberties have been taken in grouping similar metrics under new descriptive names. Further, formulas have been adapted and rewritten in the same notation for consistency. A lookup table for said notation can be found in Table 1. Finally, each subsection covers a Fairness Interpretation and presents a table of the identified metrics, key contexts, and a list of the research that applied them.

### Recommendation Parity Metrics

#### 7.1.1 Item-level Parity

Three identified metrics share a general design for summarizing the disparity of ratings or recommendations at the item level. All three metrics measure the item-level difference of ratings/recommendations aggregated by sensitive groups, followed by a final aggregation over items.

\[\hat{\mathbb{E}}_{v\in\mathcal{V}}\Big[\,\big|\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}[\hat{r}_{uv}]-\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}[\hat{r}_{uv}]\big|\,\Big]\]

Bose and Hamilton (2019); Wu et al (2021b) apply identical metrics that consider the simple absolute difference of item ratings for different groups in binary sensitive attribute settings. Bose and Hamilton (2019) also consider multivalent sensitive attributes, for which the above metric is expanded to consider all possible pairs of sensitive groups.

\[\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\;\Big|\ln\Big(\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}\big[1\{\hat{y}_{uv}\}\big]\Big)-\ln\Big(\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}\big[1\{\hat{y}_{uv}\}\big]\Big)\Big|\;\right] \tag{1}\]

\[\max_{v_{1},v_{2}\in\mathcal{V}|v_{1}\neq v_{2}}\;\Big|\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}\big[1\{\hat{r}_{uv_{1}}>\hat{r}_{uv_{2}}\}\big]-\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}\big[1\{\hat{r}_{uv_{1}}>\hat{r}_{uv_{2}}\}\big]\Big| \tag{2}\]

Islam et al (2021); Frisch et al (2021) both define similar concepts named \(\epsilon\)-(differentially) fair, where individual \(\epsilon\)'s reflect how much the recommendation of a single item differs in a binary sensitive group setting. The former considers the probability of recommending items to different sensitive groups, Equation 1, while the latter considers the probability of ranking an item higher than another item for different sensitive groups, Equation 2. Islam et al (2021) take inspiration from _differential privacy_ (Dwork, 2011), and subsequently have logarithmic terms in the absolute difference, but opt for computing the average \(\epsilon\) and not the maximum. Frisch et al (2021) do not cite differential privacy metrics or concepts, but are concerned with the maximum \(\epsilon\) and not the average. Out of the two aggregations, the maximum poses a stronger guarantee than the average and is more in line with the differential definition.
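A minimal NumPy sketch of the average-\(\epsilon\) computation in Equation 1, using an illustrative binary recommendation matrix; items that are never recommended to one of the groups are skipped to keep the logarithms finite.

```python
import numpy as np

def average_epsilon(rec, group):
    """Average per-item epsilon of Equation (1): absolute difference of
    log recommendation probabilities between two sensitive groups."""
    # rec: binary matrix, rec[u, v] = 1 if item v is recommended to user u
    p0 = rec[group == 0].mean(axis=0)
    p1 = rec[group == 1].mean(axis=0)
    ok = (p0 > 0) & (p1 > 0)  # keep the logarithms finite
    return float(np.abs(np.log(p0[ok]) - np.log(p1[ok])).mean())

rng = np.random.default_rng(3)
rec = rng.random((200, 30)) < 0.2   # illustrative recommendation matrix
group = rng.integers(0, 2, 200)     # illustrative binary attribute
print(average_epsilon(rec, group))
```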
#### 7.1.2 Item-level Rating Deviation

\[\hat{\mathbb{E}}_{v\in\mathcal{V}}\sqrt{\hat{\mathbb{E}}_{s\in\mathcal{S}}\left[\left(\hat{\mathbb{E}}_{u\in\mathcal{U}_{s}}[\hat{r}_{uv}]-\mu_{v}\right)^{2}\right]},\qquad\text{where }\mu_{v}=\hat{\mathbb{E}}_{s\in\mathcal{S}}\big[\hat{\mathbb{E}}_{u\in\mathcal{U}_{s}}[\hat{r}_{uv}]\big].\]

While the Item-level Parity metrics consider the mean difference of predicted item ratings between sensitive groups, Xu et al (2021) opt for measuring the mean standard deviation of the same ratings. The metric is inherently capable of considering more than two sensitive groups, and the squared difference term leads to larger penalties for large differences than the corresponding penalties in the metrics applying the absolute difference instead.

#### 7.1.3 Global-level Parity

\[\Big|\,\hat{\mathbb{E}}_{v\in\mathcal{V}}\big[\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}[\hat{r}_{uv}]\big]-\hat{\mathbb{E}}_{v\in\mathcal{V}}\big[\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}[\hat{r}_{uv}]\big]\Big|\]

The Global-level Parity metric is applied in multiple works and is simply the absolute difference of the mean predicted rating or preference score of different sensitive groups.

#### 7.1.4 Mutual Information, Rating

\[\sum_{s\in\mathcal{S}}\int\mathrm{P}(\hat{r},s)\log\frac{\mathrm{P}(\hat{r}|s)}{\mathrm{P}(\hat{r})}\,d\hat{r}\]

Mutual information is a concept from information theory, comprising a measure of the mutual dependency of two variables. It has been applied in fair recommender systems to measure the mutual dependency of the rating and the sensitive attribute. Due to the probabilistic definition, non-probabilistic models applying the metric have resorted to different methods for approximating the measure, e.g., empirical approximations of probabilities and bucketing the ratings into intervals to replace the inner integral with a sum.

#### 7.1.5 Kolmogorov-Smirnov Statistic

\[\sup_{\hat{r}}|F_{s_{1}}(\hat{r})-F_{s_{2}}(\hat{r})|\]

The Kolmogorov-Smirnov (KS) statistic measures how different two probability distributions are, and its estimation involves the cumulative distributions of the probability distributions. Both cumulative distributions are typically defined empirically based on the outputted ratings for different sensitive groups when used in recommender systems. Here, \(\sup\) denotes the supremum, meaning that the statistic returns the largest difference between the cumulative rating distributions.

#### 7.1.6 \(\chi^{2}\)-Test

\(\chi^{2}\)-tests are typically used to determine whether the difference between collections of categorical data is probable, given that they were sampled from the same distribution. Frisch et al (2021) applied a \(\chi^{2}\)-test to test the independence of group membership and user gender. Since group membership influences the rating within their Latent Block Model, the groups' gender composition should ideally reflect the overall gender composition when pursuing recommendation parity.

#### 7.1.7 Group-to-group Variance

\[\hat{\mathbb{E}}_{a,b\in\mathcal{S}\times\mathcal{S}|a\neq b}\big[(N_{a}-N_{b})^{2}\big],\qquad\text{where }N_{a}=N_{s_{i},s_{j}}=\frac{|\{\hat{y}_{u_{1},u_{2}}\,|\,u_{1}\in\mathcal{U}_{s_{i}},u_{2}\in\mathcal{U}_{s_{j}}\}|}{|\mathcal{U}_{s_{i}}\times\mathcal{U}_{s_{j}}|},\]

and \(a=(s_{i},s_{j})\) ranges over ordered pairs of sensitive groups. In reciprocal recommendations, each recommendation involves two users who may belong to different sensitive groups.
The Group-to-group Variance metric considers the variance of the acceptance rate, i.e., recommendation rate, of different combinations of sensitive groups.

#### 7.1.8 Sensitive-group Share

\[\mathcal{S}\text{-share}(s)=\frac{1}{|\mathcal{S}|}-\frac{\sum_{u_{1}\in\mathcal{U}}\sum_{u_{2}\in\mathcal{U}_{s}}\frac{1\{u_{2}\in\text{Rec}_{u_{1}}\}}{|\text{Rec}_{u_{1}}|}}{|\mathcal{U}|}\]

The Sensitive-group Share metric measures how well an individual sensitive group \(s\) is represented in the recommendations of all users in a reciprocal setting. It subtracts the real representation ratio for said group from the ideal uniform ratio, such that the output represents how far from ideal the recommendations are with respect to single sensitive groups.

### Neutral Representation Metrics

Neutral Representation fairness is a special case in that it does not explicitly concern itself with the actual outputs of the model. This also extends to the metrics of this Fairness Interpretation. However, while some research adopting this Fairness Interpretation in its optimization only evaluates how neutral the representations are, other works adapt metrics from other interpretations to evaluate the recommendations explicitly.

#### 7.2.1 Sensitive Reclassification

The vast majority of research optimizing for sensitive-neutral representations performs some form of evaluation of how well sensitive information can be reclassified from the representations. This evaluation is usually performed by training an auxiliary classification model specifically for identifying the sensitive attributes of users given their representations, and the reclassification score becomes an inverse measure of how well sensitive information has been eliminated. _Accuracy_, _F1 score_ and _Area Under the ROC Curve_ (AUC) are all metrics that have been used for this purpose. AUC is the area under the curve obtained by plotting the true positive rate against the false positive rate while moving the threshold used to split positive and negative classifications. AUC is by far the most applied classification metric.

\begin{table}
\begin{tabular}{l l l l l}
\hline\hline
\textbf{Name} & \textbf{Sensitive groups} & \textbf{Rec. Dyad} & \textbf{Rec. Type} & \textbf{Research} \\
\hline
Sensitive Reclassification & Multivalent & Mixed & Mixed & Bose and Hamilton (2019) \\
 & & & & Resheff et al (2019) \\
 & & & & Buyl and Bie (2020) \\
 & & & & Li et al (2021b) \\
 & & & & Li et al (2022a) \\
 & & & & Borges and Stefanidis (2022) \\
 & & & & Wu et al (2022b) \\
 & & & & Wei and He (2022) \\
\hline\hline
\end{tabular}
\end{table}
Table 8: Overview of identified Neutral Representation metrics, their key properties and the research that has applied them.

### Utility-Based Fairness

Utility-Based fairness metrics are specifically tied to utility functions that consider the _ground truth_ in their definitions, i.e., the utility or utility contribution of a single user increases if an item in the test set is successfully recommended for said user. This covers established pure-recommendation metrics, as well as other specialized utility measures.
#### 7.3.1 Group Rating Error Difference

\[\text{ValueUnfairness}=\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\,\Big|\,\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}[\hat{r}_{uv}-r_{uv}]-\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}[\hat{r}_{uv}-r_{uv}]\,\Big|\,\right] \tag{3}\]

\[\text{AbsoluteUnfairness}=\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\,\Big|\,\big|\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}[\hat{r}_{uv}-r_{uv}]\big|-\big|\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}[\hat{r}_{uv}-r_{uv}]\big|\,\Big|\,\right] \tag{4}\]

\[\text{OverestimationUnfairness}=\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\,\Big|\max\Big(0,\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}[\hat{r}_{uv}-r_{uv}]\Big)-\max\Big(0,\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}[\hat{r}_{uv}-r_{uv}]\Big)\Big|\,\right]\]

\[\text{UnderestimationUnfairness}=\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\,\Big|\max\Big(0,\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}[r_{uv}-\hat{r}_{uv}]\Big)-\max\Big(0,\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}[r_{uv}-\hat{r}_{uv}]\Big)\Big|\,\right]\]

These four metrics were all defined by Yao and Huang (2017) and make up some of the few metrics that have gained a semblance of recognition in subsequent work on consumer-side recommendation fairness. Every metric accumulates absolute differences in the per-item errors of two sensitive groups, and all of them involve mechanics for cancelling errors in case the model is similarly underperforming for both groups. The latter two focus on over- and under-estimation, respectively, while the former two consider both error types concurrently while differing in how the errors can cancel out. Specifically, Value Unfairness, Equation 3, allows errors of the same type to cancel out, while Absolute Unfairness, Equation 4, allows all errors to cancel out regardless of error type.

\[\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\,\Big|\,\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{1}}}\big[|\hat{r}_{uv}-r_{uv}|\big]-\hat{\mathbb{E}}_{u\in\mathcal{U}_{s_{2}}}\big[|\hat{r}_{uv}-r_{uv}|\big]\,\Big|\,\right] \tag{5}\]

The metric applied in Wu et al (2021b) resembles Absolute Unfairness, but considers the absolute errors at the user level instead of at the group level. One implication of this is a higher incurred penalty when the errors of the predicted ratings vary a lot within one or both groups, as they are not evened out by taking the group-wise mean before the absolute error is calculated, which is the case for Absolute Unfairness.

#### 7.3.2 Group Rating Error Deviation

\[\hat{\mathbb{E}}_{v\in\mathcal{V}}\left[\sqrt{\hat{\mathbb{E}}_{s\in\mathcal{S}}\left[\left(\hat{\mathbb{E}}_{u\in\mathcal{U}_{s}}\big[|\hat{r}_{uv}-r_{uv}|\big]-\mu_{v}\right)^{2}\right]}\right],\qquad\text{where }\mu_{v}=\hat{\mathbb{E}}_{s\in\mathcal{S}}\left[\hat{\mathbb{E}}_{u\in\mathcal{U}_{s}}\big[|\hat{r}_{uv}-r_{uv}|\big]\right].\]

Along with Item-level Rating Deviation, Xu et al (2021) propose another metric that considers the standard deviation of item-level statistics of different sensitive groups. This time, it is a Utility-Based fairness metric, with a central term considering the squared difference of mean user-level absolute item rating errors.
While the metric is structurally similar to Group Rating Error Difference, see in particular Equation 5, it is important to note that this metric measures the mean _standard deviation_ of the group-wise rating errors instead of the mean _difference_ of the same errors. Further, the Group Rating Error Deviation metric is inherently capable of considering multivalent sensitive attributes.

#### 7.3.3 Group Utility Difference

\[|\mathrm{Util}(\mathcal{U}_{s_{1}})-\mathrm{Util}(\mathcal{U}_{s_{2}})|\]

Among the considered research, some explicitly calculate the absolute difference of the Recall, Precision and NDCG (Jarvelin and Kekalainen, 2002) achieved by different sensitive groups. Also included in this group of metrics are more implicit comparisons of utility metrics through tables or graphs, which have been observed for MAE, cumulative mean custom rewards, and custom matchmaking utility. A special case of this metric is applied in the reciprocal setting of Zheng et al (2018), where the users receiving the recommendation comprise one sensitive group, and the users that make up the recommended entities comprise the other sensitive group.

#### 7.3.4 Utility Variance

\[\hat{\mathbb{E}}_{u_{1}\in\mathcal{U}}\left[\hat{\mathbb{E}}_{u_{2}\in\mathcal{U}|u_{2}\neq u_{1}}\left[\big(\mathrm{Util}(u_{1})-\mathrm{Util}(u_{2})\big)^{2}\right]\right]\]

\[\hat{\mathbb{E}}_{s_{1}\in\mathcal{S}}\left[\hat{\mathbb{E}}_{s_{2}\in\mathcal{S}|s_{2}\neq s_{1}}\left[\big(\mathrm{Util}(\mathcal{U}_{s_{1}})-\mathrm{Util}(\mathcal{U}_{s_{2}})\big)^{2}\right]\right]\]

The variation of the utility of individual users or sensitive groups has been applied as a fairness metric. It is particularly suited for representing the utility spread when there are too many scores to cover individually, i.e., many users or sensitive groups. The identified variations of Utility Variance have centred around the utility metrics Mean Squared Error and NDCG.

#### 7.3.5 Utility Delta

\[\Delta_{s}=\mathrm{Util}(\mathcal{U}_{s})-\mathrm{Util}_{\mathrm{org}}(\mathcal{U}_{s})\]
\[\Delta_{\mathrm{diff}}=|\Delta_{s_{1}}-\Delta_{s_{2}}| \tag{6}\]

The Utility Delta metrics consider how the utility of specific sensitive groups changes when a fairness-aware model is compared to baselines. From a Utility-Based fairness view, a decrease in utility for a dominant sensitive group may be worth a subsequent increase in the utility of worse-off sensitive groups. Having measures of how the utility of specific user groups has changed is useful when considering such trade-offs. Slokom et al (2021) also consider the absolute difference of the \(\Delta\)'s of two sensitive groups to capture the asymmetry in the magnitude of the changes, Equation 6.

#### 7.3.6 Mutual Information, Relevance

Kamishima and Akaho (2017) apply mutual information in a Utility-Based fairness evaluation of their ranking-based recommender system. The definition is structurally identical to that of the Recommendation Parity metric in Section 7.1.4. The main difference is that the predicted rating variable is replaced with a binary relevancy variable, i.e., the degree of independence between relevant recommendations and the sensitive attribute is measured.

#### 7.3.7 Inequality of Odds

\[\max\big(|\mathrm{FPR}_{s_{1}}-\mathrm{FPR}_{s_{2}}|,\;|\mathrm{TPR}_{s_{1}}-\mathrm{TPR}_{s_{2}}|\big)\]

Equality of Odds requires the true positive and false positive rates of different sensitive groups to be equal. Li et al (2022) propose a metric based on this definition for measuring the extent of potential infractions.
The same research also applies the Absolute Between-ROC Area (ABROCA), proposed in Gardner et al (2019), which inherently considers all possible positive-boundary thresholds as opposed to a single fixed threshold.

#### 7.3.8 Inequality of Opportunity

\[\mathrm{TPR}_{s_{1}}-\mathrm{TPR}_{s_{2}}\]

Kamishima and Akaho (2017) consider a metric based on Equality of Opportunity, which requires equality of the true positive rates of different sensitive groups. They opt for measuring infractions with a regular difference to avoid abstracting away the orientation of potential imbalances.

#### 7.3.9 Protected Utility

The protected utility metrics only consider the utility of specific sensitive groups that are considered protected. Yao and Huang (2021) is the only identified research that adopts this metric type, and they use RMSE as their utility measure.

#### 7.3.10 Generalized Entropy Index (GEI)

\[\text{GEI}(\alpha)=\begin{cases}\dfrac{1}{|\mathcal{S}|\,\alpha(\alpha-1)}\sum\limits_{s\in\mathcal{S}}\left[\left(\dfrac{\mathrm{Util}(\mathcal{U}_{s})}{\mu}\right)^{\alpha}-1\right]&\alpha\neq 0,1,\\[2ex]\dfrac{1}{|\mathcal{S}|}\sum\limits_{s\in\mathcal{S}}\dfrac{\mathrm{Util}(\mathcal{U}_{s})}{\mu}\ln\dfrac{\mathrm{Util}(\mathcal{U}_{s})}{\mu}&\alpha=1,\\[2ex]-\dfrac{1}{|\mathcal{S}|}\sum\limits_{s\in\mathcal{S}}\ln\dfrac{\mathrm{Util}(\mathcal{U}_{s})}{\mu}&\alpha=0,\end{cases}\qquad\text{where }\mu=\hat{\mathbb{E}}_{s\in\mathcal{S}}\big[\mathrm{Util}(\mathcal{U}_{s})\big].\]

The Generalized Entropy Index originates from the economics literature on inequality and here measures how unevenly utility is distributed over the sensitive groups: it is zero when all groups achieve the same utility, and the parameter \(\alpha\) controls the sensitivity to differences at different utility levels.
### 7.4 Custom Metrics

#### 7.4.1 Normalized Ranking Change

\[\text{PrefUtil}_{u}(\text{Rec})=\frac{\sum_{v\in\text{Rec}}\text{pref}_{uv}}{\sum_{v^{\prime}\in\text{Rec}_{\text{org},u}}\text{pref}_{uv^{\prime}}} \tag{7}\]
\[\text{NormRankChgMean}=\hat{\mathbb{E}}_{u\in\mathcal{U}}\left[\text{PrefUtil}_{u}(\text{Rec}_{u})\right]\]
\[\text{NormRankChgStd}=\sqrt{\hat{\mathbb{E}}_{u_{1}\in\mathcal{U}}\left[\hat{\mathbb{E}}_{u_{2}\in\mathcal{U}|u_{2}\neq u_{1}}\left[\left(\text{PrefUtil}_{u_{1}}(\text{Rec}_{u_{1}})-\text{PrefUtil}_{u_{2}}(\text{Rec}_{u_{2}})\right)^{2}\right]\right]}\]

Patro et al (2020a,b) and Biswas et al (2021) all consider a two-sided fairness setting and interpret consumer-side fairness as how far the recommendations deviate from the original ranking when producer-side fairness has been taken into account. That is, they use the intermediate preference scores of the re-ranked top-\(k\) recommendations, normalized by the optimal preference scores of the original top-\(k\) recommendations, as a proxy of the utility. This utility's mean and standard deviation are considered in their fairness evaluation.
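A minimal sketch (ours; the preference scores and top-\(k\) sets are hypothetical) of Equation 7 and the two aggregate statistics:

```python
import numpy as np

def pref_util(pref_u, rec, rec_org):
    """PrefUtil_u (Equation 7): preference mass of the re-ranked top-k,
    normalized by that of the original, utility-optimal top-k."""
    return sum(pref_u[v] for v in rec) / sum(pref_u[v] for v in rec_org)

# Hypothetical preference scores over five items for three users,
# with re-ranked and original top-2 recommendation lists.
prefs = [[0.9, 0.8, 0.3, 0.2, 0.1], [0.7, 0.6, 0.5, 0.2, 0.1], [0.4, 0.9, 0.8, 0.3, 0.2]]
rec = [[0, 2], [1, 2], [1, 0]]          # after producer-side re-ranking
rec_org = [[0, 1], [0, 1], [1, 2]]      # original top-2 by preference

utils = np.array([pref_util(p, r, ro) for p, r, ro in zip(prefs, rec, rec_org)])
mean = utils.mean()                                   # NormRankChgMean
n = len(utils)
diffs = utils[:, None] - utils[None, :]
std = np.sqrt((diffs ** 2).sum() / (n * (n - 1)))     # NormRankChgStd
print(mean, std)
```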
#### 7.4.2 Ranking Change Envy

\[\text{RankingEnvy}(u,u^{\prime})=\max(\text{PrefUtil}_{u}(\text{Rec}_{u^{\prime}})-\text{PrefUtil}_{u}(\text{Rec}_{u}),0)\]
\[\text{RankChgEnvy}=\hat{\mathbb{E}}_{u_{1}\in\mathcal{U}}\left[\hat{\mathbb{E}}_{u_{2}\in\mathcal{U}|u_{2}\neq u_{1}}\left[\text{RankingEnvy}(u_{1},u_{2})\right]\right]\]

In multi-sided fairness settings, it is not uncommon that a user's utility could be increased by giving that user the recommendations produced for another user instead of their own. The term _Envy_ has been taken to refer to cases where users would fare better given the recommendations of other users. Envy can arise in multi-sided recommendation fairness when auxiliary considerations affect some users more than others, e.g., considerations of provider-side fairness. Patro et al (2020a) and Biswas et al (2021) consider the aggregated envy of their proposed _preference utility_, Equation 7.

\begin{table} \begin{tabular}{l l l l l} \hline \hline & **Sensitive groups** & **Rec. Dynamic** & **Rec. Type** & **Research** \\ \hline Normalized Ranking Change & Multivalent & User-item & Ranking & Patro et al (2020a); Patro et al (2020b); Biswas et al (2021) \\ Ranking Change Envy & Multivalent & User-item & Ranking & Patro et al (2020a); Biswas et al (2021) \\ \hline \hline \end{tabular} \end{table} Table 10: Overview of identified Custom fairness metrics, their key properties and the research that has applied them.

#### 7.4.3 Gini Coefficient, Preference

\[\text{PrefUtilExp}(u)=\sum_{v\in\mathcal{V}}\text{pref}_{uv}\mathbf{P}_{uv}\mathbf{w} \tag{8}\]
\[\text{GiniPref}=\frac{\sum_{(u_{1},u_{2})\in\mathcal{U}\times\mathcal{U}}|\text{PrefUtilExp}(u_{1})-\text{PrefUtilExp}(u_{2})|}{2|\mathcal{U}|^{2}\mathbb{E}_{u\in\mathcal{U}}[\text{PrefUtilExp}(u)]}\]

Do et al (2021) propose adapting the Gini coefficient, frequently used to measure wealth inequality, to measure individual consumer fairness in their two-sided fairness setting. The Gini coefficient is measured for a custom utility measure based on the outputted preference scores of their base recommender system and the ranking positions of the re-ranked recommendations. Their proposed utility measure is defined in Equation 8, where \(\mathbf{P}_{uv}\) is a row vector of probabilities for recommending user \(u\) item \(v\) in different ranking positions, and \(\mathbf{w}\) is a column vector of exposure/rank weights for the same ranking positions.
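The outer Gini computation can be sketched as follows (our illustration; the utility values are hypothetical stand-ins for \(\text{PrefUtilExp}\)):

```python
import numpy as np

def gini(x):
    """Gini coefficient of a utility distribution (the GiniPref outer form):
    mean absolute pairwise difference over twice the mean utility."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    abs_diffs = np.abs(x[:, None] - x[None, :]).sum()  # all ordered pairs
    return abs_diffs / (2 * n * n * x.mean())

pref_util_exp = [3.2, 2.9, 4.1, 1.8, 3.0]  # hypothetical PrefUtilExp(u) values
print(gini(pref_util_exp))                 # 0 = perfectly equal utilities
```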
#### 7.4.4 Protected Item-group Recommendation Parity

\[\frac{\sum_{u\in\mathcal{U}_{s_{1}}}\sum_{v\in\text{Rec}_{u}}\frac{\gamma(v)}{|\mathcal{U}_{s_{1}}|}}{\sum_{u^{\prime}\in\mathcal{U}_{s_{2}}}\sum_{v^{\prime}\in\text{Rec}_{u^{\prime}}}\frac{\gamma(v^{\prime})}{|\mathcal{U}_{s_{2}}|}}\]

Here \(\gamma\) is a function that returns "1" if the item belongs to any protected item group and "0" if none of its item groups is protected, and the metric measures how balanced the recommendation of the protected item groups is between two sensitive groups, i.e., a value of 1 is optimal. Burke et al (2018) applied this metric to evaluate their method's fairness on a film dataset and a micro-loan dataset. The protected item groups were selected among film genres that are unevenly recommended to different genders for the film dataset, and among the least funded regions for the micro-loan dataset.

#### 7.4.5 Preferential Calibration

\[1-\frac{\delta_{u}-\delta_{\min,u}}{\delta_{\max,u}-\delta_{\min,u}}\]

Paraschakis and Nilsson (2020) consider a matchmaking scenario where users can set their preference for being matched with people belonging to the same sensitive group as them. The proposed metric measures how well the provided recommendations respect the user's preferences, a concept known as calibrated recommendation (Steck, 2018). First, the optimal recommendation composition is calculated based on the provided user preference and the sensitive group composition of the full population. \(\delta_{u}\) is then calculated as the absolute difference between the ideal and actual composition. Finally, the normalized \(\delta\) is subtracted from 1 after identifying the best and worst possible \(\delta\)-s for the actual user, yielding a metric for which values closer to 1 indicate that the user's preferences are better respected.

#### 7.4.6 Intrinsic Sensitive Attribute Match

\[\text{IntrinsicSensitiveMatch}(u,v)=(\text{UserIntrinsic}_{u}-\text{ItemIntrinsic}_{v})^{2}\]

Bobadilla et al (2021) devise a notion of intrinsic sensitive properties in both items and users. They assign values to this property for items by first considering the ratio of female users who like the item compared to the ratio of female users who dislike it, then taking the difference between that value and the equivalent value calculated for male users. These item values are then used in reverse to assign values to the same intrinsic properties of the individual users. The fairness of an individual recommendation is set to be the squared difference between the intrinsic user value and the intrinsic item value.

#### 7.4.7 Sensitive Neutral Recommended Items

\[\mathrm{IF}(u)=\sum_{v\in\mathrm{Rec}_{u}}\mathrm{SensitiveEntropy}_{v}\]
\[\mathrm{DIF}(u)=\mathrm{IF}(u)-\mathrm{IF}_{\mathrm{ground\_truth}}(u)\]

Here \(\mathrm{SensitiveEntropy}_{v}\) is the information entropy of the sensitive attribute of the users involved in the interactions with item \(v\). The information entropy of an item is maximized when different sensitive groups historically have interacted with it at identical rates, which is considered ideal in the fairness view of Li et al (2022). The full metric is the difference between the summed entropy of the recommended items and the summed entropy of the _ground truth_ recommendations, which can be interpreted as how much more neutral the recommended items are compared with the _correct_ recommendations.

#### 7.4.8 Segment Recommendation Frequency Parity

Wan et al (2020) consider a segmented market, where each segment covers a group of users and a group of products. Within this setting, they argue that the distribution of recommendations given across segments should match the observed data across the same segments. To that end, they construct frequency distributions that represent how the segments are represented in the observations and in the recommendations, and calculate a distance, i.e., unfairness, between the two using the Kullback-Leibler divergence of the recommended frequencies from the observed frequencies.
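A sketch of this divergence computation (ours; the segment counts are hypothetical and assumed strictly positive):

```python
import numpy as np

def segment_kl_unfairness(rec_counts, obs_counts):
    """KL divergence of the recommendation frequency distribution over market
    segments from the observed interaction frequency distribution."""
    p = np.asarray(rec_counts, dtype=float); p /= p.sum()   # recommended
    q = np.asarray(obs_counts, dtype=float); q /= q.sum()   # observed
    return float(np.sum(p * np.log(p / q)))                 # 0 iff p == q

# Hypothetical recommendation and observation counts for four segments.
print(segment_kl_unfairness([120, 40, 25, 15], [100, 50, 30, 20]))
```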
#### 7.4.9 Sensitive Reclassification, Pre-/Post-

Analogous to the Neutral Representation metric Sensitive Reclassification, some studies measure how well sensitive attributes can be identified using the input or output data. The pre-processing approach of Slokom et al (2021) reports the AUC achieved by auxiliary classifiers tasked with identifying sensitive attributes given user data modified by their approach. Similarly, the in-processing approach of Wu et al (2021) and the post-processing approach of Edizel et al (2020) collectively report accuracy, macro-F1 and _Balanced Error Rate_ (BER) achieved by analogous classifiers that are fed recommendation sets. BER is the sum of the false positive rate and the false negative rate divided by two.

## 8 Datasets

Like all machine learning models, recommender systems rely heavily on the datasets used to train them, i.e., the recommender systems are optimized to capture the correlations that can be observed in the datasets. The datasets are also pivotal in evaluating and comparing different approaches, and can highlight how well the approaches perform, scale, and generalize. While there is no lack of actors that could benefit from recommender systems and who possess vast amounts of user data to train models on, the sensitive nature of user data often limits the viability, or even legality, of sharing the data publicly. The sensitive nature of the data is further enhanced if sensitive attributes of the users are included, which is required when training many fairness-aware approaches. Furthermore, high-profile examples demonstrating that anonymization of data may not suffice in protecting the identity of the users (Narayanan and Shmatikov, 2008), along with an increasing focus on user protection in international discourse and legislation, have likely further deterred actors from sharing their data. There is a conspicuous lack of relevant datasets for evaluating consumer-side fairness in recommender systems. This discrepancy is both in terms of the total number of available datasets and the relevancy of the domains they represent. The ethical ramifications of discriminatory recommender systems are better highlighted by a career recommendation setting than through a movie recommendation setting. While it is not unlikely that learned social roles partly explain current differences in the movie preferences of male and female consumers, the further propagation of such preferences is arguably less severe than consistently recommending low-income careers to female career seekers because of biases in the data. An overview of all datasets that contain information on sensitive attributes can be found in Table 11, which covers the context of the datasets, the presence of a selection of consumer-side sensitive attributes, and a tally of the number of studies that have evaluated their models on the specific datasets. The MovieLens datasets (Harper and Konstan, 2015) dominate the field. Among the variations of the dataset, the 1-million and the 100-thousand versions alone contain sensitive consumer attributes in the form of gender, age and occupation. Wide adoption of the same dataset poses numerous benefits, like improved comparisons and calibrations. However, one ideally wants multiple widely adopted datasets, as different datasets usually pose different challenges, and good performance on one dataset does not guarantee good general performance. Eight studies consider various datasets based on LastFM, which all share a domain but vary in size, scope and time of collection.
The second most adopted _singular_ dataset applied for consumer-side fairness is the Google Local Review dataset (He et al, 2017), yet it is only considered by a total of four different studies, none of which consider the MovieLens dataset. Of the remaining datasets, only a handful are used for evaluation in more than a single study, and many of these only appear more than once because they are applied in multiple studies by the same research group. For instance, three of the four studies using the Google Local Review datasets share a subset of authors. It is safe to say that the field currently lacks an established set of benchmark datasets. Regarding the domains covered by the different datasets, most cover the recommendation of digital media, commodities or services. The few datasets that present more sensitive scenarios have not managed to attract attention in a field that is starved for relevant data, which may imply other limiting factors like restricted availability, dataset size and data quality. In particular, the aforementioned privacy concerns likely play an essential role in the lack of relevant datasets. When factoring in dataset occurrence counts, most studies consider datasets that provide information on the consumer's gender, age and occupation. Among these, gender is the most widely adopted sensitive attribute and is typically split into two groups, male and female. The adoption of age as a sensitive attribute is also prevalent, and the attribute has been split into two or more groups based on age intervals. Occupation is rarely used, which has been attributed to difficulties related to the high number of highly skewed labels that make empirical evaluation difficult and possibly misleading. The datasets listed in Table 12 do not explicitly provide sensitive information and have either been used to evaluate individual fairness or have supplemented such information using other means. For instance, Bose and Hamilton (2019) compiled a dataset using the Reddit API (Reddit, 2022), comprising only users who have explicitly provided their gender in communities that require this. Paraschakis and Nilsson (2020) consider matchmaking and use demographic data on the religious adherence of different races in the US to probabilistically model a "same religion" attribute for linking with an explicitly provided "same religion preference" attribute. Finally, Fang et al (2022) derived gender labels of Yelp users (Yelp, 2022) based on their provided names.

## 9 Future Directions

Given how new the field is, it is not easy to identify and recommend promising directions in terms of model architecture, etc. There is likely also not a single fairness definition that everyone will be able to agree on, so the field is undoubtedly going to continue exploring multiple parallel directions in the foreseeable future. However, regarding reproducibility aspects and other measures for improving the credibility of the research and approaches, various points could benefit from additional focus in the coming years. In particular, we perceive a need for consolidating fairness concepts, working towards standardizing the fairness metrics and improving comparisons with other approaches.
\begin{table} \begin{tabular}{l l l l l l l l l c c c c} \hline \hline & & & \multicolumn{6}{c}{**Sensitive attribute**} & \multicolumn{4}{c}{**Count**} \\ \hline **MovieLens** & Harper and Konstan (2015) & Films & ✓ & ✓ & ✓ & & & & 2 & 22 & 3 & 27 \\ **LastFM1** & Last.fm (2022) & Music & ✓ & ✓ & ✓ & & & & 1 & 3 & 4 & 8 \\ **FourSquare** & Liu et al & Locations & ✓ & & & & & & 0 & 3 & 0 & 3 \\ **IJCAI2015** & Tianchi & Shopping & & ✓ & & & & & 0 & 3 & 0 & 3 \\ **Amazon Electronic** & Wan et al (2020) & Electronics & ✓ & & & & & & 0 & 3 & 0 & 3 \\ **Sushi** & Kamishima & Sushi & ✓ & ✓ & & & & & 0 & 2 & 0 & 2 \\ **Speeddate** & Fisman et al & Dating & & & & & & & 0 & 1 & 1 & 2 \\ **Facebook3** & Kosinski et al & College & ✓ & & & & & & 0 & 2 & 0 & 2 \\ **Book-Crossing** & Ziegler et al (2005) & Books & ✓ & ✓ & & & & & 0 & 2 & 0 & 2 \\ **Kiva1** & Kiva (2022) & Micro-loans & & & & & & \(\surd^{2}\) & 0 & 1 & 0 & 1 \\ **Twitter, expert topic** & Ge et al & Experts & & & & & & \(\surd^{2}\) & 0 & 1 & 0 & 1 \\ **DBLP** & Tang et al (2008) & Co-authors & & & & & & \(\surd^{2}\) & 0 & 1 & 0 & 1 \\ **Insurance** & Zindi (2022) & Insurance & ✓ & & ✓ & ✓ & & & 0 & 1 & 0 & 1 \\ **ModCloth** & Wan et al & Clothing & & & & & & \(\surd\) & 0 & 1 & 0 & 1 \\ **MSN News** & Wu et al (2019) & News & & & & & & & 0 & 1 & 0 & 1 \\ **Instagram** & Zhang et al & Locations & ✓ & ✓ & & & & & 0 & 1 & 0 & 1 \\ **MathNation4** & MathNation & Learning & ✓ & & & & & & 0 & 1 & 0 & 1 \\ **CIKM 2019** & Tianchi (2022) & E-commerce & ✓ & ✓ & & & & & 0 & 1 & 0 & 1 \\ **Taobao Ad** & Tianchi (2018a) & Ads & ✓ & ✓ & & & & & 0 & 1 & 0 & 1 \\ \hline \hline \end{tabular} \end{table} Table 11: Table representing the different datasets applied in fair consumer-side recommender systems research with sensitive attributes.

### Consolidated Fairness Concepts and Definitions

A recurring observation in the studies covered in this survey is the lack of a common language when it comes to fairness concepts and definitions. It often falls to the reader to interpret exactly what the authors consider fair by examining the text, the implementation choices and the evaluation. This survey highlights that there are multiple fairness definitions researched that differ significantly on a conceptual level and that are often conflicting in terms of optimization and goals. These factors complicate the examination of new research as well as comparisons of different models, and a common understanding of high-level fairness concepts could do much to remedy such challenges. One may enhance the reader's ability to put new approaches, as well as implementation and evaluation choices, into context by immediately and accurately conveying the high-level fairness interpretation. In this case, the readers do not have to fully grasp the finer details and implications of the specific interpretation before they are able to make sense of the discussion and draw parallels with approaches they are familiar with. This may also assist researchers in identifying relevant research, and help structure further research while leaving room for more specific formal definitions within the high-level interpretations.
The Fairness Interpretations taxonomy proposed in Section 3.2.1 is one suggestion for such high-level conceptual categories.

### Consensus on Fairness Metrics

Section 7 demonstrates a great number of applied fairness metrics and a high degree of overlap in what they essentially seek to measure. While this is natural for a budding field, and enhanced by the presence of multiple distinct and conflicting fairness definitions, it is currently a contributing factor in making comparisons challenging. Guided by rigorous analysis of the properties of different metrics, the field as a whole could benefit from reducing the number of metrics applied by identifying the best among metrics that have higher degrees of overlap.

\begin{table} \begin{tabular}{l l l c c c c} \hline \hline & **Reference** & **Setting** & **Pre-** & **In-** & **Post-** & **Total** \\ \hline **Google Local Review** & He et al (2017) & Locations & 0 & 0 & 4 & 4 \\ **Flixster** & Jamali and Ester (2010) & Films & 1 & 2 & 0 & 3 \\ **Reddit1** & Reddit (2022) & Forum Boards & 0 & 1 & 1 & 2 \\ **Amazon** & He and McAuley (2016) & E-commerce & 0 & 1 & 1 & 2 \\ **Yelp Challenge** & Yelp (2022)2 & Locations & 1 & 0 & 0 & 1 \\ **Freebase15k-237** & Toutanova and Chen (2015) & Knowledge Base Completion & 0 & 1 & 0 & 1 \\ **BeerAdvocate** & McAuley et al (2012) & Beers & 0 & 1 & 0 & 1 \\ **DPG Recruitment3** & DPG-Recruitment (2022) & Jobs & 0 & 1 & 0 & 1 \\ **Twitter, scientific rumour** & De Domenico et al (2013) & Followers & 0 & 0 & 1 & 1 \\ **Ctrip3** & Ctrip (2022) & Flights & 0 & 0 & 1 & 1 \\ \hline \hline \end{tabular} \end{table} Table 12: Table representing the different datasets applied in fair consumer-side recommender systems research, without sensitive attributes.

### Comparison

Despite a growing number of studies covering similar fairness concepts, there is still a low degree of comparative analysis of different approaches. While it is interesting to see how fairness-aware contributions affect the fairness over the base recommender approaches, it is also essential to compare with relevant fairness-aware approaches, if present. This aspect seems to have improved recently, but there is still room for further improvement. One contributing factor to the lack of comparative analysis is likely visibility. The research field is still relatively new, and the nomenclature has yet to consolidate, making it challenging to identify similar research. There is also an issue of visibility across different types of approaches, in particular between recommender systems, IR ranking, and link prediction. Both IR ranking and link-prediction approaches may be considered recommender systems, depending on the setting or the applied dataset. However, since they use different terms than those used in recommender system research, and intermingling between fields can be uncommon, such approaches may not be known by researchers proposing similar recommender systems. Visibility has also been limited so far by the lack of updated surveys that chart out the field's current state. However, recent contributions like the comparative analysis in Boratto et al (2022) and future surveys will hopefully improve this aspect.

### Datasets

As noted in Section 8, there are currently not that many relevant datasets for evaluating consumer-side recommender systems fairness.
A wider selection of benchmarking datasets could improve evaluation and comparisons and add credibility to the research. New datasets should ideally vary in size and sources to offer different challenges related to aspects like scalability and adaptability, focusing on filling in the gaps not covered by the datasets applied today. In particular, many current datasets are getting old, and their application may fail to reflect performance in a shifting environment. Finally, to better highlight the need for and application of fair recommender systems, it would be useful to have datasets for which the ethical implications of a discriminatory recommender system are more severe.

### Online Evaluation

None of the considered studies performs an online evaluation of either recommendation utility or fairness. While offline evaluation has some practical benefits, it is usually restricted to only being able to reward recommendation of items/actions we know the user likes, not serendipitous recommendations of items the user will like but was not aware of when the dataset was created. Online A/B testing, on the other hand, can reward such recommendations and may bring along other benefits of testing the model in the environment in which it will be used, granted that the tests are designed and executed well. Further, online evaluation allows more subjective feedback, e.g., asking the users if they suspect that the recommender system discriminates against them or presents them with biased recommendations influenced by their inferred or stated sensitive attributes. While researchers like Saxena et al (2019) look into the public's preconceived perception of and attitude towards formal fairness definitions, the impression of those using a fairness-aware recommender system may differ. Multiple approaches covered in this survey strive to make their models or recommendations independent of sensitive attributes. It would be interesting to see how different users perceive such a system in different recommendation settings.

## Declarations

### Funding

This publication has been partly funded by SFI NorwAI (Centre for Research-based Innovation, 309834). The authors gratefully acknowledge the financial support from the Research Council of Norway and the partners of SFI NorwAI.
2305.06003
On Riccati contraction in time-varying linear-quadratic control
Contraction properties of the Riccati operator are studied within the context of non-stationary linear-quadratic optimal control. A lifting approach is used to obtain a bound on the rate of strict contraction, with respect to the Riemannian metric, across a sufficient number of iterations. This number of iterations is related to an assumed uniform controllability and observability property of the dynamics and stage-cost in the original formulation of the problem.
Jintao Sun, Michael Cantoni
2023-05-10T09:23:27Z
http://arxiv.org/abs/2305.06003v2
# On Riccati contraction in time-varying linear-quadratic control

###### Abstract

Contraction properties of the Riccati operator are studied within the context of non-stationary linear-quadratic optimal control. A lifting approach is used to obtain a bound on the rate of strict contraction, with respect to the Riemannian metric, across a sufficient number of iterations. This number of iterations is related to an assumed uniform controllability and observability property of the dynamics and stage-cost in the original formulation of the problem.

Discrete-time linear systems, Non-stationary optimal control, Riccati difference equations

## I Introduction

Consider the following infinite-horizon linear-quadratic (LQ) optimal control problem:
\[\min_{u,x}\sum_{k\in\mathbb{N}_{0}}x_{k}^{\prime}Q_{k}x_{k}+u_{k}^{\prime}R_{k}u_{k}\] (1a)
subject to \(x_{0}=\xi\) and
\[x_{k+1}=A_{k}x_{k}+B_{k}u_{k},\quad k\in\mathbb{N}_{0}, \tag{1b}\]
where \(Q_{k}\) is positive semi-definite, \(R_{k}\) is positive definite, \(A_{k}\in\mathbb{R}^{n\times n}\), \(B_{k}\in\mathbb{R}^{n\times m}\), and the initial state \(\xi\in\mathbb{R}^{n}\) is given. The task is to determine the cost minimizing input \(u=(u_{0},u_{1},\dots)\) and corresponding state sequence \(x=(x_{0},x_{1},\dots)\) over the infinite horizon. Under assumptions of uniform stabilizability and uniform detectability, it is well-known (e.g., see [1, 2]) that the optimal policy for (1) is given by the stabilizing linear time-varying state-feedback controller
\[u_{k}=-(R_{k}+B_{k}^{\prime}P_{k+1}B_{k})^{-1}B_{k}^{\prime}P_{k+1}A_{k}x_{k},\ k\in\mathbb{N}_{0}, \tag{2}\]
where \(P_{k}\) is the unique positive semi-definite solution of
\[P_{k}=\mathcal{R}_{k}(P_{k+1}) \tag{3}\]
and the Riccati operator is given by
\[\mathcal{R}_{k}(P):=Q_{k}+A_{k}^{\prime}\left(P-PB_{k}(R_{k}+B_{k}^{\prime}PB_{k})^{-1}B_{k}^{\prime}P\right)A_{k}. \tag{4}\]
For the infinite-horizon problem, the recursion (3) does not have a boundary condition. Its unique symmetric positive semi-definite solution is stabilizing and attractive for all symmetric positive semi-definite solutions of (3) over a finite horizon with suitable boundary conditions [3]. The cost associated with the optimal policy (2) is given by \(\xi^{\prime}P_{0}\xi\) [1]. By the principle of optimality [2], the least infinite-horizon cost is achieved by the receding finite-horizon control scheme given by the feedback policy
\[u_{k}=u_{k}^{*}(x_{k})\]
for \(k\in\mathbb{N}_{0}\), where
\[(u_{k}^{*}(x_{k}),\dots,u_{k+T-1}^{*}(x_{k}))=\operatorname*{arg\,min}_{(u_{k},\dots,u_{k+T-1})}\ x_{k+T}^{\prime}P_{k+T}x_{k+T}+\sum_{l=k}^{k+T-1}x_{l}^{\prime}Q_{l}x_{l}+u_{l}^{\prime}R_{l}u_{l}, \tag{5}\]
subject to the dynamics (1b) over \(l=k,\dots,k+T-1\), with \(x_{k}\) given and the terminal cost weight \(P_{k+T}\) taken from the solution of (3).
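As a minimal illustration (a sketch with arbitrary, time-invariant placeholder data, whereas the problem above is time-varying), the backward recursion (3)-(4) and the gains of the policy (2) can be computed as follows:

```python
import numpy as np

def riccati_step(P, A, B, Q, R):
    """One application of the Riccati operator R_k in (4)."""
    S = R + B.T @ P @ B
    return Q + A.T @ (P - P @ B @ np.linalg.solve(S, B.T @ P)) @ A

def finite_horizon_gains(A, B, Q, R, P_T, T):
    """Backward recursion (3) over a horizon of length T; returns the feedback
    gains K_k of the policy (2), i.e., u_k = -K_k x_k, and the cost-to-go P_0."""
    P, gains = P_T, []
    for _ in range(T):
        gains.append(np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A))
        P = riccati_step(P, A, B, Q, R)
    return gains[::-1], P

# Arbitrary placeholder data, only to exercise the recursion.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
gains, P0 = finite_horizon_gains(A, B, Q, R, np.zeros((2, 2)), T=50)
print(gains[0], P0)
```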
In this paper, a bound is characterized on the strict contraction rate of a sufficient number of iterations of the Riccati recursion (3), building upon a foundation result from [12]. The sufficient number of iterations is related to an assumed uniform controllability and observability property of the time-varying dynamics and stage costs. The development involves a lifted reformulation of the problem (1), in which the system model evolves by this fixed number of steps per stage. The fixed number of steps and the corresponding bound on the strict contraction rate are given explicitly in terms of the original problem data. The paper is organized as follows. Contraction properties of the Riccati operator with respect to the Riemannian metric are presented in Section II. The lifting approach for characterizing the strict contraction rate is developed in Section III. A numerical example is presented in Section IV. Some concluding remarks are provided in Section V.

Notation: \(\mathbb{N}\) denotes the set of natural numbers, and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). The \(n\times n\) identity matrix is denoted by \(I_{n}\). The \(a\times b\) matrix of zeros is denoted by \(0_{a,b}\). Given the indexed collection of matrices \((M_{a},M_{a+1},\ldots,M_{b})\), where \(a,b\in\mathbb{N}_{0}\), and \(a<b\), the corresponding block-diagonal matrix is denoted by \(\oplus_{j=a}^{b}M_{j}\). The transpose of the matrix \(M\) is denoted by \(M^{\prime}\). The induced \(2\)-norm of \(M\) is denoted by \(\|M\|_{2}\); this corresponds to the maximum singular value. For \(n\in\mathbb{N}\), the set of \(n\times n\) real symmetric matrices is denoted by \(\mathbb{S}^{n}\), the positive semi-definite matrices by \(\mathbb{S}_{+}^{n}\subset\mathbb{S}^{n}\), and the positive definite matrices by \(\mathbb{S}_{++}^{n}\subset\mathbb{S}_{+}^{n}\). The minimum eigenvalue of \(M\in\mathbb{S}^{n}\) is denoted by \(\lambda_{\min}(M)\in\mathbb{R}\).

## II Riccati operator contraction properties

In this section, a result in [12] is used to establish that \(\mathcal{R}_{k}\) in (4) is a contraction with respect to the Riemannian metric on the set of positive definite matrices.

**Assumption 1**.: \(A_{k}\) _in (1b) is non-singular for all \(k\in\mathbb{N}_{0}\)._

This standing assumption and the following lemma enable access to a foundation result from [12] in subsequent developments. A proof is given in Appendix I.

**Lemma 1**.: _The operator \(\mathcal{R}_{k}\) in (4) can be written as the linear fractional transformation_
\[\mathcal{R}_{k}(P)=(E_{k}P+F_{k})(G_{k}P+H_{k})^{-1}, \tag{6}\]
_where_
\[E_{k}=A_{k}^{\prime}+Q_{k}A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}, \tag{7a}\]
\[F_{k}=Q_{k}A_{k}^{-1}, \tag{7b}\]
\[G_{k}=A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}, \tag{7c}\]
\[H_{k}=A_{k}^{-1}. \tag{7d}\]

**Definition 1**.: _The Riemannian distance between \(U,V\in\mathbb{S}_{++}^{n}\) is given by_
\[\delta(U,V)=\left(\sum_{i=1}^{n}\log^{2}\lambda_{i}\right)^{\frac{1}{2}},\]
_where \(\lambda_{1},\ldots,\lambda_{n}\) are the eigenvalues of \(UV^{-1}\)._

Note, \(\delta(\cdot,\cdot):\mathbb{S}_{++}^{n}\times\mathbb{S}_{++}^{n}\to\mathbb{R}\) is a metric [12]. The following result is taken from [12, Theorem 1.7].

**Proposition 1**.: _Consider the operator \(\mathcal{R}_{k}\) in (6)._
_If the corresponding matrices in (7) are such that \(E_{k}\) is non-singular and \(F_{k}E_{k}^{\prime},E_{k}^{\prime}G_{k}\in\mathbb{S}_{+}^{n}\), then for any \(X,Y\in\mathbb{S}_{++}^{n}\),_
\[\delta(\mathcal{R}_{k}(X),\mathcal{R}_{k}(Y))\leq\delta(X,Y).\]
_Further, if \(F_{k}E_{k}^{\prime},E_{k}^{\prime}G_{k}\in\mathbb{S}_{++}^{n}\), then for any \(X,Y\in\mathbb{S}_{++}^{n}\),_
\[\delta(\mathcal{R}_{k}(X),\mathcal{R}_{k}(Y))\leq\rho_{k}\cdot\delta(X,Y) \tag{8}\]
_with \(\rho_{k}=\zeta_{k}/(\zeta_{k}+\epsilon_{k})<1\), where_
\[\zeta_{k}=\|(F_{k}E_{k}^{\prime})^{-1}\|_{2}\quad\text{and}\quad\epsilon_{k}=\lambda_{\min}((E_{k}^{\prime})^{-1}G_{k}^{\prime}). \tag{9}\]

Under Assumption 1, since \(R_{k}+B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k}\in\mathbb{S}_{++}^{m}\), application of the Woodbury matrix identity yields
\[E_{k}^{-1}=(A_{k}^{\prime})^{-1}-(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k}(R_{k}+B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k})^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}.\]
That is, \(E_{k}\) is non-singular. On the other hand,
\[F_{k}E_{k}^{\prime}=Q_{k}+Q_{k}A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k} \tag{10}\]
and
\[E_{k}^{\prime}G_{k}=B_{k}(R_{k}^{-1}+R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k}R_{k}^{-1})B_{k}^{\prime} \tag{11}\]
are positive semi-definite but not necessarily positive definite. So in view of Proposition 1 and Lemma 1, the operator \(\mathcal{R}_{k}\) in (4) is a contraction, but not necessarily a strict contraction. A sufficient condition for strict contraction follows.

**Proposition 2**.: _Consider \(\mathcal{R}_{k}\) in (4). If \(Q_{k}\in\mathbb{S}_{++}^{n}\), and \(B_{k}\) has full row rank, then for any \(X,Y\in\mathbb{S}_{++}^{n}\), (8) holds with \(\rho_{k}=\zeta_{k}/(\zeta_{k}+\epsilon_{k})<1\), where_
\[\zeta_{k}=\|(Q_{k}+Q_{k}A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k})^{-1}\|_{2}, \tag{12a}\]
\[\epsilon_{k}=\lambda_{\min}(A_{k}^{-1}B_{k}(R_{k}+B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k})^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}). \tag{12b}\]

Proof.: From (10) and (11), if \(Q_{k}\) is positive definite and \(B_{k}\) has full row rank, then \(F_{k}E_{k}^{\prime},E_{k}^{\prime}G_{k}\in\mathbb{S}_{++}^{n}\), and the strict contraction properties follow from Proposition 1. Consider \(E_{k},F_{k},G_{k}\) in (7). Then, (9) leads to (12). In particular, by application of the Woodbury matrix identity,
\[(E_{k}^{\prime})^{-1}G_{k}^{\prime}=(A_{k}+B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k})^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}\]
\[=(I_{n}+A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k})^{-1}A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}\]
\[=(I_{n}-A_{k}^{-1}B_{k}(R_{k}+B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k})^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k})A_{k}^{-1}B_{k}R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}\]
\[=A_{k}^{-1}B_{k}\left(I_{m}-(R_{k}+B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k})^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k}\right)R_{k}^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1}\]
\[=A_{k}^{-1}B_{k}(R_{k}+B_{k}^{\prime}(A_{k}^{\prime})^{-1}Q_{k}A_{k}^{-1}B_{k})^{-1}B_{k}^{\prime}(A_{k}^{\prime})^{-1},\]
which with (9) yields (12b).
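The bound of Proposition 2 can be checked numerically. The following sketch (with arbitrary placeholder data satisfying \(Q_{k}\in\mathbb{S}_{++}^{n}\) and \(B_{k}\) of full row rank, not a benchmark from the paper) evaluates the Riemannian distance of Definition 1 before and after one Riccati step, together with the rate \(\rho_{k}\) from (12):

```python
import numpy as np

def riemannian_distance(U, V):
    """delta(U, V) from Definition 1: sqrt of the sum of squared log
    eigenvalues of U V^{-1} (real and positive for U, V > 0)."""
    lams = np.real(np.linalg.eigvals(U @ np.linalg.inv(V)))
    return np.sqrt(np.sum(np.log(lams) ** 2))

def riccati_step(P, A, B, Q, R):
    """The Riccati operator R_k in (4)."""
    S = R + B.T @ P @ B
    return Q + A.T @ (P - P @ B @ np.linalg.solve(S, B.T @ P)) @ A

# Placeholder data: A non-singular, Q positive definite, B full row rank.
A = np.array([[1.2, 0.3], [0.1, 0.9]])
B, Q, R = np.eye(2), np.diag([2.0, 1.0]), np.eye(2)

# Rate rho_k = zeta_k / (zeta_k + eps_k) from (12).
Ai = np.linalg.inv(A)
M = Q + Q @ Ai @ B @ np.linalg.solve(R, B.T) @ Ai.T @ Q        # F_k E_k'
zeta = np.linalg.norm(np.linalg.inv(M), 2)                     # (12a)
W = Ai @ B @ np.linalg.inv(R + B.T @ Ai.T @ Q @ Ai @ B) @ B.T @ Ai.T
eps = np.min(np.linalg.eigvalsh(W))                            # (12b)
rho = zeta / (zeta + eps)

X, Y = 0.01 * np.eye(2), 100.0 * np.eye(2)
d0 = riemannian_distance(X, Y)
d1 = riemannian_distance(riccati_step(X, A, B, Q, R), riccati_step(Y, A, B, Q, R))
print(rho < 1, d1 <= rho * d0)   # strict contraction, consistent with (8)
```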
## III Lifting to a strict contraction

A lifted reformulation of problem (1) is developed below for which the corresponding Riccati operator is strictly contractive. In the lifted representation, each stage of the system model corresponds to multiple steps of (1b), with a view to satisfying the conditions of Proposition 2. This is achieved under a combined uniform controllability and observability assumption on the original formulation (1). Given \(d\in\mathbb{N}\), with reference to (1b), define the \(d\)-step lifted model state
\[\tilde{x}_{t}:=x_{dt}, \tag{13}\]
and input
\[\hat{u}_{t}:=\begin{bmatrix}u^{\prime}_{dt}&u^{\prime}_{dt+1}&\cdots&u^{\prime}_{d(t+1)-1}\end{bmatrix}^{\prime} \tag{14}\]
for each \(t\in\mathbb{N}_{0}\). Then,
\[\hat{A}_{t}\begin{bmatrix}x_{dt}\\ \vdots\\ x_{d(t+1)-1}\\ \tilde{x}_{t+1}\end{bmatrix}=\hat{B}_{t}\hat{u}_{t}+\begin{bmatrix}\tilde{x}_{t}\\ 0_{nd,1}\end{bmatrix}, \tag{15}\]
where
\[\hat{A}_{t}:=I_{n(d+1)}-\begin{bmatrix}0_{n,nd}&0_{n,n}\\ \oplus_{j=0}^{d-1}A_{dt+j}&0_{nd,n}\end{bmatrix}, \tag{16a}\]
\[\hat{B}_{t}:=\begin{bmatrix}0_{n,md}\\ \oplus_{j=0}^{d-1}B_{dt+j}\end{bmatrix}. \tag{16b}\]
On noting that \(\hat{A}_{t}\) is non-singular for all \(t\in\mathbb{N}_{0}\), the following lemma is a direct consequence of (15).

**Lemma 2**.: _Given input \(u\) for the system dynamics (1b), the lifted model state in (13) evolves according to_
\[\tilde{x}_{t+1}=\Phi_{t}\,\tilde{x}_{t}+\Gamma_{t}\hat{u}_{t},\quad t\in\mathbb{N}_{0}, \tag{17}\]
_where the lifted input \(\hat{u}\) is as given in (14), and_
\[\Phi_{t}:=\begin{bmatrix}0_{n,nd}&I_{n}\end{bmatrix}\hat{A}_{t}^{-1}\begin{bmatrix}I_{n}\\ 0_{nd,n}\end{bmatrix}, \tag{18a}\]
\[\Gamma_{t}:=\begin{bmatrix}0_{n,nd}&I_{n}\end{bmatrix}\hat{A}_{t}^{-1}\hat{B}_{t}, \tag{18b}\]
_with \(\hat{A}_{t}\) and \(\hat{B}_{t}\) as per (16)._

**Remark 1**.: _The matrix \(\Gamma_{t}\) in (18b) is the \(d\)-step controllability matrix for system (1b) in the un-lifted domain._

For \(t\in\mathbb{N}_{0}\), define \(C_{t}:=Q_{t}^{\frac{1}{2}}\), and given \(d\in\mathbb{N}\),
\[\hat{C}_{t}:=\begin{bmatrix}\oplus_{j=0}^{d-1}C_{dt+j}&0_{nd,n}\end{bmatrix}\quad\text{and}\quad\hat{R}_{t}:=\oplus_{j=0}^{d-1}R_{dt+j}. \tag{19}\]

**Lemma 3**.: _Given input \(u\), the cost in problem (1) equals_
\[\sum_{t\in\mathbb{N}_{0}}\begin{bmatrix}\tilde{x}_{t}\\ \hat{u}_{t}\end{bmatrix}^{\prime}\begin{bmatrix}\Xi_{t}^{\prime}\Xi_{t}&\Xi_{t}^{\prime}\Delta_{t}\\ \Delta_{t}^{\prime}\Xi_{t}&\hat{R}_{t}+\Delta_{t}^{\prime}\Delta_{t}\end{bmatrix}\begin{bmatrix}\tilde{x}_{t}\\ \hat{u}_{t}\end{bmatrix}, \tag{20}\]
_with \(\tilde{x}_{t}\) as per (17) for the lifted input \(\hat{u}\) given in (14), and_
\[\Xi_{t}:=\hat{C}_{t}\hat{A}_{t}^{-1}\begin{bmatrix}I_{n}\\ 0_{nd,n}\end{bmatrix}, \tag{21a}\]
\[\Delta_{t}:=\hat{C}_{t}\hat{A}_{t}^{-1}\hat{B}_{t}. \tag{21b}\]

The proof of Lemma 3 is deferred to Appendix II.

**Remark 2**.: _The matrix \(\Xi_{t}\) in (21a) is the \(d\)-step observability matrix for system (1b) in the un-lifted domain._

Cross-terms appear in the expression (20) of the cost in the lifted domain. This is incompatible with the formulation of Proposition 2. An LDU decomposition and a corresponding lifted-domain change of variable
\[\tilde{u}_{t}:=(\hat{R}_{t}+\Delta_{t}^{\prime}\Delta_{t})^{-1}\Delta_{t}^{\prime}\Xi_{t}\tilde{x}_{t}+\hat{u}_{t},\quad t\in\mathbb{N}_{0}, \tag{22}\]
lead to the following reformulation of problem (1) in the required form.
**Lemma 4**.: _Problem (1) is equivalent to the lifted problem_
\[\min_{\tilde{x},\tilde{u}}\sum_{t\in\mathbb{N}_{0}}\tilde{x}_{t}^{\prime}\tilde{Q}_{t}\tilde{x}_{t}+\tilde{u}_{t}^{\prime}\tilde{R}_{t}\tilde{u}_{t}\] (23a)
_subject to \(\tilde{x}_{0}=\xi\) and_
\[\tilde{x}_{t+1}=\tilde{A}_{t}\tilde{x}_{t}+\tilde{B}_{t}\tilde{u}_{t},\quad t\in\mathbb{N}_{0},\] (23b)
_where_
\[\tilde{Q}_{t}:=\Xi_{t}^{\prime}\Xi_{t}-\Xi_{t}^{\prime}\Delta_{t}\tilde{R}_{t}^{-1}\Delta_{t}^{\prime}\Xi_{t}, \tag{24}\]
\[\tilde{R}_{t}:=\hat{R}_{t}+\Delta_{t}^{\prime}\Delta_{t}, \tag{25}\]
\[\tilde{A}_{t}:=\Phi_{t}-\Gamma_{t}\tilde{R}_{t}^{-1}\Delta_{t}^{\prime}\Xi_{t}, \tag{26}\]
\[\tilde{B}_{t}:=\Gamma_{t}. \tag{27}\]

Proof.: The equivalence follows by noting that
\[\begin{bmatrix}\Xi_{t}^{\prime}\Xi_{t}&\Xi_{t}^{\prime}\Delta_{t}\\ \Delta_{t}^{\prime}\Xi_{t}&\tilde{R}_{t}\end{bmatrix}=L^{\prime}\begin{bmatrix}\tilde{Q}_{t}&0\\ 0&\tilde{R}_{t}\end{bmatrix}L, \tag{28}\]
where
\[L:=\begin{bmatrix}I_{n}&0\\ \tilde{R}_{t}^{-1}\Delta_{t}^{\prime}\Xi_{t}&I_{md}\end{bmatrix}.\]
With the correspondingly transformed input defined in (22), the cost (20) becomes the cost in (23), and the lifted state evolves according to
\[\tilde{x}_{t+1}=\Phi_{t}\tilde{x}_{t}+\Gamma_{t}\left[\tilde{u}_{t}-(\hat{R}_{t}+\Delta_{t}^{\prime}\Delta_{t})^{-1}\Delta_{t}^{\prime}\Xi_{t}\tilde{x}_{t}\right],\]
which is (23b). As such, \(\tilde{x}\) is defined given either \(\hat{u}\) or \(\tilde{u}\), and either can be constructed from the other using (22).

**Assumption 2**.: _For all \(t\in\mathbb{N}_{0}\), the \(d\)-step controllability matrix \(\Gamma_{t}\) in (18b) has full row rank, and the \(d\)-step observability matrix \(\Xi_{t}\) in (21a) has full column rank._

**Lemma 5**.: _With \(d\in\mathbb{N}\) such that Assumption 2 holds, the matrix \(\tilde{Q}_{t}\) in (24) is positive definite for all \(t\in\mathbb{N}_{0}\)._

Proof.: First observe that application of the Woodbury matrix identity gives
\[\tilde{Q}_{t}=\Xi_{t}^{\prime}\left(I_{nd}-\Delta_{t}(\hat{R}_{t}+\Delta_{t}^{\prime}\Delta_{t})^{-1}\Delta_{t}^{\prime}\right)\Xi_{t}=\Xi_{t}^{\prime}(I_{nd}+\Delta_{t}\hat{R}_{t}^{-1}\Delta_{t}^{\prime})^{-1}\Xi_{t}. \tag{29}\]
Then note that \((I_{nd}+\Delta_{t}\hat{R}_{t}^{-1}\Delta_{t}^{\prime})^{-1}\in\mathbb{S}_{++}^{nd}\). Under Assumption 2, \(\Xi_{t}\) has full column rank, and thus, \(\tilde{Q}_{t}\in\mathbb{S}_{++}^{n}\) in view of (29).

**Lemma 6**.: _With \(d\in\mathbb{N}\) such that Assumption 2 holds, the state matrix \(\tilde{A}_{t}\) in (23b) is non-singular for all \(t\in\mathbb{N}_{0}\)._

The proof of Lemma 6 is deferred to Appendix III.

**Theorem 1**.: _With \(d\in\mathbb{N}\) such that Assumption 2 holds, for \(P\in\mathbb{S}_{++}^{n}\) and \(t\in\mathbb{N}_{0}\), define the Riccati operator_
\[\tilde{\mathcal{R}}_{t}(P):=\tilde{Q}_{t}+\tilde{A}_{t}^{\prime}(P-P\tilde{B}_{t}(\tilde{R}_{t}+\tilde{B}_{t}^{\prime}P\tilde{B}_{t})^{-1}\tilde{B}_{t}^{\prime}P)\tilde{A}_{t}, \tag{30}\]
_with \(\tilde{Q}_{t},\tilde{R}_{t},\tilde{A}_{t},\tilde{B}_{t}\) as per (24), (25), (26), (27), respectively._
_Then, for any \(X,Y\in\mathbb{S}_{++}^{n}\),_
\[\delta(\tilde{\mathcal{R}}_{t}(X),\tilde{\mathcal{R}}_{t}(Y))\leq\tilde{\rho}_{t}\cdot\delta(X,Y), \tag{31}\]
_with \(\tilde{\rho}_{t}=\tilde{\zeta}_{t}/(\tilde{\zeta}_{t}+\tilde{\epsilon}_{t})<1\), where_
\[\tilde{\zeta}_{t}=\|(\tilde{Q}_{t}+\tilde{Q}_{t}\tilde{A}_{t}^{-1}\tilde{B}_{t}\tilde{R}_{t}^{-1}\tilde{B}_{t}^{\prime}(\tilde{A}_{t}^{\prime})^{-1}\tilde{Q}_{t})^{-1}\|_{2},\]
\[\tilde{\epsilon}_{t}=\lambda_{\min}(\tilde{A}_{t}^{-1}\tilde{B}_{t}(\tilde{R}_{t}+\tilde{B}_{t}^{\prime}(\tilde{A}_{t}^{\prime})^{-1}\tilde{Q}_{t}\tilde{A}_{t}^{-1}\tilde{B}_{t})^{-1}\tilde{B}_{t}^{\prime}(\tilde{A}_{t}^{\prime})^{-1}). \tag{32}\]

Proof.: Under Assumption 2, \(\tilde{B}_{t}=\Gamma_{t}\) has full row rank for all \(t\in\mathbb{N}_{0}\). From Lemma 5 and Lemma 6, \(\tilde{Q}_{t}\) is positive definite and \(\tilde{A}_{t}\) is invertible for all \(t\in\mathbb{N}_{0}\), in line with Assumption 1. As such, the strict contraction property follows from Proposition 2.

The lifted Riccati operator \(\tilde{\mathcal{R}}_{t}\) in (30) corresponds to composing the original \(\mathcal{R}_{k}\) in (4) according to (3).

**Proposition 3**.: _Given \(d\in\mathbb{N}\), for all \(P\in\mathbb{S}_{++}^{n}\) and \(t\in\mathbb{N}_{0}\),_
\[\tilde{\mathcal{R}}_{t}(P)=\mathcal{R}_{dt}\circ\mathcal{R}_{dt+1}\circ\cdots\circ\mathcal{R}_{d(t+1)-1}(P). \tag{33}\]

The proof is deferred to Appendix IV.

## IV Example

A numerical example is presented to illustrate the strict contraction properties of the Riccati operator. Consider the following instance of the time-varying LQ control problem (1): For \(k\in\mathbb{N}_{0}\),
\[Q_{k}=\begin{bmatrix}10&4\\ 4&7\end{bmatrix}+\alpha^{k}\sin(\omega k)\begin{bmatrix}2&1\\ 1&3\end{bmatrix},\]
\[R_{k}=5+4\alpha^{k}\sin(\omega k),\]
\[A_{k}=\begin{bmatrix}5&3\\ 2&1\end{bmatrix}+\alpha^{k}\sin(\omega k)\begin{bmatrix}10&20\\ 30&10\end{bmatrix},\]
\[B_{k}=\begin{bmatrix}2\\ 3\end{bmatrix}+\alpha^{k}\sin(\omega k)\begin{bmatrix}10\\ 20\end{bmatrix},\]
where \(\alpha=0.9\) and \(\omega=1\). The time-varying dynamics are uniformly \(d\)-step controllable and observable in the sense of Assumption 2 for \(d=2\). Consider the corresponding Riccati recursions
\[X_{k}=\mathcal{R}_{k}(X_{k+1})\quad\text{and}\quad Y_{k}=\mathcal{R}_{k}(Y_{k+1})\]
for \(k=T-1,T-2,\ldots,0\), with boundary conditions
\[X_{T}=10^{-2}\cdot I_{2}\quad\text{and}\quad Y_{T}=10^{2}\cdot I_{2}.\]
With \(T=20\), the distance between \(X_{k}\) and \(Y_{k}\) is measured by the Riemannian distance \(\delta(X_{k},Y_{k})\) and the induced \(2\)-norm \(\|X_{k}-Y_{k}\|_{2}\), respectively. The results are plotted in Figure 1. Given the uniform controllability and observability index \(d=2\), the system model in the lifted reformulation of the problem evolves by \(2\) steps per stage. According to Theorem 1, the Riccati operator in the lifted domain is strictly contractive with respect to the Riemannian distance, with a time-varying rate of contraction, as shown in Figure 1. Observe from Figure 1 that the Riccati operator is not initially a contraction with respect to the induced \(2\)-norm.
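The lifted construction and the contraction rate of Theorem 1 can be evaluated for this example with the following sketch (our illustration, not the authors' code; it assembles (16), (18), (21) and (24)-(27) for \(d=2\)):

```python
import numpy as np

def blkdiag(mats):
    """Block-diagonal stacking, i.e., the operator (+)_j M_j."""
    rows, cols = sum(m.shape[0] for m in mats), sum(m.shape[1] for m in mats)
    out, r, c = np.zeros((rows, cols)), 0, 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r, c = r + m.shape[0], c + m.shape[1]
    return out

def sqrtm_psd(Q):
    """Symmetric square root C_t = Q_t^{1/2} via an eigendecomposition."""
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def lifted_problem(As, Bs, Qs, Rs):
    """Assemble (16), (18), (21) and the lifted data (24)-(27) for one stage."""
    d, n, m = len(As), As[0].shape[0], Bs[0].shape[1]
    Ahat = np.eye(n * (d + 1))
    Ahat[n:, :n * d] -= blkdiag(As)                        # (16a)
    Bhat = np.vstack([np.zeros((n, m * d)), blkdiag(Bs)])  # (16b)
    Chat = np.hstack([blkdiag([sqrtm_psd(Q) for Q in Qs]), np.zeros((n * d, n))])
    Rhat = blkdiag(Rs)                                     # (19)
    Ainv = np.linalg.inv(Ahat)
    first = np.vstack([np.eye(n), np.zeros((n * d, n))])   # [I_n; 0]
    last = np.hstack([np.zeros((n, n * d)), np.eye(n)])    # [0 ... 0 I_n]
    Phi, Gam = last @ Ainv @ first, last @ Ainv @ Bhat     # (18)
    Xi, Del = Chat @ Ainv @ first, Chat @ Ainv @ Bhat      # (21)
    Rt = Rhat + Del.T @ Del                                # (25)
    Qt = Xi.T @ Xi - Xi.T @ Del @ np.linalg.solve(Rt, Del.T @ Xi)  # (24)
    At = Phi - Gam @ np.linalg.solve(Rt, Del.T @ Xi)       # (26)
    return At, Gam, Qt, Rt                                 # B~_t = Gamma_t, (27)

def contraction_rate(At, Bt, Qt, Rt):
    """The rate rho~_t = zeta~_t / (zeta~_t + eps~_t) of Theorem 1."""
    Ai = np.linalg.inv(At)
    M = Qt + Qt @ Ai @ Bt @ np.linalg.solve(Rt, Bt.T) @ Ai.T @ Qt
    zeta = np.linalg.norm(np.linalg.inv(M), 2)
    W = Ai @ Bt @ np.linalg.inv(Rt + Bt.T @ Ai.T @ Qt @ Ai @ Bt) @ Bt.T @ Ai.T
    eps = np.min(np.linalg.eigvalsh(W))
    return zeta / (zeta + eps)

# Problem data of Section IV.
alpha, omega = 0.9, 1.0
def data(k):
    s = alpha ** k * np.sin(omega * k)
    return (np.array([[5, 3], [2, 1]]) + s * np.array([[10, 20], [30, 10]]),
            np.array([[2], [3]]) + s * np.array([[10], [20]]),
            np.array([[10, 4], [4, 7]]) + s * np.array([[2, 1], [1, 3]]),
            np.array([[5 + 4 * s]]))

d = 2
for t in range(3):
    As, Bs, Qs, Rs = zip(*[data(d * t + j) for j in range(d)])
    print(t, contraction_rate(*lifted_problem(As, Bs, Qs, Rs)))  # each < 1
```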
## V Conclusion

Our attention is focused on the non-stationary Riccati operator associated with the time-varying LQ control problem. The lifting approach presented in this paper provides a procedure to measure the strict contraction rate of this operator. Further extensions to the results in this paper may be possible by replacing the controllability and observability assumptions with weaker assumptions, such as stabilizability and detectability. Future work is focused on the impact of error in the cost-to-go approximations on the performance of the receding horizon scheme.
2307.10018
RobôCIn Small Size League Extended Team Description Paper for RoboCup 2023
RobôCIn has participated in RoboCup Small Size League since 2019, won its first world title in 2022 (Division B), and is currently a three-time Latin-American champion. This paper presents our improvements to defend the Small Size League (SSL) Division B title at RoboCup 2023 in Bordeaux, France. This paper aims to share some of the academic research that our team developed over the past year. Our team has successfully published 2 articles related to SSL at two high-impact conferences: the 25th RoboCup International Symposium and the 19th IEEE Latin American Robotics Symposium (LARS 2022). Over the last year, we have been continuously migrating from our past codebase to Unification. We will describe the new architecture implemented and some points of software and AI refactoring. In addition, we discuss the process of integrating machined components into the mechanical system, our development for participating in the vision blackout challenge last year, and what we are preparing for this year.
Aline Lima de Oliveira, Cauê Addae da Silva Gomes, Cecília Virginia Santos da Silva, Charles Matheus de Sousa Alves, Danilo Andrade Martins de Souza, Driele Pires Ferreira Araújo Xavier, Edgleyson Pereira da Silva, Felipe Bezerra Martins, Lucas Henrique Cavalcanti Santos, Lucas Dias Maciel, Matheus Paixão Gumercindo dos Santos, Matheus Lafayette Vasconcelos, Matheus Vinícius Teotonio do Nascimento Andrade, João Guilherme Oliveira Carvalho de Melo, João Pedro Souza Pereira de Moura, José Ronald da Silva, José Victor Silva Cruz, Pedro Henrique Santana de Morais, Pedro Paulo Salman de Oliveira, Riei Joaquim Matos Rodrigues, Roberto Costa Fernandes, Ryan Vinicius Santos Morais, Tamara Mayara Ramos Teobaldo, Washington Igor dos Santos Silva, Edna Natividade Silva Barros
2023-07-19T14:58:30Z
http://arxiv.org/abs/2307.10018v1
# RobôCIn Small Size League Extended Team Description Paper for RoboCup 2023

###### Abstract

RobôCIn has participated in the RoboCup Small Size League since 2019, won its first world title in 2022 (Division B), and is currently a three-time Latin-American champion. This paper presents our improvements to defend the Small Size League (SSL) Division B title at RoboCup 2023 in Bordeaux, France. This paper aims to share some of the academic research that our team developed over the past year. Our team has successfully published 2 articles related to SSL at two high-impact conferences: the 25th RoboCup International Symposium and the 19th IEEE Latin American Robotics Symposium (LARS 2022). Over the last year, we have been continuously migrating from our past codebase to Unification. We will describe the new architecture implemented and some points of software and AI refactoring. In addition, we discuss the process of integrating machined components into the mechanical system, our development for participating in the vision blackout challenge last year, and what we are preparing for this year.

Keywords: RobôCIn, RoboCup 2023, Robotics, Small Size League

## 1 Hardware

The hardware updates for this year aim at more reliable motion control by improving our mechanics project. We have added a brass thread to our aluminum drive transmission support, preventing it from wearing out due to the shaft's friction. Besides the hardware adaptations for the Vision Blackout challenge, which are detailed in Subsection 2.1, our electronics project has remained unchanged, and general hardware specifications can be found in Table 1, with no changes from the 2020 version.

### Drive Transmission Support

To achieve our goal of competing in Division A in the following years, we need to minimize errors in the robot's movements, and improving our drive set is a necessary step. For the mechanics project, our team mainly uses 3D-printed parts; however, this approach has proven insufficient for building a reliable transmission set due to resistance and precision limitations of the adopted material (PLA) and the 3D printing process. Thus, for RoboCup 2022, we added a machined aluminum drive transmission support to our robots, Figure 1(a), making the transmission system smoother and ensuring the transmission gears' tolerance was respected. Even though the machined transmission approach improved the system, during RoboCup 2022 some of the drive sets presented looseness on the wheel shaft, and we partially solved the problem by bonding the wheel axis with Tekbond 793. Therefore, after the competition, we conducted a material analysis and found that the major reason for the gap was that the hardness of our stainless steel wheel shaft was much higher than the hardness of the aluminum drive transmission support, causing the latter to wear out due to the forces applied to this structure. To mitigate this problem, we added a brass thread to the aluminum support, Figure 1(b), which has a greater hardness than the material of the previous model, Figure 1(a), equalizing the contact forces. This model was used in the 2022 Latin American Robotics Competition (LARC) and has shown more reliability than the previous, full-aluminum version.
\begin{table} \begin{tabular}{|l|l|} \hline **Robot Version** & **v2022** \\ \hline Driving motors & Maxon EC-45 flat - 50W \\ \hline Max \% ball coverage & 19.55\% \\ \hline Microcontroller & STM32F767ZI \\ \hline Gear Transmission & 18 : 60 \\ \hline Gear Type & External Spur \\ \hline Wheel & 3D Printed \\ \hline Total Weight & 2.36 kg \\ \hline Dribbling motor & Maxon EC-max 22, 25W \\ \hline Encoder & _MILE 1024 CPT_ \\ \hline Dribbling Gear & 1 : 1 : 1 \\ \hline Dribbling bar diameter & 13mm \\ \hline Max. kick speed & 6.5m/s \\ \hline Communication Link & nRF24L01+ \\ \hline Battery & LiPo 2200mah 4S 35C \\ \hline \end{tabular} \end{table} Table 1: Robot Specifications

We also had problems with tolerance in the machining process of the drive transmission support when adding the brass thread. Figure 1(c) shows the consequences of this inaccuracy in the hole, causing backlash problems due to the misaligned gearing axis. These problems were solved by manufacturing new parts of the transmission supports in collaboration with the Physics Department of our university, which supported us with high-precision machines that guarantee the reliability of our robots.

Figure 1: Drive transmission support versions used in the RoboCup and LARC 2022.

Figure 2: Uncentered brass inserted thread.

## 2 Vision Blackout Challenge

For participating in the Vision Blackout Challenge, hardware adaptations were made, new low-level navigation methods were implemented, and a complete software infrastructure was built to allow our robots to execute SSL soccer skills autonomously. Our goal was to create a robust enough infrastructure to implement each robot skill by only creating new Finite State Machines (FSMs), all using onboard modules for sensing and processing. With this architecture, we were able to complete 2 of the 4 stages of the challenge in 2022's competition, achieving 2nd place. Also, we shared details of our research in recent papers [3, 6, 7] and open-sourced the project's datasets and documentation.12 Footnote 1: [https://github.com/bebetocf/ssl-dataset](https://github.com/bebetocf/ssl-dataset) Footnote 2: [https://github.com/jgocm/ssl-detector](https://github.com/jgocm/ssl-detector)

### Hardware Adaptations

Following past approaches in the League, we have added an onboard camera and a compute module for vision processing and decision-making. A Logitech C922 webcam was chosen due to its low-distortion parameters, allowing for easy camera calibration with high precision. As for additional computation, a 4GB NVIDIA Jetson Nano Developer Kit was chosen due to its small size, low power consumption, high throughput on DNN and image processing, and extensive documentation for NVIDIA libraries. Also, its System-on-Module (SoM) architecture leaves room for future improvements, by adapting our electronics to connect the module directly to our mainboard, for instance, saving even more space. Besides the Jetson Nano and the Logitech camera, we have also added a power supply module, using 4 cells of 18650 batteries, to power this new subsystem. A new cover plate, which we call the robot's third floor, was designed for mounting those parts onto the robot; it is shown in Figure 3. It also has housings for additional standoffs, enabling us to place an SSL tag on the robot's top, which is useful for experiments and evaluation.

Figure 3: Robot hardware adaptations for the vision blackout challenge.

### Software Workflow

The autonomous SSL robot is mainly operated by two processing modules: the Jetson Nano and the STM32F746ZI, an ARM Cortex-M7 Microcontroller Unit (MCU), also referred to as STM32F7 for simplicity.
They communicate through an Ethernet cable using User Datagram Protocol (UDP) socket packets. For embedded vision, we use a Logitech C922 camera with a 30 frames per second capture rate at 640x480 pixels of resolution. Vision frames are processed by the Jetson Nano running a CNN-based object detection model, namely SSDLite MobileNetv2 [13, 4], for detecting SSL objects' bounding boxes, which are used for estimating their positions relative to the robot by using pre-calibrated intrinsic and extrinsic camera parameters, as presented in [6]. The paper also shares details of model retraining and deployment using TensorRT optimizations. Decision-making is also implemented on the Jetson Nano, which runs Finite State Machines (FSMs) that implement each of the robot's autonomous skills for solving Vision Blackout challenge stages. Objects' relative positions are used as inputs, and the FSM computes a target position and orientation, a navigation type, and command flags such as odometry resetting, capacitor charging, and kicking. This information is encoded into a protobuf message and sent to the MCU through the UDP connection. At the MCU level, for our target-point-based navigation, we implement three movement types: Rotate-on-Self (RoS), Drive-to-Point (DtP), and Rotate-in-Point (RiP). The first accounts for rotations around the robot's axis, mainly used for initial ball searching and self-alignment with targets. DtP implements a linear movement with orientation correction, adjusting the robot's translation velocity according to its rotation error and distance to the target. Lastly, RiP executes a circular trajectory around a point, allowing the robot to search for a goal while looking at the ball, for instance. The MCU also calculates the robot's inertial odometry by computing inverse kinematics from encoder readings, and the trajectory is estimated using gyroscope measurements combined with odometry, allowing it to adjust its path while embedded vision information is not available. Figure 4 illustrates an overview of the proposed architecture, and we present more details in [7].

### Robot Skills

This architecture was employed to solve the 4 stages of the Vision Blackout challenge 2022. However, only stages 1 and 2 were fully completed during the competition, showing our solutions were still not robust enough and highlighting many difficulties and necessary improvements, which we discuss in the next subsection. In more recent experiments [7], for evaluating our system's capabilities and weaknesses, we executed multiple tries on different scenarios of three common SSL tasks: grabbing a ball (I), scoring on an empty goal (II), and passing the ball (III). The same rules and scoring criteria as the 2022 Vision Blackout challenge [10] were applied in the tasks, except for passing the ball, which excludes scores for the kicker robot from the challenge's stage 4. Also, we consider that the robot has succeeded in the task if the conditions for all positive scores are satisfied. Table 2 shares an overview of the experiments' overall results from [7], showing that the robot was able to stop with the ball touching its dribbler and score a goal in 80% of the attempts on tasks 1 and 2. As for the third task, the ball hit the receiver robot's dribbler in 46.7% of the 15 attempts, although the robot was hit in 80% of them.
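For concreteness, a skill of this kind can be outlined as a small state machine, as in the following sketch (an illustration only, not our onboard implementation; state names, thresholds, and message fields are hypothetical):

```python
import math

# Hypothetical identifiers mirroring the three MCU movement types.
ROTATE_ON_SELF, DRIVE_TO_POINT, ROTATE_IN_POINT = range(3)

class GrabBallFSM:
    """Minimal sketch of a 'grab ball' skill: search for the ball, then drive to it."""
    def __init__(self, stop_distance=0.11):
        self.state = "SEARCH"
        self.stop_distance = stop_distance    # meters from the robot's center

    def step(self, ball_relative):
        """ball_relative: (x, y) position from onboard vision, or None if the
        ball is not detected. Returns a command dict that would be serialized
        (e.g., as a protobuf message) and sent to the MCU over UDP."""
        if self.state == "SEARCH":
            if ball_relative is None:
                return {"nav": ROTATE_ON_SELF, "target": None, "dribbler": False}
            self.state = "GO_TO_BALL"
        if self.state == "GO_TO_BALL":
            if ball_relative is None:         # lost sight of the ball: search again
                self.state = "SEARCH"
                return {"nav": ROTATE_ON_SELF, "target": None, "dribbler": False}
            if math.hypot(*ball_relative) < self.stop_distance:
                self.state = "DONE"
            return {"nav": DRIVE_TO_POINT, "target": ball_relative, "dribbler": True}
        return {"nav": DRIVE_TO_POINT, "target": (0.0, 0.0), "dribbler": True}

fsm = GrabBallFSM()
print(fsm.step(None))              # no detection yet: rotate on self to search
print(fsm.step((0.50, 0.10)))      # ball found: drive towards it, dribbler on
```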
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Metrics** & **Task I** & **Task II** & **Task III** \\ \hline Min Time (s) & 6.09 & 11.89 & 9.82 \\ \hline Max Time (s) & 10.27 & 60.00 & 20.00 \\ \hline Mean Time (s) & 7.70 & 19.01 & 14.21 \\ \hline Success Rate & 12/15 & 12/15 & 7/15 \\ \hline Total Score & 40/45 & 54/60 & 47/60 \\ \hline Penalties & - & 8 & 3 \\ \hline \end{tabular} \end{table} Table 2: Autonomous SSL Robot's Overall Performance on the Proposed Tasks.

Figure 4: Overview of the proposed logic diagram for building an autonomous RoboCup Small Size League robot. All modules run inside the robot: the upper modules run on the Jetson Nano, while the lower ones run on the STM32F7 MCU. Finite State Machines are defined to implement the soccer skills.

### Major Issues and Ongoing Improvements

#### 2.4.1 Issues from RoboCup

One major difficulty we faced at the 2022 Vision Blackout challenge was detecting the ball at long distances, since our object detection approach could only detect it up to 5 meters away, which led to failures in 2 of the 3 tries on stages 1 and 2. Also, our self-localization methods were not robust enough to solve stage 3, resulting in low scores and long execution times. As for stage 4, even though the passer robot was able to detect the ball, the kicker could not move due to communication issues, leading to 3 failures.

#### 2.4.2 Issues from Evaluation Experiments

During the experiments from [7], whose results are reported in Table 2, analysis of the embedded vision logs showed that most failures were caused by false-positive detections of objects outside the field, highlighting the importance of discarding out-of-field information. Many penalties were also caused by the robot's inability to detect field lines, and ball searching was the most time-consuming part of the tasks.

#### 2.4.3 Ongoing Improvements

To discard out-of-field information, we have been developing field boundary detection solutions. These also enable more complex exploration strategies for object searching, since the robot can avoid leaving the field, a useful feature for overcoming our major issue from RoboCup: not finding the ball.

Introducing a self-localization solution for SSL robots is also a necessary improvement. It enables planning more efficient paths and avoiding penalties, such as entering the defense area. In addition, object searching, which was shown to be the most time-consuming part of the tasks, could be optimized using localization knowledge for more efficient field exploration. Thus, we are working on a Monte Carlo Localization (MCL) algorithm that fuses our inertial odometry approach with vision information, namely the relative positions of detected goals and field boundaries, to regress the robot's pose over time, using the motion models, observation models, and resampling techniques of typical approaches from other RoboCup leagues [12].

## 3 Software

SSL-Coach was our first stable software version developed for the SSL category. It has a modular architecture, inspired by STP (Skills, Tactics, and Plays) [2], combined with the team's previous experience developing robot soccer software for the VSSS (IEEE Very Small Size Soccer) category. However, the resulting software had both tightly coupled information-processing and decision-making stages and a confusing data flow, due to the variety of demands and the short development period before the team's first participation in RoboCup, in 2019.
Over the years of SSL-Coach development, the accumulation of technical debt in the architecture and development infrastructure made it difficult to implement improvements, such as creating more elaborate and collaborative plays among the robots, as explored in STP. These complications showed up as difficulty in including new information flows and functionalities, which required additional changes in other parts of the code, including critical parts of the flow where changes could bring side effects. It also proved difficult to integrate new team members, transfer knowledge, and renew the team using the existing architecture. Thus, we decided to concentrate our efforts on building a new code base, aiming to reduce software coupling and complexity and thereby reduce the execution errors that created failure points in the initial software architecture.

RoboCIn participates in other robot soccer categories that require the development of specialized software, namely 2D Simulation and VSSS. These categories need to solve similar problems and situations and, as introduced in the 2022 TDP [14], they use similar technologies based on C++ combined with the Qt Framework to create user interfaces (UI). However, their code bases had drifted far apart, which led to duplicated effort, replicated logic, and low interchangeability of developers between categories. After surveying the technical debt to be resolved across RoboCIn's categories, we decided to model a more flexible and modern architecture, bringing different possibilities for expansion and reuse.

Soccer-common was developed as an open-source library, used as a submodule, to concentrate the code shared across the team's different categories and to provide a separation between UI and back-end. Its main components are a library of geometric functions, a graphical interface with drawing support at any point, debugging utilities, and the module abstraction. A module, in the new architecture, is the main abstraction capable of executing logic in parallel; it supports communication with the visualization interface and with parameters, and communicates with other modules. A module can also be indexed for occasions where multiple identical execution steps are intended, such as one behavior instance per robot. This concept is the core of our architecture, which we describe below. From now on, we will call the new base software Unification; it has already replaced our VSSS software and SSL-Coach.

### Architecture

The tight coupling of processing in SSL-Coach is due to how its modules were created and how they communicate with each other. Each module consists of a thread that executes a singleton, a static global object with a single instance accessible at any point, and the exchange of information between these modules is done through direct connections via setters and getters. This violates the Single-Responsibility Principle (SRP) [5] and consequently makes it difficult to change the existing execution flow and to track the calls performed. To resolve the SRP violations, the execution flow was reoriented to data-arrival events, using callbacks, thus enabling a coordinated flow of data and of each thread's execution (sketched below).
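The following is a minimal sketch of this event-driven module wiring, written in Python for brevity (the real system, described next, is C++ with the Qt framework); all class and method names here are hypothetical.

```python
import threading

class Module:
    """Sketch of a module in a cascaded publisher-consumer system:
    emitters are connected to receivers at construction time, and
    received data waits in a locked critical region until the
    module's next execution consumes it."""

    def __init__(self):
        self._receivers = []            # downstream callbacks
        self._lock = threading.Lock()   # guards the critical region
        self._inbox = []                # data waiting to be consumed

    def connect(self, receiver):
        # Wire this module's output to another module's input callback.
        self._receivers.append(receiver)

    def emit(self, data):
        # Publish a result to every connected consumer.
        for receive in self._receivers:
            receive(data)

    def receive(self, data):
        # Called on data arrival; only stores, never processes.
        with self._lock:
            self._inbox.append(data)

    def run_once(self):
        # One execution step: consume pending inputs, emit results.
        with self._lock:
            pending, self._inbox = self._inbox, []
        for data in pending:
            self.emit(self.process(data))

    def process(self, data):
        # Overridden by concrete modules (vision, decision, behavior, ...).
        return data
```

Wiring such as `vision.connect(decision.receive)` fixes the data flow when the modules are created, so changing the flow means changing the wiring rather than the modules' internals.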
To create the callback infrastructure, we use Qt Signals & Slots3. Originally intended for the communication of UI-linked objects in the Qt framework, it is a robust and simple system for using callbacks in C++, which implements the Observer pattern to facilitate communication between components of the framework.

Footnote 3: [https://doc.qt.io/qt-6/signalsandslots.html](https://doc.qt.io/qt-6/signalsandslots.html)

By combining Signals & Slots with a wrapper we designed for safe shared access, the communication between our modules was implemented as a cascaded publisher-consumer system, where emitter functions are connected to receivers at module creation time, according to each module's requirements. As soon as a module receives and registers an input, the information is stored in a critical region and waits until the module's next execution to be effectively consumed.

With this new infrastructure, we have also improved the communication between modules by using more flexible data packages, simplifying modifications and corrections. Previously, the shared information consisted of one extensive structure containing all the information that could possibly be transmitted, with an enumerator indicating how the structure should be interpreted. The structure was not always fully filled in, as not all information was relevant to a specific message, which polluted the code and made it difficult to understand. We solved the problem using the variant type introduced in C++17, which is a type-safe union capable of aggregating different structures into a single type, one alternative per message kind. It allows us to direct the processing of a message through pattern matching, which simplifies away the conditionals previously needed (an illustrative sketch is given below).

In this software architecture, the flow of information starts with the vision system, which sends position information. Simultaneously, the referee sends stage and command updates. The software then receives these inputs and applies filtering processes. The decision module determines the players' behaviors (such as goalkeeper, forward, defender, and others) according to the received referee data, making decisions and using the vision data to identify which player should perform a particular action. The behavior modules then use the decision's assignments to execute the intended behavior for each player in separate threads. These threads produce tactics that are processed in sequence by the planning and/or navigation module threads. Finally, the navigation module computes the actions necessary for the robots to move, with the specified type and parameters. As a result, we achieved the architecture in Figure 5.

### Implementation

We sought to reformulate the modules introduced in the 2019 TDP according to the listed technical debts, reducing the number of flags and removing boilerplate code. We describe the changes made to our code flow below:

#### 3.2.1 DataWorld

In SSL-Coach, this component was responsible for receiving vision information obtained from simulators or vision software, performing vision processing, and receiving commands from the referee's software. We decided to separate this processing into three modules, each dedicated to one of the respective activities described above.
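As a loose illustration of the variant-based messages described above, the Python analogue below mimics C++17 `std::variant` with a union type plus structural pattern matching (Python 3.10+); all message kinds and fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VisionFrame:
    robots: list = field(default_factory=list)
    balls: list = field(default_factory=list)

@dataclass
class RefereeUpdate:
    stage: str = ""
    command: str = ""

# The "variant": exactly one alternative per message kind.
Message = VisionFrame | RefereeUpdate

def consume(msg: Message) -> str:
    # Pattern matching replaces the enumerator-driven conditionals.
    match msg:
        case VisionFrame(robots=robots, balls=balls):
            return f"vision: {len(robots)} robots, {len(balls)} balls"
        case RefereeUpdate(stage=stage, command=command):
            return f"referee: {stage} / {command}"

print(consume(RefereeUpdate(stage="NORMAL_FIRST_HALF", command="STOP")))
```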
For the module dedicated to receiving the arbitration software's commands, we developed our own parser4, based on the received Stage and Command combined with the analysis of internal flags, vision information, and the previous context. It aims at specializing game situations, simplifying the strategy carried out later, so that each leaf situation at the parser tree's output is treated in isolation.

Footnote 4: [https://github.com/robocin/soccer-common/wiki/Referee-Parser](https://github.com/robocin/soccer-common/wiki/Referee-Parser)

The complete referee parser tree, shown in Figure 6, starts from the game's command and the state received from the external referee. At the Game Action division, it decides whether or not the robots must halt. Then, the Game Status transition defines whether we are dealing with an in-game situation or a positioning one, such as preparation for a kick-off. Lastly, at the Planning Game division, the parser chooses whether the robots must move without touching the ball (Dynamic Formation), execute a predefined play (Planned Tactic), or play the game normally (Game Tactic).

#### 3.2.2 Trainer

With the split of the DataWorld component and the creation of a dedicated module to parse the information received from the arbitration software, the processed commands allowed a strong restructuring of this module, where we can now make quick, opponent-specific changes depending on the applied game state.

Figure 5: Overview of the SSL-Unification software architecture's detailed dataflow. Data carried as variant group types is shown in purple; each module is annotated with its number of threads: modules in green run a single thread, while modules in yellow run one thread per available robot, N.

Over the years we have greatly evolved player allocation within the decision component, the Trainer (formerly called Decision), starting from a static team with 3 Defenders, 1 Support, and 1 Forward in 2019 and moving to a dynamic allocation based on the position of the ball and the risk posed by the opponents' positions. In this way, we are currently an adaptable team, but one that seeks to be extremely offensive, pressing the game into the opponent's half and exchanging passes until an opportunity to shoot on goal appears. Our offensive tactics are made up of a Forward, the player in possession of or in direct dispute over the ball, and a variable number of supporters, depending on the positions of the opposing team's robots. Each supporter seeks to stay in an optimal position, the best within our heuristics to receive a pass from the Forward and perform a successful play. Once the ball is passed to a supporter, that robot becomes the player in possession of or in direct dispute for the ball, switching roles: the supporter receiving the pass becomes the Forward, which keeps our team's attack cycle going.

Figure 6: Complete referee parser tree, showing all possible game states.

#### 3.2.3 Behavior

With Unification, one of the team's main goals for the old Player module was to decouple functionality and simplify the state machines. In SSL-Coach, behaviors had finite state machines (FSMs) with many states, and similar logic functions were implemented in several different ways within the same behavior, which made the transitions difficult to understand when debugging and making corrections.
Previously, as described, each state corresponded to an enumerator value, and the state processing nodes were functions, incapable of storing context, selected through a large number of conditionals. The input to each processing node consisted of a pair \(\langle state, context\rangle\) carrying the information needed by all existing states, which made it particularly difficult to distinguish the information relating only to specific states. With the architecture update, we also studied ways to improve the implementation of an FSM for the desired purposes. Similarly to the messages used for communication between modules, the machine states became a variant, while the processing nodes of these states became classes. With this change, the state processing nodes now have greater independence, with contexts that can be reset whenever a transition to a new state is performed.

In the current architecture, we also started to apply the SkillBook concept from STP [2] and to define the attacker as a set of tactics that involve interacting with the ball (be it shooting at goal, giving a pass, or taking a penalty), which were previously lumped together in a single FSM, making it complex and disproportionately large in order to deal with the various situations. To facilitate maintenance and future improvements, we decided to extract the previously existing Planning and Navigation modules from within the Behavior component, thus making it possible to swap the algorithms used, as we explain below.

### Path-Planning

One of the changes made to the architecture was the creation of a dedicated module for path planning. With it, we became capable of exploring path optimizations and switching the algorithm used. This year, we changed our path-planning algorithm and optimized our low-level control. Until then, we had used an evolved version of the visibility graph presented in the 2019 TDP [15], carrying many changes made over the years to optimize it and handle corner cases. However, it had become really difficult to maintain, given the increased code complexity.

#### 3.3.1 Current Problems

One of our major issues in past competitions was the high number of fouls due to crashing and to robot proximity to forbidden locations, as shown in Table 3. Due to the yellow cards arising from those fouls, we were frequently forced to play with 5 or 4 players, which massively reduced our offensive power given the reduced number of players available to compose the attack. This analysis led us to optimize the path-planning algorithm, since the majority of those fouls were avoidable.

Among the limitations of the visibility graph's nature, those related to the generated path stand out. Despite being the shortest Euclidean path, it is not time-optimal for omnidirectional robots, featuring abrupt changes in direction and velocity, as presented by Balkcom et al. [1]. Furthermore, because the algorithm does not take into account the agent's momentum and direction, that is, the robot's whole state, with velocity and acceleration rather than solely position, there is a dissonance during execution between the calculated path and the robot's real trajectory, as shown in Figure 7. Moreover, the available margin for navigation error is minimal, since the generated path is tangential to the obstacles.
Hence, both factors culminate in the high number of collisions and invasions, since expanding the obstacles' boundaries is not a direct guarantee of fewer collisions in general, besides being a solution that greatly hurts our team's performance. Also, bigger obstacles reinforce some limitations of our implementation: since obstacles are solely a set of points connected in the scene's graph, it is not possible to increase the complexity of the polygons used as obstacles beyond triangles and rectangles without harming the execution time. Likewise, we are not able to properly handle escaping from an obstacle, whether at the start or at the target position. So, with such a generic solution, we are left to deal with a lot of corner cases, which result in both bad placements for ball disputes and defense area invasions.

\begin{table} \begin{tabular}{|c|c|} \hline **Referee foul event** & **Amount** \\ \hline ATTACKER\_TOO\_CLOSE\_TO\_DEFENSE\_AREA & 22 \\ \hline BOT\_CRASH\_UNIQUE & 93 \\ \hline DEFENDER\_TOO\_CLOSE\_TO\_KICK\_POINT & 69 \\ \hline \end{tabular} \end{table} Table 3: Collision and invasion fouls detected during matches at RoboCup 2019, 2021, and 2022.

Figure 7: Dissonance between the planned path (blue) and the executed trajectory (red), with emphasis on the forces acting at the change of graph edge.

#### 3.3.2 Desired Key Improvements

Therefore, we listed the following desired improvements for the new algorithm:

* Fewer collisions, allowing better velocity and movement.
* A generated path consistent with the robot's real trajectory.
* An algorithm robust to real-time, dynamic scenarios such as those of the SSL.
* The possibility of an obstacle model that takes the movement's dynamics into account, considering time as a factor in determining possible collisions.
* Obstacles that can be differentiated from each other, for greater fidelity in representing the world.
* The possibility of simulating the robot's movement to estimate its reachable range.

#### 3.3.3 Solutions Adopted

Unlike the options analyzed in the 2019 TDP [15], this time we chose to study and compare sampling-based and trajectory-based algorithms, classes of path-planning algorithms that have proven robust for motion planning in the SSL. Despite the popularity of RRT-based algorithms (RRT*, RRT-Connect, ERRT) among SSL teams, we opted for the Bang-Bang trajectory [9], given that its traditional implementation already satisfies all of our requirements and strongly integrates pure planning, as a series of points, with the navigation proper, since it computes the robot's velocity along the path (a simplified sketch is given below).

Bang-Bang trajectory-based path planning has been studied and adopted by reference teams, being reported as important to the results achieved by Tigers [8, 16] and Er-Force [11, 17]. Both implementations are open source and demonstrate distinct ways of implementing the algorithm. While Tigers bet on an approach based on selecting intermediate points from a constellation of points around a given origin, connected by trajectory segments to the target, Er-Force goes with a more open search approach, searching over the trajectory's time and orientation. Each implementation has its advantages and drawbacks, and we sought to validate both approaches and their code bases.
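For intuition, the 1D core of a bang-bang profile fits in a few lines. The sketch below handles only the simplified rest-to-rest case; the full algorithm [9], as used by both reference teams, also handles nonzero initial velocity and synchronizes the x and y axes. All names and limits are hypothetical.

```python
import math

def bang_bang_1d(distance, v_max, a_max):
    """Time-optimal rest-to-rest 1D profile under |v| <= v_max and
    |a| <= a_max. Returns (t_accel, t_cruise, t_total)."""
    d = abs(distance)
    # Distance consumed by accelerating to v_max and braking back to 0.
    d_ramps = v_max ** 2 / a_max
    if d >= d_ramps:
        # Trapezoidal profile: accelerate, cruise at v_max, decelerate.
        t_accel = v_max / a_max
        t_cruise = (d - d_ramps) / v_max
    else:
        # Triangular profile: the peak velocity stays below v_max.
        t_accel = math.sqrt(d / a_max)
        t_cruise = 0.0
    return t_accel, t_cruise, 2 * t_accel + t_cruise
```

Because the profile directly yields velocities over time, sampling it gives the velocity setpoints along the path, which is what lets the planner hand commands straight to navigation.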
We converted Tigers' implementation to C++, but the performance achieved and some behavior discrepant from the Java version made us adopt Er-Force's base, which was already developed in C++ and had an execution time below 1 ms. To achieve this execution time, the algorithm uses a reduced number of search iterations and a large search bias around previously found solutions. Thus, in situations where the previous path is no longer possible and/or a large deviation is required to reach the target, the search fails to create a trajectory. We therefore merged some of our own ideas into the algorithm, such as an additional validation that resets the robot's movement to prevent the speed in one direction from distorting the trajectory too much in that direction, as well as some ideas from the Tigers' solution, such as using the constellation of points around the robot to force further exploration of obstacle-contour directions, at the cost of increased time complexity, though still contained within a bounded time frame.

#### Correlation with Navigation

Path planning and navigation are inherently related. A path planning that does not consider the robot's dynamics causes a big discrepancy in the obtained result, mainly with the visibility graph, which has sharp changes of velocity and direction along the generated path. Then, using only the displacement \(\Delta S\), the navigation needs to predict the output for the robot to fulfill and to generate all of the movement's state transitions that affect the planning result, but none of this feedback is propagated into the next path-planning cycle.

Aiming to close this control loop, trajectory-based algorithms are fed the robot's current state, with its position and velocity. But the software relies on data from the vision system, where this "current" state corresponds to a past state of the robot, the one captured in the currently received frame. Furthermore, an SSL robot is capable of changing its velocity in such a way that it is difficult for vision processing to keep up; therefore, approaches based only on vision limit the robot's state-transition updates to the camera frame rate, typically 60 Hz. This limitation in detecting the robot's current state mainly impacts the ability to control the robot's acceleration and deceleration while the path is being adjusted across cycles, since the robot cannot reach the expected state.

Seeking to mitigate this effect, we developed methods for estimating the current state of the robot based on the current vision frame's information, the vision processing delay, and the speed commands sent to the robot during this delay: starting from the state seen by vision, we replay the commands sent to the robot over their execution times, thus estimating its real current state. Another, more effective approach to this problem, which eliminates assumptions about the robot's current state, would be for the robot to compute its current trajectory segment itself, performing embedded navigation; this is a solution adopted by the Tigers team that we intend to invest in over the following years.

## 4 Acknowledgements

First, we would like to thank our advisors and the Centro de Informática (CIn) - UFPE for all the support and knowledge during these years of project development. We would also like to thank all our sponsors: CESAR, Microsoft, Veroli, HSBS, Moura, and Mathworks.
2307.08657
Neural Image Compression: Generalization, Robustness, and Spectral Biases
Recent advances in neural image compression (NIC) have produced models that are starting to outperform classic codecs. While this has led to growing excitement about using NIC in real-world applications, the successful adoption of any machine learning system in the wild requires it to generalize (and be robust) to unseen distribution shifts at deployment. Unfortunately, current research lacks comprehensive datasets and informative tools to evaluate and understand NIC performance in real-world settings. To bridge this crucial gap, first, this paper presents a comprehensive benchmark suite to evaluate the out-of-distribution (OOD) performance of image compression methods. Specifically, we provide CLIC-C and Kodak-C by introducing 15 corruptions to the popular CLIC and Kodak benchmarks. Next, we propose spectrally-inspired inspection tools to gain deeper insight into errors introduced by image compression methods as well as their OOD performance. We then carry out a detailed performance comparison of several classic codecs and NIC variants, revealing intriguing findings that challenge our current understanding of the strengths and limitations of NIC. Finally, we corroborate our empirical findings with theoretical analysis, providing an in-depth view of the OOD performance of NIC and its dependence on the spectral properties of the data. Our benchmarks, spectral inspection tools, and findings provide a crucial bridge to the real-world adoption of NIC. We hope that our work will propel future efforts in designing robust and generalizable NIC methods. Code and data will be made available at https://github.com/klieberman/ood_nic.
Kelsey Lieberman, James Diffenderfer, Charles Godfrey, Bhavya Kailkhura
2023-07-17T17:14:17Z
http://arxiv.org/abs/2307.08657v2
# Neural Image Compression: Generalization, Robustness, and Spectral Biases

###### Abstract

Recent neural image compression (NIC) advances have produced models which are starting to outperform traditional codecs. While this has led to growing excitement about using NIC in real-world applications, the successful adoption of any machine learning system in the wild requires it to generalize (and be robust) to unseen distribution shifts at deployment. Unfortunately, current research lacks comprehensive datasets and informative tools to evaluate and understand NIC performance in real-world settings. To bridge this crucial gap, first, this paper presents a comprehensive benchmark suite to evaluate the out-of-distribution (OOD) performance of image compression methods. Specifically, we provide CLIC-C and Kodak-C by introducing 15 corruptions to the popular CLIC and Kodak benchmarks. Next, we propose spectrally inspired inspection tools to gain deeper insight into errors introduced by image compression methods as well as their OOD performance. We then carry out a detailed performance comparison of a classical codec with several NIC variants, revealing intriguing findings that challenge our current understanding of the strengths and limitations of NIC. Finally, we corroborate our empirical findings with theoretical analysis, providing an in-depth view of the OOD performance of NIC and its dependence on the spectral properties of the data. Our benchmarks, spectral inspection tools, and findings provide a crucial bridge to the real-world adoption of NIC. We hope that our work will propel future efforts in designing robust and generalizable NIC methods. Code and data will be made available at [https://github.com/klieberman/ood_nic](https://github.com/klieberman/ood_nic).

## 1 Introduction

Consider the Mars Exploration Rover, whose scientific objective is to search for clues to past activity of water (and perhaps life) on Mars. To achieve this, the rover collects images of interesting rocks and soils to be analyzed by the scientists on Earth. Sending these images down the Earth-bound data stream in their original form is too slow and expensive due to limited bandwidth. Thus, it is well accepted that image compression could play a key role in producing scientific breakthroughs [41]. Employing image compression in such a setting is challenging for three main reasons: 1) a _high compression ratio_ is desired due to low communication bandwidth, 2) given the battery-operated nature of these devices, the compression module has to be _lightweight_ so it consumes less memory and power, and 3) _robustness and generalization_ to environmental noise and domain shifts, respectively, are desired due to limited Mars-specific training data. These requirements are not specific only to the planetary exploration use case but arise in a wide range of scientific applications using image compression in the wild [32].

Recently, neural image compression (NIC) has demonstrated remarkable performance in terms of rate-distortion and runtime overhead on in-distribution (IND) data [9, 42], satisfying requirements 1) and 2). However, there is limited work on understanding the out-of-distribution (OOD) robustness and generalization performance of image compression methods (requirement 3) [39]. Our work is driven by several open fundamental empirical and theoretical questions around this crucial issue. _How can the expected OOD performance of image compression models be reliably assessed? Can we gain a deeper understanding of the modus operandi of different image compression methods?
How do training data properties and biases impact data-driven compression methods?_

**Main Contributions:** This paper takes a critical view of the state of image compression and makes several contributions toward answering the aforementioned questions. First, we design _comprehensive benchmark datasets_ for evaluating the OOD performance of image compression methods. Inspired by existing OOD benchmarks for classification and detection [25, 27, 50, 49], we design CLIC-C and Kodak-C by introducing 15 common shifts emulating train-deployment distribution mismatch to the popular CLIC and Kodak datasets. Next, we focus on understanding image compression performance. The de-facto approach is to use rate-distortion (RD) curves measured with perceptual quality metrics, such as PSNR. Such scalar metrics, although easy to compute, are known to be extremely limited in what they can capture and sometimes can even be misleading [55, 53]. To complement RD curves, we propose _spectrally-inspired inspection tools_ that provide a more nuanced picture of (a) the compression error, and (b) the OOD performance of a given method. Specifically, we introduce a power spectral density (PSD) based approach to understand the reconstruction error. Our approach not only quantifies how much error was made but also highlights precisely where it was made (in the frequency domain). Similarly, to understand the OOD performance of a compression method in unseen deployment scenarios, we propose _Fourier error heatmaps_, a visualization tool for highlighting the sensitivity of the reconstruction performance of a compression method to different perturbations in the frequency domain. Using our benchmark datasets and inspection tools, we carry out _a systematic empirical comparison_ of classical codecs (i.e., JPEG2000, VTM) with various variants of NIC models (e.g., original, variable rate, pruned). Finally, we develop _theoretical tools_ to connect NIC OOD performance with its training data properties.

**Main Findings:** Our analysis resulted in some invaluable insights about the state of image compression. We summarize some of our findings below.

* NIC produces inherently different spectral artifacts than classical codecs, even when both methods have the same PSNR (or compression rate) in many cases. Our tools help uncover this hidden spectral bias and highlight the limitations of the de-facto RD curve-based performance comparison.
* As the compression rate increases, NIC and classical codecs prioritize encoding different frequencies, in turn highlighting that they use very different compression mechanisms. By precisely characterizing this behavior, our tools help in advancing the current understanding of the modus operandi of image compression methods.
* In OOD settings, NIC and classical codecs can fail (or succeed) in the same (or different) ways. By attributing the successful and failed cases to the spectral characterization of OOD shifts in our benchmark dataset, we gain a holistic understanding of the OOD performance of image compression methods.
* Identifying the suitable compression method becomes exceptionally challenging without knowledge of the spectral characteristics of OOD shifts. Our systematic evaluation identifies this open issue with current compression methods and suggests that designing next-generation NIC models that can adapt themselves at runtime based on the spectral nature of the data is a potentially worthwhile direction to pursue in the future.
We corroborate our findings with a detailed theoretical analysis, showing that multiple overarching trends in our experimental results can be attributed to neural compression models' spectral bias. _Appendix A has a detailed related work discussion, while all references are listed in the main paper._

## 2 Out-of-distribution image compression datasets

To evaluate NIC in the presence of environmental or digital distribution shifts, we generated variants of the CLIC and Kodak datasets, which we refer to as CLIC-C and Kodak-C. Following the techniques presented in [25] for studying the performance of DNN classifiers encountering distributional shifts "in the wild", our -C datasets consist of images augmented by 15 common corruptions. For each image in the original dataset, the -C dataset contains a corrupted version of the image for each of the 15 common corruptions1, and for each of five corruption severity levels, with 1 being the lowest severity and 5 being the highest. A sample of some corruptions on CLIC-C is provided in Figure 1(a).

Footnote 1: We used github.com/bethgelab/imagecorruptions to apply corruptions to Kodak and CLIC images

While each -C dataset offers a broad sampling of environmental or digital image corruptions, it also provides a spectrally diverse collection of corruptions, in the sense that each corruption can be categorized as low, medium, or high frequency based on the frequency content used for perturbations. We will write \(PSD(\cdot)\) to denote the function that converts the input image from the spatial to the frequency domain by computing the power spectral density of the input. Practically, computing \(PSD(\cdot)\) is done by applying the fast Fourier transform (FFT) [11], followed by a shift operation to center the zero-frequency component, then taking the absolute value. Now suppose we have a set \(\mathcal{X}=\{X_{k}\}_{k=1}^{N}\) of uncorrupted images and some corruption function \(c(\cdot)\) (e.g., frost, Gaussian noise, etc.). We analyze the spectrum of each corruption \(c(\cdot)\) by computing \(\frac{1}{N}\sum_{i=1}^{N}PSD(X_{i}-c(X_{i}))\) (see Figure 1(a)). Identifying the dominant frequencies in the Fourier spectrum for each corruption yields a rough categorization into low, medium, and high-frequency corruptions, provided in Table 1(b).

## 3 Spectral inspection tools

While existing scalar metrics, such as PSNR, are able to summarize the visual similarity of reconstructed images to the original, we will demonstrate that such metrics can provide an incomplete (and sometimes misleading) picture when measuring the impact of compression in OOD settings. Notably, existing tools do not consider the impact of compression on different frequency ranges of images within a dataset. To more thoroughly analyze the effects of image compression, we propose to measure and visualize the effect of image compression in the spectral domain. Given an image compression model \(\mathcal{C}\) that returns reconstructed images, we introduce tools for analyzing compression error in the Fourier domain to better understand (\(i\)) which _spectral frequencies_ are distorted by \(\mathcal{C}\), (\(ii\)) the _OOD generalization_ error, and (\(iii\)) the _robustness_ error in the presence of distributional shifts.

**Definition 3.1** (Spectral Measure of Distortion Error). To analyze (\(i\)), we evaluate the image compression model \(\mathcal{C}\)'s ability to reconstruct components of an image across a range of frequencies.
To quantify this, we compute the average PSD of the difference between each image \(X_{k}\) in a dataset \(\mathcal{X}\) and the reconstructed version \(\mathcal{C}(X_{k})\) of \(X_{k}\): \(\mathcal{D}(\mathcal{C},\mathcal{X}):=\frac{1}{N}\sum_{k=1}^{N}PSD(X_{k}-\mathcal{C}(X_{k}))\).

Figure 1: **(a) Top row:** An original CLIC image and the same image with 3 different corruptions in CLIC-C (severity 5). **Bottom left:** Average PSD of the CLIC dataset, \(\frac{1}{N}\sum_{k=1}^{N}PSD(X_{k})\). **Bottom row, other figures:** Average PSD of the difference between the corrupted images and the clean images for each given CLIC-C corruption \(c\), \(\frac{1}{N}\sum_{k=1}^{N}PSD(c(X_{k})-X_{k})\). **(b)** CLIC-C corruptions categorized as low, medium, or high based on the corruption's average \(PSD\).

**Definition 3.2** (Spectral Measure of OOD Generalization Error). For (\(ii\)), we evaluate \(\mathcal{C}\)'s ability to faithfully reconstruct OOD images. To quantify this, we extend the metric \(\mathcal{D}(\mathcal{C},\mathcal{X})\) to account for a corrupted version \(c(\mathcal{X})\) of \(\mathcal{X}\) as follows: \(\mathcal{G}(\mathcal{C},\mathcal{X},c):=\frac{1}{N}\sum_{k=1}^{N}PSD(c(X_{k})-\mathcal{C}(c(X_{k})))\).

**Definition 3.3** (Spectral Measure of OOD Robustness Error). For (\(iii\)), we evaluate \(\mathcal{C}\)'s denoising ability. To quantify this, we compute the average PSD of the difference between each uncorrupted image \(X_{k}\) and the reconstructed version \(\mathcal{C}(c(X_{k}))\) of the corresponding corrupted image \(c(X_{k})\): \(\mathcal{R}(\mathcal{C},\mathcal{X},c):=\frac{1}{N}\sum_{k=1}^{N}PSD(X_{k}-\mathcal{C}(c(X_{k})))\).

For simplicity, when \((\mathcal{C},\mathcal{X},c)\) is clear from the context, we will just write \(\mathcal{D}\), \(\mathcal{G}\), or \(\mathcal{R}\) (a short numpy sketch of these measures is given in Section 4). Note that \(\mathcal{G}\) provides insight into the compression model \(\mathcal{C}\)'s ability to generalize to a distribution shift \(c\), while \(\mathcal{R}\) visualizes the denoising effect (or lack thereof) of \(\mathcal{C}\) across the frequency domain. In Appendix B, we present results using an additional tool, _the Fourier heatmap_, which utilizes Fourier basis perturbations as corruptions and is used to corroborate our findings for the specific -C datasets and corruptions we consider. This tool can be leveraged when specific OOD data is unavailable.

## 4 Experiments and findings

We analyze the performance of the following image compression methods. Further details on their model architectures and training can be found in Appendices I to L.

**Classical codecs:** We apply the JPEG2000 algorithm over several compression rates \(q\).

**Neural Image Compressors (NIC):** NIC model optimization uses a hyperparameter \(\lambda\) to control the relative weight of distortion (quality of reconstruction) and rate (level of compression) terms in the objective function. Our experiments include eight **Fixed-Rate (FR)** models, each trained on a single \(\lambda\) value, and one **Variable-Rate (VR)** model, trained over a continuous range of \(\lambda\) values using loss conditional training [17]. We include the VR model for two reasons: (a) VR models are more desirable in the wild due to reduced space constraints, and (b) to observe how approximating a fixed-rate model as a variable-rate model impacts spectral artifacts across frequencies.
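As a concrete reference for the spectral tools of Section 3, the numpy sketch below shows one way to compute them; `compress` and `corrupt` stand in for a codec \(\mathcal{C}\) and a shift \(c\), images are assumed to be equal-sized float arrays, and all function names are ours rather than the released code's.

```python
import numpy as np

def psd(img):
    # FFT over the spatial axes, zero frequency shifted to the center,
    # magnitude taken (the PSD operation of Section 2); a channel axis,
    # if present, is averaged for plotting.
    spec = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    mag = np.abs(spec)
    return mag.mean(axis=-1) if mag.ndim == 3 else mag

def spectral_errors(images, compress, corrupt):
    """Average distortion (D), OOD generalization (G), and OOD
    robustness (R) spectra over a dataset (Definitions 3.1-3.3)."""
    D = np.mean([psd(x - compress(x)) for x in images], axis=0)
    G = np.mean([psd(corrupt(x) - compress(corrupt(x))) for x in images], axis=0)
    R = np.mean([psd(x - compress(corrupt(x))) for x in images], axis=0)
    return D, G, R
```

Rendering D, G, and R as heatmaps yields plots of the kind shown in Figures 2 and 4, with low spatial frequencies at the center.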
For both fixed- and variable-rate models, the distortion objective is mean squared error (MSE) (_i.e._, the model is optimized for PSNR), and in both cases, our base NIC architecture is the scale hyperprior model of [9]. All NIC models were optimized on the train split of the 2020 CLIC dataset [52]. Further training details can be found in Appendix L. For NIC, we also (a) designed and analyzed pruned variants, and (b) analyzed MS-SSIM optimized versions. These additional comparisons, as well as comparisons using the additional codec VTM (h.266) [29], are provided in Appendices C and E.

**Evaluation setup.** We compare the distortion, robustness, and generalization error of different image compression methods under three constraints: (a) **no constraint**, (b) **fixed-bpp**, and (c) **fixed-PSNR**. In (a), we compare methods over their full range of rate-distortion tradeoffs by generating rate-distortion curves. In (b), we compare models with hyper-parameters which give a very similar bpp result on a particular dataset. For example, we find that on the CLIC dataset, FR NIC with \(\lambda=0.15\), VR NIC with \(\lambda=0.21\), and JPEG2000 with \(q=10\) all give a bpp very close to 1.21. Thus, comparing these three models with those hyper-parameters on CLIC under a fixed-bpp constraint _emulates a setting in which a fixed budget is available to store images_. Analogously, in (c) we compare models with hyper-parameters yielding a fixed PSNR. This _emulates a setting with a requirement on minimum allowable reconstructed image quality_. Scenarios (b) and (c) are used when evaluating \(\mathcal{D},\mathcal{G},\mathcal{R}\), Fourier heatmaps, and accuracy on a downstream task.

**Test data.** All models are tested on (a) in-distribution (IND) and (b) corrupted (or OOD) datasets. For (a), we use the 2020 CLIC test split, the full Kodak dataset, and the ImageNet validation split. For (b), we use the corresponding -C datasets for each of the datasets in (a). The main body contains results for the CLIC/CLIC-C dataset. Analogous results for Kodak are in Appendix F.

### 4.1 Evaluating spectral distortion on IND data

On in-distribution (IND) data, the existing RD curve metrics in the center of Figure 2 highlight the established trend that NIC models outperform the JPEG2000 model over the compression rates that the NIC model is trained on (bpp \(\in(0.1,1.5)\)) [9]. VR NIC obtains the same performance as FR NIC for low and moderate bpps; however, the _FR NIC model outperforms the VR NIC model at higher bpps_, despite the fact that both models were trained on the same range of \(\lambda\). This result follows [17] and suggests that the VR NIC may not be expressive enough to learn the high-PSNR regime.

Next, we use our spectral inspection tool \(\mathcal{D}\) to better understand the effects of different image compression methods. Specifically, Figure 2 shows plots of \(\mathcal{D}\) under three fixed-bpp and three fixed-PSNR scenarios on the clean CLIC dataset. We highlight some surprising insights below.

**Two methods yielding the same PSNR (or bpp) can produce very different spectral artifacts.** Under the fixed-PSNR constraint (right side of Figure 2), each column consists of methods with hyper-parameters selected to give very similar PSNRs on the CLIC test set (_e.g._, models in the "high psnr" column all have PSNR \(\approx 36.85\)). Despite having comparable PSNRs, the plots of \(\mathcal{D}\) vary greatly between the NIC models and JPEG2000.
In particular, NIC models distort high frequencies significantly more than medium frequencies (notice the warmer-colored rings around the edges of the \(\mathcal{D}\) plots with cooler-colored centers). JPEG2000, on the other hand, distorts low and medium frequencies more than high frequencies (notice the large rectangles of warmer colors). This same pattern holds under the fixed-bpp constraint (left side of Figure 2). Furthermore, VR NIC and FR NIC have the same error patterns, but VR NIC does leave a slightly higher magnitude of error in some settings (_e.g._, med. bpp, high bpp, and high psnr). This suggests that _NIC models produce inherently different spectral artifacts than classical codecs_.

**As the compression rate increases, NIC and JPEG2000 prioritize different parts of the spectrum.** On the left side of Figure 2, each column represents a different "budget" scenario where the three methods have hyper-parameters which result in the models giving very similar bpps on the CLIC test set. Although it was previously known that the quality of the reconstructed images decreases as the bpp decreases, it was not previously known _which_ frequencies NIC models distort to achieve a given bpp. The \(\mathcal{D}\) plots show that JPEG2000 models corrupt low- and mid-frequency regions starting at low and moderate compression rates, and these regions become more severe as the budget decreases. NIC models do almost the opposite: they sacrifice the highest frequencies first and expand this region into lower frequencies at more severe compression rates (_i.e._, as bpp decreases). These observations demonstrate that _NICs use a very different compression mechanism than classical codecs_.

Figure 2: **Visualizing distortion via CLIC test set evaluation. Left:** spectral measure of in-distribution reconstruction error \(\mathcal{D}\) under the fixed-bpp constraint at three rates. **Center:** Rate-distortion curves with vertical lines indicating fixed-bpp values and horizontal lines indicating fixed-PSNR values. **Right:** \(\mathcal{D}\) under the fixed-PSNR constraint. Each \(\mathcal{D}\) plot is labeled with a tuple of that model's (bpp, PSNR) on CLIC. Hotter colors (red) indicate more error in that frequency range.

### 4.2 Evaluating the generalization and the robustness on OOD data

We use the CLIC-C dataset and our spectral tools to study the OOD performance of different image compression methods. We show results for example shifts (one low-, one medium-, and one high-frequency representative) and three severities (1, 3, and 5, where 5 is the most severe) in Figures 3 and 4, and discuss several interesting findings below. The remaining results are reported in Appendices C, E, and H.

#### 4.2.1 Rate-distortion curves on OOD data

**Image compression models generalize to low- and mid-frequency shifts better than high-frequency shifts.** The top row of Figure 3 shows how well different compression models generalize to shifted images in terms of RD curves. In other words, these plots show how well a compressor \(\mathcal{C}\) can reconstruct a given shifted image \(c(\mathcal{X})\) in terms of the PSNR of \(\mathcal{C}(c(\mathcal{X}))\) with respect to \(c(\mathcal{X})\). The three examples of corruption in this figure show vastly different trends. On the low-frequency corruption (snow), all three models can reconstruct \(c(\mathcal{X})\) almost as well as these models can reconstruct clean images (note the PSNR range in the top left plot of Figure 3 is about 24-37, while the PSNR range for the clean data in Figure 2 is about 28-40).
Interestingly, the three models can reconstruct the images with the glass blur (a medium-frequency shift) _better_ than they can reconstruct clean images (the PSNR of the data shifted with glass blur ranges from about 31-48). These results suggest that _image compression models are fairly effective at generalizing to low-frequency shifts and very effective at generalizing to medium-frequency shifts_. However, the high-frequency shift (shot noise) gives a starkly different result. All three models give very low PSNRs with respect to \(c(\mathcal{X})\). Even at the lowest severity (severity=1), this PSNR is only in the low 20s. As the severity increases (_i.e._, as the lines become more transparent), the PSNR decreases even more, to the point that, at the highest severity, none of the models can achieve a PSNR higher than 12.5. Notably, the main factor determining the PSNR is the severity of the corruption and not the model type or bpp. This suggests that _it is significantly harder to generalize to high-frequency data than to low- or mid-frequency data_.

**NIC models' bpp usage is highly sensitive to the data shift type.** In the previous point, we saw that, for each corruption, all three models achieved very similar PSNRs with respect to \(c(\mathcal{X})\). The RD curves for these models, however, differ significantly because these models use different bpps to achieve these PSNRs2. In general, JPEG2000 is very consistent in terms of its bpp usage: it uses 0-2 bpps regardless of the corruption type. NIC models, however, give different bpp ranges depending on the frequency of the shift. In particular, NIC models require fewer bpps to reconstruct the glass blur shifted images (_e.g._, both NIC models and JPEG2000 can achieve PSNRs with respect to \(c(\mathcal{X})\) of > 45; however, the NIC models need bpp < 1, while JPEG2000 needs bpp > 1.5). This suggests that NIC models can capitalize on "easier to model" images. On the other hand, NIC models may also increase the bpp to no avail. Consider the shot noise corruption (a high-frequency corruption): here NIC models use much higher bpps (up to bpp=4). Despite this significant increase in bpp, the NIC models are unable to increase the PSNR (notice how the curves are almost flat). This highlights that _although NIC and JPEG2000 can achieve similar PSNRs on a given shift, NIC is significantly more sensitive to the shift type than JPEG2000 in terms of the compression rate required to achieve these PSNRs_.

Footnote 2: Note that we use the same values for \(\lambda\) on NIC models and quality \(q\) on JPEG2000 for the corruptions as we do for the clean images.

Figure 3: **Rate-distortion curves for a representative low, medium, and high-frequency shift.** Each shift and model has three curves for severity=1 (least transparent), severity=3, and severity=5 (most transparent). **Top row:** generalization of \(\mathcal{C}(c(\mathcal{X}))\) w.r.t. \(c(\mathcal{X})\) (_i.e._, PSNR of the reconstructed shifted images w.r.t. the original shifted images). **Bottom row:** denoising of \(\mathcal{C}(c(\mathcal{X}))\) w.r.t. \(\mathcal{X}\) (_i.e._, PSNR of the reconstructed shifted images w.r.t. the original clean images).

**NIC models are better at denoising high-frequency corruptions than JPEG2000.** The second row of Figure 3 shows how well different compressors \(\mathcal{C}\) denoise corrupted images in terms of
These results show that all the models fail at denoising snow (low-frequency) and glass blur (medium-frequency) corruptions (PSNR does not change much with an increase in bpp). However, NIC and JPEG2000 show different performances in terms of denoising high-frequency corruptions. Specifically, NIC models achieve significantly better PSNR with respect to \(\mathcal{X}\) than JPEG2000. This suggests that _NICs may be a more effective method for denoising high-frequency corruptions than the previously-used JPEG and JPEG2000 methods_. The implication of this finding extends to the research area of adversarial example denoising [5]. #### 4.2.2 Spectral analysis on OOD data We now take a deeper look at the Section 4.2.1 findings using our spectral inspection tools. We report only FR NIC results here as VR NIC results are very similar (see Figure 10 in Appendix D). **Spectral artifacts are similar for low-frequency shifts and clean images.** The patterns of \(\mathcal{G}\) in Figure 4 measure how well each model generalizes to (or reconstructs) the shifted images. For the low-frequency shift (Figure 4 left side, top row), the four plots look strikingly similar to the patterns exhibited by the same models on clean data (Figure 2 top and bottom row): NIC models distort high frequencies more than low frequencies while JPEG2000 distorts low and medium frequencies more than high frequencies. This suggests that _both methods' modus operandi for generalization to data with low-frequency shifts is similar to their modus operandi on clean data_, which makes sense as clean data is dominated by low/mid frequencies. Interestingly, these generalization differences between NIC and JPEG2000 compressors are not accompanied by differences in \(\mathcal{R}\). Both methods show similar patterns in their plots of \(\mathcal{R}\) patterns and these in turn look similar to the snow corruption plot in Figure 0(a). This is consistent with our finding from Figure 3 which showed that both NIC and JPEG200 fail to denoise low-frequency corruptions. In other words, Figure 4 shows that _NIC and JPEG2000 fail in a very similar manner on low-frequency signal denoising tasks._ **NIC and JPEG2000 make almost no generalization error on medium-frequency shifts.** The left side of the second row of Figure 4 shows that both FR NIC and JPEG2000 have very small generalization errors (magnitudes < 0.2), and this low error is relatively uniform across all frequencies. This shows that both models are very effective at reconstructing all the frequencies in images with glass blur--in fact, they can reconstruct these images _better_ than they can reconstruct clean images--corroporating our first finding in Section 4.2.1. Again the \(\mathcal{R}\) plots for glass blur for both of these models look very similar to Figure 0(a); _this similarity has a simple explanation due to a fundamental tradeoff between generalization and robustness_. This relationship between \(\mathcal{R},\mathcal{G}\) and average PSD of corruptions is described more precisely in Appendix M.4. **High-frequency corruptions exaggerate differences observed in spectral artifacts between NIC and JPEG2000 on clean data.** Section 4.2.1 highlighted that high-frequency signals severely degrade the generalization performance of all image compression methods. From the RD curves with respect to the corrupt images (top right plot in Figure 3), we observe that at each severity the three models have almost identical performance in the usual 0-2 bpp range. 
_These results might lead us to expect that these models make similar reconstruction mistakes, but our spectral inspection tools indicate that this is not the case at all._ Our plots of \(\mathcal{G}\) on shot noise at severity=5 (bottom row, columns 2 and 4 of Figure 4) indicate that NIC models distort the higher frequencies significantly more than the low and medium frequencies, while JPEG2000 distorts medium and low frequencies more than high frequencies. Additionally, the bottom right plot of Figure 3 shows that for shot noise, NIC models achieve a higher PSNR of the reconstructed corrupt images with respect to the clean images than JPEG2000, i.e., they have a stronger denoising effect. In the bottom right of Figure 4 we see a more detailed picture: for high-severity shot noise, NIC models achieve lower \(\mathcal{R}\) in high frequencies. Also, note that the NIC models have their smallest errors in \(\mathcal{G}\) where they have their largest errors in \(\mathcal{R}\), and vice versa. Thus, these findings suggest that _NIC models behave similarly to a low-pass filter_.

Figure 4: **Generalization error \(\mathcal{G}\) and denoising error \(\mathcal{R}\) for FR NIC and JPEG2000.** We plot both spectral metrics for one low, medium, and high-frequency corruption at severities 1 and 5. Each plot is labeled with a tuple of that model's (bpp, PSNR) on the CLIC-C dataset with that corruption.

### 4.3 Impact of spectral artifacts on downstream applications

In practice, image compression algorithms may be used as a pre-processing step before images are used for another downstream task. For this reason, practitioners should consider the performance on downstream tasks when comparing compression methods. We analyze the effectiveness of the compression methods on the downstream task of ImageNet classification using a pre-trained ResNet-50 model and report the difference in top-1 accuracy after compression (Figure 5) [24].

**NIC can improve the robustness of downstream classification to high-frequency corruptions. However, JPEG2000 is more effective than NIC for low-frequency corruptions.** While, in general, compression degrades classification performance on clean images (all compressors show negative differences for the "clean" category in Figure 5), one interesting finding is that applying NIC compression at low compression rates can actually improve the robustness of the classification model against high-frequency corruptions. Specifically, at bpp=1.23, the difference in accuracy is _positive_ for high-frequency corruptions with NIC, meaning that the classification model gave a higher accuracy on the set of corrupted images after compression than it did on the original corrupted images. Compressing with JPEG2000 at the same rate caused a degradation in accuracy on these corruptions. However, JPEG2000 has a smaller degradation in accuracy compared to NIC on low-frequency corruptions. Thus, _the ideal compressor for downstream tasks is dependent on the type of corruption_.

**Pruning NIC models amplifies the robustness gains of NIC for downstream classification tasks.** Additional experimental results in Appendix C (Figure 9) show that _pruned VR NIC and NIC optimized for MS-SSIM act as even better high-frequency signal denoisers for this application_.

## 5 Theoretical analysis of the OOD performance of NIC

In this section, we corroborate our empirical findings in the previous section with theoretical results. For simplicity, here we summarize our results on linear autoencoder-based NIC methods.

Figure 5: **Effect of compressing corrupt images on classification accuracy.** Each bar shows the average difference in accuracy after compressing with different methods \(\mathcal{C}\) for a group of corruptions as classified in Table 1(b). Specifically, let \(A(X)\) be the top-1 accuracy of the model on dataset \(X\), measured in percentage points. Then we report \(A(\mathcal{C}(c(X)))-A(c(X))\) over all -C corruptions in the corruption category (or on clean ImageNet in the case of "clean"). Each subplot shows results under a different fixed-bpp constraint based on the bpps achieved by the compressors on the clean ImageNet dataset. Results for individual corruptions and additional NIC variants are in Appendix C.
## 5 Theoretical analysis of the OOD performance of NIC

In this section, we corroborate our empirical findings in the previous section with theoretical results. For simplicity, here we summarize our results on linear autoencoder-based NIC methods. For complete definitions, additional references, and proofs of the following statements, please refer to Appendix M. In Appendix M.3, we also provide a more general result applicable to nonlinear models.

Figure 5: **Effect of compressing corrupt images on classification accuracy.** Each bar shows the average difference in accuracy after compressing with different methods \(\mathcal{C}\) for a group of corruptions as classified in Table 1(b). Specifically, let \(A(X)\) be the top-1 accuracy of the model on dataset \(X\), measured in percentage points. Then we report \(A(\mathcal{C}(c(X)))-A(c(X))\) over all -C corruptions in the corruption category (or on clean ImageNet in the case of “clean”). Each subplot shows results under a different fixed-bpp constraint based on the bpps achieved by the compressors on the clean ImageNet dataset. Results for individual corruptions and additional NIC variants are in Appendix C.

Recall a classical observation: in the setting of linearly auto-encoding a mean-centered distribution, the reconstruction function (_i.e._, encoder-decoder composition) is a projection onto the high-variance principal components of the input data [18] (see also [6]). Combining this with well-known facts about statistics of natural images, we show that a linear autoencoder applied to a natural image dataset retains only low/mid (spatial) frequency components, which account for the majority of the variance. Using this result, we state theoretical explanations for multiple trends in our experiments3.

Footnote 3: Since this simplified model has no compression rate objective, one should only expect the above theoretical results to be predictive (or explanatory) of the high bpp/PSNR cases of our experiments.

**Lemma 5.1**.: _Let \(\mathcal{X}\) be a dataset of natural images and let \(\hat{\mathcal{X}}\) denote its (spatial) discrete Fourier transform. Assume the following (for supporting evidence see Appendix M):_

1. _The principal components of_ \(\hat{\mathcal{X}}\) _are roughly aligned with the spatial Fourier frequency basis._
2. _The associated variances are monotonically decreasing with frequency magnitude (more specifically, according to the power law_ \(\frac{1}{|i|^{n}+|j|^{n}}\)_)._

_If \(\mathcal{C}\) is a linear autoencoder trained by minimizing MSE on \(\mathcal{X}\), with latent space of dimension \(r\), and if "\(\widehat{\ \cdot\ }\)" denotes the spatial discrete Fourier transform, then for any data point \(X\) with Fourier transform \(\hat{X}\)_

\[\widehat{\mathcal{C}(X)}_{:,ij}\approx\begin{cases}\hat{X}_{:,ij}&:i^{2}+j^{2}\leq\frac{r}{\pi K}\\ 0&:\text{otherwise,}\end{cases}\]

_where \(K\) is the number of channels in the images in \(\mathcal{X}\) and where \(\hat{X}_{:,ij}\) denotes the components of \(\hat{X}\) corresponding to spatial frequency \((i,j)\)._

**Corollary 5.2**.: _Under the hypotheses of Lemma 5.1, the robustness error of \(\mathcal{C}\) to a corruption \(c\) (as defined in Definition 3.3), measured in spatial frequency \((i,j)\), is_

\[\mathcal{R}(\mathcal{C},\mathcal{X},c)_{ij}\approx\begin{cases}\frac{1}{N}\sum_{k}\lvert(\widehat{c(X_{k})}-\hat{X}_{k})_{:,ij}\rvert&:i^{2}+j^{2}\leq\frac{r}{\pi K}\\ \frac{1}{N}\sum_{k}\lvert(\hat{X}_{k})_{:,ij}\rvert&:\text{otherwise.}\end{cases}\]

**Corollary 5.3**.: _Under the hypotheses of Lemma 5.1, the generalization error of \(\mathcal{C}\) to a corruption \(c\) (as defined in Definition 3.2), measured in spatial frequency \((i,j)\), is_

\[\mathcal{G}(\mathcal{C},\mathcal{X},c)_{ij}\approx\begin{cases}0&:i^{2}+j^{2}\leq\frac{r}{\pi K}\\ \frac{1}{N}\sum_{k}\sqrt{|(\hat{X}_{k})_{:,ij}|^{2}+2(\hat{X}_{k})_{:,ij}(\widehat{c(X_{k})}-\hat{X}_{k})_{:,ij}+|(\widehat{c(X_{k})}-\hat{X}_{k})_{:,ij}|^{2}}&:\text{otherwise.}\end{cases}\]

Lemma 5.1 suggests that autoencoder compressors trained on natural images behave like a low-pass filter, in turn corroborating our claim in Section 4.2.2. Corollary 5.2 suggests that _autoencoder compressors trained on natural images are less robust to corruptions with large amplitude in low frequencies_ (in the sense that \(|(\widehat{c(X)}-\hat{X})_{:,ij}|^{2}\) is large for small values of \(i,j\)). This is indeed what we see in Figure 3 (bottom left), where snow corruptions are detrimental to PSNR of \(\mathcal{C}(c(\mathcal{X}))\) w.r.t. \(\mathcal{X}\), and Figure 4, where the \(\mathcal{R}(\mathcal{C},\mathcal{X},c)\) error for the NIC is concentrated in low frequencies. We also observe in Figure 5 that NIC is more beneficial for downstream classification accuracy in the case of high-frequency corruptions (e.g. shot noise) and less beneficial in the case of low-frequency corruptions (e.g. snow). On the other hand, the conclusion of Corollary 5.3 is more involved than that of Corollary 5.2: the "cross term" \(2\hat{X}_{:,ij}(\widehat{c(X)}-\hat{X})_{:,ij}\) is in general non-zero. However, there are many cases where, at least in expectation over the dataset \(\mathcal{X}\), this cross term vanishes (_e.g._, when \(c\) is additive noise). In such cases, Corollary 5.3 suggests that _compressors trained on natural images generalize less successfully to shifts with large amplitude in high frequencies_. In Figure 3 (top right) we see that shot noise corruptions are detrimental to PSNR of \(\mathcal{C}(c(\mathcal{X}))\) w.r.t.
\(c(\mathcal{X})\), and in Figure 4 we see that for the snow and shot noise corruptions, \(\mathcal{G}(\mathcal{C},\mathcal{X},c)\) is in fact concentrated in high frequencies.4 In summary, with both the theoretical analysis of a simple mathematical model and empirical results, we find that _NICs have a strong spectral bias that causes them to overfit to low-frequency training data_.

## 6 Conclusion and future directions

We proposed benchmark datasets and inspection tools to gain a deeper understanding of the robustness and the generalization behavior of image compression models in the wild. Using our spectral inspection tools, we uncovered the modus operandi of different compression methods. We also highlighted similarities and differences among them via a systematic OOD evaluation. While we have taken some first steps to understand the state of existing image compression schemes, many questions remain to be answered. First, exploring the use of our tools in applications beyond image compression is expected to provide interesting insights. Next, designing practical training approaches to overcome brittleness issues is a potentially worthwhile research direction. Finally, developing NIC methods to efficiently adapt to distribution shifts at runtime will mitigate some of these issues.

## Acknowledgements

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 22-DR-009 (LLNL-JRNL-851532). The third author was supported by the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. We thank Eleanor Byler for helpful comments on an earlier draft.
2304.00983
Modelling Maritime SAR Effective Sweep Widths for Helicopters in VDM
Search and Rescue (SAR) is searching for and providing help to people in danger. In the UK, SAR teams are typically charities with limited resources, and SAR missions are time critical. Search managers need to objectively decide which search assets (e.g. helicopter vs drone) would be better. A key metric in the SAR community is effective sweep width (W), which provides a single measure for a search asset's ability to detect a specific object in specific environmental conditions. Tables of W for different search assets are provided in various manuals, such as the International Aeronautical and Maritime SAR (IAMSAR) Manual. However, these tables take years of expensive testing and experience to produce, and no such tables exist for drones. This paper uses the Vienna Development Method (VDM) to build an initial model of W for a known case (helicopters at sea) with a view to predicting W tables for drones. The model computes W for various search object sizes, helicopter altitude and visibility. The results for the model are quite different from the published tables, which shows that the abstraction level is not yet correct, however it produced useful insights and directions for the next steps.
Alexander Sulaiman, Ken Pierce
2023-03-29T12:44:18Z
http://arxiv.org/abs/2304.00983v1
# Modelling Maritime SAR Effective Sweep Widths for Helicopters in VDM

###### Abstract

Search and Rescue (SAR) is searching for and providing help to people in danger. In the UK, SAR teams are typically charities with limited resources, and SAR missions are time critical. Search managers need to objectively decide which search assets (e.g. helicopter vs drone) would be better. A key metric in the SAR community is _effective sweep width_ (\(\mathbf{W}\)), which provides a single measure for a search asset's ability to detect a specific object in specific environmental conditions. Tables of \(\mathbf{W}\) for different search assets are provided in various manuals, such as the International Aeronautical and Maritime SAR (IAMSAR) Manual. However, these tables take years of expensive testing and experience to produce, and no such tables exist for drones. This paper uses the Vienna Development Method (VDM) to build an initial model of \(\mathbf{W}\) for a known case (helicopters at sea) with a view to predicting \(\mathbf{W}\) tables for drones. The model computes \(\mathbf{W}\) for various search object sizes, helicopter altitude and visibility. The results for the model are quite different from the published tables, which shows that the abstraction level is not yet correct; however, it produced useful insights and directions for the next steps.

## 1 Introduction

Search and Rescue (SAR) covers the search for persons in distress or danger, and the provision of aid to them. While there are several specialised fields, primarily based on the terrain in which the search is conducted, the general problem of search is similar across these. In essence, a _search manager_, who is responsible for a search, has a number of _search assets_; the search manager must select how best to use these assets to find the missing persons (_mispers_) or objects, based on last known location, search area, local knowledge etc. There are a range of search assets that can be used, e.g. humans, dogs, drones, each with some form of _sensor_, e.g. eyes, noses, cameras. Each of these assets has different characteristics in terms of their ability to search an area within a given period and a given level of success. Search managers must be able to make quick decisions on how to deploy their available assets during a search; this depends on being able to quickly quantify and compare available assets. Effective sweep width (\(\mathbf{W}\)) is a concept that helps in these decisions by providing a single metric for each asset's ability to search in a given set of conditions. \(\mathbf{W}\) is a key aspect of _search theory_: it allows diverse search assets to be compared easily in order to support fast and high-quality decisions at critical times. Search manuals, such as the IAMSAR Manual (see Section 2), provide tables of \(\mathbf{W}\) for different types of assets and conditions. The IAMSAR Manual, for example, provides tables of \(W\) for helicopters at a given height and visibility, with modifiers for known information, such as whether the misper is wearing a high-visibility life jacket. Accurate \(W\) tables are vitally important to effective SAR, but since they are produced primarily from empirical studies in the field, they are extremely expensive to produce. SAR teams in the UK are primarily operated by small charities (annual income of less than £1m) and staffed by volunteers, and cannot regularly run field trials to generate new information.
The increasing availability of low-cost drones --typically off-the-shelf quadcopters-- with high-quality cameras has led to interest in their use in civilian SAR. Unfortunately, \(W\) is not well understood for drones, and tables of \(W\) or guidelines for their use do not exist. \(W\) tables for helicopters and other search assets required "many years of experience and testing" [5, p.107] to develop, which is out of reach of civilian SAR teams. This leads to the potential of using modelling and simulation to run _virtual field trials_ to help develop \(W\) tables for drones. This paper presents some early results in work to explore this possibility. Given the limited understanding of search theory, this effort is split into various steps:

1. Develop a simple, discrete-event model of a field trial for a known \(W\) table (i.e. maritime helicopter search) to aid understanding of \(W\) and its calculation;
2. Refine the model to find the key factors affecting \(W\);
3. Propose a model for generating \(W\) for drones and predict \(W\); and
4. Run real field trials and evaluate predicted \(W\) against real \(W\) for drones.

This paper reports on the first step, provides suggestions for the second step, and gives future directions for the third. The results show that the model does indeed need to be refined to better reflect the reality of the search. We select the Vienna Development Method (VDM) for its ease of use and tool support for simulation, including combinatorial testing. Also, given that the _environmental conditions_ are a key factor, our expectation is that the abstract, two-dimensional 'ocean' will need to be replaced with a high-fidelity environment model, and VDM supports this seamlessly through the Functional Mock-up Interface (FMI) and INTO-CPS tool chain.

In the remainder of this paper, Section 2 provides background information on SAR concepts. Section 3 describes the modelling of a lateral range experiment. Section 4 covers the results and evaluation. Section 5 provides some closing remarks.

## 2 Background

This section introduces the key concept in SAR called Effective Sweep Width (\(W\)), and the aircraft and maritime SAR manual that provides data against which the results of this paper are compared.

### Effective Sweep Width

Effective search (or sweep) width and sweep width are used synonymously in the literature [6, 7, 5]. \(W\) quantifies how effectively a specific sensor detects a specific object in specific environmental conditions [5] by specifying a single measurement for each sensor by which sensors can be compared. \(W\) is a key concept in "search theory", which was developed in World War II for naval warfare by Koopman [6, 7]. \(W\) is defined as the area under the Lateral Range Curve (LRC) [6, 7, 4, 3, 11, 10], as shown in Equation 1. An LRC will be explained using a lateral range experiment [11].

\[W=\int_{-\infty}^{+\infty}p(x)dx \tag{1}\]

The lateral range experiment is represented in Figure 1. The idea of the lateral range experiment is that a sensor follows a straight path. Along the sensor's path, there are detection opportunities, represented as \(D_{1}\) to \(D_{6}\), for detecting objects \(O_{1}\) to \(O_{6}\). A solid arrow represents a detection, and a dashed arrow represents a missed detection. When an object is detected, it is detected at a given lateral range distance. At each lateral range distance, there is detection data for how many objects were detected compared to how many were there, and this is how the LRC is derived.
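As a minimal numerical illustration of Equation 1, the sketch below integrates an empirical lateral range curve with the trapezoidal rule. The lateral range bins and detection fractions are made-up example values in the spirit of Figure 2, not data from any trial.

```
import numpy as np

# Hypothetical lateral range experiment results: lateral range bins (km)
# and the fraction of detection opportunities that resulted in a detection.
lateral_range_km = np.array([-9.0, -6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
p_detect = np.array([0.10, 0.40, 0.85, 1.00, 0.80, 0.50, 0.10])

# Equation 1: W is the area under the lateral range curve p(x).
W = np.trapz(p_detect, lateral_range_km)
print(f"Effective sweep width W = {W:.2f} km")
```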
Figure 1: Lateral range experiment

Whether or not an object is detected is due to many factors. In the real world, thinking purely from the sensor's point of view, the sensor may not be perfect and may miss a detection, e.g. if the sensor is a human. The environment could have obstacles that can interfere with the detection. An object's physical properties can cause a missed detection, e.g. if it is too small. Getting a meaningful LRC depends on the detection opportunities: the more, the better, which is hard for a field experiment. The need for many detection opportunities highlights the need for a simulation model.

Figure 2 represents an example of a sensor's performance profile, in reference to its detection capability, after a lateral range experiment like Figure 1. This shows that, at lateral range \(0\), \(100\%\) of objects were detected. At lateral range \(-9\), \(10\%\) of objects were detected. At lateral range \(6\), \(50\%\) of objects were detected, and so on.

Figure 2: Nonlinear Lateral Range Curve

### Related Work

Values of \(W\) are typically derived from field experiments, e.g. experiments have been conducted for air-scent dog teams [2] and human visual range detection [1] for land SAR. Chiacchia & Houlahan also show that \(W\) is a good predictor of Probability of Detection (PoD) [1]. The main computer modelling simulation work for \(W\) is by Perkins [9], who found that drone \(W\) can be modelled by simulating a camera drone missing person search and that a field method can approximate \(W\). The method they used to get a drone \(W\) is by modelling a camera drone searching a grid by travelling up column zero, producing an LRC. \(W\) is then calculated from the area under this curve. The grid consists of targets and obstacles. One target is placed on each row in a random location. Obstacles are placed on each row randomly, and can obstruct the target being detected. The camera drone travels up column zero and detects laterally in each row if the camera drone is capable of detecting and there is not an obstacle in the way. Different variables are modelled, i.e., obstacle density, obstacle height, drone height, and camera lens angular field of view. The limitations are obstacle clumping and the need for a deeper understanding of how obstacle heights affect \(W\).

Perkins [8] found that a computer simulation model for a land SAR searcher can be used to quantify detection ranges for targets in different environments. They came to this conclusion by producing an LRC, where the area under the curve can be calculated to produce \(W\). The limitation is that the searcher never has a decrease in performance, unlike the real world. Obstacle density could also account for how opaque obstacles are, which affects detection of targets but is not taken into consideration.

### The IAMSAR Manual

The International Aeronautical and Maritime SAR (IAMSAR) Manual provides guidelines for aircraft and maritime SAR activities. The IAMSAR Manual is split into three volumes. Volume I is about the overall SAR system. Volume II is for SAR managers. Volume III is for when on a SAR mission. The IAMSAR Manual Volume II [5] provides a way of measuring the effectiveness of a sensor detecting a search object in given environmental conditions, e.g., sensor (helicopter), search object (ship) and environmental conditions (perfect). The measurement is in the form of \(W\) tables, which are empirically derived.
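To illustrate how such a table might be used in software, here is a small sketch of a \(W\) lookup keyed by object, altitude and visibility, with a multiplicative correction factor of the kind the manual applies for known information about the search object; the numeric values are placeholders, not figures from the IAMSAR Manual.

```
# Placeholder W table (km), keyed by (object, altitude in m, visibility in km);
# the values are illustrative, not taken from the IAMSAR Manual.
W_TABLE_KM = {
    ("Raft 8-person", 300, 9.3): 2.0,
    ("Ship 92", 600, 37.0): 30.0,
}

def lookup_w(obj: str, altitude_m: int, visibility_km: float,
             correction: float = 1.0) -> float:
    """Return W for the given conditions, scaled by a correction factor
    (e.g. a weather or search-object modifier)."""
    return W_TABLE_KM[(obj, altitude_m, visibility_km)] * correction

print(lookup_w("Raft 8-person", 300, 9.3, correction=0.9))
```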
## 3 The VDM Model

This section describes a model based on the \(W\) for helicopters table (N-5) from the IAMSAR Manual Volume II [5]. Figure 3 shows the high-level structure of this model as a UML class diagram. The remainder of this section describes the modelling of the sensor, environmental conditions, objects, lateral range experiment, and \(W\).

### Sensor

Listing 1.1 shows the altitudes constant for the sensor (human eye). The human eye can look from three altitudes (from the helicopter): \(150\) metres, \(300\) metres and \(600\) metres. The human eye detection operation in Listing 1.2 is based on the Rayleigh criterion in Equation 2, where \(\lambda\) is the wavelength of visible light and \(D\) is the aperture, which is the human eye's pupil diameter. The operation takes as input the altitude (sensor altitude minus object height), the object width, and the object position; it computes the minimum object size that can be resolved given the inputs and returns whether the object can be detected.

\[\theta\approx 1.22\frac{\lambda}{D} \tag{2}\]

### Environmental Conditions

Listing 1.3 shows the distance to horizon function, taken from the IAMSAR Manual Volume II. The distance to horizon function takes an altitude in metres as input, like those from Listing 1.1, and outputs a distance to the horizon in kilometres. This model assumes that objects cannot be seen past the horizon.

```
distanceToHorizonKilometres : real -> real
distanceToHorizonKilometres(altMetres) ==
    3.83 * MATH`sqrt(altMetres)
```
Listing 1.3: Distance to horizon function

Figure 3: VDM model structure

Listing 1.4 shows the visibilities constant, the visibility conditions that the sensor has to operate in when detecting. The last visibility, \(37\) kilometres, stands for \(37\) kilometres and greater.

```
VIS_KM : seq of real = [1.9, 5.6, 9.3, 18.5, 27.8, 37]
```
Listing 1.4: Visibilities constant

### Objects

Listing 1.5 shows the objects represented as a constant. The objects constant maps an object's name to its height and width in metres. For example, a "raft 1-person" has a height and width of 1 metre. "Person 1" has been commented out, as in this model it is treated the same as "raft 1-person", and so on with the other search objects.

```
OBJS : map seq of char to nat1 = {
    -- Same as raft 1-person
    -- "Person 1" |-> 1,
    "Raft 1-person" |-> 1,
    "Raft 4-person" |-> 4,
    "Raft 6-person" |-> 6,
    "Raft 8-person" |-> 8,
    "Raft 10-person" |-> 10,
    "Raft 15-person" |-> 15,
    "Raft 20-person" |-> 20,
    "Raft 25-person" |-> 25,
    "Power boat 2" |-> 2,
    -- Same as raft 6-person
    -- "Power boat 6" |-> 6,
    -- Same as raft 10-person
    -- "Power boat 10" |-> 10,
    "Power boat 16" |-> 16,
    "Power boat 24" |-> 24,
    "Sail boat 5" |-> 5,
    -- Same as raft 8-person
    -- "Sail boat 8" |-> 8,
    "Sail boat 12" |-> 12,
    -- Same as raft 15-person
    -- "Sail boat 15" |-> 15,
    "Sail boat 21" |-> 21,
    -- Same as raft 25-person
    -- "Sail boat 25" |-> 25,
    "Ship 37" |-> 37,
    "Ship 69" |-> 69,
    "Ship 92" |-> 92
}
```
Listing 1.5: Objects constant

### Lateral Range Experiment

Figure 4 shows the lateral range experiment grid setup. The setup consists of a grid of rows and columns. The number of rows determines the maximum number of possible detections. The column length is the sea length, defined as \(54200\) metres. Figure 5 represents the next step in the lateral range experiment: one object is placed in a random location in each row. Figure 6 represents the last step of the lateral range experiment: the sensor (human eye in a helicopter) goes along each row and detects laterally depending on Listings 1.2, 1.3, and 1.4. The lateral range experiment detection operation is shown in Listing 1.6.
The lateral range experiment detection operation first converts the distance to the horizon from Listing 1.3 from kilometres to metres, and then converts the visibility from Listing 1.4 from kilometres to metres. If the object's position is less than or equal to the distance to the horizon, the operation proceeds; otherwise it returns false for detection, as the assumption is that objects cannot be seen past the horizon. It then checks whether the visibility is the \(37\) kilometres case (which stands for \(37\) kilometres and greater). If so, the sensor's detect operation from Listing 1.2 determines the result, with no visibility limitation. Otherwise, it makes sure the object position is within the visibility and, if so, the sensor's detect operation again determines the result.

Figure 4: Lateral range experiment grid setup

Figure 5: Lateral range experiment randomly place objects

Figure 6: Lateral range experiment detection

```
detect : HumanEye * nat1 * nat1 * nat1 * real * real ==> bool
detect(sen, alt, objSize, objColInd, dstToHorizon, vis) == (
    dcl dstToHorizM : real := dstToHorizon * 1000;
    dcl visM : real := vis * 1000;
    dcl thirtySevenKmGreaterCase : nat1 := 37000;
    -- objects cannot be seen past the horizon
    if objColInd <= dstToHorizM then (
        -- 37 km stands for "37 km and greater": no visibility limit
        if visM = thirtySevenKmGreaterCase then (
            return sen.detect(alt, objSize, objColInd)
        ) else if objColInd <= visM then (
            return sen.detect(alt, objSize, objColInd)
        )
    );
    return false
)
```
Listing 1.6: Lateral range experiment detection operation

Listing 1.7 shows that each object has detection data based on its column position, mapped to a tuple consisting of how many times the object has been detected, how many detection opportunities there were, and the percentage detected.

```
objDetnD : map nat1 to (nat * nat * real)
```
Listing 1.7: Object detection data structure

### Effective Sweep Width

Listing 1.8 shows the main sweep width operation, which contains three for-loops. The outer loop is over the objects in Listing 1.5, the middle loop is over the sensor (human eye in a helicopter) altitudes in Listing 1.1, and the inner loop is over the visibilities in Listing 1.4. The distance to the horizon in Listing 1.3 is calculated using the altitude from the middle loop. In the inner loop, the lateral range experiment is executed, which produces object detection data based on the object size, altitude, visibility and distance to the horizon given by the sensor. Lastly, the object detection data is used to calculate \(W\).

```
main() ==
    for all objName in set dom obj.getObjects() do (
        for altitude in alt.getAltitudes() do (
            horiz.setDistance(altitude);
            for v in vis.getVisibilities() do (
                dcl objS : nat1 := obj.getObjects()(objName);
                lre.run(sen, altitude, objS, horiz.getDistance(), v);
                calculateW()
            )
        )
    )
```
Listing 1.8: Main effective sweep width operation

Listing 1.9 shows how \(W\) is calculated. \(W\) is calculated by getting the object detection data for all the column index positions the object was placed in during the lateral range experiment shown in Figure 5, and then totalling the percentage detected for each object column index position from Listing 1.7. The total is then converted from metres to kilometres and multiplied by two, as the lateral range experiment only calculates the right side of the LRC for the sensor detecting the object. The assumption for this model is that the LRC is symmetrical, not nonlinear like in Figure 2.
```
calculateW : () ==> ()
calculateW() == (
    dcl wM : real := 0;
    dcl objDetnD : map nat to (nat * nat * real) :=
        lre.getObjectDetectionData();
    dcl kmToM : nat1 := 1000;
    dcl symmetricalLRC : nat1 := 2;
    -- total the percentage detected at each object position
    for all objDetnDPos in set dom objDetnD do
        wM := wM + objDetnD(objDetnDPos).#3;
    w := wM / kmToM;          -- convert metres to kilometres
    w := w * symmetricalLRC   -- double: only the right side of the LRC is simulated
)
```
Listing 1.9: Calculate effective sweep width operation

## 4 Results

This section discusses the results produced from the simulation model: a table of \(W\) for human visual detection in a helicopter at sea, based on different object sizes, altitudes, and visibilities. It then discusses the absolute difference between the \(W\) table from this simulation model and the expected results from the IAMSAR Manual Volume II \(W\) for helicopters. Lastly, it focuses on the 600 metres altitude case for the simulation model and the IAMSAR manual \(W\) table, described using a three-dimensional scatterplot, to evaluate the results more closely.

Figure 7 shows the simulation model results from Section 3 for the \(W\) calculation with \(4000\) human visual detections for each object in a helicopter at sea at different altitudes and visibilities. E.g., the \(W\) for detecting a "Raft 8-person" search object at \(300\) metres altitude with \(9.3\) kilometres visibility is \(1.3\) kilometres. Figure 7 shows the same \(W\) for the objects "Person in water" and "Raft 1-person", "Power boat 6" and "Raft 6-person", "Power boat 10" and "Raft 10-person", "Sail boat 8" and "Raft 8-person", "Sail boat 15" and "Raft 15-person", and "Sail boat 25" and "Raft 25-person", as this is how the objects have been modelled. Figures 7 and 8 show the objects named "ship 37" instead of "ship 27-46", "ship 69" instead of "ship 46-91", and "ship 92" instead of "ship > 91" from the IAMSAR Manual Volume II [5].

Figure 8 shows a table of the absolute difference between \(W\) from the simulation model in Figure 7 and the IAMSAR manual sweep widths for helicopters results. E.g., the absolute difference in \(W\) for detecting a "ship 92" at \(600\) metres altitude with greater than \(37\) kilometres visibility is \(47.5\) kilometres.

Figure 8: Effective sweep widths table absolute difference between the simulation model 4000 human visual detections for each object in a helicopter at sea and the IAMSAR manual sweep widths for helicopters results

Figure 9 shows a three-dimensional scatterplot containing the object size (metres), visibility (kilometres) and \(W\) (kilometres) for the simulation model results vs the expected results. Figure 9 focuses on the altitude of \(600\) metres case, as other altitudes did not cause a significant difference in their results, the horizon only being a limiting factor at \(150\) metres altitude. The figure shows that at the lowest visibility of \(1.9\) kilometres the results are somewhat similar, but as the visibility increases, the \(W\) becomes greater. The differences in \(W\) become more significant as the visibility increases because the simulation model is limited to \(4000\) detections. More detections mean more detection opportunities, which will increase the \(W\).
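To see the effect of the number of detection opportunities directly, the following is a Python re-implementation sketch of the lateral range experiment as a Monte-Carlo estimate. The wavelength and pupil diameter values are illustrative assumptions (they are not stated above), and uniformly random lateral positions replace the VDM model's explicit grid; this is a sketch alongside the model, not the authors' code.

```
import math
import random

LAMBDA_M = 550e-9    # assumed visible-light wavelength (illustrative)
PUPIL_D_M = 5e-3     # assumed human-eye pupil diameter (illustrative)
THETA = 1.22 * LAMBDA_M / PUPIL_D_M   # Rayleigh criterion, Equation 2
SEA_LENGTH_M = 54200                  # column (sea) length used by the model

def horizon_m(alt_m: float) -> float:
    """IAMSAR distance-to-horizon formula (km), converted to metres."""
    return 3.83 * math.sqrt(alt_m) * 1000

def detected(alt_m: float, obj_size_m: float, lateral_m: float, vis_m: float) -> bool:
    if lateral_m > horizon_m(alt_m) or lateral_m > vis_m:
        return False
    slant_m = math.hypot(alt_m, lateral_m)
    return obj_size_m >= slant_m * THETA  # smallest resolvable size at this range

def sweep_width_km(alt_m: float, obj_size_m: float, vis_km: float, n: int) -> float:
    """Detected fraction of uniformly random lateral positions, scaled to the
    sampled interval and doubled under the symmetrical-LRC assumption."""
    vis_m = vis_km * 1000
    hits = sum(detected(alt_m, obj_size_m, random.uniform(0, SEA_LENGTH_M), vis_m)
               for _ in range(n))
    return 2 * (hits / n) * SEA_LENGTH_M / 1000

random.seed(1)
for n in (100, 1000, 4000, 20000):  # behaviour as opportunities increase
    print(n, round(sweep_width_km(600, 37, 37.0, n), 1))
```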
Figure 9: Effective sweep widths scatterplot simulation model 4000 human visual detections for each object in a helicopter at an altitude of 600 metres at sea vs the IAMSAR manual sweep widths for helicopters at an altitude of 600 metres results

## 5 Conclusions and Future Work

In this paper, an initial model of human visual detection in a helicopter at sea was built using VDM to derive a \(W\) table for each object size, given the sensor's (human eye) altitude and visibility. The initial model gives an idea of how this can be applied to the camera drone detection case. The model results differ considerably from the IAMSAR Manual Volume II \(W\) for helicopters. The reason for such a significant difference is that the model is limited to 4000 detections per object. More detections mean more detection opportunities, which will increase \(W\). The study shows that, using a lateral range experiment, one can calculate the detection capability \(W\) for any sensor detecting an object in specific environmental conditions. The lateral range experiment produces a lateral range curve, from which the area \(W\) under the curve can be calculated.

Future work is to apply the simulation model developed in this paper to the camera drone detection case, with some improvements. Improvements are split into the sensor (camera drone), the object, and the environmental conditions affecting \(W\). The sensor should include the camera's angular resolution, human detection performance factors (e.g. fatigue, assuming a human is in the loop), and the sensor moving in real-time. The object should include physical characteristics, e.g. colour. The environmental conditions should include weather, lighting, and obstacles depending on the terrain; e.g. obstacles in the sea would be waves at different heights.

## Acknowledgements

This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership (DTP) with Newcastle University. The authors would like to thank David Perkins and the Centre for Search Research (CFSR), UK registered charity number 1064927.
2303.14951
Improving Contextualized Topic Models with Negative Sampling
Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity.
Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, Partha Pratim Das
2023-03-27T07:28:46Z
http://arxiv.org/abs/2303.14951v1
# Improving Contextualized Topic Models with Negative Sampling

###### Abstract

Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity.

## 1 Introduction

The modern world is witnessing tremendous growth in digital documents. It is often necessary to organize them into semantic categories to make the content more easily accessible to users. The assignment of domain tags through manual intervention can be quite cumbersome and very expensive to maintain, mainly due to the enormity and diversity of the available data. The use of topic modelling techniques can be of huge significance in this area because of their ability to automatically learn the overarching themes or topics from a collection of documents in an unsupervised way and tag the documents with their dominant topics (Newman et al., 2010; Boyd-Graber et al., 2017; Adhya and Sanyal, 2022). Informally, a topic is a group of closely related words. While latent Dirichlet allocation (LDA) (Blei et al., 2003) is the classical topic modeling approach, recently neural topic models have become popular as they decouple the inference mechanism from the underlying modeling assumptions (e.g., the topic prior), thereby simplifying the design of new topic models. Neural topic models are based on variational autoencoders (VAEs) (Kingma and Welling, 2014) and allow us to leverage the progress in deep learning in modeling text (Zhao et al., 2021). The recently proposed contextualized topic model (CTM) (Bianchi et al., 2021), which is a neural topic model, represents each document in the collection both as a bag-of-words (BoW) vector and as a dense vector produced by a pre-trained transformer like sentence-BERT (SBERT) (Reimers and Gurevych, 2019), thus combining a classical representation with a contextualized representation that captures the semantics of the text better. CTM produces state-of-the-art performance on many benchmark datasets (Bianchi et al., 2021). A neural topic model is trained to maximize the log-likelihood of the reconstruction of the input document and minimize the KL-divergence of the learned distribution of the latent (topic) space from a known prior distribution of the latent space. If the topics in a document are perturbed, that is, say, the top topic in a document is deleted, the document should display a marked change in its word distribution. Such an objective is not explicitly modeled above. In this paper, we train CTM to infer topics from a document in such a way that, while the inferred topics should aid in reconstructing the document (as in any topic modeling algorithm), when the top topics are perturbed it should fail to reconstruct the original document.
This is done by treating the document reconstructed from the correct topic vector as an anchor that is encouraged to be similar to the original input document but dissimilar to the document reconstructed from the perturbed topics. Our proposed model, **CTM-Neg**, achieves higher average topic coherence, measured by NPMI score, than other competing topic models, and very high topic diversity on three datasets. We have made our code publicly available1.

Footnote 1: [https://github.com/AdhyaSuman/CTMNeg](https://github.com/AdhyaSuman/CTMNeg)

Thus, our primary contributions are:

1. We propose a _simple but effective negative sampling technique for neural topic models_. Negative samples are produced automatically in an unsupervised way.
2. We perform extensive experiments on three publicly available datasets. In particular, we compare the proposed model with four other topic models for eight different topic counts on each dataset. We observe that the proposed strategy _leads to an increase in topic coherence_ over the baselines in most of the cases. Averaged over different topic counts, CTM-Neg achieves the highest mean NPMI score on all three datasets, the highest mean CV on two datasets, and the second-highest mean CV on the third. CTM-Neg also attains the best or the second-best mean topic diversity scores on the three datasets, though all the topic models except one (which underperforms) produce similar high topic diversity.

## 2 Related Work

Latent Dirichlet allocation (LDA) (Blei et al., 2003) models every document in a given corpus as a mixture of topics, where each topic is a probability distribution over the vocabulary. Among the modern neural alternatives to LDA, a pioneering approach is the ProdLDA model (Srivastava and Sutton, 2017). It is a VAE-based topic model that uses an approximate Dirichlet prior (more precisely, the Laplace approximation to the Dirichlet prior in the softmax basis), instead of a standard Gaussian prior (Miao et al., 2016). The VAE takes a bag-of-words (BoW) representation of a document, maps it to a latent vector using an encoder or inference network, and then maps the vector back to a discrete distribution over words using a decoder or generator network. CTM (Bianchi et al., 2021) augments ProdLDA by allowing in its input a contextualized representation (SBERT) of the documents. The embedded topic model (ETM) (Dieng et al., 2020) is a VAE-based topic model that uses distributed representations of both words and topics.

Negative sampling in NLP-based tasks was popularized after its use in the word embedding model word2vec (Mikolov et al., 2013). The idea of negative sampling is to 'sample' examples from a noise distribution and ensure that the model being trained can distinguish between the positive and negative examples. It can be used to reduce the computational cost of training, help identify out-of-distribution examples, or make the model more robust to adversarial attacks (Xu et al., 2022). A few works have recently applied it to topic modeling. For example, Wu et al. (2020) proposed a negative sampling and quantization model (NQTM) with a modified cross-entropy loss to generate sharper topic distributions from short texts. Some researchers have applied generative adversarial networks to design topic models (Wang et al., 2019; Hu et al., 2020; Wang et al., 2020), but since the negative examples are generated from an assumed fake distribution, they bear little similarity to real documents.
In Nguyen and Luu (2021), a negative document sample is created by replacing the weights of the words having the highest tf-idf scores in the input document with the weights of the same words in the reconstructed document. Our method follows a different strategy: it generates a perturbed document-topic vector (instead of an explicit negative document) and uses triplet loss to push the BoW vector reconstructed from the correct topic vector closer to the input BoW vector and farther from the BoW vector generated from the perturbed topics. Unlike the present work, none of the other adversarial topic models use contextual embeddings as input. ## 3 Proposed Method ### Baseline Architecture Our proposed model is based on a VAE architecture. In particular, we build upon CTM Bianchi et al. (2021). We assume that the vocabulary size is \(V\) and a document is represented as a normalized bag-of-words vector \(\mathbf{x}_{\mathrm{BoW}}\) as well as a contextualized embedding vector \(\mathbf{x}_{c}\). A linear layer converts \(\mathbf{x}_{c}\) to a \(V\)-dimensional vector. The encoder of the VAE concatenates these two vectors into a single \(2V\)-dimensional vector \(\mathbf{x}\) and outputs the parameters of the posterior \(\left(\boldsymbol{\mu}_{T\times 1},\boldsymbol{\Sigma}_{T\times 1}\right)\) where \(T\) is the number of topics, \(\boldsymbol{\mu}_{T\times 1}\) denotes the mean, and \(\boldsymbol{\Sigma}_{T\times 1}\) represents the diagonal covariance matrix. Note that it is standard in the VAE literature to assume a diagonal covariance matrix instead of a full covariance matrix Srivastava and Sutton (2017). In the decoder, using the reparameterization trick the latent representation \((\mathbf{z}_{T\times 1})\) is generated: \[\mathbf{z}_{T\times 1}=\boldsymbol{\mu}_{T\times 1}+\mathbf{\Sigma}_{T\times 1 }^{1/2}\odot\boldsymbol{\epsilon}_{T\times 1} \tag{1}\] where \(\boldsymbol{\epsilon}_{T\times 1}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\odot\) denotes Hadamard product. This hidden representation \(\mathbf{z}\) is then used as a logit of a softmax function (\(\sigma(\cdot)\)) to generate the document-topic distribution \(\boldsymbol{\theta}_{T\times 1}\) (\(=\sigma(\mathbf{z}_{T\times 1})\)). The decoder has an unnormalized topic-word matrix \(\boldsymbol{\beta}_{T\times V}\), which is used to reconstruct the word distribution in the following manner: \[\hat{\mathbf{x}}_{V\times 1}=\sigma(\boldsymbol{\beta}_{T\times V}^{\top} \boldsymbol{\theta}_{T\times 1}) \tag{2}\] To formulate the loss function, note that the encoder learns the posterior distribution \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\). We assume that the prior is \(p(\mathbf{z})\). The decoder is the generative model \(p_{\boldsymbol{\theta}}(\mathbf{x}_{\mathrm{BoW}}|\mathbf{z})\). 
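A minimal PyTorch-style sketch of this decoder path, i.e., Equations (1) and (2), is given below; it assumes the inference network parameterizes the diagonal covariance by its log, and the tensor names are illustrative.

```
import torch
import torch.nn.functional as F

def decode(mu, log_var, beta):
    """mu, log_var: (batch, T) outputs of the inference network;
    beta: (T, V) unnormalized topic-word matrix."""
    eps = torch.randn_like(mu)                # epsilon ~ N(0, I)
    z = mu + torch.exp(0.5 * log_var) * eps   # Eq. (1): reparameterization
    theta = F.softmax(z, dim=-1)              # document-topic distribution
    x_hat = F.softmax(theta @ beta, dim=-1)   # Eq. (2): sigma(beta^T theta)
    return theta, x_hat
```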
The loss function to be minimized is given by

\[\mathcal{L}_{\mathrm{CTM}}=\mathcal{L}_{\mathrm{RL}}+\mathcal{L}_{\mathrm{KL}}\equiv-\mathbb{E}_{\mathbf{z}\sim q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})}\log p_{\boldsymbol{\theta}}(\mathbf{x}_{\mathrm{BoW}}|\mathbf{z})+\mathrm{D}_{\mathrm{KL}}\left(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\,||\,p(\mathbf{z})\right) \tag{3}\]

Here, the first term (\(\mathcal{L}_{\mathrm{RL}}\)) is the reconstruction loss (measured by the cross-entropy between the predicted output distribution \(\hat{\mathbf{x}}\) and the input vector \(\mathbf{x}_{\mathrm{BoW}}\)), while the second term (\(\mathcal{L}_{\mathrm{KL}}\)) is the KL-divergence of the learned latent space distribution \(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x})\) from the prior \(p(\mathbf{z})\) of the latent space.

### Proposed Negative Sampling Mechanism

To improve the topic quality, we train the above model with negative samples as follows. For every input document, after a topic vector \(\boldsymbol{\theta}\) is sampled, a perturbed vector \(\tilde{\boldsymbol{\theta}}_{\mathrm{neg}}\) is generated from it by setting the entries for the top \(S\) topics (i.e., the \(S\) positions in \(\boldsymbol{\theta}\) corresponding to the \(S\) largest values in \(\boldsymbol{\theta}\)) to zero. \(\tilde{\boldsymbol{\theta}}_{\mathrm{neg}}\) is then normalized so that the resulting vector \(\boldsymbol{\theta}_{\mathrm{neg}}\) is a probability vector. The normalization is done simply by dividing the values in \(\tilde{\boldsymbol{\theta}}_{\mathrm{neg}}\) by their sum, as all values in \(\tilde{\boldsymbol{\theta}}_{\mathrm{neg}}\) are already non-negative (since \(\boldsymbol{\theta}\) is obtained by applying softmax). Mathematically,

\[\boldsymbol{\theta}_{\mathrm{neg}}=\frac{\tilde{\boldsymbol{\theta}}_{\mathrm{neg}}}{\sum_{i=1}^{T}\tilde{\theta}_{\mathrm{neg}}[i]},\quad\text{where }\tilde{\theta}_{\mathrm{neg}}[i]=\begin{cases}0&\text{if }i\in\operatorname{argmax}(\boldsymbol{\theta},S)\\ \theta[i]&\text{otherwise}\end{cases} \tag{4}\]

The function \(\operatorname{argmax}(\boldsymbol{\theta},S)\) returns the indices of the \(S\) largest values in \(\boldsymbol{\theta}\). We treat \(S\) as a hyperparameter. Like \(\boldsymbol{\theta}\), the perturbed topic vector \(\boldsymbol{\theta}_{\mathrm{neg}}\) is passed through the decoder network. The latter generates \(\hat{\mathbf{x}}_{\mathrm{neg}}=\sigma(\boldsymbol{\beta}^{\top}\boldsymbol{\theta}_{\mathrm{neg}})\). We introduce a new term, the triplet loss \(\mathcal{L}_{\mathrm{TL}}\), in Eq. (3), assuming the anchor is \(\hat{\mathbf{x}}\), the positive sample is \(\mathbf{x}_{\mathrm{BoW}}\) (the original input document), and the negative sample is \(\hat{\mathbf{x}}_{\mathrm{neg}}\):

\[\mathcal{L}_{\mathrm{TL}}=\max(||\hat{\mathbf{x}}-\mathbf{x}_{\mathrm{BoW}}||_{2}-||\hat{\mathbf{x}}-\hat{\mathbf{x}}_{\mathrm{neg}}||_{2}+m,0) \tag{5}\]

where \(m\) is the margin. Therefore, the modified loss function to be minimized is given by

\[\mathcal{L}=\left(\mathcal{L}_{\mathrm{RL}}+\mathcal{L}_{\mathrm{KL}}\right)+\lambda\mathcal{L}_{\mathrm{TL}} \tag{6}\]

where \(\lambda\) is a hyperparameter. Fig. 1 depicts the proposed model. The model is trained in an end-to-end manner using the Adam optimizer and backpropagation.
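The following is a minimal PyTorch sketch of the mechanism just described, covering the perturbation of Eq. (4) and the triplet term of Eqs. (5) and (6); it is a simplified illustration, not the released implementation linked above.

```
import torch
import torch.nn.functional as F

def perturb_theta(theta, S):
    """Eq. (4): zero the S largest entries of each document-topic vector,
    then renormalize to a probability vector."""
    top = theta.topk(S, dim=-1).indices
    theta_neg = theta.scatter(-1, top, 0.0)
    return theta_neg / theta_neg.sum(dim=-1, keepdim=True)

def triplet_term(x_bow, theta, beta, S, lam, m=1.0):
    """Anchor x_hat, positive x_bow, negative x_hat_neg, as in Eq. (5);
    returns the lambda-weighted term added in Eq. (6)."""
    x_hat = F.softmax(theta @ beta, dim=-1)
    x_hat_neg = F.softmax(perturb_theta(theta, S) @ beta, dim=-1)
    d_pos = (x_hat - x_bow).norm(dim=-1)       # ||x_hat - x_BoW||_2
    d_neg = (x_hat - x_hat_neg).norm(dim=-1)   # ||x_hat - x_hat_neg||_2
    return lam * F.relu(d_pos - d_neg + m).mean()
```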
## 4 Experimental Setup

We perform all experiments in OCTIS (Terragni et al., 2021), which is an integrated framework for topic modeling.

Figure 1: Framework for the contextualized topic model with negative sampling (CTM-Neg).

### Datasets

We use the following three datasets:

1. **GoogleNews (GN)**: It consists of \(11,109\) news articles, titles, and snippets collected from the Google News website in November 2013 (Qiang et al., 2020).
2. **20NewsGroups (20NG)**: It comprises \(16,309\) newsgroup documents partitioned (nearly) evenly across 20 different newsgroups (Terragni et al., 2021).
3. **M10**: It is a subset of CiteSeerX data comprising 8355 scientific publications from 10 distinct research areas (Pan et al., 2016).

The last two datasets are available in OCTIS, while we added the first one.

### Evaluation Metrics

Coherence measures help to assess the relatedness between the top words of a topic. Informally, a topic is said to be coherent if it contains words that, when viewed together, help humans to recognize it as a distinct category (Hoyle et al., 2021). We use **Normalized Pointwise Mutual Information (NPMI)** (Lau et al., 2014) and **Coherence Value (CV)** (Roder et al., 2015) to measure topic coherence. NPMI is widely adopted as a proxy for human judgement of topic coherence, though some researchers also use CV (but CV has some known issues). NPMI calculates topic coherence by measuring how likely the topic words are to co-occur. If \(p(w_{i},w_{j})\) represents the probability of two words \(w_{i}\) and \(w_{j}\) co-occurring in a boolean sliding context window, and \(p(w_{i})\) is the marginal probability of word \(w_{i}\), then the NPMI score is given by (Lau et al., 2014),

\[\mathrm{NPMI}(w_{i},w_{j})=\left(\frac{\log\frac{p(w_{i},w_{j})+\epsilon}{p(w_{i})\,p(w_{j})}}{-\log(p(w_{i},w_{j})+\epsilon)}\right) \tag{7}\]

where \(\epsilon\) is a small positive constant used to avoid zero. \(\mathrm{NPMI}(w_{i},w_{j})\) lies in \([-1,+1]\), where \(-1\) indicates the words never co-occur and \(+1\) indicates they always co-occur. CV is calculated using an indirect cosine measure along with the NPMI score over a boolean sliding window (Roder et al., 2015; Krasnashchok and Jouili, 2018). OCTIS uses the CoherenceModel of gensim, where NPMI is referred to as c_npmi and CV as c_v.

We measure the diversity of topics using **Inversed Rank-Biased Overlap (IRBO)** (Bianchi et al., 2021). It gives \(0\) for identical topics and \(1\) for completely dissimilar topics. Suppose we are given a collection \(\aleph\) of \(T\) topics where each topic is a list of words such that the words at the beginning of the list have a higher probability of occurrence (i.e., are more important or more highly ranked) in the topic. Then, the IRBO score of the topics is defined as

\[\mathrm{IRBO}(\aleph)=1-\frac{\sum_{i=2}^{T}\sum_{j=1}^{i-1}\mathrm{RBO}(l_{i},l_{j})}{n} \tag{8}\]

where \(n=\binom{T}{2}\) is the number of pairs of lists, and \(\mathrm{RBO}(l_{i},l_{j})\) denotes the standard Rank-Biased Overlap between two ranked lists \(l_{i}\) and \(l_{j}\) (Webber et al., 2010). IRBO allows the comparison of lists that may not contain the same items and, in particular, may not cover all items in the domain. Two lists (topics) with overlapping words receive a smaller IRBO score when the overlap occurs at the highest ranks of the lists than when they occur at lower ranks. IRBO is implemented in OCTIS. Higher values of NPMI, CV, and IRBO are better than lower values. In our experiments, for evaluation using the above metrics in OCTIS, we use the top-10 words from every topic and the default values for all the other parameters.
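As a small illustration of Eq. (7), the sketch below estimates pairwise NPMI from boolean sliding windows (each represented as a set of words) and averages it over a topic's word pairs. The \(\epsilon\) guard on the denominator is a small deviation from Eq. (7) to avoid division by zero, and the toy windows are made up.

```
import numpy as np
from itertools import combinations

def topic_npmi(topic_words, windows, eps=1e-12):
    """Mean pairwise NPMI of a topic's top words; `windows` is a list of
    boolean sliding windows, each represented as a set of words."""
    n = len(windows)
    def p(*ws):  # fraction of windows containing all the given words
        return sum(all(w in win for w in ws) for win in windows) / n
    scores = []
    for wi, wj in combinations(topic_words, 2):
        pij = p(wi, wj)
        # eps in the denominator guards against zero marginals
        npmi = np.log((pij + eps) / (p(wi) * p(wj) + eps)) / -np.log(pij + eps)
        scores.append(npmi)
    return float(np.mean(scores))

windows = [{"gene", "expression", "cell"}, {"gene", "dna"}, {"market", "stock"}]
print(topic_npmi(["gene", "expression"], windows))  # toy example
```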
### Baselines and Configuration

We denote our proposed topic model by **CTM-Neg**. As baselines, we use the following topic models, which are already implemented in OCTIS:

1. **CTM** (Bianchi et al., 2021).
2. **ProdLDA** (Srivastava and Sutton, 2017).
CTM-Neg does not always produce the highest IRBO. For example, on the M10 corpus, the IRBO score of CTM-Neg is the highest till \(T=20\) after which LDA dominates and CTM-Neg is relegated to the second position. A closer look at Fig. 2 reveals that this gain in topic diversity for LDA comes at the expense of reduced NPMI. ### Extrinsic Evaluation We also use an extrinsic task to evaluate the topic models. We measure the predictive performance of the generated topics on a document classification task. Specifically, we use the M10 dataset from OCTIS where each document is already marked with one of 10 class labels as shown in Table 3. The corpus is divided into train/dev/test subsets in the ratio 70:15:15. Each topic model is trained on the training subset to produce \(T=10\) topics and the \(T\)-dimensional document-topic latent vector is used as a representation of the document. Next, a linear support vector machine is trained with these representations of the training subset (for each topic model), and the performance on the test subset is recorded. Fig. 3 shows that CTM-Neg achieves the highest accuracy. ### Qualitative Evaluation It is acknowledged in the NLP community that automatic metrics do not always accurately capture the quality of topics produced by neural models (Hoyle et al., 2021). So we perform manual evaluation of the topics for a few selected cases. Table 4 shows some of the topics output by random runs of the different topic models on 20NG for \(T=20\) topics. Note that the table displays manually aligned topics, that is, the first topic mentioned against any of the topic models is very similar to the first topic stated against every other topic model, and similarly for all other topics. We observe that the topics generated by CTM-Neg contain very specific words in the top positions that distinguish the topics more clearly compared to the case of other models. \begin{table} \begin{tabular}{|c|c|c c|c c|c c|} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**Coherence**} & \multicolumn{2}{c|}{**Diversity**} \\ \cline{3-8} & & \multicolumn{2}{c|}{**NPMI**} & \multicolumn{2}{c|}{**CV**} & \multicolumn{2}{c|}{**IRBO**} \\ \cline{3-8} \cline{5-8} & & Mean & Median & Mean & Median & Mean & Median \\ \hline \multirow{8}{*}{**GN**} & CTM-Neg & **0.142** & **0.188** & **0.530** & **0.552** & **0.998** & **0.998** \\ & CTM & 0.081 & 0.128 & 0.485 & 0.513 & 0.995 & 0.995 \\ & ProdLDA & 0.056 & 0.076 & 0.471 & 0.476 & 0.996 & 0.996 \\ & ETM & -0.263 & -0.271 & 0.414 & 0.416 & 0.627 & 0.660 \\ & LDA & -0.164 & -0.176 & 0.403 & 0.405 & 0.997 & **0.998** \\ \hline \multirow{8}{*}{**20NG**} & CTM-Neg & **0.121** & **0.127** & **0.648** & **0.653** & **0.991** & **0.991** \\ & CTM & 0.093 & 0.098 & 0.627 & 0.632 & 0.990 & 0.990 \\ & ProdLDA & 0.080 & 0.084 & 0.609 & 0.607 & 0.990 & **0.991** \\ & ETM & 0.049 & 0.048 & 0.528 & 0.527 & 0.819 & 0.808 \\ & LDA & 0.075 & 0.080 & 0.571 & 0.577 & 0.983 & 0.990 \\ \hline \multirow{8}{*}{**M10**} & CTM-Neg & **0.052** & **0.056** & 0.462 & **0.461** & 0.986 & 0.985 \\ & CTM & 0.048 & 0.047 & **0.466** & **0.461** & 0.980 & 0.979 \\ \cline{1-1} & ProdLDA & 0.025 & 0.023 & 0.448 & 0.449 & 0.983 & 0.981 \\ \cline{1-1} & ETM & -0.056 & -0.062 & 0.345 & 0.350 & 0.502 & 0.484 \\ \cline{1-1} & LDA & -0.192 & -0.201 & 0.386 & 0.389 & **0.989** & **0.992** \\ \hline \end{tabular} \end{table} Table 2: Comparison of topic models on three datasets. 
For each metric and each topic model, we mention the mean and the median of the scores for topic counts {10, 20, 30, 40, 50, 60, 90, 120}.

Figure 2: Variation of topic coherence (NPMI and CV) and topic diversity (IRBO) with topic count for different topic models on three datasets. The ordinate value of each data point reports the median over five independent runs.

For example, the first topic produced by CTM-Neg contains very focused terms like 'turkish', 'israeli', 'genocide', 'war', etc., and is easily identifiable as 'middle-east conflict' (it corresponds to the newsgroup talk.politics.mideast of the 20NG corpus). CTM outputs a very similar topic, but it seems to focus only on the 'Armenian genocide' yet contains more generic terms like 'neighbor' and 'town'. ProdLDA also focuses primarily on the 'Armenian genocide', but its last word 'jewish' probably refers to the Israeli conflict. While the corresponding topic from LDA contains some generic terms like 'man', 'kill', etc., most of the words in ETM like 'kill', 'gun', and 'fire' are very general. Moreover, words like 'leave' and 'start' that occur in this topic in ETM reduce the interpretability of the topic. Similarly, the fourth topic in CTM-Neg is sports-related and contains specific words like 'hockey' and 'baseball'. While the corresponding topic from ProdLDA mentions 'hockey' (but not 'baseball'), none of the other models produces these terms. The ability of CTM-Neg to extract focused words is probably a consequence of the negative sampling algorithm, which encourages a topic to capture the most salient words of its representative documents so that deleting the topic pushes the reconstructed document away from the input document.

Table 5 shows the topics that are discovered in a random run of each topic model on the M10 dataset for \(T=10\) topics. We show four topics: the first is on 'neural and evolutionary computing' (or 'artificial intelligence'), the second on 'microarray gene expression', the third on 'stock market', and the fourth on 'multi-agent decision making'. The topics generated by CTM and CTM-Neg are very similar. However, the presence of words like 'processing' in the first topic, 'work' in the third topic, and 'approach' in the fourth topic in CTM appears less connected to the other words in the respective topics. Such outliers are not visible in the topics produced by CTM-Neg. Moreover, the second topic output by CTM-Neg contains very domain-specific terms like 'dna' and 'motif', which are not produced by CTM. Similar issues occur in ProdLDA and LDA. In the case of ETM, the first topic contains words that make it a mixture of the first two topics produced by the other models. For example, it contains words like 'neural' and 'network' that occur in the first topic in the other models, and also 'gene' and 'expression' which are present in the second topic in the other models.

Figure 3: Document classification for M10 corpus with \(T=10\) topics.

\begin{table}
\begin{tabular}{|c|c|}
\hline
**Label** & **\#Documents** \\
\hline
Agriculture & 643 \\
Archaeology & 131 \\
Biology & 1059 \\
Computer Science & 1127 \\
Financial Economics & 978 \\
Industrial Engineering & 944 \\
Material Science & 873 \\
Petroleum Chemistry & 886 \\
Physics & 717 \\
Social Science & 997 \\
\hline
\end{tabular}
\end{table}
Table 3: M10 labels with corresponding document counts.
\begin{table} \begin{tabular}{|c|c|} \hline **Model** & **Topics** \\ \hline \multirow{5}{*}{CTM-Neg} & turkish, armenian, jewish, population, muslim, village, israeli, genocide, government, war chip, key, encryption, government, clipper, phone, security, privacy, escrow, secure video, monitor, vga, port, modem, apple, driver, card, resolution, board score, playoff, period, play, fan, win, hockey, game, baseball, lose \\ \hline \multirow{5}{*}{CTM} & people, armenian, soldier, village, turkish, massacre, troop, neighbor, town, genocide chip, clipper, encryption, government, encrypt, algorithm, agency, secure, phone, key draw, mouse, advance, convert, font, screen, button, host, code, terminal game, win, final, goal, period, cap, score, fan, lead, play \\ \hline \multirow{5}{*}{ProdLDA} & genocide, armenian, turkish, greek, muslim, village, population, russian, massacre, jewish encryption, secret, secure, chip, privacy, government, key, agency, security, encrypt monitor, card, apple, video, sale, price, board, audio, offer, external game, team, division, season, hockey, playoff, score, goal, player, wing \\ \hline \multirow{5}{*}{ETM} & people, kill, child, gun, armenian, fire, man, time, leave, start key, chip, encryption, clipper, bit, government, algorithm, message, law, system drive, card, disk, system, bit, run, window, scsi, driver, monitor game, play, win, team, player, year, good, score, hit, season \\ \hline \multirow{5}{*}{LDA} & people, jewish, armenian, child, man, kill, woman, death, turkish, israeli key, chip, encryption, government, security, clipper, bit, public, message, system card, work, monitor, system, driver, problem, run, machine, video, memory game, team, play, player, win, year, good, season, hit, score \\ \hline \end{tabular} \end{table} Table 4: Some related topics discovered by different topic models in the 20NG corpus when run for \(T=20\) topics. 
\begin{table} \begin{tabular}{|c|c|} \hline **Model** & **Topics** \\ \hline \multirow{5}{*}{CTM-Neg} & neural, network, learn, recurrent, learning, artificial, language, evolutionary, genetic, adaptive expression, gene, datum, sequence, cluster, protein, microarray, dna, analysis, motif stock, return, market, price, volatility, exchange, rate, interest, option, monetary decision, make, agent, making, group, multi, uncertainty, robot, intelligent, autonomous \\ \hline \multirow{5}{*}{CTM} & network, neural, learn, learning, artificial, evolutionary, language, recurrent, knowledge, processing gene, expression, datum, model, analysis, microarray, cluster, clustering, genetic, classification market, stock, price, return, risk, financial, rate, option, work, volatility decision, agent, make, making, multi, human, group, uncertainty, social, approach \\ \hline \multirow{5}{*}{ProdLDA} & network, neural, learn, recurrent, artificial, learning, evolutionary, language, knowledge, adaptive expression, gene, datum, cluster, analysis, microarray, factor, bind, classification, site market, stock, price, risk, financial, rate, evidence, return, exchange, work decision, make, agent, making, group, environment, autonomous, robot, human, mobile \\ \hline \multirow{5}{*}{ETM} & network, neural, gene, expression, datum, cluster, classification, recurrent, learn, genetic - \\ & market, gas, price, stock, financial, natural, return, work, rate, estimate model, decision, base, analysis, method, theory, application, approach, make, dynamic \\ \hline \multirow{5}{*}{LDA} & network, neural, learn, learning, recurrent, dynamic, model, artificial, sensor, bayesian gene, expression, datum, cluster, analysis, model, microarray, feature, sequence, base price, stock, oil, option, market, term, model, asset, return, pricing decision, theory, model, make, base, information, making, access, agent, bioinformatic \\ \hline \end{tabular} \end{table} Table 5: Some related topics discovered by different topic models in the M10 corpus when run for \(T=10\) topics. We observed that some of the topics produced by ETM contain many common words. In particular, we found that five topics from ETM contain the words 'model', 'decision', 'method', 'analysis', and 'theory' in some order in the top slots, thus becoming repetitive; consequently, ETM fails to discover meaningful and diverse topics like the other models. This is indicative of the component collapsing problem, where all output topics are almost identical (Srivastava and Sutton, 2017). Therefore, we have kept the second line for ETM topics in Table 5 blank. We have observed earlier that on the M10 corpus, for large topic counts, LDA beats CTM-Neg in IRBO but not in NPMI. We revisit this issue now and manually analyze their topics for \(T=40\). We found that, indeed, the different topics output by LDA hardly overlap in words (leading to larger topic diversity), but the words do not always appear logically connected and interpretable (thus sacrificing coherence). On the other hand, the topics generated by CTM-Neg look more coherent, although they are not always disjoint. For example, see Table 6, which shows the topics containing the word 'neural' (among the top-10 words in the topic) discovered by CTM-Neg and LDA. CTM-Neg produces three topics that roughly relate to 'natural language processing', 'pattern recognition', and 'neural and evolutionary computing', respectively.
In contrast, only one topic from LDA contains 'neural'; it is primarily about 'neural networks' but contains some very weakly related words. ## 6 Conclusion We have proposed a negative sampling strategy for a neural contextualized topic model. We evaluated its performance on three publicly available datasets. In most of our experiments, the augmented model achieves higher topic coherence, as measured by NPMI and CV, and comparable topic diversity, as captured by IRBO, relative to competitive topic models in the literature. A manual evaluation of a few selected topics shows that the topics generated by CTM-Neg are indeed coherent and diverse. In the future, we would like to compare it with other contrastive learning-based topic models and integrate it with other neural topic models. ## Acknowledgments This work is partially supported by the SERB-DST Project CRG/2021/000803 sponsored by the Department of Science and Technology, Government of India at Indian Association for the Cultivation of Science, Kolkata.
2310.14869
Periodicity of p-adic Expansion of Rational Number
In this paper we give an algorithm to calculate the coefficients of the p-adic expansion of a rational number, and we give a method to decide whether this expansion is periodic or ultimately periodic.
R. Belhadef, H-A. Esbelin
2023-10-23T12:41:45Z
http://arxiv.org/abs/2310.14869v1
# Periodicity of \(p\)-adic Expansion of Rational Number ###### Abstract In this paper we give an algorithm to calculate the coefficients of the \(p\)-adic expansion of a rational number, and we give a method to decide whether this expansion is periodic or ultimately periodic. ## 1 Introduction It is known that in \(\mathbb{R}\), an element is rational if and only if its decimal expansion is ultimately periodic. An important analogous theorem for the \(p\)-adic expansion of a rational number is given by the following statement (see [1]): **Theorem 1.1**.: _The number \(x\in\mathbb{Q}_{p}\) is rational if and only if the sequence of digits of its \(p\)-adic expansion is periodic or ultimately periodic._ For example, in \(\mathbb{Q}_{3}\), the \(p\)-adic expansion of \(-\frac{1}{2}\) is \(1+3+3^{2}+3^{3}+...=1111111111...\); it is clear that this expansion is purely periodic. As a second example, in \(\mathbb{Q}_{3}\), the \(p\)-adic expansion of \(\frac{11}{5}\) is given by \(1+1.3+1.3^{2}+2.3^{3}+1.3^{4}+0.3^{5}+...=111210121012101210...\). This expansion is ultimately periodic, with periodic block \(1210\). As a third example, in \(\mathbb{Q}_{5}\), the \(p\)-adic expansion of \(\frac{213}{7}\) is given by \(4+1.5+3.5^{2}+1.5^{3}+4.5^{4}+2.5^{5}+3.5^{6}+0.5^{7}+2.5^{8}+...=413142302142302...\). This expansion is ultimately periodic, with periodic block \(142302\).
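To make the computation concrete, below is a minimal Python sketch of one standard way to generate the \(p\)-adic digits of a rational number and detect ultimate periodicity; it is an illustration under the stated assumptions (denominator coprime to \(p\), i.e. non-negative \(p\)-adic valuation), not the paper's own algorithm, which is not reproduced in this excerpt.

```python
from fractions import Fraction

def p_adic_digits(n, d, p):
    """Digits a0, a1, ... of the p-adic expansion of n/d (p must not divide d),
    returned as (preperiod, period). Each step takes the digit
    a = numerator * denominator^{-1} mod p, then replaces x by (x - a)/p.
    Only finitely many fractions with denominator dividing d and bounded
    numerator can occur, so some state x must repeat; that repetition marks
    the start of the periodic block."""
    x = Fraction(n, d)
    assert x.denominator % p != 0, "p-adic valuation must be non-negative"
    seen, digits = {}, []
    while x not in seen:
        seen[x] = len(digits)
        # pow(b, -1, p) is the modular inverse (Python 3.8+)
        a = (x.numerator * pow(x.denominator, -1, p)) % p
        digits.append(a)
        x = (x - a) / p
    start = seen[x]
    return digits[:start], digits[start:]

print(p_adic_digits(11, 5, 3))   # ([1, 1], [1, 2, 1, 0])   block 1210
print(p_adic_digits(213, 7, 5))  # ([4, 1, 3], [1, 4, 2, 3, 0, 2])  block 142302
print(p_adic_digits(-1, 2, 3))   # ([], [1])  purely periodic
```

The three printed cases reproduce the expansions worked out above: the purely periodic expansion of \(-\frac{1}{2}\) in \(\mathbb{Q}_3\) and the ultimately periodic expansions of \(\frac{11}{5}\) and \(\frac{213}{7}\).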
2303.05684
Oscillations of Highly Magnetized Non-rotating Neutron Stars
Highly magnetized neutron stars are promising candidates to explain some of the most peculiar astronomical phenomena, for instance, fast radio bursts, gamma-ray bursts, and superluminous supernovae. Pulsations of these highly magnetized neutron stars are also speculated to produce detectable gravitational waves. In addition, pulsations are important probes of the structure and equation of state of the neutron stars. The major challenge in studying the pulsations of highly magnetized neutron stars is the demanding numerical cost of consistently solving the nonlinear Einstein and Maxwell equations under minimum assumptions. With the recent breakthroughs in numerical solvers, we investigate pulsation modes of non-rotating neutron stars which harbour strong purely toroidal magnetic fields of $10^{15-17}$ G through two-dimensional axisymmetric general-relativistic magnetohydrodynamics simulations. We show that stellar oscillations are insensitive to magnetization effects until the magnetic to binding energy ratio goes beyond 10%, where the pulsation mode frequencies are strongly suppressed. We further show that this is the direct consequence of the decrease in stellar compactness when the extreme magnetic fields introduce strong deformations of the neutron stars.
Man Yin Leung, Anson Ka Long Yip, Patrick Chi-Kit Cheong, Tjonnie Guang Feng Li
2023-03-10T03:27:21Z
http://arxiv.org/abs/2303.05684v1
# Oscillations of Highly Magnetized Non-rotating Neutron Stars ###### Abstract Highly magnetized neutron stars are promising candidates to explain some of the most peculiar astronomical phenomena, for instance, fast radio bursts, gamma-ray bursts, and superluminous supernovae [1, 2, 3, 4, 5, 6, 7, 8]. Pulsations of these highly magnetized neutron stars are also speculated to produce detectable gravitational waves. In addition, pulsations are important probes of the structure and equation of state of the neutron stars. The major challenge in studying the pulsations of highly magnetized neutron stars is the demanding numerical cost of consistently solving the nonlinear Einstein and Maxwell equations under minimum assumptions. With the recent breakthroughs in numerical solvers [9, 10], we investigate pulsation modes of non-rotating neutron stars which harbour strong purely toroidal magnetic fields of \(10^{15-17}\) G through two-dimensional axisymmetric general-relativistic magnetohydrodynamics simulations. We show that stellar oscillations are insensitive to magnetization effects until the magnetic to binding energy ratio goes beyond 10%, where the pulsation mode frequencies are strongly suppressed. We further show that this is the direct consequence of the decrease in stellar compactness when the extreme magnetic fields introduce strong deformations of the neutron stars. ## Introduction Neutron stars (NSs) are compact objects formed by core-collapse supernovae. Due to field amplification in the violent formation processes, most NSs are endowed with strong magnetic fields of \(10^{11-13}\) G [11]. In some extreme cases, magnetars can harbour even stronger magnetic fields of \(10^{14-16}\) G, about 1000 times stronger than those of usual pulsars (for comparison, the magnetic field of a sunspot is \(10^{3}\) G [12]). Younger magnetars may carry even higher magnetic fields since they have been subjected to dissipative processes for shorter times [13]. These extreme magnetic fields affect the structure and evolution of NSs. For instance, strong magnetic fields can deform NSs [13, 14]. A direct consequence of structural deformations of NSs could be significant gravitational wave emissions [15, 16, 17]. The geometry of the magnetic field is a crucial factor governing the physics of NSs. However, the field configuration inside an NS is unknown. Studies of equilibrium models with simple field configurations suggest that a purely toroidal field makes NSs prolate [18, 19, 20], while a purely poloidal field forces the stars to become oblate [21, 22, 23]. Nevertheless, these simple geometries are expected to be unstable [24, 25, 26, 27, 28]. Numerical simulations suggest that the magnetic fields of the NSs are rearranged rapidly due to these instabilities, leading to a mixed configuration of toroidal and poloidal fields, which is roughly axisymmetric [29, 30, 31, 32]. This mixed geometry is usually called _twisted torus_. Pulsations of NSs could be excited by various astrophysical events, such as core-collapse supernovae and giant flares [33]. These pulsations are potential sources of gravitational waves, the spectra of which may serve as a sensitive probe of the structure and the equation of state (EoS) of NSs. Oscillation modes of non-magnetized NSs have been well studied using either perturbative calculations or dynamical simulations with or without spacetime evolutions, e.g. Refs. [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44].
Magnetic fields are also considered in studies based on either Newtonian approaches, e.g. Refs. [45, 46, 47, 48, 49], or general-relativistic approaches with the Cowling approximation (evolving matter equations only while keeping the spacetime fixed), e.g. Refs. [50, 51, 52, 53, 54, 55, 56, 57]. However, it has been shown that simulations using the Cowling approximation can overestimate the oscillation frequency by up to a factor of 2 [58, 59]. Therefore, it is important that, when computationally feasible, simulations with dynamical spacetime are conducted. The major difficulties in studying magnetized NSs come from the non-linear nature of the Einstein equations, and with the Maxwell equations fully coupled, analytical calculations are generally impossible. Hence, numerical computations are inevitable to solve all the involved physics with a minimum number of assumptions. Only recently, due to breakthroughs in general-relativistic magnetohydrodynamics (GRMHD) simulations, have dynamical studies of magnetized NSs become possible, e.g. Refs. [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]. Nonetheless, there is still no accurate eigenfrequency determination for oscillation modes in highly magnetized NSs. A novel approach to computing strongly magnetized equilibrium models has recently been presented and demonstrated in the open-source code XNS [66, 67, 68, 69, 70, 71, 72, 73, 74, 75]. Moreover, the GRMHD code Gmunu [76, 77, 78] allows us to robustly evolve NSs in dynamical spacetime even with extreme magnetic fields of \(10^{15-17}\) G. With these powerful tools in hand, we are now in a much better position to systematically investigate the oscillation modes of magnetized NSs. In this work, we numerically study the oscillations of highly magnetized non-rotating axisymmetric (two-dimensional) NSs. Specifically, we first construct 12 equilibrium models with different magnetic to binding energy ratios \(\mathcal{H}/\mathcal{W}\) using XNS, including one non-magnetized reference model named 'REF' and 11 magnetized models in ascending order of \(\mathcal{H}/\mathcal{W}\) named T1K1, T1K2,..., T1K11 (Methods). Next, we utilize Gmunu to perturb and evolve the equilibrium models in dynamical spacetime, where we try three different initial fluid perturbations for excitation of stellar oscillations, namely \(\ell=0\), \(\ell=2\), and \(\ell=4\) perturbations (Methods). After that, we perform a Fourier analysis of the simulation results to examine how the eigenfrequencies of oscillation modes vary with \(\mathcal{H}/\mathcal{W}\) of the NS (Methods), and we discuss possible reasons behind our results. ## Results ### Magnetization effects on oscillations of NSs In total, six dominant oscillation modes are observed in our numerical study, namely the fundamental quasi-radial \((\ell=0)\) mode \(F\) and its first overtone \(H_{1}\), the fundamental quadrupole \((\ell=2)\) mode \({}^{2}f\) and its first overtone \({}^{2}p_{1}\), as well as the fundamental hexadecapole \((\ell=4)\) mode \({}^{4}f\) and its first overtone \({}^{4}p_{1}\) (we follow the notation in Ref. [59]). Each mode is predominantly excited under the initial perturbation with the corresponding \(\ell\) index, and each eigenfunction qualitatively agrees with the spherical harmonic in the corresponding perturbation function, as shown in Fig. 1.
The measured eigenfrequencies of the six modes in the 12 different NS models are summarized in Table 1, where the undetermined eigenfrequencies denoted by 'N/A' in different columns stem from the different reasons explained below. For the column of the \(F\) mode, the missing eigenfrequencies are due to unsatisfactory data quality in Gmunu simulations of the T1K8 and T1K11 models under \(\ell=0\) perturbation. On the other hand, for the columns of the \({}^{4}f\) and \({}^{4}p_{1}\) modes, some eigenfrequencies are missing because the hexadecapole \((\ell=4)\) modes are masked by the quadrupole \((\ell=2)\) modes and are no longer the dominant modes in Gmunu simulations of the most magnetized models under \(\ell=4\) perturbation. To better illustrate the results in Table 1, we plot in Fig. 2 the eigenfrequencies \(f_{\text{eig}}\) of the six modes as functions of the magnetic to binding energy ratio \(\mathcal{H}/\mathcal{W}\) of the NS model. We have observed an \(\mathcal{H}/\mathcal{W}\) threshold for stellar magnetization to start affecting the oscillations of NSs. For NSs with \(\mathcal{H}/\mathcal{W}\lesssim 10^{-2}\), stellar oscillations are insensitive to magnetization effects. This can be seen from Table 1, where \(f_{\text{eig}}\) of every oscillation mode is nearly the same for the first six models (REF - T1K5), even though these models span a few orders of magnitude in \(\mathcal{H}/\mathcal{W}\) and can achieve a maximum field strength of \(10^{15-17}\) G; it can also be seen from Fig. 2, where the data points at \(\mathcal{H}/\mathcal{W}\sim 0\) show a nearly horizontal trend. On the other hand, for NSs with \(\mathcal{H}/\mathcal{W}\gtrsim 10^{-1}\), stellar oscillations are significantly suppressed by stronger magnetization. As the data points at \(\mathcal{H}/\mathcal{W}>10^{-1}\) in Fig. 2 show, \(f_{\text{eig}}\) decreases with \(\mathcal{H}/\mathcal{W}\) in general, and all the oscillation modes are pushed towards the low-frequency region, leading to the near-degeneracy of the \(H_{1}\) and \({}^{2}p_{1}\) modes. Moreover, as explained above regarding the undetermined eigenfrequencies, the \(\ell=4\) perturbation excites the quadrupole \((\ell=2)\) modes preferentially over the expected hexadecapole \((\ell=4)\) modes in the most magnetized models, hinting at suppression or even disappearance of higher-order oscillation modes in a more magnetized NS for \(\mathcal{H}/\mathcal{W}\gtrsim 10^{-1}\). To summarize, magnetization effects start to hinder stellar oscillations if \(\mathcal{H}/\mathcal{W}\) of the NS passes a threshold somewhere between \(10^{-2}\) and \(10^{-1}\). ### Compactness as an underlying factor The magnetization effects on NS oscillations discussed above may be understood by studying the compactness \(M/R_{\text{circ}}\) of the NS, where \(M\) is the gravitational mass and \(R_{\text{circ}}\) is the circumferential radius. As shown in Ref. [79], the eigenfrequencies of the fundamental quasi-radial and quadrupole modes are related to the stellar compactness for non-magnetized NSs, and we suspect this correlation also holds for highly magnetized NSs. Thus, based on our NS models, we plot in Fig. 3 the compactness \(M/R_{\text{circ}}\) against the magnetic to binding energy ratio \(\mathcal{H}/\mathcal{W}\). We find that \(M/R_{\text{circ}}\) remains nearly unchanged for \(\mathcal{H}/\mathcal{W}\lesssim 10^{-2}\) but decreases dramatically for \(\mathcal{H}/\mathcal{W}>10^{-1}\), which agrees with the trends of \(f_{\text{eig}}(\mathcal{H}/\mathcal{W})\) shown in Fig. 2 and indeed reveals a correlation between the eigenfrequencies of the oscillation modes and the stellar compactness. We also plot in Fig. 4 \(f_{\text{eig}}\) against \(M/R_{\text{circ}}\). For all the modes, \(f_{\text{eig}}\) decreases together with \(M/R_{\text{circ}}\) in an almost linear way. Therefore, we find a quasilinear relation between \(f_{\text{eig}}\) and \(M/R_{\text{circ}}\) for magnetized NSs. The overall physical interpretation of our results is that a strong toroidal field can cause deformation of the NS [13] and alter the stellar compactness, so the propagation of seismic activity inside the NS is affected. In consequence, the eigenfrequencies of oscillation modes are correspondingly modified. ## Discussion In this work, we systematically investigate how a strong purely toroidal magnetic field with a field strength of \(10^{15-17}\) G affects the oscillations of non-rotating NSs via two-dimensional axisymmetric simulations. We carefully extract the eigenfrequencies of the excited oscillation modes and construct the corresponding eigenfunctions from the simulated data. We have found that stellar oscillations are insensitive to magnetization effects for NSs with magnetic to binding energy ratio \(\mathcal{H}/\mathcal{W}\lesssim 10^{-2}\), even though the maximum magnetic field strength \(B_{\rm max}\) can reach \(\mathcal{O}(10^{17})\) G in the star. However, stellar oscillations are suppressed significantly by stronger magnetization if \(\mathcal{H}/\mathcal{W}\gtrsim 10^{-1}\). This behaviour can be understood by the decrease of stellar compactness due to strong magnetic fields. We show that the compactness has the same dependence on \(\mathcal{H}/\mathcal{W}\) as the eigenfrequencies and demonstrate that the correlation between eigenfrequencies and compactness exists not only in non-magnetized NSs [79] but also in highly magnetized NSs. We compare our results with previous Newtonian studies, e.g. Refs. [45, 48]. These studies considered either perturbative or self-consistent MHD to construct the equilibrium models in the Newtonian regime. Both approaches found that the magnetic distortion and frequency shift in oscillation modes due to toroidal fields are minor corrections approximately proportional to \(B^{2}\) (or roughly \(\mathcal{H}/\mathcal{W}\) in this work). However, in our GRMHD simulations, the equilibrium models are constructed by solving self-consistent general-relativistic magnetohydrostatic equations in the code XNS [9, 13, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75]. When \(\mathcal{H}/\mathcal{W}\gtrsim 10^{-1}\), the magnetic deformations are far from small corrections, and thus the stellar compactness is significantly reduced. Therefore, the effect of decreasing compactness dominates and results in the suppression of oscillation modes. In addition, we compare our results with those under the Cowling approximation, and we corroborate what has been shown in the literature [58, 59], namely that the Cowling approximation can lead to errors of a factor of 2 (see Supplementary information). The strongest magnetic field strength of \(10^{17}\) G in this work is not expected to be observed in the exterior of ordinary pulsars and magnetars. Nevertheless, since the toroidal fields are enclosed inside the NSs, this ultra-high field could exist in the interior regions. Moreover, such a field strength could also be generated during the formation of a proto-NS [13] and in binary neutron star mergers [80].
The excited oscillation modes in these scenarios are potential sources for gravitational waves, and they could be detected with next-generation detectors, such as the Kamioka Gravitational Wave Detector (KAGRA) [81], the Einstein Telescope (ET) [82], and the Neutron Star Extreme Matter Observatory (NEMO) [83]. This work presents the first step to understanding how magnetic fields with different geometries affect the oscillations of NSs. Since stellar models with purely toroidal fields are generally unstable [25], the instability is only suppressed due to the restriction to 2D axisymmetry in this work. Therefore, a natural extension is to consider strong purely poloidal fields and the more realistic twisted-torus configuration. Since these field configurations extend to the regions outside NSs, an accurate and robust resistive GRMHD solver could be used to model these regions. This solver has already been implemented into Gmunu [78] for future studies. In addition to different configurations of magnetic fields, rotation should also be taken into account to work towards a more realistic problem, as the observed NSs are suggested to be rotating. Furthermore, introducing realistic EoSs is essential since one of the most important purposes of oscillation studies is to probe the structure and the EoSs of NSs. ## Methods ### Equilibrium models Equilibrium models of NSs are constructed by the code XNS [9, 13, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75]. XNS is a branch of the X-ECHO code [66] developed to compute equilibrium models of highly magnetized axisymmetric NSs with rotation. Different magnetic field configurations [13], uniformly and differentially rotating profiles [66], and polytropic and non-polytropic tabulated equations of state [74] are supported. XNS enforces the \(3+1\) formalism, the conformal flatness condition, and the assumption of axisymmetric and stationary space-time, so that the line element can be written as \[ds^{2}=-\alpha^{2}dt^{2}+\psi^{4}\left[dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2} \theta\left(d\phi+\beta^{\phi}dt\right)^{2}\right], \tag{1}\] where \(\alpha(r,\theta)\) is the lapse function, \(\psi(r,\theta)\) is the conformal factor, and \(\beta^{\phi}(r,\theta)\) is the shift vector (\(\beta^{\phi}=0\) for non-rotating configurations). We assume a polytropic EoS \(p=K\rho^{\gamma}\) for the stellar fluid, where \(p\) is the pressure, \(K\) is the polytropic constant, \(\rho\) is the density, and \(\gamma\) is the adiabatic index; as well as a polytropic expression \(B_{\phi}=\alpha^{-1}K_{\rm m}(\rho h\varpi^{2})^{m}\) for the toroidal field, where \(K_{\rm m}\) is the toroidal magnetization constant, \(h\) is the specific enthalpy, \(\varpi^{2}=\alpha^{2}\psi^{4}r^{2}\sin^{2}\theta\), and \(m\geq 1\) is the toroidal magnetization index. Although the field configuration of an isolated NS is expected to be a mixture of toroidal and poloidal fields, it is important first to assess how a simpler field geometry would affect the oscillations of NSs before we move on to the more complicated _twisted torus_ case. In total, 12 equilibrium models are computed with XNS, where one of them is a non-magnetized reference model named 'REF', and the remaining 11 models are magnetized. All 12 models share the same rest mass \(M_{0}=1.68\) M\({}_{\odot}\), and the same \(K=1.6\times 10^{5}\) cm\({}^{5}\) g\({}^{-1}\) s\({}^{-2}\) and \(\gamma=2\) in the fluid EoS.
The 11 magnetized models have the same \(m=1\) but different values of \(K_{\rm m}\) in the \(B_{\phi}\) expression, and they are arranged in ascending order of magnetic to binding energy ratio \({\cal H}/{\cal W}\), where the one with the lowest ratio is named 'T1K1', the one with the second-lowest ratio is named 'T1K2', and so on. ('T1' specifies the toroidal magnetization index being 1, and 'K' stands for \(K_{\rm m}\).) The detailed properties of all 12 models are summarized in Table 2. ### Initial perturbations to excite oscillations Following a similar study on rotating non-magnetized NSs in Ref. [59], we try the following three types of initial fluid perturbations for exciting oscillations in the equilibrium models. First, we have the \(\ell=0\) perturbation on the \(r\)-component of the three-velocity field, \[\delta v^{r}=a\sin\left[\pi\frac{r}{r_{\rm s}(\theta)}\right], \tag{2}\] where \(r_{\rm s}(\theta)\) locates the surface of the NS, and the perturbation amplitude \(a\) (in units of \(c\)) is chosen to be 0.001. Second, we have the \(\ell=2\) perturbation on the \(\theta\)-component of the three-velocity field, \[\delta v^{\theta}=a\sin\left[\pi\frac{r}{r_{\rm s}(\theta)}\right]\sin\theta \cos\theta, \tag{3}\] where \(a\) is chosen to be 0.01. Lastly, we have the \(\ell=4\) perturbation on the \(\theta\)-component of the three-velocity field, \[\delta v^{\theta}=a\sin\left[\pi\frac{r}{r_{\rm s}(\theta)}\right]\sin\theta \cos\theta(3-7\cos^{2}\theta), \tag{4}\] where \(a\) is again set to be 0.01. All three perturbation functions comprise a sine function of \(r\) and the \(\theta\)-part of a spherical harmonic with the corresponding \(\ell\) index. The sine function of \(r\) has its nodes at the centre and on the surface of the NS to avoid initial perturbations on sensitive boundaries of the problem and to minimize any potential numerical errors. On the other hand, spherical harmonics are a natural choice for exciting oscillations on a sphere-like object. Moreover, for the higher-order \(\ell=2\) and \(\ell=4\) perturbations, the perturbation amplitude \(a\) has to be larger to induce any observable oscillations. ### Simulations Simulations are performed with our code Gmunu [10, 76, 77, 78]. For each of the 12 equilibrium models, we execute Gmunu three times, once for each initial perturbation function. Hence, \(12\times 3=36\) simulations are carried out in total. In all 36 simulations, the models are evolved over a time span of 10 ms with the polytropic EoS \(p=K\rho^{\gamma}\), under the same setting as in the computation of equilibrium models, namely, \(\gamma=2\) and \(K=110\). The lowest allowed rest mass density ('atmosphere') is set to be \(\rho_{\rm atmo}=\rho_{\rm max}\left(t=0\right)\times 10^{-10}\), and the ratio of \(\rho_{\rm atmo}\) to the threshold density \(\rho_{\rm thr}\) is \(\rho_{\rm atmo}/\rho_{\rm thr}=0.99\). For completeness, we also perform simulations under the Cowling approximation (see Supplementary information), with other settings unchanged. The two-dimensional computational domain covers \(0\leq r\leq 60\), \(0\leq\theta\leq\pi\) with the resolution \(N_{r}\times N_{\theta}=64\times 16\), where each block has \(8^{2}\) cells, thus allowing 4 AMR levels (an effective resolution of \(512\times 128\)). The grid refinement used in this study is identical to the GR simulations in Ref. [77]. In particular, we define a relativistic gravitational potential \(\Phi:=1-\alpha\).
As \(\Phi\) is almost proportional to \(M/R\), we can use \(\Phi^{-1}\) as a measure of the characteristic length-scale [77]. For any \(\Phi\) larger than the maximum potential \(\Phi_{\rm max}\) (which is set as 0.2 in this work), the block is set to be the finest. For the second-finest level, the same check is performed with a new maximum potential that is half of the previous one, and so on. To avoid an overly stringent Courant-Friedrichs-Lewy (CFL) condition at the centre of the star, the grid is enforced to coarsen so as to keep \(r\Delta\theta\sim\Delta r\) when \(r\) is smaller than 0.5. (Unless otherwise specified, all quantities in this subsection are in dimensionless units \(c=G=M_{\odot}=1\).) ### Extraction of eigenfrequencies and eigenfunctions We analyze the data from a Gmunu simulation in the following three steps. For the first step, we extract the time evolutions of the initially perturbed component of the three-velocity field at 361 \((r,\theta)\)-points in the NS model and compute the Fast Fourier Transform (FFT) of the temporal data at each \((r,\theta)\)-point. Hence, 361 FFT spectra (plots of the magnitude of the complex FFT in the frequency domain) are obtained altogether. According to Ref. [59], our initial perturbation amplitudes are small enough such that the overall evolution of the input model in a Gmunu simulation can be described as a superposition of a few global oscillation modes. We verify this by observing that the FFT spectra obtained at different spatial points show discrete peaks and agree well on the peak positions. For the second step, we extract the eigenfrequencies of the excited oscillation modes. Usually, the FFT spectrum at a spatial point where the initial perturbation function has a large magnitude can reveal FFT peaks prominent enough for further analysis (e.g. at \((r,\theta)\simeq(r_{\mathrm{e}}/2,\pi/2)\), \((r_{\mathrm{e}}/2,\pi/4)\), and \((r_{\mathrm{e}}/2,2\pi/15)\) for \(\ell=0\), \(\ell=2\), and \(\ell=4\) perturbations, respectively). Nevertheless, occasionally we may have to integrate the FFT spectra along a radial line for sharper FFT peaks (along \(\theta=\pi/2\), \(\pi/4\), and \(2\pi/15\) for \(\ell=0\), \(\ell=2\), and \(\ell=4\) perturbations, respectively). Since our study here is in the ideal GRMHD regime with no physical damping of the oscillations, we apply parabolic interpolation instead of Lorentzian fitting to the peaks in the single-point or integrated FFT spectrum for simplicity (see Fig. 5 as an example). We then take the interpolated peak positions as the measured eigenfrequencies \(f_{\mathrm{eig}}\) and the full widths at half maximum (FWHMs) of the parabolic interpolations as the uncertainties in eigenfrequency extraction. For the third step, we extract the eigenfunctions of the excited oscillation modes. According to Refs. [59, 84], the eigenfunction of a mode is correlated with the spatial map of the FFT amplitude at the eigenfrequency of the mode, where the FFT amplitude is the magnitude of the FFT multiplied by the sign of its real part. Using our FFT data computed at the 361 points, for simplicity we spatially map the FFT amplitude at the discrete frequency of our FFT analysis that is closest to the measured eigenfrequency.
The eigenfunction visualized by such a spatial map can serve as a unique trademark to help us identify the same oscillation mode excited in different Gmunu simulations so that we can investigate the dependence of eigenfrequency \(f_{\mathrm{eig}}\) of a particular mode on the magnetic to binding energy ratio \(\mathcal{H}/\mathcal{W}\) of the input model. In the end, we can obtain the curves of \(f_{\mathrm{eig}}(\mathcal{H}/\mathcal{W})\) for different oscillation modes to examine the magnetization effects on oscillations of NSs. Lastly, we determine the correspondence between the modes found in our study and the modes in the literature by comparing the eigenfrequencies at zero magnetic energy, \(f_{\mathrm{eig}}(\mathcal{H}/\mathcal{W}=0)\), of the modes we found here with the mode frequencies previously reported for a non-magnetized non-rotating NS model with a similar gravitational mass [59].
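To illustrate the peak-refinement step of the eigenfrequency extraction described above, here is a minimal sketch of parabolic interpolation around an FFT maximum; the signal, sampling interval, and frequencies below are illustrative stand-ins, not actual Gmunu output.

```python
import numpy as np

def peak_frequency(signal, dt):
    """Estimate the dominant frequency of a real time series sampled with
    spacing dt, refining the FFT maximum by parabolic interpolation of the
    three magnitude values around the peak bin."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    k = np.argmax(mag[1:]) + 1                       # skip the DC bin
    y0, y1, y2 = mag[k - 1], mag[k], mag[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex offset in bins
    return (k + delta) * (freqs[1] - freqs[0])

# Synthetic check: a 1.4 kHz oscillation sampled over 10 ms
dt = 1e-6
t = np.arange(0.0, 0.01, dt)
v = np.sin(2 * np.pi * 1400.0 * t) + 0.1 * np.sin(2 * np.pi * 3000.0 * t)
print(peak_frequency(v, dt))  # close to 1400 Hz
```

The curvature of the fitted parabola also yields a width for the peak, which plays the role of the FWHM-based uncertainty estimate mentioned in the text.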
2308.01948
A Multidimensional Analysis of Social Biases in Vision Transformers
The embedding spaces of image models have been shown to encode a range of social biases such as racism and sexism. Here, we investigate specific factors that contribute to the emergence of these biases in Vision Transformers (ViT). To this end, we measure the impact of training data, model architecture, and training objectives on social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases, but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that models trained using discriminative objectives are less biased than those trained using generative objectives. In addition, we observe inconsistencies in the learned social biases. To our surprise, ViTs can exhibit opposite biases when trained on the same data set using different self-supervised objectives. Our findings give insights into the factors that contribute to the emergence of social biases and suggest that we could achieve substantial fairness improvements based on model design choices.
Jannik Brinkmann, Paul Swoboda, Christian Bartelt
2023-08-03T09:03:40Z
http://arxiv.org/abs/2308.01948v1
# A Multidimensional Analysis of Social Biases in Vision Transformers ###### Abstract The embedding spaces of image models have been shown to encode a range of social biases such as racism and sexism. Here, we investigate specific factors that contribute to the emergence of these biases in Vision Transformers (ViT). To this end, we measure the impact of training data, model architecture, and training objectives on social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases, but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that models trained using discriminative objectives are less biased than those trained using generative objectives. In addition, we observe inconsistencies in the learned social biases. To our surprise, ViTs can exhibit opposite biases when trained on the same data set using different self-supervised objectives. Our findings give insights into the factors that contribute to the emergence of social biases and suggest that we could achieve substantial fairness improvements based on model design choices. ## 1 Introduction In recent studies, state-of-the-art self-supervised image models such as SimCLR [9] and iGPT [8] have been shown to encode a range of social biases, such as racism and sexism [34]. This can lead to representational harm [4] and ethical concerns in different socio-technical application scenarios [41]. The distributional nature of these models is suspected to be an important factor contributing to the emergence of social biases, as it has been demonstrated that these models tend to encode common co-occurrences of objects associated with social biases (e.g. women are more often set in "home or hotel" scenes, whereas men are more often depicted in "industrial and construction" scenes [36]). Moreover, it has been demonstrated that self-supervised training objectives can impact the distribution of social biases in models that share the same ResNet50 [14] architecture [33]. However, existing work has done little investigation into other factors that contribute to the emergence of social biases in image models. **Contributions** Here, we seek to better understand the factors that contribute to the emergence of social biases in image models. Therefore, we investigate social biases in embedding spaces, which, despite not being observable for end-users, could propagate into downstream tasks during fine-tuning. This can help to make informed choices about the model to select for a downstream task, and to develop effective strategies to mitigate social biases. In detail, the contributions of our work are: * Training ViTs with counterfactual data augmentation using diffusion-based image editing can reduce social biases, but is not sufficient to eliminate them. * ViTs trained using discriminative objectives are less biased than those trained using generative objectives. * Scaling ViTs can help to mitigate social biases. * ViTs can exhibit opposite biases despite being trained on the same data set, which indicates that biases are not just a result of simple object co-occurrences. Figure 1: Gender bias in image embeddings from ViT-MAE: t-SNE (n=2) reveals that "female" is more closely associated with "family" than with "career", whereas "male" has a comparable association with both attributes.
## 2 Related Work **Self-Supervised Learning of ViTs** Self-supervised approaches have emerged as the standard for training large machine learning models since they do not require labeled data and learn representations that generalize well across different downstream tasks [6]. Transformer models [35], which were designed as sequence-to-sequence models for natural language translation, have been adapted to computer vision [12]. Self-supervised learning techniques applied to ViTs can be classified into discriminative (or joint-embedding) methods and generative (or reconstruction-based) methods [32]. Discriminative methods encourage similarity among representations from diverse augmentations of a given input image, while generative methods utilize a reconstruction loss that does not rely on augmentations; instead, a decoder reconstructs the original image given a masked image. Both methods have demonstrated strong empirical results on downstream tasks [7, 8, 10, 13]. **Social Biases in Image Embeddings** The embeddings of self-supervised image models have been shown to encode a range of human-like social biases [34]. However, that analysis was confined to SimCLR [9] and iGPT [8] as embedding models. Therefore, Sirotkin [33] built on this work to examine the distribution of social biases in image models that were trained using a range of self-supervised objectives, such as geometric, cluster-based, and contrastive methods. The authors discovered that models trained with contrastive methods exhibit the largest number of social biases, and that the distribution of biases differs depending on the studied embedding layer. However, their analysis focused only on training objectives and the number of social biases without considering the direction of the bias, constraining the interpretability of their findings. In addition, their investigation was conducted on models using a ResNet50 [14] architecture, excluding ViTs, which are considered the standard for transfer learning [17]. Figure 2: **Selected counterfactual images on ImageNet.** In each case, we show the original image (left) and the generated counterfactual image (right). **Bias Mitigation Methods** The approaches to mitigate biases can be distinguished into methods that manipulate the training data and methods that adjust the training procedure [23]. To mitigate biases during training, existing work suggests, amongst others, adversarial learning [37], training separate models for each attribute [38], or incorporating regularization terms [2, 15]. In contrast, the methods to mitigate biases in the training data aim to generate unbiased data sets that are balanced [16] or do not include information about the bias dimension [24]. One approach to mitigate biases in the training data is Counterfactual Data Augmentation (CDA) [45]. This method entails generating training instances that contradict the observed biases. There are different variations of CDA: 1-sided CDA, which uses just the counterfactuals during an additional pre-training phase, and 2-sided CDA, which uses both counterfactuals and the original training data. While 1-sided CDA has a more substantial impact on biases, it can lead to over-correction [39]. In existing work, CDA has been used to mitigate different types of biases in language models [20], operating on a set of term pairs, such as "man" and "woman". However, generating counterfactual training instances from images is non-trivial.
To address this, conditional generative adversarial networks have been used to generate unbiased training data with balanced protected attributes [26, 31]. Therefore, the authors generate multiple synthetic images for each training image, maintaining the target attribute score but reversing the expression score on the protected attribute. These approaches have been demonstrated to be effective at mitigating bias on selected dimensions, but do not eliminate them. In addition, existing methods focus on downstream tasks, and no research has been conducted on debiasing pre-trained image models used as backbones for transfer learning. ## 3 Background **iEAT** The Image Embedding Association Test (iEAT) quantifies social biases in image embeddings based on semantic similarities [34]. It compares the differential association of image embeddings of selected target concepts (such as "male" and "female") and attributes (such as "science" and "liberal arts"), and tests the null hypothesis of equal similarities of the target concepts and attributes. Hence, a rejection suggests that one target concept is more associated with one attribute than the other (such as "male" is more associated with "science" or "female" is more associated with "liberal arts"). To test the null hypothesis, it formulates a test statistic that compares target concepts X and Y with attributes A and B, defined as: \[s(X,Y,A,B)=\sum_{x\in X}s(x,A,B)-\sum_{y\in Y}s(y,A,B)\] where \(s(w,A,B)\) is the differential association of a target concept with the attributes, measured using the cosine similarities of their embeddings: \[s(w,A,B)=\mu(\cos(w,a)_{a\in A})-\mu(\cos(w,b)_{b\in B})\] where \(\mu\) is the mean. The statistical significance is determined using a permutation test, contrasting the score \(s(X,Y,A,B)\) with the scores \(s(X_{i},Y_{i},A,B)\), where \(X_{i}\) and \(Y_{i}\) are all equal-sized partitions of the set \(X\,\cup\,Y\): \[p_{t}=Pr[s(X_{i},Y_{i},A,B)>s(X,Y,A,B)] \tag{1}\] The effect size \(d\) quantifies the bias magnitude, computed as the normalized separation of the association distributions: \[d=\frac{\mu(s(x,A,B)_{x\in X})-\mu(s(y,A,B)_{y\in Y})}{\sigma(s(t,A,B)_{t\,\in\,X\cup Y})} \tag{2}\] where \(\mu\) is the mean and \(\sigma\) is the standard deviation. Here, the distance from zero indicates the bias magnitude, such that an effect size equaling zero implies the absence of bias. Moreover, the effect size indicates the direction of the bias, such that a negative effect size suggests that the differential association of Y with A and B is more pronounced, whereas a positive effect size implies the opposite scenario. The iEAT framework introduces a collection of 15 association tests designed to measure human-like social biases (see Table 1). These tests offer a valuable baseline to assess the presence and intensity of certain social biases within image embeddings. However, it is important to recognize that they are not an exhaustive list of all possible biases. These biases were selected due to their recurrence in related literature and their societal implications. However, there might be other biases not captured in this selection, such as political biases. Nonetheless, these tests remain an instrumental foundation to assess the existence and magnitude of social biases in image embeddings. **Embedding Layer** The selection of an embedding layer is crucial to extract features that contain high-quality, general-purpose information about the objects in an image.
It has been demonstrated that in ViTs trained with supervised methods, the model depth tends to correlate with the quality of the embeddings, with the highest-quality embeddings being in the second-to-last layer [42]. In contrast, ViTs trained with SSL methods have been found to generate the most useful embeddings at a layer in the middle of the model [3, 8]. Therefore, the selection of an embedding layer depends on the training approach and the specific model. Here, for each model, we choose the layer that has been reported to be optimal in linear evaluations. ## 4 Experiments and Results Here, we describe and discuss our experiments to investigate factors that contribute to the emergence of social biases in the embedding spaces of ViTs. Therefore, we assess bias mitigation methods along multiple dimensions: * Training data: We investigate counterfactual augmentation training using diffusion-based image editing and find that it can reduce social biases in ViTs, but is not sufficient to eliminate them (Section 4.1). * Training objectives: We assess the impact of training objectives, and find that ViTs trained using discriminative objectives are less biased than those trained using generative objectives (Section 4.2). * Model architecture: We evaluate the impact of different architectural choices and find that social biases decrease as model size and input resolution increase, but observe no systematic effect for patch size (Section 4.3). ### Impact of Training Data The emergence of social biases in self-supervised image models is often suggested to be a result of object co-occurrences in images (women are more often set in "home or hotel" scenes, whereas men are more often depicted in "industrial and construction" scenes [36]). However, little research has been conducted on the effect of modifications of the training data on social biases in pre-trained image models. Therefore, we investigate the debiasing effect of counterfactual data on gender bias as an example. Our findings suggest that it can reduce social biases both during pre-training and fine-tuning, although it does not eliminate them and can come at the cost of a slight reduction in downstream performance. Moreover, we observe differences in the responsiveness to the counterfactual data, suggesting that its effectiveness is model-specific. **Models** In our experiments, we use BEiT [3], ViT-MoCo [10], and ViT-MAE [13], which use a standard Transformer as the backbone network (12 layers, 12 attention heads, 768 hidden size). The implementation and model weights were made available using HuggingFace's Transformers [35] and Timm [40]. **Counterfactual Data Augmentation** To investigate the impact of training data, we examine to what extent counterfactual data augmentation can mitigate social biases in ViTs. In our experiments, we combine the approach to counterfactual data augmentation used in natural language processing with diffusion-based image editing. Therefore, we leverage a large-scale text-to-image diffusion model [28] as a foundation, to capitalize on the benefits of pre-training on a sizable and generic corpus. For each image, we generate a textual description using BLIP [21] and CLIP [25]. Then, we use a set of term pairs ("man", "woman") to substitute target words in the generated caption. For our purposes, we adopt the set of gender term pairs of Zhao et al. [44]. To generate counterfactual images, we use diffusion-based semantic image editing with mask guidance [11].
To this end, we use CLIPSeg [22] to mask the target words ("man") in the image and use Stable Diffusion [29] to inpaint the masked image section, conditioned on the modified captions (see Figure 2). Here, we adopt the ImageNet ILSVRC 2012 dataset (ImageNet-1K) [30] as our benchmark to assess the effectiveness of the generated data, as it is one of the most studied benchmarks for which there is an extensive literature on architecture and training procedures. ImageNet-1K contains 1.28 million images, from which we generate an additional 159,393 counterfactual images. \begin{table} \begin{tabular}{l l l l l} \hline \hline Test & Target X & Target Y & Attribute A & Attribute B \\ \hline T1 & Young & Old & Pleasant & Unpleasant \\ T2 & Other & Arab-Muslim & Pleasant & Unpleasant \\ T3 & European American & Asian American & American & Foreign \\ T4 & Disabled & Not-Disabled & Pleasant & Unpleasant \\ T5 & Male & Female & Career & Family \\ T6 & Male & Female & Science & Liberal Arts \\ T7 & Flower & Insect & Pleasant & Unpleasant \\ T8 & European American & Native American & Pleasant & Unpleasant \\ T9 & European American & African American & Pleasant & Unpleasant \\ T10 & Christianity & Judaism & Pleasant & Unpleasant \\ T11 & Gay & Straight & Pleasant & Unpleasant \\ T12 & Light Skin & Dark Skin & Pleasant & Unpleasant \\ T13 & White & Black & Tool & Weapon \\ T14 & White & Black & Tool & Weapon (Modern) \\ T15 & Thin & Fat & Pleasant & Unpleasant \\ \hline \hline \end{tabular} \end{table} Table 1: Image Embedding Association Tests **Counterfactual Training** To evaluate the debiasing effect of counterfactual data, we follow Webster et al. [39] and continue the training of the models from a pre-trained checkpoint using the counterfactual images (1-sided CDA). To this end, we adopt the standard contrastive learning objective for ViT-MoCo [10] and the masked image modeling training objective for BEiT and ViT-MAE, with masking ratios of 40% [3] and 75% [13], respectively. Then, we train each model using Adam [18] with a batch size of 128
This implies that observed effects are not a result of the pre-trained checkpoint, and that other factors influence the debiasing effect, such as model architecture differences. These findings highlight the nuanced effect of training data on social biases, demanding tailored approaches for different architectures and training approaches. Thus, we anticipate the need for more principled approaches that eliminate undesirable model behavior, potentially bypassing the use of counterfactual data and instead using post-hoc interventions to eliminate biases directly. ### Impact of Training Objectives ResNet50 [14] models, when trained using different self-supervised objectives exhibit a different number of social biases [33]. Therefore, we investigate the effect of training objectives on biases in ViTs across a range of different self-supervised methodologies: discriminative and generative models. Our findings indicate that ViTs trained with discriminative learning objective are less biased than those trained using generative objectives. Moreover, we observe that models trained on the same dataset using different objectives can exhibit opposite biases, which highlights the importance of training objectives as an important factor in the emergence of social biases in embedding spaces. Discriminative and Generative ObjectivesWe investigate the distribution of social biases in ViTs trained on ImageNet-21k using different self-supervised objectives. To this end, we follow Sirotkin [33] and count the number of significant social biases across different values of \(p_{t}\) (see Equation 1) in the range of \([10^{-4},10^{-1}]\), where lower values of \(p_{t}\) correspond to higher statistical significance of the social biases. The results of this analysis are illustrated in Figure 3. Our findings indicate that, on average, ViTs trained using discriminative objectives exhibit fewer biases than those trained using generative objectives. This effect remains consistent across all threshold values, which highlights the robustness of our findings. We conjecture that this stems from the inherent characteristics of models trained using generative objectives, which encourage the model to reconstruct images that match the statistical patterns in the training data, capture underlying structure and dependencies within the data. Thus, if the training data is biased towards specific demographics, objects, or scenes, the model could unintentionally learn and perpetuate those biases in its representations. In contrast, discriminative learning objectives encourage representations that maximize view invariance between samples from the same image [32]. This encourages the model to learn and prioritize fundamental visual features that are less influenced by social biases or external factors. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{Baseline} & \multicolumn{2}{c}{CDA} \\ Model & Bias & Cifar10 & Bias & Cifar10 \\ \hline BEiT & 0.65 & **87.5** & **0.45** & 84.8 \\ ViT-MoCo & 1.41 & **95.1** & **1.39** & **95.1** \\ ViT-MAE & **0.59** & **89.6** & 0.64 & **89.6** \\ \hline \hline \end{tabular} \end{table} Table 2: iEAT effect size (see Equation 2) and linear evaluation performance on CIFAR10 of different models before (Baseline) and after (CDA) debiasing using a single pre-training epoch on counterfactual data. 
**We find that counterfactual data augmentation can reduce social biases, but its effect is model-specific and can come with a reduction in representation quality.** \begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{Baseline} & \multicolumn{2}{c}{CDA} \\ Model & Bias & Cifar10 & Bias & Cifar10 \\ \hline ViT-MoCo & 1.25 & 90.4 & **1.04** & **90.9** \\ ViT-MAE & **0.50** & **82.9** & 0.55 & 71.2 \\ \hline \hline \end{tabular} \end{table} Table 3: iEAT effect size (see Equation 2) and linear evaluation performance on CIFAR10 of different models pretrained from scratch on ImageNet-1k (Baseline), and both ImageNet-1k and the counterfactual data (CDA). **We again observe a decrease in gender bias on ViT-MoCo and an increase on ViT-MAE. This implies that the observed effects are not a result of the pre-trained checkpoint.** **Opposite Biases despite same Training Data** The analysis of the number of significant biases fails to capture their direction. To address this, we contrast the effect sizes in Table 4. To our surprise, we find that ViTs can exhibit opposite social biases despite being trained on the same dataset; e.g. ViT-MAE exhibits a tendency to perceive Native Americans as less pleasant than European Americans, while ViT-MoCo [10] exhibits the inverse association. However, we also find that all models reinforce a handful of consistent social biases irrespective of the training objective; e.g. all models associate women more with family roles than careers, and perceive Arab-Muslims as less pleasant than other humans. This points to the idea that these social biases are indeed ingrained in the training data. These findings suggest that biases in image models are not just a result of training data, but that the training objective is a significant factor contributing to their emergence, affecting both the magnitude and direction of biases. Hence, we suggest that future work on bias mitigation focus on the set of social biases that is consistent across models. ### Impact of Model Architecture **Model Size** The size of a model often impacts its performance: larger models tend to generate embeddings that contain higher-quality, more general-purpose information about an image. Therefore, we investigate the influence of model scale on social biases, using iGPT [8] and ViT-MAE [13], as both have been trained using self-supervised methods and are available in three different model sizes. The results indicate that, as the model scales, the direction of social biases within the embedding spaces remains somewhat consistent (see Table 5). This implies that models of similar architecture, trained on the same dataset using the same training objective, tend to inherit analogous social biases.
However, we observe that the average magnitude of the social biases decreases as the model size increases (see Figure 4), which implies that scaling the model might be a practical strategy to mitigate social biases. We speculate that this could be attributed to the model's capacity to capture more semantic information about the objects in the image, without the need to rely on spurious correlations. However, it is crucial to recognize that scaling a model alone might not be sufficient to eliminate social biases.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} Models & T1 & T2 & T3 & T4 & T5 & T6 & T7 & T8 & T9 & T10 & T11 & T12 & T13 & T14 & T15 \\ \hline Discriminative Models & & & & & & & & & & & & & & & \\ ViT-DINO-B & **0.99** & **1.20** & \(-0.86\) & \(0.88\) & **0.38** & \(0.01\) & \(-0.12\) & **0.84** & \(0.49\) & \(0.22\) & \(-0.08\) & \(-0.13\) & \(-0.88\) & \(-0.77\) & **1.24** \\ ViT-MoCo-B & \(-0.15\) & **1.02** & \(-0.75\) & \(-0.29\) & **1.41** & \(0.13\) & **1.68** & \(-0.66\) & **1.10** & \(0.46\) & \(-0.24\) & \(-0.11\) & \(0.77\) & 0.14 & 0.64 \\ ViT-MSN-B [1] & \(0.93\) & **1.24** & \(0.33\) & \(0.93\) & \(0.14\) & \(-0.31\) & \(0.10\) & \(0.60\) & \(-0.78\) & \(0.54\) & \(-0.28\) & **\(-\)1.09** & \(0.18\) & \(-0.08\) & **1.64** \\ \hline Generative Models & & & & & & & & & & & & & & & \\ BEiT-B & \(0.18\) & **0.82** & \(0.02\) & \(0.53\) & **0.65** & \(-0.09\) & **\(-\)1.02** & \(0.28\) & **1.28** & \(0.09\) & \(0.26\) & **1.14** & **\(-\)1.58** & \(0.56\) & **1.72** \\ iGPT-S & \(0.66\) & **0.84** & **\(-\)1.02** & \(0.75\) & \(0.22\) & \(0.16\) & **\(-\)0.55** & **\(-\)1.32** & \(0.54\) & \(0.28\) & \(0.29\) & **1.31** & **\(-\)1.11** & \(0.89\) & **1.69** \\ ViT-MAE-B & \(0.11\) & **0.55** & \(-0.29\) & \(-0.35\) & **0.59** & \(0.08\) & **\(-\)1.15** & **\(-\)1.15** & \(-0.81\) & \(0.34\) & \(0.29\) & **0.96** & **\(-\)1.30** & **\(-\)1.31** & **1.75** \\ \end{tabular} \end{table} Table 4: iEAT effect sizes (see Equation 2) for a range of association tests (see Table 1) using different embedding models. The models were trained on ImageNet-21k using self-supervised methods, with the exception of ViT-MoCo, which was trained on ImageNet-1k. The effect sizes indicate the magnitude and direction of the bias, and are written in bold if the effect is significant at \(p_{t}=0.05\). ViTs trained using different self-supervised objectives can exhibit opposite social biases, despite being trained on the same dataset.

Figure 4: **The mean absolute iEAT effect size decreases as model size increases.** The boxplot illustrates the effect size distribution, with the median (solid line), the quartile range (boxes), and the rest of the distribution (whiskers).

Figure 3: The number of biases detected in embedding spaces of ViTs for different values of \(p_{t}\) (see Equation 1). ViTs trained using discriminative objectives are less biased than those trained using generative objectives.

**Input Resolution and Patch Size** In addition, input resolution and patch size have been discussed as important model parameters [3, 13]. Hence, we investigate the effect of these parameters on social biases (see Table 6). To assess the impact of different input resolutions, we consider BEiT pre-trained on ImageNet-21k at a 224x224 input resolution and subsequently fine-tuned on ImageNet-1k at different input resolutions. Our results indicate that social biases diminish as input resolution increases.
This finding implies that adopting a higher input resolution might contribute to a reduction in social biases. To assess the impact of different patch sizes, we consider ViT-DINO [7], which was trained at different patch sizes. In our analysis, we observe some variability in the magnitude of the social biases, but no systematic increase or decrease. However, it is important to acknowledge that the sample size for this analysis is small, due to the limited number of published models. Therefore, further validation should be conducted to confirm these findings.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline Models & T1 & T2 & T3 & T4 & T5 & T6 & T7 & T8 & T9 & T10 & T11 & T12 & T13 & T14 & T15 \\ \hline \multicolumn{16}{l}{Input Resolution} \\ BEiT\({}_{224}\)-L & \(\mathbf{1.59}\) & \(\mathbf{1.41}\) & \(\mathbf{0.20}\) & \(-0.07\) & \(\mathbf{0.40}\) & \(-0.21\) & \(\mathbf{1.59}\) & \(-0.19\) & \(\mathbf{1.46}\) & \(0.18\) & \(\mathbf{-0.88}\) & \(\mathbf{1.12}\) & \(\mathbf{1.06}\) & \(0.81\) & \(\mathbf{1.18}\) \\ BEiT\({}_{384}\)-L & \(\mathbf{0.45}\) & \(\mathbf{1.46}\) & \(\mathbf{0.60}\) & \(\mathbf{0.15}\) & \(\mathbf{0.36}\) & \(-0.17\) & \(\mathbf{1.61}\) & \(-0.46\) & \(\mathbf{1.47}\) & \(0.27\) & \(\mathbf{-1.11}\) & \(0.47\) & \(0.61\) & \(\mathbf{1.12}\) & \(\mathbf{1.02}\) \\ BEiT\({}_{512}\)-L & \(\mathbf{0.01}\) & \(\mathbf{1.55}\) & \(\mathbf{0.35}\) & \(\mathbf{0.30}\) & \(\mathbf{0.19}\) & \(-0.22\) & \(\mathbf{1.65}\) & \(-0.41\) & \(\mathbf{1.63}\) & \(0.21\) & \(\mathbf{-1.09}\) & \(0.49\) & \(0.46\) & \(\mathbf{1.03}\) & \(\mathbf{0.79}\) \\ \hline \multicolumn{16}{l}{Patch Size} \\ DINO-B/8 & \(\mathbf{0.04}\) & \(\mathbf{1.22}\) & \(\mathbf{0.32}\) & \(\mathbf{1.19}\) & \(\mathbf{0.37}\) & \(-0.16\) & \(\mathbf{0.06}\) & \(\mathbf{0.97}\) & \(\mathbf{1.16}\) & \(0.36\) & \(-0.13\) & \(0.04\) & \(\mathbf{-1.21}\) & \(0.41\) & \(\mathbf{1.49}\) \\ DINO-B/16 & \(\mathbf{0.99}\) & \(\mathbf{1.20}\) & \(-0.86\) & \(\mathbf{0.88}\) & \(\mathbf{0.38}\) & \(\mathbf{0.01}\) & \(-0.12\) & \(\mathbf{0.84}\) & \(0.49\) & \(0.22\) & \(-0.08\) & \(-0.13\) & \(-0.88\) & \(-0.77\) & \(\mathbf{1.24}\) \\ \hline \hline \end{tabular} \end{table} Table 6: iEAT effect sizes (see Equation 2) for a range of association tests (see Table 1) of BEiT pre-trained on ImageNet-21k and then fine-tuned on ImageNet-1k at different input resolutions, and ViT-DINO trained using different patch sizes. The effect sizes indicate the magnitude and direction of the bias, and are written in bold if the effect is significant at \(p_{t}=0.05\). **The direction of the social biases is somewhat consistent across different input resolutions and patch sizes, and the average magnitude of the biases decreases as input resolution increases. However, we do not observe a systematic effect for patch size.**

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline Models & T1 & T2 & T3 & T4 & T5 & T6 & T7 & T8 & T9 & T10 & T11 & T12 & T13 & T14 & T15 \\ \hline iGPT-S & \(\mathbf{0.66}\) & \(\mathbf{0.84}\) & \(\mathbf{-1.02}\) & \(\mathbf{0.75}\) & \(\mathbf{0.22}\) & \(\mathbf{0.16}\) & \(\mathbf{-0.55}\) & \(\mathbf{-1.32}\) & \(\mathbf{0.54}\) & \(\mathbf{0.28}\) & \(\mathbf{0.29}\) & \(\mathbf{1.31}\) & \(\mathbf{-1.11}\) & \(\mathbf{0.89}\) & \(\mathbf{1.69}\) \\ iGPT-M & \(\mathbf{0.38}\) & \(\mathbf{0.97}\) & \(\mathbf{-0.62}\) & \(\mathbf{0.46}\) & \(\mathbf{0.43}\) & \(\mathbf{0.19}\) & \(\mathbf{-0.07}\) & \(\mathbf{-1.02}\) & \(-0.47\) & \(\mathbf{0.60}\) & \(\mathbf{0.08}\) & \(\mathbf{1.26}\) & \(\mathbf{0.59}\) & \(\mathbf{1.02}\) & \(\mathbf{1.50}\) \\ iGPT-L & \(\mathbf{-0.40}\) & \(\mathbf{1.00}\) & \(\mathbf{0.41}\) & \(\mathbf{0.79}\) & \(\mathbf{0.44}\) & \(\mathbf{0.23}\) & \(\mathbf{0.27}\) & \(\mathbf{-0.61}\) & \(-0.77\) & \(\mathbf{0.55}\) & \(\mathbf{0.07}\) & \(\mathbf{1.11}\) & \(\mathbf{0.13}\) & \(\mathbf{0.49}\) & \(\mathbf{0.75}\) \\ ViT-MAE-B & \(\mathbf{0.11}\) & \(\mathbf{0.55}\) & \(-0.29\) & \(-0.35\) & \(\mathbf{0.59}\) & \(\mathbf{0.08}\) & \(\mathbf{-1.15}\) & \(\mathbf{-1.15}\) & \(-0.81\) & \(\mathbf{0.34}\) & \(\mathbf{0.29}\) & \(\mathbf{0.96}\) & \(\mathbf{-1.30}\) & \(\mathbf{-1.31}\) & \(\mathbf{1.75}\) \\ ViT-MAE-L & \(\mathbf{0.03}\) & \(\mathbf{0.56}\) & \(-0.21\) & \(-0.51\) & \(\mathbf{0.55}\) & \(\mathbf{0.01}\) & \(\mathbf{-1.17}\) & \(\mathbf{-1.43}\) & \(-0.75\) & \(\mathbf{0.35}\) & \(\mathbf{0.33}\) & \(\mathbf{1.03}\) & \(\mathbf{-1.38}\) & \(\mathbf{-1.41}\) & \(\mathbf{1.64}\) \\ ViT-MAE-H & \(\mathbf{0.09}\) & \(\mathbf{0.63}\) & \(-0.39\) & \(-0.10\) & \(\mathbf{0.55}\) & \(-0.09\) & \(\mathbf{-1.18}\) & \(\mathbf{-1.34}\) & \(-0.23\) & \(\mathbf{0.29}\) & \(\mathbf{0.30}\) & \(\mathbf{0.95}\) & \(\mathbf{-1.47}\) & \(\mathbf{-1.44}\) & \(\mathbf{0.40}\) \\ \hline \hline \end{tabular} \end{table} Table 5: iEAT effect sizes (see Equation 2) for a range of association tests (see Table 1) using different embedding models trained on ImageNet-21k using self-supervised methods. The effect sizes indicate the magnitude and direction of the bias, and are written in bold if the effect is significant at \(p_{t}=0.05\). **The direction of the social biases in the embedding spaces of a model is consistent across model sizes. However, the average magnitude of the social biases decreases as model size increases.**

**Per-Layer Analysis** In our experiments, we use the embeddings from the layer that has been reported to be optimal in linear evaluation. However, we expect that the intensity of the biases might differ between layers, due to the increasing semantic interpretability of internal representations [5, 33]. To explore this, we count the number of significant social biases across different layers, using a significance threshold of \(p_{t}=0.05\). The results are illustrated in Figure 5. We observe that for models trained using generative objectives, despite some variation in magnitude, the number of significant biases is somewhat consistent across all layers. However, for models trained using discriminative objectives, we find that the number of significant biases in the earlier layers mirrors that of models trained using generative objectives and then decreases as we progress through the model. This suggests that the biases inherent in the low-level features are consistent across all models, but there is a noticeable divergence as the models develop more semantically meaningful features. We hypothesize that the observed divergence in biases across different layers could be attributed to the specific training objectives of the models, as detailed in Section 4.2. The existence of biases in the earlier layers does seem counterintuitive, as no semantic concepts have formed yet. However, we found that a substantial portion of these biases, such as skin tone and weight, are connected to lower-level features, such as pixel brightness. This suggests that these biases could be identified without necessarily associating them with the intended semantic concepts.
Therefore, we hypothesize that the root of the biases in the earlier layers could be grounded in the inherent characteristics of the image data, and not necessarily in the high-level semantic interpretations we are probing. These findings align with prior observations on ResNets [33].

## 5 Conclusion

The emergence of social biases in models trained using self-supervised objectives is often attributed to biases in the training data. However, we find that models can exhibit opposite biases despite being trained on the same data. This challenges the prevailing belief that social biases arise solely from simple co-occurrences of objects in the training images. Moreover, we find that training objectives, model architecture, and model scale each have significant effects on social biases in learned representations. These effects can be reduced, but not eliminated, using counterfactual data augmentation. Therefore, we recommend that model developers and users take these details into account when designing and selecting the model most relevant to their needs, as each decision has quantifiable trade-offs. Moreover, our analysis exposes a set of social biases that is consistent across different models; we therefore suggest that future work evaluate bias-mitigation approaches along these dimensions.

## Acknowledgment

This work was supported in part by the German Federal Ministry for Digital and Transport (BMDV), and in part by the German Federal Ministry for Economic Affairs and Climate Action (BMWK).
2304.01445
On the coordination efficiency of strategic multi-agent robotic teams
We study the problem of achieving decentralized coordination by a group of strategic decision makers choosing to engage or not in a task in a stochastic setting. First, we define a class of symmetric utility games that encompass a broad class of coordination games, including the popular framework known as \textit{global games}. With the goal of studying the extent to which agents engaging in a stochastic coordination game indeed coordinate, we propose a new probabilistic measure of coordination efficiency. Then, we provide a universal information-theoretic upper bound on the coordination efficiency as a function of the amount of noise in the observation channels. Finally, we revisit a large class of global games, and we illustrate that their Nash equilibrium policies may be less coordination efficient than certainty-equivalent policies, despite providing better expected utility. This counterintuitive result establishes the existence of a nontrivial trade-off between coordination efficiency and expected utility in coordination games.
Marcos M. Vasconcelos, Behrouz Touri
2023-04-04T01:38:29Z
http://arxiv.org/abs/2304.01445v1
# On the coordination efficiency of strategic multi-agent robotic teams

###### Abstract

We study the problem of achieving decentralized coordination by a group of strategic decision makers choosing to engage or not in a task in a stochastic setting. First, we define a class of symmetric utility games that encompass a broad class of coordination games, including the popular framework known as _global games_. With the goal of studying the extent to which agents engaging in a stochastic coordination game indeed coordinate, we propose a new probabilistic measure of coordination efficiency. Then, we provide a universal information-theoretic upper bound on the coordination efficiency as a function of the amount of noise in the observation channels. Finally, we revisit a large class of global games, and we illustrate that their Nash equilibrium policies may be less coordination efficient than certainty-equivalent policies, despite providing better expected utility. This counterintuitive result establishes the existence of a nontrivial trade-off between coordination efficiency and expected utility in coordination games.

## I Introduction

Coordinated behavior is desirable in many distributed autonomous systems such as robotic, social-economic, and biological networks [1, 2, 3, 4, 5]. Most of the engineering literature on coordination assumes that the agents exchange messages over a communication network to asymptotically agree on a common decision variable, such as in opinion dynamics and distributed optimization. However, in the field of Economics, the topic of coordination has been studied from a different point of view, where the agents do not exchange (explicit) messages but instead act strategically. Such an approach is related to coordination games, in which two or more interacting agents are incentivized to take the same action. Deterministic coordination games are characterized by the existence of multiple equilibria, and often lead to the analysis of social dilemmas. One way to address the multiplicity of equilibria uses a framework known as _global games_ [6]. A global game is a Bayesian coordination game, where each agent plays an action after observing a noisy signal about the state-of-the-world. The state-of-the-world, which we simply refer to as the _state_, captures features such as the strength of the economy in a bank-run model, the political regime in a regime-change model, or the difficulty of a task in a task-allocation problem. Under certain assumptions on the utility structure, global games admit a unique Bayesian Nash Equilibrium even in the presence of vanishingly small noise in the observations, resolving the issue of equilibrium selection in games with multiple equilibria [7]. The recent literature on this class of games focuses on the existence of Nash equilibria under different information patterns [8], the influence of correlation among the agents' observations [9, 10], and the impact of different local connectivity patterns, in terms of externalities in the agents' utility functions [11]. Other recent developments look at non-conventional probabilistic models for the noisy signals [12], and the presence of a multi-dimensional state in a multi-task allocation problem [13]. We consider the federated system architecture outlined in Fig. 1, where the state is available at a remote location (e.g., a cloud server), and broadcast to multiple agents by an edge node or gateway over parallel noisy channels.
Upon receiving its noisy signal, an agent makes a binary decision so as to maximize an expected utility function satisfying the strategic complementarity property, leading to coordinated behavior [14]. We study how the coordination in a global game degrades with the level of noise in the communication channels. Moreover, we are interested in characterizing the limits of coordination for a given signal-to-noise ratio used for communication with the robotic agents. The main contributions of this paper are: * We introduce a class of games, namely homogeneous coordination games, that includes a broad class of global games. * We introduce a novel notion of coordination efficiency used to measure the coordination in homogeneous coordination games. * Using information-theoretic tools, we obtain a fundamental limitation on the coordination efficiency in global games that holds for any policy.

Fig. 1: System architecture for the strategic coordination in a robotic team receiving information about a stochastic state variable available at a remote location.

## II System Model

In this section, we discuss our model for global games and, in particular, we introduce an important subclass of such games, i.e., _homogeneous coordination_ games.

### _Utility structure_

A global game is an incomplete-information game that is played between \(N\) players \([N]\mathop{=}\limits^{\text{def}}\{1,\ldots,N\}\) and nature. Formally, a global game is a tuple \(([N],\mathcal{A},X,\mathbf{u},\mathbf{Y})\), where: 1. \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\times\cdots\times\mathcal{A}_{N}\) is the joint action set of the \(N\) players, with \(\mathcal{A}_{i}\) being the action set of player \(i\in[N]\), 2. \(X\) is a random variable determining the _type_ of nature; we refer to \(X\) as the state or the underlying fundamental of the game, 3. \(\mathbf{u}=(u_{1},\ldots,u_{N}):\mathcal{A}\times\mathbb{R}\rightarrow\mathbb{R}^{N}\) is the utility of the \(N\) players, with \(u_{i}:\mathcal{A}\times\mathbb{R}\rightarrow\mathbb{R}\) being the utility of the \(i\)-th player, which depends on the actions of all players and the value of the underlying fundamental \(X\), and 4. \(\mathbf{Y}=(Y_{1},\ldots,Y_{N})\), where \(Y_{i}\) is a random variable denoting player \(i\)'s noisy observation (which forms its belief) of the underlying fundamental \(X\), for \(i\in[N]\). In this work, for any vector \(\mathbf{v}=(v_{1},\ldots,v_{N})\) and any \(i\in[N]\), we use the notation \(\mathbf{v}_{-i}=(v_{1},\ldots,v_{i-1},v_{i+1},\ldots,v_{N})\) and, with abuse of notation, we write \(\mathbf{v}=(v_{i},\mathbf{v}_{-i})\). For example, for a joint action \(\mathbf{a}\in\mathcal{A}\), we write \(\mathbf{a}=(a_{i},\mathbf{a}_{-i})\) for all \(i\in[N]\). Our work is motivated by the observation that, in a vast majority of studies on global games and their applications, the underlying game has the following common features: 1. _Symmetric/Permutation invariant_: In many settings, the utility functions of individual agents are invariant under any permutation of the other agents' actions. In other words, \(u_{i}(a_{i},\mathbf{a}_{-i})=u_{i}(a_{i},\mathbf{a}_{-i}P)\) for any \((N-1)\times(N-1)\) permutation matrix \(P\) (a matrix is a permutation matrix if all its elements are zero or one and each row and each column has exactly one non-zero element). 2.
_Homogeneous utility functions_: The utility functions of the \(N\) players are the same, in the sense that for any player \(i\in[N]\) and any action profile \(\mathbf{a}\in\mathcal{A}\), we have \(u_{i}(a_{i},\mathbf{a}_{-i})=u_{1}(a_{i},\mathbf{a}_{-i})\). 3. _Homogeneous action sets_: In many global games, we are dealing with a large population, and the action sets of all players are identical. For example, in the case of political riots, all players decide to take a risky action or a safe action in the face of a political regime. In this case, \(\mathcal{A}_{i}=\{0,1\}\), where \(0\) and \(1\) correspond to the safe and risky actions, respectively. 4. _Coordination promoting_: Again, in most settings of interest, the utility structure of the players is such that it promotes coordination. For example, in the case of political uprisings, bank runs, etc., a well-studied utility function is \(u_{i}(\mathbf{a},X)=a_{i}\big{(}\sum_{j=1}^{N}a_{j}-X\big{)}\). Therefore, in the case where all players have perfect information about \(X\), i.e., when \(Y_{i}=X\) for all \(i\in[N]\), depending on whether \(X>\frac{1}{n}\) or \(X<\frac{1}{n}\), the only equilibrium of the game is either \(\mathbf{a}=\mathbf{1}\) or \(\mathbf{a}=\mathbf{0}\), resulting in coordination among the \(N\) players. Motivated by this, we introduce the notion of _homogeneous coordination_ games, which formalizes a broad class of games satisfying the above properties. For this, let \(\Delta^{k}:=\{\mathbf{q}\in\mathbb{R}^{k}_{+}\mid\sum_{i=1}^{k}q_{i}=1\}\) be the probability simplex in \(\mathbb{R}^{k}\). For a finite set \(\mathcal{A}=\{0,\ldots,M-1\}\) and a vector \(\mathbf{v}\in\mathcal{A}^{d}\), where \(d\geq 1\), let us define the empirical mass function \(G(\mathbf{v})=\big{(}q_{0},\ldots,q_{M-1}\big{)}\in\Delta^{M}\) by \[q_{\ell}=\frac{1}{d}\sum_{j=1}^{d}\mathbf{1}(v_{j}=\ell),\ \ \ell\in\mathcal{A}.\] That is, \(q_{\ell}\) is the proportion of the entries of \(\mathbf{v}\) that are equal to \(\ell\in\mathcal{A}\). Now, we are ready to formalize the class of homogeneous coordination games. **Definition 1** (Homogeneous Coordination Game): _A homogeneous coordination game is a game where all the agents have the same action set \(\mathcal{A}\) and the same utility function \(u:\mathcal{A}^{N}\times\mathbb{R}\rightarrow\mathbb{R}\), satisfying the following conditions:_ 1. _There exists a function_ \(\hat{u}:\mathcal{A}\times\Delta^{M}\times\mathbb{R}\rightarrow\mathbb{R}\) _such that for all_ \(\mathbf{a}\in\mathcal{A}^{N}\)_, all_ \(i\in[N]\)_, and all_ \(x\in\mathbb{R}\)_, we have_ \[\hat{u}(a_{i},G(\mathbf{a}_{-i}),x)=u(a_{i},\mathbf{a}_{-i},x).\] (1) 2. _For all_ \(i\in[N]\) _and all_ \(x\in\mathbb{R}\)_, there exist an optimal action_ \(a^{\star}(x)\) _and a majority level_ \(c^{\star}(x)\) _such that, if the majority of the players is playing_ \(a^{\star}(x)\)_, then player_ \(i\) _is better off playing that action._
Mathematically, there exists \(c^{\star}(x)\in[0,1]\) such that for any \(\mathbf{g}\in\Delta^{M}\) with \(g_{a^{\star}(x)}=\max_{\ell\in[M]}g_{\ell}\geq c^{\star}(x)\), we have \[\hat{u}\big{(}a^{\star}(x),\mathbf{g},x\big{)}\geq\hat{u}\big{(}a_{i},\mathbf{g},x\big{)},\qquad\forall a_{i}\in\mathcal{A}.\] (2) Note that Property (1) essentially means that the utility function of each player is symmetric/permutation invariant. In other words, for a finite action set \(\mathcal{A}=[M]\), any symmetric/permutation-invariant function \(f:\mathcal{A}^{N-1}\rightarrow\mathbb{R}\) can be written as a function of the empirical mass function of the \(M\) actions; for such utility functions, it does not matter which player plays which action, but rather what proportion of the players plays each action. For the rest of the paper, with an abuse of notation, we view the utility function of a homogeneous coordination game simply as a function of the empirical mass and write \(u(a_{i},\mathbf{g}_{-i},x)\) instead of \(\hat{u}(a_{i},G(\mathbf{a}_{-i}),x)\), where \(\mathbf{g}_{-i}=G(\mathbf{a}_{-i})\).

### _Information structure and policies_

Here, we discuss the assumptions on the fundamental \(X\) and on the individual agents' noisy observations \(Y_{i}\). Throughout, we assume that \(X\) is a zero-mean Gaussian random variable with variance \(\sigma_{X}^{2}\), i.e., \(X\sim\mathcal{N}(0,\sigma_{X}^{2})\). We assume the commonly studied model (cf. [6, 8, 9]) for the \(i\)-th agent's noisy observation, \(Y_{i}=X+Z_{i}\). We assume that the noise sequence \(\{Z_{i}\}_{i\in[N]}\) is independent and identically distributed across agents, with \(Z_{i}\sim\mathcal{N}(0,\sigma_{Z}^{2})\). Moreover, \(\{Z_{i}\}\) is independent of \(X\). Since \(X\) and \(Y_{i}\) are jointly Gaussian, the minimum mean squared error (MMSE) estimate of \(X\) given \(Y_{i}=y_{i}\) is linear and is given by \[\hat{x}_{\mathrm{mmse}}(y_{i})\mathop{=}^{\mathrm{def}}\mathbb{E}[X\mid Y_{i}=y_{i}]=\Big{(}\frac{\sigma_{X}^{2}}{\sigma_{X}^{2}+\sigma_{Z}^{2}}\Big{)}y_{i}. \tag{3}\] In general, in games of imperfect information (which include global games and homogeneous coordination games), the agents act based on their observations. This leads to the notion of _policy_. For homogeneous coordination games with the action set \(\mathcal{A}=\{0,\ldots,M-1\}\), a _policy_ is a mapping \(\gamma_{i}:\mathbb{R}\to\mathcal{A}\) that translates agent \(i\)'s observation into an action, i.e., agent \(i\in[N]\) takes action \(a_{i}=\gamma_{i}(Y_{i})\).

### _Bayesian Nash Equilibrium_

Let \(u_{i}\) be the utility function of a homogeneous coordination game (Definition 1). The agents in this game act in a noncooperative manner, seeking to maximize their individual _expected_ utility with respect to their coordination policies. Let \(\boldsymbol{\gamma}\mathop{=}^{\mathrm{def}}(\gamma_{1},\ldots,\gamma_{N})\) be a _policy profile_, i.e., the collection of policies used by all the agents in the system. Given \(\boldsymbol{\gamma}_{-i}\), the goal of the \(i\)-th agent is to solve the stochastic optimization problem \[\mathop{\mathrm{maximize}}_{\gamma_{i}}\ \mathcal{J}_{i}(\gamma_{i},\boldsymbol{\gamma}_{-i})\mathop{=}^{\mathrm{def}}\mathbb{E}\Big{[}u_{i}(A_{i},G_{-i},X)\Big{]},\] where the expectation is taken over all the exogenous random variables \(X\) and \(\{Z_{i}\}_{i\in[N]}\).
This leads to the notion of Bayesian Nash Equilibrium (BNE) strategies. **Definition 2** (Bayesian Nash Equilibrium): _A policy profile \(\boldsymbol{\gamma}^{\star}\) is a Bayesian Nash Equilibrium if_ \[\mathcal{J}_{i}(\gamma_{i}^{\star},\boldsymbol{\gamma}_{-i}^{\star})\geq\mathcal{J}_{i}(\gamma_{i},\boldsymbol{\gamma}_{-i}^{\star}),\text{ for all }\gamma_{i}\in\Gamma,\ \ i\in[N],\] _where \(\Gamma\) is the space of all admissible coordination policies/strategies._

### _Coordination measure_

Given that the state \(X\) is not perfectly observed by the agents, full coordination is often unachievable. In a deterministic setting, defining a precise notion of coordination and agreement is a well-posed problem. However, there are multiple ways of defining a metric of coordination efficiency in a stochastic game setting. One essential feature that such a metric should have is that it captures how the extent to which agents coordinate around an optimal action degrades with the amount of noise in the observations. The framework of homogeneous coordination games allows us to mathematically define such a measure of _coordination efficiency_. **Definition 3** (Coordination efficiency): _Let \(\boldsymbol{\gamma}\) be a policy profile of \(N\) players in a homogeneous coordination game (as defined in Definition 1). We define the average coordination efficiency \(\varrho(\boldsymbol{\gamma})\in[0,1]\) as_ \[\varrho(\boldsymbol{\gamma})\mathop{=}^{\mathrm{def}}\frac{1}{N}\sum_{i=1}^{N}\mathbb{P}\big{(}\gamma_{i}(Y_{i})=a^{\star}(X)\big{)},\] _where \(a^{\star}(x)\) is the optimal action defined in Definition 1._

## III Global Games Revisited

An important instance of homogeneous coordination games is a class of binary-action global games (i.e., \(\mathcal{A}=\{0,1\}\)), where the utility of each agent is given by \[u_{i}(a_{i},\mathbf{a}_{-i},x)=a_{i}\cdot\bigg{(}b\Big{(}\sum_{j\neq i}a_{j}\Big{)}-x\bigg{)}, \tag{4}\] where \(b:[0,N-1]\to\mathbb{R}\) is a continuous and increasing function, called the _benefit function_. One application for this utility is distributed task allocation in robotic teams [15, 16], where \(x\) represents the difficulty of a task. An agent benefits from engaging in the task if the number of other agents engaging in the task is sufficiently large. However, if the variable \(x\) is not perfectly observed by the agents, it is not clear whether an agent should engage in the task or not. Our next result establishes that global games with the utility structure (4) are in fact homogeneous coordination games. **Lemma 1**: _A global game with \(N\geq 2\) players, binary action set \(\mathcal{A}=\{0,1\}\), and utility function (4) is a homogeneous coordination game for any increasing continuous function \(b:[0,N-1]\to\mathbb{R}\)._ The homogeneity of the action sets and utility functions follows readily from the definition of such games. To show Property (1), for \(\mathbf{a}\in\mathcal{A}^{N}\) and \(i\in[N]\), let \(p=\frac{\sum_{j\neq i}a_{j}}{N-1}\). Then, \(G(\mathbf{a}_{-i})=(1-p,p)\). Therefore, letting \(\hat{u}(a_{i},\mathbf{g},x)\mathop{=}^{\mathrm{def}}a_{i}(b((N-1)g_{1})-x)\) for all \(a_{i}\in\mathcal{A}\), \(\mathbf{g}=(g_{0},g_{1})\in\Delta^{2}\), and \(x\in\mathbb{R}\), we have \[u_{i}(a_{i},\mathbf{a}_{-i},x)=a_{i}\Big{(}b\left((N-1)p\right)-x\Big{)}=\hat{u}(a_{i},G(\mathbf{a}_{-i}),x). \tag{5}\] To show Property (2), fix \(x\in\mathbb{R}\) and let \(b(\cdot)\) be an increasing benefit function.
Then, if \(x\leq b(0)\), we have \[\hat{u}(1,\mathbf{g},x)=b\Big{(}(N-1)g_{1}\Big{)}-x\geq b(0)-x\geq 0=\hat{u}(0,\mathbf{g},x).\] Therefore, for any \(x\leq b(0)\), (2) holds with \(a^{\star}(x)=1\) and \(c^{\star}(x)=0\). Similarly, it can be shown that for \(x\geq b(N-1)\), (2) holds with \(a^{\star}(x)=0\) and \(c^{\star}(x)=0\). For \(x\in(b(0),b(N-1))\), we can show that both \(a^{\star}(x)=0\) and \(a^{\star}(x)=1\) are possible coordinating actions. To show \(a^{\star}(x)=1\), let \(c^{\star}(x)=\min\{q\in[0,1]\mid b(q(N-1))\geq x\}\) (note that the minimum exists due to the continuity of \(b(\cdot)\) and the compactness of \([0,1]\)). Then for any probability vector \(\mathbf{g}=(g_{0},g_{1})\in\Delta^{2}\) with \(g_{1}\geq c^{\star}(x)=q\), we have \[\hat{u}(1,\mathbf{g},x)=b\Big{(}(N-1)g_{1}\Big{)}-x\geq b((N-1)q)-x\geq 0=\hat{u}(0,\mathbf{g},x).\] Therefore, condition (2) holds. Similarly, it can be shown that for \(x\in(b(0),b(N-1))\), \(a^{\star}(x)=0\) is a coordinating action with \(c^{\star}(x)=\max\{q\in[0,1]\mid b((1-q)(N-1))\geq x\}\). In this work, we study the so-called _best-response_ policy for the above games. In our case, for a joint policy \(\boldsymbol{\gamma}\), agent \(i\)'s best-response policy is \[\mathrm{BR}(y_{i},\boldsymbol{\gamma}_{-i})=\begin{cases}1&\text{if }\mathbb{E}\Big{[}b\Big{(}\sum_{j\neq i}\gamma_{j}(Y_{j})\Big{)}\mid Y_{i}=y_{i}\Big{]}\geq\mathbb{E}[X\mid Y_{i}=y_{i}],\\ 0&\text{otherwise.}\end{cases}\]

### _Threshold policies and their best response_

In many games of imperfect information, including global games, we are interested in the class of threshold policies. For global games with binary actions, these are policies where an agent compares its observed signal \(Y_{i}=y_{i}\) to a threshold \(\tau_{i}\) and decides whether to take the risky action (\(a_{i}=1\)) or not (\(a_{i}=0\)), i.e., \[\gamma_{i}(y_{i})=\begin{cases}1,&y_{i}\leq\tau_{i},\\ 0,&\text{otherwise.}\end{cases}\] Using the next result, we will show that the best response to homogeneous threshold policies is a threshold policy. **Lemma 2**: _If the function \(b(\cdot)\) is nonnegative and strictly increasing, and all other agents \(j\neq i\) utilize a threshold policy \(\gamma\) with the same threshold \(\tau\), then_ \[\mathbb{E}\Bigg{[}b\Big{(}\sum_{j\neq i}\gamma(Y_{j})\Big{)}\ \Big{|}\ Y_{i}=y_{i}\Bigg{]}\] _is a strictly decreasing function of \(y_{i}\)._ Let \(B_{-i}\) denote the function of the random variables \(\{Y_{j}\}_{j\neq i}\) given by \(B_{-i}\stackrel{{\mathrm{def}}}{{=}}b\Big{(}\sum_{j\neq i}\gamma(Y_{j})\Big{)}\), with the cumulative distribution function (CDF) \[F_{B_{-i}\mid Y_{i}=y_{i}}(\xi)\stackrel{{\mathrm{def}}}{{=}}\mathbb{P}\Big{(}B_{-i}\leq\xi\ \Big{|}\ Y_{i}=y_{i}\Big{)}.\] Since \(b(\cdot)\geq 0\), \(B_{-i}\) is a nonnegative random variable. Therefore (cf. [17, Chapter 1.5, Property E.6]), \[\mathbb{E}[B_{-i}\mid Y_{i}=y_{i}]=\int_{0}^{\infty}\Big{(}1-F_{B_{-i}\mid Y_{i}=y_{i}}(\xi)\Big{)}d\xi.\] Because the function \(b\) is strictly increasing, it admits an inverse function \(b^{-1}\) on its range. Also, since the observations \(\{Y_{j}\}_{j\in[N]}\) are conditionally independent given \(X\), we have \[F_{B_{-i}\mid Y_{i}=y_{i}}(\xi)=\int_{\mathbb{R}}\mathbb{P}\Big{(}\sum_{j\neq i}\gamma(Y_{j})\leq b^{-1}(\xi)\ \Big{|}\ X=x\Big{)}\\ \times f_{X\mid Y_{i}=y_{i}}(x)dx.\] Let \(A_{j}=\gamma(Y_{j})\) for \(j\neq i\).
Conditioned on \(X=x\), the collection of Bernoulli random variables \(\{A_{j}\}_{j\neq i}\) is mutually independent with \[\mathbb{P}\big{(}A_{j}=1\mid X=x\big{)}=\Phi\Big{(}\frac{\tau_{j}-x}{\sigma_{Z}}\Big{)},\] where \(\Phi\) is the CDF of a standard Gaussian random variable, \[\Phi(x)\stackrel{{\mathrm{def}}}{{=}}\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\exp\Big{(}-\frac{\xi^{2}}{2}\Big{)}d\xi.\] Under the assumption of a homogeneous threshold strategy profile where \(\tau_{j}=\tau\) for all \(j\neq i\), \(\{A_{j}\}_{j\neq i}\) is identically distributed, which implies that \[\sum_{j\neq i}A_{j}\mid X=x\sim\mathcal{B}\Bigg{(}N-1,\Phi\Big{(}\frac{\tau-x}{\sigma_{Z}}\Big{)}\Bigg{)},\] where \(\mathcal{B}(k,p)\) is a binomial distribution with parameters \((k,p)\). Therefore, \[F_{B_{-i}\mid Y_{i}=y_{i}}(\xi)=\mathbb{E}_{V}\Bigg{[}\sum_{\ell=0}^{\lfloor b^{-1}(\xi)\rfloor}\binom{N-1}{\ell}\Phi\Big{(}\frac{\tau-V-\alpha y_{i}}{\sigma_{Z}}\Big{)}^{\ell}\times\Bigg{(}1-\Phi\Big{(}\frac{\tau-V-\alpha y_{i}}{\sigma_{Z}}\Big{)}\Bigg{)}^{N-1-\ell}\Bigg{]},\] where the expectation is with respect to a random variable \(V\) with \[V\sim\mathcal{N}\Big{(}0,\frac{\sigma_{X}^{2}\sigma_{Z}^{2}}{\sigma_{X}^{2}+\sigma_{Z}^{2}}\Big{)}\ \ \text{and}\ \ \alpha\stackrel{{\mathrm{def}}}{{=}}\frac{\sigma_{X}^{2}}{\sigma_{X}^{2}+\sigma_{Z}^{2}}. \tag{6}\] Let \(p(\tau,v,y_{i})\stackrel{{\mathrm{def}}}{{=}}\Phi\Big{(}\frac{\tau-v-\alpha y_{i}}{\sigma_{Z}}\Big{)}\). Note that \(p(\cdot\,,\cdot,y_{i})\) is strictly decreasing in \(y_{i}\), and since the CDF of a binomial random variable evaluated at a point is a strictly decreasing function of the probability parameter \(p\), we have \[\frac{\partial}{\partial y_{i}}\Big{(}1-F_{B_{-i}\mid Y_{i}=y_{i}}(\xi)\Big{)}<0.\] Therefore, \[\frac{\partial}{\partial y_{i}}\int_{0}^{\infty}\Big{(}1-F_{B_{-i}\mid Y_{i}=y_{i}}(\xi)\Big{)}d\xi<0.\] **Theorem 1**: _If the benefit function \(b:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is nonnegative and strictly increasing, the best response to a homogeneous threshold strategy profile is a threshold strategy._ Let \[f(y_{i})\stackrel{{\mathrm{def}}}{{=}}\mathbb{E}\Big{[}b\Big{(}\sum_{j\neq i}\gamma(Y_{j})\Big{)}\mid Y_{i}=y_{i}\Big{]}-\mathbb{E}[X\mid Y_{i}=y_{i}].\] Lemma 2 implies that \(\mathbb{E}[b(\sum_{j\neq i}\gamma(Y_{j}))\mid Y_{i}=y_{i}]\) is monotonically decreasing in \(y_{i}\), while \[\mathbb{E}[X\mid Y_{i}=y_{i}]=\Big{(}\frac{\sigma_{X}^{2}}{\sigma_{X}^{2}+\sigma_{Z}^{2}}\Big{)}y_{i}\] is a strictly increasing function of \(y_{i}\). Therefore, \(f(y_{i})\) is strictly decreasing. Also, since \(b\) is an increasing function, \(\mathbb{E}[b(\sum_{j\neq i}\gamma(Y_{j}))\mid Y_{i}=y_{i}]\in[b(0),b(N-1)]\) and hence \(\lim_{y_{i}\to-\infty}f(y_{i})=\infty\) and \(\lim_{y_{i}\to\infty}f(y_{i})=-\infty\). Therefore, there exists a single crossing point \(\bar{\tau}\) such that \(f(y_{i})>0\) for \(y_{i}<\bar{\tau}\) and \(f(y_{i})<0\) for \(y_{i}>\bar{\tau}\).

### _Linear benefit functions_

Theorem 1 guarantees that the best response to homogeneous thresholds is a threshold policy for a broad class of global games. Once a new threshold is found, the other agents imitate it by using the same best-response threshold. We use this scheme recursively, and it may converge to a BNE policy. However, the convergence of such a scheme for an arbitrary benefit function \(b(\cdot)\) might not be easy to establish, in general.
In addition, finding the optimal strategy \(a^{\star}(x)\) may not be feasible for general benefit functions. However, such a characterization is possible for the class of _linear_ benefit functions. Consider the following linear benefit function indexed by \(N\), \(b_{N}^{\text{lin}}:\mathbb{R}\to\mathbb{R}\), such that \[b_{N}^{\text{lin}}(\xi)\mathop{=}^{\mathrm{def}}\lambda\cdot\left(\frac{\xi}{N}\right), \tag{7}\] where \(\lambda>0\). Define the _belief_ function \(\pi_{ij}:\mathbb{R}^{2}\to\mathbb{R}\) as \[\pi_{ij}(\tau_{j},y_{i})\mathop{=}^{\mathrm{def}}\mathbb{P}\big{(}Y_{j}\leq\tau_{j}\mid Y_{i}=y_{i}\big{)}.\] After a few algebraic manipulations, we can write \[\pi_{ij}(\tau_{j},y_{i})=\mathbb{E}\left[\Phi\left(\frac{\tau_{j}-V-\alpha y_{i}}{\sigma_{Z}}\right)\right], \tag{8}\] where \(V\) and \(\alpha\) are defined in Eq. (6). **Corollary 1** (Corollary to Theorem 1): _Assuming a linear benefit function (given by (7)), if each agent \(j\neq i\) uses a threshold policy with threshold \(\tau_{j}\), then the best response to this threshold strategy profile is given by the unique solution \(\bar{y}_{i}\) of \(\frac{\lambda}{N}\sum_{j\neq i}\pi_{ij}(\tau_{j},\bar{y}_{i})=\alpha\bar{y}_{i}\)._ Based on Corollary 1, we can define a BR map in the space of threshold policies, which takes a vector of \(N\) thresholds and maps it into \(N\) thresholds. Let \(\mathcal{F}:\mathbb{R}^{N}\to\mathbb{R}^{N}\), where \[\mathcal{F}_{i}(\tau_{i},\tau_{-i})\mathop{=}^{\mathrm{def}}\arg\min_{\xi\in\mathbb{R}}\left(\frac{\lambda}{N}\sum_{j\neq i}\pi_{ij}(\tau_{j},\xi)-\alpha\xi\right)^{2},\] for all \(i\in[N]\). **Remark 1**: _The existence of a Bayesian Nash equilibrium in threshold policies is easy to show, but whether it is unique depends on establishing a contraction property of \(\mathcal{F}\), which is a topic for future work._

### _Homogeneous agents using a threshold \(\tau\)_

The problem is simpler when we focus only on homogeneous strategy profiles. In that case, the best response to a threshold strategy profile where every agent uses the threshold \(\tau\) is the unique solution to the following equation: \[\mathrm{BR}(\tau)=\Big{\{}\xi^{\star}:\lambda\frac{(N-1)}{N}\pi(\xi^{\star};\tau)=\alpha\xi^{\star}\Big{\}}, \tag{9}\] where \[\pi(\xi;\tau)\mathop{=}^{\mathrm{def}}\mathbb{E}\left[\Phi\left(\frac{\tau-\tilde{\sigma}W-\alpha\xi}{\sigma_{Z}}\right)\right], \tag{10}\] with \(W\sim\mathcal{N}(0,1)\) and \(\tilde{\sigma}^{2}=\alpha\sigma_{Z}^{2}\). **Example 1**: _Figure 2 shows the BR function of Eq. (9) and the corresponding Nash equilibrium (NE) thresholds for different values of the noise variance \(\sigma_{Z}^{2}\) (a numerical sketch of this computation is given below). A first observation from this numerical experiment is that as the noise variance increases, so do the NE thresholds \(\tau^{\star}(\sigma_{Z}^{2})\). Less obvious is the limit of the NE threshold as the variance of the noise goes to zero, that is, with perfect observations. In the noiseless case, Fig. 2 shows that \(\tau^{\star}=1/2\), which implies that, in this example, \(a^{\star}(x)=\mathbf{1}(x\leq 0.5)\)._ First, let us state the following property of the Gaussian CDF, whose proof is omitted due to space limitations. **Lemma 3**: _Let \(V\sim\mathcal{N}(0,1)\). Then, for any \(c\in\mathbb{R}\) and any \(\epsilon\in\mathbb{R}\), we have_ \[|\mathbb{E}_{V}[\Phi(cV+\epsilon)]-\frac{1}{2}|\leq|\epsilon|. \tag{11}\] _In particular, \(\mathbb{E}_{V}[\Phi(cV)]=\frac{1}{2}\) for all \(c\in\mathbb{R}\)._
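To make the homogeneous fixed-point computation of Eqs. (9)–(10) concrete, the following sketch approximates the Gaussian expectation in (10) by Gauss–Hermite quadrature and solves for the NE threshold with a bracketing root finder. The SciPy calls and parameter values are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def pi_hat(xi, tau, sigma_x2, sigma_z2, n_nodes=80):
    """Eq. (10): E_W[Phi((tau - sigma_tilde*W - alpha*xi)/sigma_Z)], W ~ N(0,1)."""
    alpha = sigma_x2 / (sigma_x2 + sigma_z2)
    sigma_tilde = np.sqrt(alpha * sigma_z2)
    t, w = np.polynomial.hermite.hermgauss(n_nodes)  # Gauss-Hermite nodes/weights
    W = np.sqrt(2.0) * t                             # change of variables for N(0,1)
    vals = norm.cdf((tau - sigma_tilde * W - alpha * xi) / np.sqrt(sigma_z2))
    return float(np.sum(w * vals) / np.sqrt(np.pi))

def ne_threshold(N, lam, sigma_x2, sigma_z2):
    """Homogeneous fixed point of Eq. (9): lam*(N-1)/N * pi(t; t) = alpha * t."""
    alpha = sigma_x2 / (sigma_x2 + sigma_z2)
    g = lambda t: lam * (N - 1) / N * pi_hat(t, t, sigma_x2, sigma_z2) - alpha * t
    # g(0) = lam*(N-1)/(2N) > 0, and g < 0 beyond the bound in (14), so we can bracket.
    return brentq(g, 0.0, lam * (N - 1) / N / alpha + 1.0)

for sz2 in (1e-3, 0.5, 1.0, 2.0):
    print(sz2, ne_threshold(N=10, lam=1.0, sigma_x2=1.0, sigma_z2=sz2))
```

As the printed values illustrate, the computed NE threshold increases with the noise variance, in agreement with Example 1.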
Using this result, we can show the following important estimate of the fixed point of (10). **Lemma 4**: _For \(\sigma_{X}^{2},\sigma_{Z}^{2}>0\), let \(\tau_{N}^{\star}=\tau_{N}^{\star}(\sigma_{X}^{2},\sigma_{Z}^{2})\) be the unique solution to the fixed-point equation_ \[\lambda\frac{N-1}{N}\pi(\tau_{N}^{\star};\tau_{N}^{\star})=\alpha\tau_{N}^{\star}. \tag{12}\] _Then,_ \[\frac{1}{2}\leq\left(\frac{\sigma_{X}^{2}}{\sigma_{X}^{2}+\sigma_{Z}^{2}}\right)\frac{\tau_{N}^{\star}}{\lambda(N-1)/N}\leq\left(\frac{1}{2}+\frac{\sigma_{Z}}{\sigma_{X}^{2}}\lambda(N-1)/N\right). \tag{13}\] First, note that \(0\leq\Phi(\cdot)\leq 1\) together with (10) implies that \(0\leq\pi(\tau_{N}^{\star};\tau_{N}^{\star})\leq 1\). Therefore, the solution to (12) satisfies \[0\leq\tau_{N}^{\star}\leq\frac{\lambda(N-1)/N}{\alpha}. \tag{14}\] Using \(\pi(\tau_{N}^{\star};\tau_{N}^{\star})=\mathbb{E}\left[\Phi\left(\frac{(1-\alpha)\tau_{N}^{\star}-\tilde{\sigma}W}{\sigma_{Z}}\right)\right]\), we get \[\frac{1}{2}\mathop{=}^{\mathrm{(a)}}\mathbb{E}\left[\Phi\left(\frac{-\tilde{\sigma}W}{\sigma_{Z}}\right)\right]\mathop{\leq}^{\mathrm{(b)}}\mathbb{E}\left[\Phi\left(\frac{(1-\alpha)\tau_{N}^{\star}-\tilde{\sigma}W}{\sigma_{Z}}\right)\right]\mathop{\leq}^{\mathrm{(c)}}\mathbb{E}\left[\Phi\left(\frac{(1-\alpha)\frac{\lambda(N-1)/N}{\alpha}-\tilde{\sigma}W}{\sigma_{Z}}\right)\right],\] where (a) follows from Lemma 3, (b) from \((1-\alpha)\tau_{N}^{\star}\geq 0\), and (c) from (14). The middle term equals \(\pi(\tau_{N}^{\star};\tau_{N}^{\star})\), so the chain gives \(\pi(\tau_{N}^{\star};\tau_{N}^{\star})\geq 1/2\); applying (11) to the last term, and noting that \(\frac{1-\alpha}{\alpha\sigma_{Z}}=\frac{\sigma_{Z}}{\sigma_{X}^{2}}\), also gives \(\pi(\tau_{N}^{\star};\tau_{N}^{\star})\leq\frac{1}{2}+\frac{\sigma_{Z}}{\sigma_{X}^{2}}\lambda(N-1)/N\). Combining these bounds with (12) yields (13).

Fig. 2: Best response function to a homogeneous strategy profile with threshold \(\tau\). Here \(N=10\), \(\lambda=1\), and \(\sigma_{X}^{2}=1\).

We now turn to the _certainty-equivalent_ (CE) policy, which applies a threshold test to the MMSE estimate (3) of the state: \[\gamma_{\rm ce}(y)\stackrel{{\rm def}}{{=}}\mathbf{1}\big{(}\hat{x}_{\rm mmse}(y)\leq\tau_{N}^{\star}(\sigma_{Z}^{2})\big{)}.\] Thus, \[\gamma_{\rm ce}(y)=\mathbf{1}\Bigg{(}y\leq\Big{(}1+\frac{\sigma_{Z}^{2}}{\sigma_{X}^{2}}\Big{)}\cdot\frac{\lambda}{2}\cdot\Big{(}1-\frac{1}{N}\Big{)}\Bigg{)}.\]

## IV A fundamental limit on coordination

We now obtain a universal upper bound on the efficiency of _any_ policy, regardless of its structure. Our result is based on _Fano's inequality_ [20], which provides a bound on the probability of error when estimating a discrete random variable on the basis of side information. **Theorem 3** (Upper bound on coordination efficiency): _For a global game with a linear benefit function, the coordination efficiency of any homogeneous strategy profile satisfies the bound_ \[\varrho\leq 1-h^{-1}\Big{(}H\big{(}A^{\star}(X)\mid Y_{i}\big{)}\Big{)},\] _where \(H(\cdot\mid\cdot)\) is the conditional entropy function and \(h^{-1}(\cdot)\) is the inverse of the binary entropy function over the interval \([0,1/2]\). Here the entropy of a random variable \(X\sim f(x)\) is defined as \(H(X)\stackrel{{\rm def}}{{=}}-\mathbb{E}\big{[}\log_{2}\big{(}f(X)\big{)}\big{]}\)._ _Let \(A^{\star}(X)\in\{0,1\}\) be such that_ \[A^{\star}(X)=\mathbf{1}\big{(}X\leq\tau_{N}^{\star}(\sigma_{Z}^{2})\big{)},\] _and let \(\hat{A}_{i}(Y_{i})\in\{0,1\}\) denote any estimate of \(A^{\star}(X)\) on the basis of \(Y_{i}\). Then, notice that the following Markov relation is satisfied:_ \[A^{\star}(X)\leftrightarrow X\leftrightarrow Y_{i}\leftrightarrow\hat{A}_{i}(Y_{i}).\] _Considering the block diagram in Fig. 3, define the error random variable_ \[E_{i}\stackrel{{\rm def}}{{=}}\mathbf{1}\big{(}\hat{A}_{i}(Y_{i})\neq A^{\star}(X)\big{)},\] _and notice that the probability of making an error when estimating \(A^{\star}(X)\) is at most \(1/2\)._
Fano's inequality [20] is a bound on the conditional entropy of the optimal decision computed using the oracle policy, given the signal \(Y_{i}\) available to the \(i\)-th agent: \[H\big{(}A^{\star}(X)\mid Y_{i}\big{)}\leq h(E_{i})+\mathbb{P}(E_{i}=1)\log_{2}(|\mathcal{A}|-1),\] where \(|\mathcal{A}|\) is the cardinality of the decision variable \(A_{i}\). Since our decision variables are binary, we have \[H\big{(}A^{\star}(X)\mid Y_{i}\big{)}\leq h(E_{i}). \tag{16}\] Assuming that we can compute the LHS of Eq. (16), we obtain a bound on \(\mathbb{P}(E_{i}=1)\) by inverting the _binary entropy function_ \[h(p)\stackrel{{\mathrm{def}}}{{=}}-p\log_{2}p-(1-p)\log_{2}(1-p)\] within the interval \([0,0.5]\). Finally, notice that for a homogeneous strategy profile the coordination efficiency is \(\varrho=1-\mathbb{P}(E_{i}=1)\), which yields the claimed bound.

Fig. 3: Diagram showing how to compute the coordination error event between a generic agent and an omniscient agent with access to perfect information playing a stochastic coordination game.

### _Computing the bound on coordination efficiency_

Using properties of the entropy function, we obtain: \[H\big{(}A^{\star}(X)\mid Y_{i}\big{)}=H\big{(}A^{\star}(X)\big{)}-H(Y_{i})+H\big{(}Y_{i}\mid A^{\star}(X)\big{)}.\] We proceed to compute each of these three terms. The first is the entropy of the optimal decision variable as computed by the oracle: \[H\big{(}A^{\star}(X)\big{)}=h\Bigg{(}\mathbb{P}\bigg{(}X\leq\frac{\lambda}{2}\Big{(}1-\frac{1}{N}\Big{)}\bigg{)}\Bigg{)},\] where \(h(\cdot)\) denotes the binary entropy function. The second term is the differential entropy of the signal \(Y_{i}\), which is a Gaussian random variable with variance \(\sigma_{X}^{2}+\sigma_{Z}^{2}\). Therefore, \[H(Y_{i})=\frac{1}{2}\log_{2}\big{(}2\pi e(\sigma_{X}^{2}+\sigma_{Z}^{2})\big{)}.\] The third term is more challenging and must be computed numerically. We have \[H(Y_{i}\mid A^{\star}(X)=1)=H\bigg{(}Y_{i}\mid X\leq\frac{\lambda}{2}\Big{(}1-\frac{1}{N}\Big{)}\bigg{)}.\] To evaluate this entropy, we must use the conditional probability density function \[f_{Y_{i}\mid X\leq\frac{\lambda}{2}(1-\frac{1}{N})}(y_{i})=\frac{\int_{-\infty}^{\frac{\lambda}{2}(1-\frac{1}{N})}f_{Z}(y_{i}-x)f_{X}(x)dx}{\int_{-\infty}^{\frac{\lambda}{2}(1-\frac{1}{N})}f_{X}(x)dx}.\] Similarly, \[H(Y_{i}\mid A^{\star}(X)=0)=H\bigg{(}Y_{i}\mid X>\frac{\lambda}{2}\Big{(}1-\frac{1}{N}\Big{)}\bigg{)},\] which requires the conditional probability density function \[f_{Y_{i}\mid X>\frac{\lambda}{2}(1-\frac{1}{N})}(y_{i})=\frac{\int_{\frac{\lambda}{2}(1-\frac{1}{N})}^{\infty}f_{Z}(y_{i}-x)f_{X}(x)dx}{\int_{\frac{\lambda}{2}(1-\frac{1}{N})}^{\infty}f_{X}(x)dx}.\] Finally, we can compute \[H(Y_{i}\mid A^{\star}(X))=H(Y_{i}\mid A^{\star}(X)=0)\mathbb{P}\big{(}A^{\star}(X)=0\big{)}+H(Y_{i}\mid A^{\star}(X)=1)\mathbb{P}\big{(}A^{\star}(X)=1\big{)},\] where \[H(Y_{i}\mid A^{\star}(X)=0)=-\int_{\mathbb{R}}f_{Y_{i}\mid X>\frac{\lambda}{2}(1-\frac{1}{N})}(y_{i})\log_{2}\Big{(}f_{Y_{i}\mid X>\frac{\lambda}{2}(1-\frac{1}{N})}(y_{i})\Big{)}dy_{i}\] and \[H(Y_{i}\mid A^{\star}(X)=1)=-\int_{\mathbb{R}}f_{Y_{i}\mid X\leq\frac{\lambda}{2}(1-\frac{1}{N})}(y_{i})\log_{2}\Big{(}f_{Y_{i}\mid X\leq\frac{\lambda}{2}(1-\frac{1}{N})}(y_{i})\Big{)}dy_{i}.\] Lastly, the inverse of the binary entropy function can be computed efficiently by standard numerical methods.
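The quantities above can be assembled into a direct numerical evaluation of the bound of Theorem 3. The sketch below (our own, assuming SciPy is available; not the authors' code) computes \(H(A^{\star}(X))\), \(H(Y_{i})\), and \(H(Y_{i}\mid A^{\star}(X))\) by quadrature and inverts the binary entropy function by bisection.

```python
import numpy as np
from scipy import integrate
from scipy.optimize import brentq
from scipy.stats import norm

def h(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def fano_efficiency_bound(N, lam, sigma_x2, sigma_z2):
    sx, sz = np.sqrt(sigma_x2), np.sqrt(sigma_z2)
    thr = 0.5 * lam * (1.0 - 1.0 / N)   # oracle threshold lambda/2 * (1 - 1/N)
    p1 = norm.cdf(thr / sx)             # P(A*(X) = 1)

    def f_y_given(y, low):
        # Conditional density of Y_i given A*(X)=1 (low=True) or A*(X)=0.
        a, b = ((-np.inf, thr) if low else (thr, np.inf))
        num, _ = integrate.quad(
            lambda x: norm.pdf(y - x, scale=sz) * norm.pdf(x, scale=sx), a, b)
        return num / (p1 if low else 1.0 - p1)

    def cond_entropy(low):
        def integrand(y):
            f = f_y_given(y, low)
            return -f * np.log2(f) if f > 0 else 0.0
        val, _ = integrate.quad(integrand, -np.inf, np.inf, limit=200)
        return val

    H_A = h(p1)
    H_Y = 0.5 * np.log2(2 * np.pi * np.e * (sigma_x2 + sigma_z2))
    H_Y_A = p1 * cond_entropy(True) + (1 - p1) * cond_entropy(False)
    target = min(max(H_A - H_Y + H_Y_A, 1e-9), 1.0)       # H(A* | Y_i), clipped
    e_min = brentq(lambda p: h(p) - target, 1e-12, 0.5)   # invert h on [0, 1/2]
    return 1.0 - e_min

print(fano_efficiency_bound(N=10, lam=1.0, sigma_x2=1.0, sigma_z2=0.5))
```

The nested quadrature is slow but transparent; a production implementation would tabulate the conditional densities on a grid first.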
## V Numerical results

The characterization we have provided thus far assumes that the agents choose their actions according to a policy that ideally tracks the behavior of an omniscient agent with access to perfect information about the state. Since the agents receive noisy signals about the state, they are not able to perfectly coordinate with the omniscient agent using a threshold policy indexed by \(\tau^{\star}\). Assuming that the agents use a generic homogeneous threshold policy indexed by \(\tau\), the probability of miscoordination given \(X=x\) is \[\mathbb{P}(E_{i}=1\mid X=x)=\bigg{(}1-\Phi\Big{(}\frac{\tau-x}{\sigma_{Z}}\Big{)}\bigg{)}\mathbf{1}(x\leq\tau^{\star})+\Phi\Big{(}\frac{\tau-x}{\sigma_{Z}}\Big{)}\mathbf{1}(x>\tau^{\star}). \tag{17}\] Therefore, \[\varrho(\boldsymbol{\gamma})=1-\int_{\mathbb{R}}\mathbb{P}(E_{i}=1\mid X=x)f(x)dx. \tag{18}\] Assume a global game with a linear benefit function and let the number of agents \(N\to\infty\). The optimal threshold used by the omniscient agent is \(\tau^{\star}=\lambda/2\). From the agents' standpoint, we consider two strategies: 1. computing the NE threshold for the global game, using the prior information \(\sigma_{X}^{2}\) and \(\sigma_{Z}^{2}\) and the parameter \(\lambda\); 2. estimating the state variable \(X\) using an MMSE estimator and applying the certainty-equivalent policy. Figure 4 (left) shows the thresholds corresponding to these two types of coordination policies for a system with \(\sigma_{X}^{2}=1\) and \(\lambda=1\), as a function of the noise variance \(\sigma_{Z}^{2}\). We can clearly see how different these two policies are. Moreover, there is also a larger computational cost in solving for the NE in the first strategy, whereas the CE strategy can be obtained in closed form in this case. Figure 4 (center) shows the expected utility of these two strategies. There is a substantial gap between the utilities of an agent using the NE versus the CE policy. This is expected, because certainty equivalence is, in general, a suboptimal strategy in stochastic control and optimization. More surprising is the fact that CE yields a better coordination efficiency, as shown in Fig. 4 (right). Finally, Fig. 4 (right) also shows the information-theoretic upper bound on the coordination efficiency for any homogeneous policy profile (not just threshold policies). The significance of this figure is that it establishes that certain efficiencies cannot be achieved by any policy for a given level of noise in the communication channel between the gateway and the robotic agents in a practical application. Therefore, when planning for a distributed implementation of a collective task performed by strategic self-interested agents, the system designer needs to communicate at a certain signal-to-noise ratio, which is determined not by the bit error rate at the receiver, but instead by the level of collective coordination it is interested in achieving.
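For reference, the efficiency integral of Eqs. (17)–(18) is straightforward to evaluate numerically. The sketch below uses the \(N\to\infty\) certainty-equivalent threshold \(\tau_{\mathrm{ce}}=(1+\sigma_{Z}^{2}/\sigma_{X}^{2})\lambda/2\) and the oracle threshold \(\tau^{\star}=\lambda/2\); the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def coordination_efficiency(tau, tau_star, sigma_x2, sigma_z2):
    # rho = 1 - E_X[P(E_i = 1 | X)], with P(E_i = 1 | X = x) as in Eq. (17).
    sx, sz = np.sqrt(sigma_x2), np.sqrt(sigma_z2)
    f = lambda x: norm.pdf(x, scale=sx)
    # Miss on {x <= tau*}: the agent plays 0 although the oracle plays 1.
    low, _ = integrate.quad(lambda x: (1 - norm.cdf((tau - x) / sz)) * f(x),
                            -np.inf, tau_star)
    # Miss on {x > tau*}: the agent plays 1 although the oracle plays 0.
    high, _ = integrate.quad(lambda x: norm.cdf((tau - x) / sz) * f(x),
                             tau_star, np.inf)
    return 1.0 - (low + high)

lam, sx2 = 1.0, 1.0
for sz2 in (0.1, 0.5, 1.0):
    tau_ce = (1.0 + sz2 / sx2) * lam / 2.0   # CE threshold, N -> infinity
    print(sz2, coordination_efficiency(tau_ce, lam / 2.0, sx2, sz2))
```

The same routine, evaluated at the NE threshold returned by the fixed-point solver of Section III, reproduces the comparison in Fig. 4 (right).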
To the best of our knowledge, this is the first time such methods have been used in the context of global games. Future work on this topic includes the design of new learning algorithms (for threshold policies) in the presence of local data at the agents, the presence of partially connected influence graphs in the agents' benefit functions, and the characterization of tighter upper bounds on the coordination efficiency that take into account the structure of the policy (e.g., threshold policies).
2305.16210
Constrained Radius Estimates Of Certain Analytic Functions
Let $\mathcal{P}$ denote the Carath\'{e}odory class accommodating all the analytic functions $p$ having positive real part and satisfying $p(0)=1$. In this paper, the second coefficient of the normalized analytic function $f$ defined on the open unit disc is constrained to define new classes of analytic functions. The classes are characterised by the functions $f/g$ having positive real part or satisfying the inequality $|(f(z)/g(z))-1|<1$ such that $f(z)(1-z^2)/z$ and $g(z)(1-z^2)/z$ are Carath\'{e}odory functions for some analytic function $g$. This paper aims at determining the radius of starlikeness for the introduced classes.
Meghna Sharma, Naveen Kumar Jain, Sushil Kumar
2023-05-25T16:08:41Z
http://arxiv.org/abs/2305.16210v1
# Constrained radius estimates of certain analytic functions

###### Abstract

Let \(\mathcal{P}\) denote the Caratheodory class accommodating all the analytic functions \(p\) having positive real part and satisfying \(p(0)=1\). In this paper, the second coefficient of the normalized analytic function \(f\) defined on the open unit disc is constrained to define new classes of analytic functions. The classes are characterised by the functions \(f/g\) having positive real part or satisfying the inequality \(|(f(z)/g(z))-1|<1\) such that \(f(z)(1-z^{2})/z\) and \(g(z)(1-z^{2})/z\) are Caratheodory functions for some analytic function \(g\). This paper aims at determining the radius of starlikeness for the introduced classes.

Key words and phrases: Caratheodory function; Starlike functions; Radius problems. 2010 Mathematics Subject Classification: 30C45, 30C80. The first author is supported by a Senior Research Fellowship from the Council of Scientific and Industrial Research, New Delhi, Ref. No.: 1753/(CSIR-UGC NET JUNE, 2018).

... the class \(S^{*}_{\mathbb{Q}}=S^{*}(z+\sqrt{1+z^{2}})\) associated with the lune \(\{w\in\mathbb{C}:\operatorname{Re}w>0,2|w|>|w^{2}-1|\}\); Kumar and Ravichandran [9] introduced the class \(S^{*}_{R}=S^{*}(1+(z/k)((k+z)/(k-z)))\), \(k=1+\sqrt{2}\), associated with a rational function; Brannan and Kirwan [3] introduced the class \(S^{*}_{\gamma}=S^{*}((1+z)/(1-z)^{\gamma})\), \(0\leq\gamma<1\), of strongly starlike functions of order \(\gamma\). Recently, the authors of [22] introduced the class \(S^{*}_{Ne}=S^{*}(1+z-z^{3}/3)\) associated with the nephroid domain \(\{u+iv:((u-1)^{2}+v^{2}-4/9)^{3}-4v^{2}/3=0\}\). Also, the class \(S^{*}_{SG}=S^{*}(2/(1+e^{-z}))\) associated with the modified sigmoid domain \(\{w\in\mathbb{C}:|\log(w/(2-w))|<1\}\) was introduced in [8].

## 2. Classes of Analytic Functions

Owing to the eminent Bieberbach theorem, the estimate on the second coefficient in the Maclaurin series of a function \(f\in\mathcal{A}\) plays a vital role in the study of univalent functions. We denote by \(\mathcal{A}_{b}\) the class of all functions \(f\in\mathcal{A}\) of the form \(f(z)=z+a_{2}z^{2}+\cdots\), where \(|a_{2}|=2b\) for \(0\leq b\leq 1\). Let \(\mathcal{P}(\alpha)\) denote the class of analytic functions of the form \(p(z)=1+a_{1}z+a_{2}z^{2}+\cdots\) with \(\operatorname{Re}(p(z))>\alpha\), \(0\leq\alpha<1\). According to a result by Nehari [14], we have \(|a_{n}|\leq 2(1-\alpha)\) for \(p\in\mathcal{P}(\alpha)\). Thus, we consider the subclass \(\mathcal{P}_{b}(\alpha)\) of \(\mathcal{P}(\alpha)\) consisting of functions of the form \[p(z)=1+2b(1-\alpha)z+a_{2}z^{2}+\cdots,\quad|b|\leq 1.\] Gronwall initiated the study of radius problems for functions with a fixed second coefficient in the early 1920s, and since then this aspect has been an active area of research. Further, MacGregor [23, 24] studied radius problems for the classes of functions for which the ratio \(f/g\) either has positive real part or satisfies the inequality \(|f/g-1|<1\). Ali _et al._ [2] also made a contribution by finding various radius constants involving second-order differential subordination. Recently, the authors of [10] studied radius problems for classes of functions involving the ratio \(f/g\). For literature related to the applications of differential subordination for functions with fixed second coefficient, see [add citation.]
Inspired by the above-mentioned work, and taking the coefficient bound into account, we restrict the second coefficient of the function and introduce certain subclasses of analytic functions. Let us consider the analytic functions of the form \(f(z)=z+\sum\limits_{n=2}^{\infty}a_{n}z^{n}\) such that \(f(z)(1-z^{2})/z\in\mathcal{P}\). Since \(f(z)(1-z^{2})/z=1+a_{2}z+\cdots\) belongs to \(\mathcal{P}\), the Caratheodory coefficient bound gives \(|a_{2}|\leq 2\), and subsequently the function \(f(z)\) can be rewritten as \[f(z)=z+2bz^{2}+\sum\limits_{n=3}^{\infty}a_{n}z^{n},\quad|b|\leq 1.\] We now define the class \(\mathcal{K}^{1}_{b}\) as follows: **Definition 2.1**.: For \(|b|\leq 1\), the class \(\mathcal{K}^{1}_{b}\) is defined as \[\mathcal{K}^{1}_{b}:=\left\{f(z)=z+2bz^{2}+\sum\limits_{n=3}^{\infty}a_{n}z^{n}:\operatorname{Re}\left(\frac{f(z)(1-z^{2})}{z}\right)>0,z\in\mathbb{D}\right\}.\] Consider the function \(f_{b}:\mathbb{D}\to\mathbb{C}\) defined by \[f_{b}(z)=\frac{z\left(1+z^{2}\right)}{\left(1-z^{2}\right)\left(1-2biz-z^{2}\right)} \tag{2.1}\] and set \(s_{1}(z)=z(b-iz)/(i+bz)\). Then, it is easily seen that \[\frac{f_{b}(z)(1-z^{2})}{z}=\frac{1-s_{1}(z)}{1+s_{1}(z)},\] where \(s_{1}\) is an analytic function satisfying the hypothesis of Schwarz's lemma in \(\mathbb{D}\), which yields \(\operatorname{Re}(f_{b}(z)(1-z^{2})/z)>0\). Hence, the class \(\mathcal{K}_{b}^{1}\) is non-empty. Further, in order to define another class of analytic functions, consider the analytic functions of the form \[f(z)=z+\sum_{n=2}^{\infty}f_{n}z^{n}\quad\text{and}\quad g(z)=z+\sum_{n=2}^{\infty}g_{n}z^{n} \tag{2.2}\] such that \(f(z)/g(z)\in\mathcal{P}\) and \(g(z)(1-z^{2})/z\in\mathcal{P}\). Using these conditions, we get that the coefficients \(f_{2}\) and \(g_{2}\) satisfy \(|f_{2}|\leq 4\) and \(|g_{2}|\leq 2\). Thus, the functions \(f\) and \(g\) can be rewritten as \[f(z)=z+4bz^{2}+\sum_{n=3}^{\infty}f_{n}z^{n},\ |b|\leq 1\quad\text{and}\quad g(z)=z+2cz^{2}+\sum_{n=3}^{\infty}g_{n}z^{n},\ |c|\leq 1.\] **Definition 2.2**.: For \(|b|\leq 1\) and \(|c|\leq 1\), the class \(\mathcal{K}_{b,c}^{2}\) is defined as \[\mathcal{K}_{b,c}^{2}:=\left\{f\in\mathcal{A}_{4b}:\operatorname{Re}\left(\frac{f(z)}{g(z)}\right)>0,\operatorname{Re}\left(\frac{g(z)(1-z^{2})}{z}\right)>0;\text{ for some }g\in\mathcal{A}_{2c}\right\}.\] Note that the functions \(f_{b,c},g_{b,c}:\mathbb{D}\to\mathbb{C}\) defined by \[f_{b,c}(z)=\frac{z\left(1+z^{2}\right)^{2}}{\left(1-z^{2}\right)\left(1-2ciz-z^{2}\right)\left(1-(4b-2c)iz-z^{2}\right)} \tag{2.3}\] and \[g_{b,c}(z)=\frac{z\left(1+z^{2}\right)}{\left(1-z^{2}\right)\left(1-2ciz-z^{2}\right)}\] satisfy \[\frac{f_{b,c}(z)}{g_{b,c}(z)}=\frac{1-s_{2}(z)}{1+s_{2}(z)}\quad\text{and}\quad\frac{g_{b,c}(z)(1-z^{2})}{z}=\frac{1-s_{3}(z)}{1+s_{3}(z)},\] where \[s_{2}(z)=\frac{z((2b-c)-iz)}{(2b-c)z+i}\quad\text{and}\quad s_{3}(z)=\frac{z\left(c-iz\right)}{cz+i},\quad|2b-c|\leq 1.\] The functions \(s_{2}\) and \(s_{3}\) are analytic and satisfy the hypothesis of Schwarz's lemma in \(\mathbb{D}\); therefore, \(\operatorname{Re}\left(\frac{f_{b,c}(z)}{g_{b,c}(z)}\right)>0\) and \(\operatorname{Re}\left(\frac{g_{b,c}(z)(1-z^{2})}{z}\right)>0\). Thus, \(f_{b,c}\) and \(g_{b,c}\) are members of \(\mathcal{K}_{b,c}^{2}\) and hence the class is non-empty.
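As a quick numerical sanity check (our own sketch, not part of the original argument), one can sample points of \(\mathbb{D}\) and verify that the Caratheodory condition defining \(\mathcal{K}^{1}_{b}\) holds for the extremal function \(f_{b}\) of (2.1):

```python
import numpy as np

def carath_part(z, b):
    # f_b(z)(1 - z^2)/z in simplified form, with f_b as in Eq. (2.1).
    return (1 + z**2) / (1 - 2j * b * z - z**2)

rng = np.random.default_rng(1)
b = -0.7                                    # illustrative choice with |b| <= 1
r = 0.995 * np.sqrt(rng.random(200_000))    # area-uniform radii inside the disc
z = r * np.exp(2j * np.pi * rng.random(200_000))
print(carath_part(z, b).real.min())         # remains positive, as claimed
```

The minimum real part approaches zero as the sample radii approach the unit circle, consistent with \(|s_{1}(z)|\to 1\) on the boundary.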
Following a similar pattern, let us assume that the functions \(f\) and \(g\), as defined in (2.2), satisfy the inequalities \(|(f(z)/g(z))-1|<1\) and \(\operatorname{Re}((g(z)(1-z^{2}))/z)>0\). Note that the condition \(|(f(z)/g(z))-1|<1\) yields \(\operatorname{Re}(g(z)/f(z))>1/2\), thereby implying \(|f_{2}|\leq 1+|g_{2}|\leq 3\). Consequently, we consider the functions of the following form: \[f(z)=z+3bz^{2}+\sum_{n=3}^{\infty}f_{n}z^{n},\ |b|\leq 1\quad\text{and}\quad g(z)=z+2cz^{2}+\sum_{n=3}^{\infty}g_{n}z^{n},\ |c|\leq 1.\]

**Definition 2.3**.: For \(|b|\leq 1\) and \(|c|\leq 1\), the class \(\mathcal{K}^{3}_{b,c}\) is defined as \[\mathcal{K}^{3}_{b,c}:=\left\{f\in\mathcal{A}_{3b}:\left|\frac{f(z)}{g(z)}-1\right|<1\text{ and }\operatorname{Re}\left(\frac{g(z)(1-z^{2})}{z}\right)>0;\text{ for some }g\in\mathcal{A}_{2c}\right\}.\]

**Remark 2.4**.: For \(b=-1\) and \(c=-1\), the classes \(\mathcal{K}^{1}_{b}\), \(\mathcal{K}^{2}_{b,c}\) and \(\mathcal{K}^{3}_{b,c}\) coincide with the classes \(\mathcal{K}_{3}\), \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) respectively, as discussed in [10].

## 3. Constrained Radius Estimates

We constrain the second coefficients of the functions \(f\) and \(g\), and calculate radii estimates for the classes involving these constraints. For several recent results involving radius problems with fixed second coefficient, see [4]. Various radius problems for these classes are discussed and sharpness is proved analytically. Radii estimates for the class \(\mathcal{K}^{1}_{b}\) are obtained in the following theorem.

**Theorem 3.1**.: _Let \(\tilde{n}=|2b|\). For the class \(\mathcal{K}^{1}_{b}\), the following results hold._ 1. _The_ \(S^{*}_{P}\) _radius is the smallest positive real root of the equation_ \[1-\tilde{n}r-11r^{2}-8\tilde{n}r^{3}-9r^{4}+\tilde{n}r^{5}+3r^{6}=0.\] (3.1) 2. _Let_ \(\alpha\in[0,1)\)_. The_ \(S^{*}(\alpha)\) _radius is the smallest positive real root of the equation_ \[1-\alpha-\alpha\tilde{n}r-r^{2}(5+\alpha)-4\tilde{n}r^{3}-(5-\alpha)r^{4}+\alpha\tilde{n}r^{5}+(1+\alpha)r^{6}=0.\] (3.2) 3. _The_ \(S^{*}_{L}\) _radius is the smallest positive real root of the equation_ \[1-\sqrt{2}+(2\tilde{n}-\sqrt{2}\tilde{n})r+6r^{2}+(2\tilde{n}+\sqrt{2}\tilde{n})r^{3}+(1+\sqrt{2})r^{4}=0.\] (3.3) 4. _The_ \(S^{*}_{e}\) _radius is the smallest positive real root of the equation_ \[1-e+\tilde{n}r+(1+5e)r^{2}+4e\tilde{n}r^{3}-(1-5e)r^{4}-\tilde{n}r^{5}-(1+e)r^{6}=0.\] (3.4) 5. _The_ \(S^{*}_{c}\) _radius is the smallest positive real root of the equation_ \[2-\tilde{n}r-16r^{2}-12\tilde{n}r^{3}-14r^{4}+\tilde{n}r^{5}+4r^{6}=0.\] (3.5) 6. _The_ \(S^{*}_{\sin}\) _radius is the smallest positive real root of the equation_ \[\sin 1-(\tilde{n}-\tilde{n}\sin 1)r-(6-\sin 1)r^{2}-4\tilde{n}r^{3}-(8+\sin 1)r^{4}-(3\tilde{n}+\tilde{n}\sin 1)r^{5}-(2+\sin 1)r^{6}=0.\] (3.6) 7. _The_ \(S^{*}_{\mathbb{Q}}\) _radius is the smallest positive real root of the equation_ \[2-\sqrt{2}+(\tilde{n}-\sqrt{2}\tilde{n})r-(4+\sqrt{2})r^{2}-4\tilde{n}r^{3}-(6-\sqrt{2})r^{4}-(\tilde{n}-\sqrt{2}\tilde{n})r^{5}+\sqrt{2}r^{6}=0.\] (3.7) 8. _The_ \(S^{*}_{R}\) _radius is the smallest positive real root of the equation_ \[3-2\sqrt{2}+(2\tilde{n}-2\sqrt{2}\tilde{n})r-(3+2\sqrt{2})r^{2}-4\tilde{n}r^{3}-(7-2\sqrt{2})r^{4}-(2\tilde{n}-2\sqrt{2}\tilde{n})r^{5}-(1-2\sqrt{2})r^{6}=0.\] (3.8)
9. _The_ \(S^{*}_{Ne}\) _radius is the smallest positive real root of the equation_ \[2-\tilde{n}r-16r^{2}-12\tilde{n}r^{3}-26r^{4}-11\tilde{n}r^{5}-8r^{6}=0.\] (3.9) 10. _The_ \(S^{*}_{SG}\) _radius is the smallest positive real root of the equation_ \[1-e+2\tilde{n}r+(7+5e)r^{2}+4\tilde{n}(1+e)r^{3}+(7+9e)r^{4}+2\tilde{n}(1+2e)r^{5}+(1+3e)r^{6}=0.\] (3.10) _All estimates are sharp._

Proof.: For the function \(p\in\mathcal{P}(\alpha)\) and \(|z|=r<1\), by [12, Theorem 2], we have \[\left|\frac{zp^{\prime}(z)}{p(z)}\right|\leq\frac{2(1-\alpha)r}{1-r^{2}}\frac{|b|r^{2}+2r+|b|}{(1-2\alpha)r^{2}+2|b|(1-\alpha)r+1} \tag{3.11}\] where \(|b|\leq 1\) and \(\alpha\in[0,1)\). The transform \(w(z)=\frac{1+z^{2}}{1-z^{2}}\) maps the disc \(|z|\leq r\) onto the disc \[\left|w(z)-\frac{1+r^{4}}{1-r^{4}}\right|\leq\frac{2r^{2}}{1-r^{4}}. \tag{3.12}\] Suppose \(f\in\mathcal{K}^{1}_{b}\) and let \(p:\mathbb{D}\to\mathbb{C}\) be given by \(p(z)=f(z)(1-z^{2})/z\). Note that the function \(p\in\mathcal{P}_{b}\) and a simple calculation gives \[\frac{zf^{\prime}(z)}{f(z)}=\frac{zp^{\prime}(z)}{p(z)}+\frac{1+z^{2}}{1-z^{2}}. \tag{3.13}\] Using (3.11), (3.12) and (3.13), it is seen that, for \(|z|\leq r\), the quantity \(zf^{\prime}(z)/f(z)\) lies in the disc \[\left|\frac{zf^{\prime}(z)}{f(z)}-\frac{1+r^{4}}{1-r^{4}}\right|\leq\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+r^{2})(1-r^{4})}. \tag{3.14}\]

1. Set \(x(r):=1-\tilde{n}r-11r^{2}-8\tilde{n}r^{3}-9r^{4}+\tilde{n}r^{5}+3r^{6}.\) Then \(x(0)=1>0\) and \(x(1)=-8(\tilde{n}+2)<0\). Hence, by the intermediate value theorem, the equation (3.1) has a root lying in the interval \((0,1)\), denoted by \(\rho_{1}\). Denote the center of the disc in (3.14) by \(a\). Clearly, \(a\geq 1\) for \(r\in[0,1)\) and \(a\leq 3/2\) if \(r<1/5^{\frac{1}{4}}\approx 0.66874\). Thus, an application of [19, Lemma 2.2] gives that the disc (3.14) lies in the region \(\{w\in\mathbb{C}:|w-1|<\operatorname{Re}w\}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+r^{2})(1-r^{4})}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{2},\] or equivalently, if \[\frac{1-5r^{2}+8br^{3}-5r^{4}+r^{6}}{(-1+2br-r^{2})(1-r^{4})}\leq-\frac{1}{2}.\] Thus, \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)>\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|\] for \(0<r\leq\rho_{1}\), proving that the number \(\rho_{1}\) is the \(S^{*}_{P}\) radius for the class \(\mathcal{K}^{1}_{b}\). For \(b<0\), the function defined in (2.1) at \(z=-i\rho_{1}\) satisfies \[\operatorname{Re}\left(\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}\right)=\frac{1-5\rho_{1}^{2}+8b\rho_{1}^{3}-5\rho_{1}^{4}+\rho_{1}^{6}}{\left(1-2b\rho_{1}+\rho_{1}^{2}\right)\left(1-\rho_{1}^{4}\right)}=\left|\frac{2b\rho_{1}-6\rho_{1}^{2}+8b\rho_{1}^{3}-4\rho_{1}^{4}-2b\rho_{1}^{5}+2\rho_{1}^{6}}{\left(1-2b\rho_{1}+\rho_{1}^{2}\right)\left(1-\rho_{1}^{4}\right)}\right|=\left|\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}-1\right|,\] illustrating the sharpness of the bound.

**Remark 3.2**.: Throughout the paper, we have proved the sharpness when \(b<0\). The method can be imitated to prove it for \(b>0\).
Hence, sharpness is attained at the point \[z=\left\{\begin{array}{ll}-i\rho,&\text{if }b<0\\ i\rho,&\text{if }b>0\end{array}\right.\] for the classes \(S_{P}^{*}\), \(S^{*}(\alpha)\), \(S_{e}^{*}\), \(S_{c}^{*}\), \(S_{\mathbb{Q}}^{*}\), \(S_{R}^{*}\), and for the classes \(S_{L}^{*}\), \(S_{\sin}^{*}\), \(S_{Ne}^{*}\) and \(S_{SG}^{*}\), sharpness is attained at the point \[z=\left\{\begin{array}{ll}\rho,&\text{if }b<0\\ -\rho,&\text{if }b>0.\end{array}\right.\]

Figure 1. Graphical illustration of sharpness for classes in Theorem 3.1 for particular choices of \(b\) and \(\alpha\), e.g., the class \(S^{*}(\alpha)\) with \(\rho_{2}=0.202135\) at \(b=-1\), \(\alpha=0.5\), and the class \(S_{L}^{*}\) with \(\rho_{3}=0.171573\) at \(b=-1\).

2. Let \(\rho_{2}\) denote the smallest positive real root of the equation (3.2) in \((0,1)\). The function \[h(r):=\frac{1-5r^{2}-4\tilde{n}r^{3}-5r^{4}+r^{6}}{(1+\tilde{n}r+r^{2})(1-r^{4})}\] is a non-increasing function on \([0,1)\), and thus from (3.14), it follows that for \(0<r\leq\rho_{2}\), we have \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)\geq\frac{1-5r^{2}-4\tilde{n}r^{3}-5r^{4}+r^{6}}{(1+\tilde{n}r+r^{2})(1-r^{4})}\geq\alpha.\] This shows that the number \(\rho_{2}\) is the \(S^{*}(\alpha)\) radius for the class \(\mathcal{K}^{1}_{b}\). Since, for \(f_{b}\) defined in (2.1) at \(z=-i\rho_{2}\), we have \[\operatorname{Re}\left(\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}\right)=\frac{1-5\rho_{2}^{2}+8b\rho_{2}^{3}-5\rho_{2}^{4}+\rho_{2}^{6}}{(1-2b\rho_{2}+\rho_{2}^{2})(1-\rho_{2}^{4})}=\alpha,\] it follows that the estimate is sharp.

3. From (3.14), it follows that \[\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|\leq\left|\frac{zf^{\prime}(z)}{f(z)}-\frac{1+r^{4}}{1-r^{4}}\right|+\frac{2r^{4}}{1-r^{4}}\leq\frac{r(\tilde{n}+3\tilde{n}r^{2}+2r(3+r^{2}))}{(1-r^{2})(1+\tilde{n}r+r^{2})}.\] Let \(\rho_{3}\) denote the smallest positive real root of the equation in (3.3). It is clear that \(a\geq 1\) for \(r\in[0,1)\), and simple computations show that \(a<\sqrt{2}\) if \(r<((\sqrt{2}-1)/(\sqrt{2}+1))^{1/4}\approx 0.643594\). Hence, using [1, Lemma 2.2], the disc (3.14) lies in the region \(\{w\in\mathbb{C}:|w^{2}-1|<1\}\) provided \[\frac{r(\tilde{n}+3\tilde{n}r^{2}+2r(3+r^{2}))}{(1-r^{2})(1+\tilde{n}r+r^{2})}\leq\sqrt{2}-1.\] Consequently, for \(0<r\leq\rho_{3}\), we have \[\left|\left(\frac{zf^{\prime}(z)}{f(z)}\right)^{2}-1\right|<1.\] This proves that the number \(\rho_{3}\) is the \(S^{*}_{L}\) radius for the class \(\mathcal{K}^{1}_{b}\). The function \(f_{b}\) defined by \[f_{b}(z)=\frac{z(1-2bz+z^{2})}{(1-z^{2})^{2}} \tag{3.15}\] is in the class \(\mathcal{K}^{1}_{b}\) because it satisfies the condition \((1-z^{2})f_{b}(z)/z=(1-s(z))/(1+s(z))\), where \(s(z)=z(b-z)/(1-bz)\). Thus, for \(f_{b}\) defined by (3.15) at \(z=\rho_{3}\), we have \[\left|\left(\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}\right)^{2}-1\right|=\left|\left(\frac{1-4b\rho_{3}+6\rho_{3}^{2}-4b\rho_{3}^{3}+\rho_{3}^{4}}{1-2b\rho_{3}+2b\rho_{3}^{3}-\rho_{3}^{4}}\right)^{2}-1\right|=|(\sqrt{2})^{2}-1|=1,\] proving the sharpness.

4. The number \(\rho_{4}\) is the smallest real root of the equation in (3.4). Easy computations show that for \(r\in[0,1)\), we have \(a<e\) for \(r<((e-1)/(e+1))^{1/4}\approx 0.824495\). 
Also, \(a<(e+1/e)/2\) for \(r<((e-1)/(e+1))^{1/2}\approx 0.679792\). Thus, using [13, Lemma 2.2], the disc (3.14) is contained in the region \(\{w\in\mathbb{C}:|\log w|<1\}\) provided \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{e}.\] Or equivalently, if \[\frac{1-5r^{2}+8br^{3}-5r^{4}+r^{6}}{(-1+2br-r^{2})(1-r^{4})}\leq-\frac{1}{e}\] for \(0<r\leq\rho_{4}\). Thus, proving that the number \(\rho_{4}\) is the \(S_{e}^{*}\) radius for the class \(\mathcal{K}_{b}^{1}\). Moreover, for the function in (2.1) at \(z=-i\rho_{4}\), we have, \[\left|\log\left(\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}\right)\right|=\left| \log\left(\frac{1-5\rho_{4}^{2}+8b\rho_{4}^{3}-5\rho_{4}^{4}+\rho_{4}^{6}}{(1- 2b\rho_{4}+\rho_{4}^{2})(1-\rho_{4}^{4})}\right)\right|=\left|\log\left(\frac{1 }{e}\right)\right|=1,\] showing that the bound is best possible. 5. The number \(\rho_{5}\) is the smallest real root of the equation in (3.5). For \(r\leq 1/\sqrt{2}\approx 0.707107\), we have \(a\leq 5/3\). By [20, Lemma 2.5], the disc (3.14) lies inside the region bounded by \(\phi_{c}(\mathbb{D})\), where \(\phi_{c}(z)=1+(4/3)z+(2/3)z^{2}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{3}.\] Or equivalently, if \[\frac{1-5r^{2}+8br^{3}-5r^{4}+r^{6}}{(-1+2br-r^{2})(1-r^{4})}\leq-\frac{1}{3}.\] Hence, \(f\in S_{c}^{*}\) for \(0<r\leq\rho_{5}\) showing that the number \(\rho_{5}\) is the \(S_{c}^{*}\) radius for the class \(\mathcal{K}_{b}^{1}\). Further, for the function in (2.1) at \(z=-i\rho_{5}\), we have, \[\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)} =\frac{1-5\rho_{5}^{2}+8b\rho_{5}^{3}-5\rho_{5}^{4}+\rho_{5}^{6}}{ (1-2b\rho_{5}+\rho_{5}^{2})(1-\rho_{5}^{4})}\] \[=\frac{1}{3}=\phi_{c}(-1).\] Hence, proving the sharpness. 6. The number \(\rho_{6}\) is the smallest real root of the equation in (3.6). When \(1-\sin 1<a\leq 1+\sin 1\), in view of [5, Lemma 3.3], the function \(f\in S_{\sin}^{*}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq\sin 1-\frac{2r^{4}}{1-r^{4}}.\] Therefore, the disc (3.14) lies inside the region \(\phi_{s}(\mathbb{D})\), where \(\phi_{s}(z)=1+\sin z\) provided \(0<r\leq\rho_{6}\). For the function in (3.15) at \(z=\rho_{6}\), we have \[\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)} =\frac{1-5\rho_{6}^{2}+8b\rho_{6}^{3}-5\rho_{6}^{4}+\rho_{6}^{6}}{ (1-2b\rho_{6}+\rho_{6}^{2})(1-\rho_{6}^{4})}\] Hence, the estimate is sharp. 7. The number \(\rho_{7}\) is the smallest real root of the equation in (3.7). 
For \(\sqrt{2}-1<a<\sqrt{2}+1\), applying [7, Lemma 2.1], the disc (3.14) is contained in the region \(\{w\in\mathbb{C}:2|w|>|w^{2}-1|\}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq\frac{1+r^{4}}{1-r^{4}}+1-\sqrt{2}.\] Since, for \(0<r\leq\rho_{7}\), we have \[2\left|\frac{zf^{\prime}(z)}{f(z)}\right|>\left|\left(\frac{zf^{\prime}(z)}{f(z)} \right)^{2}-1\right|.\] For sharpness, note that the function \(f_{b}\) given in (2.1) at \(z=-i\rho_{7}\) satisfies \[\left|\left(\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}\right)^{2}-1\right| =\left|\left(\frac{1-5\rho_{7}^{2}+8b\rho_{7}^{3}-5\rho_{7}^{4}+ \rho_{7}^{6}}{(1-2b\rho_{7}+\rho_{7}^{2})(1-\rho_{7}^{4})}\right)^{2}-1\right|\] \[=2\left|\frac{1-5\rho_{7}^{2}+8b\rho_{7}^{3}-5\rho_{7}^{4}+\rho_{ 7}^{6}}{(1-2b\rho_{7}+\rho_{7}^{2})(1-\rho_{7}^{4})}\right|=2\left|\frac{z(f_{ b})^{\prime}(z)}{f_{b}(z)}\right|.\] This shows that the number \(\rho_{7}\) is the \(S_{\mathbb{Q}}^{*}\) radius for the class \(\mathcal{K}_{b}^{1}\). 3. The number \(\rho_{8}\) is the smallest real root of the equation in (3.8). Observe that for \(r\leq((\sqrt{2}-1)/(\sqrt{2}+1))^{1/4}\approx 0.643594\), we have \(2\sqrt{2}-2<a\leq\sqrt{2}\). Then, by [9, Lemma 2.2], the function \(f\in S_{R}^{*}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq 2-2\sqrt{2}+\frac{1+r^{4}}{1-r^{4}}.\] Thus, the disc (3.14) is contained in the region \(\phi_{0}(\mathbb{D})\), where \(\phi_{0}(z):=1+(z/k)((k+z)/(k-z)),k=1+\sqrt{2}\) whenever \(0<r\leq\rho_{8}\). For the function \(f_{b}\) in (2.1), we have at \(z=-i\rho_{8}\), \[\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)} =\frac{1-5\rho_{8}^{2}+8b\rho_{8}^{3}-5\rho_{8}^{4}+\rho_{8}^{6}} {(1-2b\rho_{8}+\rho_{8}^{2})(1-\rho_{8}^{4})}\] \[=2\sqrt{2}-2=\phi_{0}(-1).\] Thus, the radius obtained is sharp. 4. Let \(\rho_{9}\) denote the smallest real root of the equation (3.9). For \(1\leq a<5/3\), using [22, Lemma 2.2] gives that \(f\in S_{Ne}^{*}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq\frac{5}{3}-\frac{1+r^{4}}{1-r^{4}}.\] Hence, the disc (3.14) lies in the region \(\phi_{Ne}(\mathbb{D})\), where \(\phi_{Ne}(z)=1+z-z^{3}/3\) provided \(0<r\leq\rho_{9}\). For sharpness, consider \(f_{b}\) in (3.15) at \(z=\rho_{9}\), \[\left|\frac{z(f_{b})^{\prime}(z)}{f_{b}(z)}\right| =\left|\frac{1-4b\rho_{9}+6\rho_{9}^{2}-4b\rho_{9}^{3}+\rho_{9}^ {4}}{(1-2b\rho_{9}+\rho_{9}^{2})\left(1-\rho_{9}^{2}\right)}\right|\] \[=\frac{5}{3}=\phi_{Ne}(1).\] 5. Let \(\rho_{10}\) denote the smallest real root of the equation (3.10) and suppose \(1\leq a<2e/(1+e)\). Using [8, Lemma 2.2], it follows that \(f\in S_{SG}^{*}\) if \[\frac{\tilde{n}r+6r^{2}+4\tilde{n}r^{3}+6r^{4}+\tilde{n}r^{5}}{(1+\tilde{n}r+ r^{2})(1-r^{4})}\leq\frac{2e}{1+e}-\frac{1+r^{4}}{1-r^{4}}.\] Hence, the disc (3.14) lies in the region \(\phi_{SG}(\mathbb{D})\), where \(\phi_{SG}(z)=2/(1+e^{-z})\) provided \(0<r\leq\rho_{10}\). Further, if \(w=z(f_{b})^{\prime}(z)/(f_{b})(z)\) for \(f_{b}\) in (3.15), then at \(z=\rho_{10}\), we have, \[\left|\log\left(\frac{w}{2-w}\right)\right| =\left|\log\left(\frac{1-4b\rho_{10}+6\rho_{10}^{2}-4b\rho_{10}^{ 3}+\rho_{10}^{4}}{1-6\rho_{10}^{2}+8b\rho_{10}^{3}-3\rho_{10}^{4}}\right)\right|\] \[=\left|\log\left(e\right)\right|=1\] and hence the estimate is sharp. The following theorem provides various starlikeness for the class \(\mathcal{K}_{b,c}^{2}\). 
**Theorem 3.3**.: _If \(m=|4b-2c|\leq 2\) and \(n=|2c|\), then for class \(\mathcal{K}_{b,c}^{2}\), the following statements hold:_ 1. _The_ \(S_{P}^{*}\) _radius is the smallest positive real root of the equation_ \[1-(m+n)r-3(6+mn)r^{2}-17(m+n)r^{3}-12(3+mn)r^{4}-15(m+n)r^{5}\] \[-(14+mn)r^{6}+(m+n)r^{7}+3r^{8}=0.\] (3.16) 2. _For any_ \(\alpha\in[0,1)\)_, the_ \(S^{*}(\alpha)\) _radius is the smallest positive real root of the equation_ \[1-\alpha-\alpha(m+n)r-(mn+\alpha mn+8+2\alpha)r^{2}-(m+n)(8+ \alpha)r^{3}-(6mn+18)r^{4}\] \[-(8-\alpha)(m+n)r^{5}-(mn-\alpha mn+8-2\alpha)r^{6}+\alpha(m+n) r^{7}+(1+\alpha)r^{8}=0.\] (3.17) 3. _The_ \(S_{L}^{*}\) _radius is the smallest positive real root of the equation_ \[1-\sqrt{2}+\left(2-\sqrt{2}\right)(m+n)r+(11-\sqrt{2}+3mn-\sqrt {2}mn)r^{2}+(8m+8n)r^{3}\] \[+(11+\sqrt{2}+3mn+\sqrt{2}mn)r^{4}+\left(2+\sqrt{2}\right)(m+n) r^{5}+(1+\sqrt{2})r^{6}=0.\] (3.18) 4. _The_ \(S_{e}^{*}\) _radius is the smallest positive real root of the equation_ \[1-e+(m+n)r+(2+8e+mn+emn)r^{2}+(m+8em+n+8en)r^{3}+(18e+6emn)r^{4}\] \[-(m-8em+n-8en)r^{5}-(2-8e+mn-emn)r^{6}-(m+n)r^{7}-(1+e)r^{8}=0.\] (3.19) 5. _The_ \(S_{c}^{*}\) _radius is the smallest positive real root of the equation_ \[2-(m+n)r-(26+4mn)r^{2}-(25m+25n)r^{3}-(54+18mn)r^{4}-(23m+23n)r^{5}\] \[-(22+2mn)r^{6}+(m+n)r^{7}+4r^{8}=0.\] (3.20) 6. _The_ \(S_{\sin}^{*}\) _radius is the smallest positive real root of the equation_ \[\sin 1-(m+n-m\sin 1-n\sin 1)r-(10+2mn-2\sin 1-mn\sin 1)r^{2}-(9m+9n\] \[-m\sin 1-n\sin 1)r^{3}-(22+6mn)r^{4}-(11m+11n+m\sin 1+n\sin 1)r^{ 5}-(14+4mn\] \[+2\sin 1+mn\sin 1)r^{6}-(3m+3n+m\sin 1+n\sin 1)r^{7}-(2+\sin 1)r^{8}=0.\] (3.21) _._ * _The_ \(S^{*}_{\mathcal{Q}}\) _radius is the smallest positive real root of the equation_ \[(-2+\sqrt{2})+(-3+\sqrt{2})(m+n)r+(2(-7+\sqrt{2})+(-4+\sqrt{2})mn)r^ {2}+(-11+\sqrt{2})\] \[(m+n)r^{3}+(-22-6mn)r^{4}-(9+\sqrt{2})(m+n)r^{5}+(-2(5+\sqrt{2})-( 2+\sqrt{2})mn)r^{6}\] \[-(1+\sqrt{2})(m+n)r^{7}-\sqrt{2}r^{8}=0.\] (3.22) _(viii) The_ \(S^{*}_{R}\) _radius is the smallest positive real root of the equation_ \[3-2\sqrt{2}+(2m-2\sqrt{2}m+2n-2\sqrt{2}n)r-(4+4\sqrt{2}-mn+2\sqrt {2}mn)r^{2}-(6m+2\sqrt{2}m\] \[+6n+2\sqrt{2}n)r^{3}-(18+6mn)r^{4}-(10m-2\sqrt{2}m+10n-2\sqrt{2}n )r^{5}-(12-4\sqrt{2}+3mn\] \[-2\sqrt{2}mn)r^{6}-(2m-2\sqrt{2}m+2n-2\sqrt{2}n)r^{7}-(1-2\sqrt{ 2})r^{8}=0. \tag{3.23}\] _(ix) The_ \(S^{*}_{N_{e}}\) _radius is the smallest positive real root of the equation_ \[2-(m+n)r-(26+4mn)r^{2}-(25m+25n)r^{3}-(66+18mn)r^{4}-(35m+35n)r^{5}\] \[-(46+14mn)r^{6}-(11m+11n)r^{7}-8r^{8}=0. \tag{3.24}\] _(x) The_ \(S^{*}_{SG}\) _radius is the smallest positive real root of the equation_ \[1-e+(2m+2n)r+(12+8e+3mn+emn)r^{2}+(10m+8em+10n+8en)r^{3}+(22\] \[+22e+6mn+6emn)r^{4}+(10m+12em+10n+12en)r^{5}+(12+16e+3mn+5emn)r^{6}\] \[+(2m+4em+2n+4en)r^{7}+(1+3e)r^{8}=0. \tag{3.25}\] _All estimates are sharp._ Proof.: Let \(f\in\mathcal{K}^{2}_{b,c}\). Choose a function \(g\in\mathcal{A}\) such that for all \(z\in\mathbb{D}\), \(\operatorname{Re}(f(z)/g(z))>0\) and \(\operatorname{Re}((g(z)(1-z^{2}))/z)>0\). Observe that the complex valued functions \(p_{1},p_{2}\) defined on the unit disc \(\mathbb{D}\) by \(p_{1}(z)=f(z)/g(z)\) and \(p_{2}(z)=g(z)(1-z^{2})/z\) are functions in \(\mathcal{P}(0)\). Moreover, we have \[f(z)=\frac{zp_{1}(z)p_{2}(z)}{1-z^{2}},\] which satisfy the relation \[\frac{zf^{\prime}(z)}{f(z)}=\frac{zp_{1}^{\prime}(z)}{p_{1}(z)}+\frac{zp_{2}^ {\prime}(z)}{p_{2}(z)}+\frac{1+z^{2}}{1-z^{2}}. 
\tag{3.26}\] Substituting \(\alpha=0\) in (3.11) and using (3.12) and (3.26), we get \[\left|\frac{zf^{\prime}(z)}{f(z)}-\frac{1+r^{4}}{1-r^{4}}\right|\leq\frac{(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4}+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}}{(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})}. \tag{3.27}\] Also, a straightforward calculation shows that \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)\geq\frac{1+r^{4}}{1-r^{4}}-\frac{(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4}+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}}{(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})}=\frac{1-(8+mn)r^{2}-8(m+n)r^{3}-6(3+mn)r^{4}-8(m+n)r^{5}-(8+mn)r^{6}+r^{8}}{(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})}. \tag{3.28}\]

1. Set \(x(r):=1-(m+n)r-3(6+mn)r^{2}-17(m+n)r^{3}-12(3+mn)r^{4}-15(m+n)r^{5}-(14+mn)r^{6}+(m+n)r^{7}+3r^{8}\). Note that \(x(0)=1>0\) and \(x(1)=-16(mn+2m+2n+4)<0\), and thus, in view of the intermediate value theorem, a root of the equation (3.16) lies in the interval \((0,1)\), denoted by \(\rho_{1}\). When \(1/2<a\leq 3/2\), in view of [19, Lemma 2.2], the disc (3.27) is contained in the region \(\{w\in\mathbb{C}:|w-1|<\operatorname{Re}w\}\) if \[\frac{(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4}+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}}{(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{2}.\] Hence, \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)>\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|\] whenever \(0<r\leq\rho_{1}\). This proves that the \(S_{P}^{*}\) radius for the class \(\mathcal{K}_{b,c}^{2}\) is the number \(\rho_{1}\). To justify the sharpness of the \(S_{P}^{*}\) radius, observe that the function \(f_{b,c}\) in (2.3) at \(z=-i\rho_{1}\) satisfies \[\operatorname{Re}\left(\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right)=\frac{\rho_{1}^{8}-(8-2c(-4b+2c))\rho_{1}^{6}+32b\rho_{1}^{5}-6(3-2c(-4b+2c))\rho_{1}^{4}+32b\rho_{1}^{3}-(8-2c(-4b+2c))\rho_{1}^{2}+1}{(1-2c\rho_{1}+\rho_{1}^{2})\left(1-(4b-2c)\rho_{1}+\rho_{1}^{2}\right)(1-\rho_{1}^{4})}=\left|\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}-1\right|.\]

**Remark 3.4**.: On similar lines, we can prove the sharpness for \(b>0\). The sharpness is attained at the point \[z=\left\{\begin{array}{ll}-i\rho,&\text{if }b<0\\ i\rho,&\text{if }b>0\end{array}\right.\] for the classes \(S_{P}^{*}\), \(S^{*}(\alpha)\), \(S_{e}^{*}\), \(S_{c}^{*}\), \(S_{Q}^{*}\), \(S_{R}^{*}\), \(S_{Ne}^{*}\), \(S_{SG}^{*}\), and for the classes \(S_{L}^{*}\) and \(S_{\sin}^{*}\), sharpness is attained at the point \[z=\left\{\begin{array}{ll}\rho,&\text{if }b<0\\ -\rho,&\text{if }b>0.\end{array}\right.\]

2. Let \(f\in\mathcal{K}_{b,c}^{2}\) and \(\alpha\in[0,1)\). Let \(\rho_{2}\) denote the smallest positive real root of the equation (3.17) in \((0,1)\). From (3.28), it follows that for \(0<r\leq\rho_{2}\), the function \(f\) satisfies \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)\geq\frac{1-(8+mn)r^{2}-8(m+n)r^{3}-6(3+mn)r^{4}-8(m+n)r^{5}-(8+mn)r^{6}+r^{8}}{(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})}\geq\alpha,\] thereby showing that \(f\in S^{*}(\alpha)\) in each disc \(|z|\leq r\) for \(0<r\leq\rho_{2}\). 
For \(z=-i\rho_{2}\), the function \(f_{b,c}\) defined in (2.3) satisfies \[\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)} =\frac{\rho_{2}^{8}-(8-2c(-4b+2c))\rho_{2}^{6}+32b\rho_{2}^{5}-6(3- 2c(-4b+2c))\rho_{2}^{4}+32b\rho_{2}^{3}}{-(8-2c(-4b+2c))\rho_{2}^{2}+1}\] \[=\alpha\] proving that the radius is sharp. 3. The number \(\rho_{3}\) is the smallest real root of the equation in (3.18). From (3.27), it follows that \[\left|\frac{zf^{\prime}(z)}{f(z)}-1\right|\leq\frac{(m+n)r+2(5+mn)r^{2}+8(m+n) r^{3}+4(3+mn)r^{4}+3(m+n)r^{5}+2r^{6}}{(1+mr+r^{2})(1+nr+r^{2})(1-r^{2})}.\] When \(2\sqrt{2}/3\leq a<\sqrt{2}\), by [1, Lemma 2.2], the disc (3.27) is contained in the lemniscate region \(\{w\in\mathbb{C}:|w^{2}-1|<1\}\) if \[\frac{(m+n)r+2(5+mn)r^{2}+8(m+n)r^{3}+4(3+mn)r^{4}+3(m+n)r^{5}+2r^{6}}{(1+mr+ r^{2})(1+nr+r^{2})(1-r^{2})}\leq\sqrt{2}-1.\] Hence, \[\left|\left(\frac{zf^{\prime}(z)}{f(z)}\right)^{2}-1\right|<1\] for \(0<r\leq\rho_{3}\) showing that the number \(\rho_{3}\) is the \(S_{L}^{*}\) radius for the class \(\mathcal{K}_{b,c}^{2}\). Consider the functions \(f_{b,c}\), \(g_{b,c}:\mathbb{D}\rightarrow\mathbb{C}\) defined by \[f_{b,c}=\frac{z(1-2cz+z^{2})(1-(4b-2c)z+z^{2})}{(1-z^{2})^{3}}\quad\text{and} \quad g_{b,c}=\frac{z\left(1-2cz+z^{2}\right)}{\left(1-z^{2}\right)^{2}}.\] (3.29) Figure 2. Graphical illustration of sharpness for classes in Theorem (3.3) for particular choices of \(b\) and \(c\). The function \(f_{b,c}\) for the chosen \(g_{b,c}\) as given in (3.29) is a member of \(\mathcal{K}^{2}_{b,c}\) because it satisfy \[\frac{f_{b,c}(z)}{g_{b,c}(z)}=\frac{1-s_{3}(z)}{1+s_{3}(z)}\quad\text{and}\quad \frac{g_{b,c}(z)(1-z^{2})}{z}=\frac{1-s_{4}(z)}{1+s_{4}(z)},\] where \[s_{3}(z)=\frac{z(z-(2b-c))}{(2b-c)z-1}\quad\text{and}\quad s_{4}(z)=\frac{z(z- c)}{cz-1}\] are analytic functions satisfying the conditions of Schwarz's lemma in unit disc and hence \(\text{Re}(f_{b,c}(z)/g_{b,c}(z))>0\) and \(\text{Re}(g_{b,c}(z)(1-z^{2})/z)>0\). Then for \(f_{b,c}\) in (3.29) at \(z=\rho_{3}\), we have, \[\left|\left(\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right)^{2}-1\right| =\left|\left(\begin{array}{c}1-8b\rho_{3}+11\rho_{3}^{2}+24bc \rho_{3}^{2}-12c^{2}\rho_{3}^{2}-32b\rho_{3}^{3}+11\rho_{3}^{4}\\ +24bc\rho_{3}^{4}-12c^{2}\rho_{3}^{4}-8b\rho_{3}^{5}+\rho_{3}^{6}\\ (1-2c\rho_{3}+\rho_{3}^{2})(1-(4b-2c)\rho_{3}+\rho_{3}^{2})(1-\rho_{3}^{2}) \end{array}\right)^{2}-1\right|\] \[=|(\sqrt{2})^{2}-1|=1\] and therefore, the estimate is sharp. 4. The number \(\rho_{4}\) is the smallest real root of the equation in (3.19). When \(1/e<a\leq(e+1/e)/2\), in view of [13, Lemma 2.2], the disc (3.27) is contained in the region \(\{w\in\mathbb{C}:|\log w|<1\}\) if \[\begin{split}&(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4} \\ &\qquad\qquad\qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7} \\ &\qquad\qquad\qquad\qquad(1+mr+r^{2})(1+nr+r^{2})(1-r^{4}) \end{split}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{e}\end{split}\] for \(0<r\leq\rho_{4}\). This shows that the \(S_{e}^{*}\) radius for the class \(\mathcal{K}^{2}_{b,c}\) is the number \(\rho_{4}\). Moreover, for \(f_{b,c}\) in (2.3) at \(z=-i\rho_{4}\), we have, \[\left|\log\left(\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right)\right| =\left|\log\left(\frac{\rho_{4}^{8}-(8-2c(-4b+2c))\rho_{4}^{6}+32b \rho_{4}^{5}-6(3-2c(-4b\) \[\qquad\qquad\qquad\qquad=\left|\log\left(\frac{1}{e}\right)\right|=1.\] This shows that the radius estimate is sharp. 5. The number \(\rho_{5}\) is the smallest real root of the equation in (3.20). 
In view of [20, Lemma 2.5], the disc (3.27) is contained in the region \(\phi_{c}(\mathbb{D})\), where \(\phi_{c}(z)=1+(4/3)z+(2/3)z^{2}\) if \[\begin{split}&(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4} \\ &\qquad\qquad\qquad\qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r ^{7}\\ &(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})\end{split}\leq\frac{1+r^ {4}}{1-r^{4}}-\frac{1}{3}\] whenever \(1/3<a\leq 5/3\). The result is sharp for the function \(f_{b,c}\) given in (2.3) and at \(z=-i\rho_{5}\), we have, \[\left|\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right|=\left|\frac{\rho_{5}^{8}- (8-2c(-4b+2c))\rho_{5}^{6}+32b\rho_{5}^{5}-6(3-2c(-4b+2c))\rho_{5}^{4}}{(1-2c \rho_{5}+\rho_{5}^{2})\left(1-(4b-2c)\rho_{5}+\rho_{5}^{2}\right)(1-\rho_{5}^{ 4})}\right|\] \[=\frac{1}{3}=\phi_{c}(-1).\] 6. The number \(\rho_{6}\) is the smallest real root of the equation in (3.21). When \(1-\sin 1<a\leq 1+\sin 1\), an application of [5, Lemma 3.3] yields that the function \(f\in S_{\sin}^{*}\) if \[\begin{split}&(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4} \\ &\qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}\\ \hline\frac{}{}&(1+mr+r^{2})(1+nr+r^{2})(1-r^{4}) \end{split}\leq\sin 1-\frac{2r^{4}}{1-r^{4}}.\] For \(0<r\leq\rho_{6}\), the disc (3.27) is contained in the region \(\phi_{s}(\mathbb{D})\), where \(\phi_{s}(z)=1+\sin z\), showing that the radius of sine starlikeness for the class \(\mathcal{K}_{b,c}^{2}\) is the number \(\rho_{6}\). To prove sharpness, observe that for the functions given in (3.29), at \(z=\rho_{6}\), we have, \[\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)} =\frac{\rho_{6}^{8}-(8-2c(-4b+2c))\rho_{6}^{6}+32b\rho_{6}^{5}-6( 3-2c(-4b+2c))\rho_{6}^{4}}{\left(1-2c\rho_{6}+\rho_{6}^{2}\right)\left(1-(4b-2 c)\rho_{6}+\rho_{6}^{2}\right)\left(1-\rho_{6}^{4}\right)}\] Hence, the result is sharp. 7. The number \(\rho_{7}\) is the smallest real root of the equation in (3.22). Consider \(\sqrt{2}-1<a<\sqrt{2}+1\) and applying [7, Lemma 2.1], the disc (3.27) is contained in the region \(\{w\in\mathbb{C}:2|w|>|w^{2}-1|\}\) if \[\begin{split}&(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4} \\ &\qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}\\ \hline\frac{}{}&(1+mr+r^{2})(1+nr+r^{2})(1-r^{4}) \end{split}\leq\frac{1+r^{4}}{1-r^{4}}+1-\sqrt{2}.\] Therefore, for \(0<r\leq\rho_{7}\), we have \[2\left|\frac{zf^{\prime}(z)}{f(z)}\right|>\left|\left(\frac{zf^{\prime}(z)}{ f(z)}\right)^{2}-1\right|\] which concludes that the number \(\rho_{7}\) is the \(S_{\mathbb{Q}}^{*}\) radius for the class \(\mathcal{K}_{b,c}^{2}\). Further, for the function in (2.3) at \(z=-i\rho_{7}\), we have, \[\left|\left(\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right)^{2}-1\right| =\left|\left(\frac{\rho_{7}^{8}-(8-2c(-4b+2c))\rho_{7}^{6}+32b \rho_{7}^{5}-6(3-2c(-4b\right.\] \[\left.\qquad\qquad\qquad+2c))\rho_{7}^{4}+32b\rho_{7}^{3}-(8-2c(-4b+ 2c))\rho_{7}^{2}+1}{\left(1-2c\rho_{7}+\rho_{7}^{2}\right)\left(1-(4b-2c) \rho_{7}+\rho_{7}^{2}\right)\left(1-\rho_{7}^{4}\right)}\right)^{2}-1\right|\] \[=2\left|\frac{\rho_{7}^{8}-(8-2c(-4b+2c))\rho_{7}^{6}+32b\rho_{7}^ {5}-6(3-2c(-4b+2c))\rho_{7}^{4}}{\left(1-2c\rho_{7}+\rho_{7}^{2}\right)\left(1 -(4b-2c)\rho_{7}+\rho_{7}^{2}\right)\left(1-\rho_{7}^{4}\right)}\right|\] \[=2\left|\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right|.\] Thus, the estimate is best possible. * The number \(\rho_{8}\) is the smallest real root of the equation in (3.23). 
If \(2(\sqrt{2}-1)<a\leq\sqrt{2}\), by [9, Lemma 2.2] the function \(f\in S_{R}^{*}\) if \[\begin{array}{c}(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4}\\ \qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}\\ \qquad\qquad(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})\end{array}\leq\frac{1+r^{4}}{1-r ^{4}}-2(\sqrt{2}-1).\] Observe that for \(0<r\leq\rho_{8}\), the disc (3.27) is contained in the region \(\phi_{0}(\mathbb{D})\), where \(\phi_{0}(z):=1+(z/k)((k+z)/(k-z)),k=1+\sqrt{2}\). For sharpness, note that the function \(f_{b,c}\) in (2.3) at \(z=-i\rho_{8}\) satisfies \[\begin{array}{c}\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}=\frac{\rho_{8}^{ 8}-(8-2c(-4b+2c))\rho_{8}^{6}+32b\rho_{8}^{5}-6(3-2c(-4b+2c))\rho_{8}^{4}+32b \rho_{8}^{3}}{(1-2c\rho_{8}+\rho_{8}^{2})\left(1-(4b-2c)\rho_{8}+\rho_{8}^{2} \right)\left(1-\rho_{8}^{4}\right)}\\ \qquad\qquad\qquad=2\sqrt{2}-2=\phi_{0}(-1).\end{array}\] * Let \(\rho_{9}\) denote the smallest real root of the equation (3.24). For \(1\leq a<5/3\), an application of [22, Lemma 2.2] gives that \(f\in S_{Ne}^{*}\) if \[\begin{array}{c}(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4}\\ \qquad\qquad\qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}\\ \qquad\qquad\qquad(1+mr+r^{2})(1+nr+r^{2})(1-r^{4})\end{array}\leq\frac{5}{3} -\frac{1+r^{4}}{1-r^{4}}.\] Thus, for \(0<r\leq\rho_{9}\), the disc (3.27) lies in the region \(\phi_{Ne}(\mathbb{D})\), where \(\phi_{Ne}(z)=1+z-z^{3}/3\). Further, the function defined in (2.3) at \(z=-i\rho_{9}\) satisfies \[\begin{array}{c}\left|\frac{z(f_{b,c})^{\prime}(z)}{f_{b,c}(z)}\right|= \left|\frac{\rho_{9}^{8}-(8-2c(-4b+2c))\rho_{9}^{6}+32b\rho_{9}^{5}-6(3-2c(-4b +2c))\rho_{9}^{4}}{(1-2c\rho_{9}+\rho_{9}^{2})\left(1-(4b-2c)\rho_{9}+\rho_{9} ^{2}\right)\left(1-\rho_{9}^{4}\right)}\right|\\ \qquad\qquad\qquad=\frac{5}{3}=\phi_{Ne}(1).\end{array}\] This proves that the estimate is best possible. * Let \(f\in\mathcal{K}_{b,c}^{2}\) and \(\rho_{10}\) denote the smallest real root of the equation (3.25). Using [8, Lemma 2.2] for \(1\leq a<2e/(1+e)\), it follows that \(f\in S_{SG}^{*}\) if \[\begin{array}{c}(m+n)r+(10+2mn)r^{2}+(9m+9n)r^{3}+(20+6mn)r^{4}\\ \qquad\qquad\qquad+(9m+9n)r^{5}+(10+2mn)r^{6}+(m+n)r^{7}\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \frac{2e}{1+e}-\frac{1+r^{4}}{1-r^{4}}.\end{array}\] Hence, for \(0<r\leq\rho_{10}\), the disc (3.27) lies in the region \(\phi_{SG}(\mathbb{D})\), where \(\phi_{SG}(z)=2/(1+e^{-z})\). This shows that the \(S_{SG}^{*}\) radius for the class \(\mathcal{K}_{b,c}^{2}\) is the number \(\rho_{10}\). For \(w=z(f_{b,c})^{\prime}/f_{b,c}\) and at \(z=-i\rho_{10}\), where \(f_{b,c}\) is the function defined in (2.3), we have \[\begin{array}{c}\left|\log\left(\frac{w}{2-w}\right)\right|=\left|\frac{-1+(8 +8bc-4c^{2})\rho_{10}^{2}+32b\rho_{10}^{3}+6(3+8bc-4c^{2})\rho_{10}^{4}}{+32b \rho_{10}^{5}+(8+8bc-4c^{2})\rho_{10}^{6}-\rho_{10}^{8}}\right|\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-1+8b\rho_{10}-(12+24bc-1 2c^{2})\rho_{10}^{2}+40b\rho_{10}^{3}-(18+48bc}\right|\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-24c^{2})\rho_{10}^{4}+24b \rho_{10}^{5}-(4-8bc+4c^{2})\rho_{10}^{6}-8b\rho_{10}^{7}+3\rho_{10}^{8} \end{array}\right|\\ =1.\end{array}\] Thus, the obtained radius is sharp. In the theorem given below, we determine the radii constants for the class \(\mathcal{K}^{3}_{b,c}\). **Theorem 3.5**.: _Let \(u=|2c-3b|\leq 1\) and \(v=|2c|\). Then, the following results hold for the class \(\mathcal{K}^{3}_{b,c}\):_ 1. 
_The_ \(S^{*}_{p}\) _radius is the smallest positive real root of the equation_ \[1-(u+v)r-(15+3uv)r^{2}-(17u+12v)r^{3}-(17+12uv)r^{4}-(15u+3v)r^{5}\] \[-(1+uv)r^{6}+ur^{7}=0.\] (3.30) 2. _For any_ \(0\leq\alpha<1\)_, the_ \(S^{*}(\alpha)\) _radius is the smallest positive real root of the equation_ \[1-\alpha-(u\alpha+v\alpha)r-(7+uv+\alpha+uv\alpha)r^{2}-(8u+6v+ u\alpha)r^{3}-(9+6uv-\alpha)r^{4}\] \[-(8u+2v-u\alpha-v\alpha)r^{5}-(1+uv-\alpha-uv\alpha)r^{6}+\alpha ur ^{7}=0.\] (3.31) 3. _The_ \(S^{*}_{L}\) _radius is the smallest positive real root of the equation_ \[1-\sqrt{2}+(2u-\sqrt{2}u+2v-\sqrt{2}v)r+(8+3uv-\sqrt{2}uv)r^{2}+( 8u+4v+\sqrt{2}v)r^{3}\] \[+(3+\sqrt{2}+3uv+\sqrt{2}uv)r^{4}+(2u+\sqrt{2}u)r^{5}=0.\] (3.32) 4. _The_ \(S^{*}_{e}\) _radius is the smallest positive real root of the equation_ \[1-e+(u+v)r+(1+7e+uv+euv)r^{2}+(u+8eu+6ev)r^{3}-(1-9e-6euv)r^{4}\] \[-(u-8eu+v-2ev)r^{5}-(1-e+uv-euv)r^{6}-ur^{7}=0.\] (3.33) 5. _The_ \(S^{*}_{c}\) _radius is the smallest positive real root of the equation_ \[2-(u+v)r-(22+4uv)r^{2}-(25u+18v)r^{3}-(26+18uv)r^{4}-(23u+5v)r^{5}\] \[-(2+2uv)r^{6}+ur^{7}=0.\] (3.34) 6. _The_ \(S^{*}_{\sin}\) _radius is the smallest positive real root of the equation_ \[\sin 1-(u+v-u\sin 1-v\sin 1)r-(8+2uv-\sin 1-uv\sin 1)r^{2}-(9u+6v-u\sin 1 )r^{3}\] \[-(12+6uv+\sin 1)r^{4}-(11u+5v+u\sin 1+v\sin 1)r^{5}-(4+4uv+\sin 1 +uv\sin 1)r^{6}\] \[-(3u+u\sin 1)r^{7}=0.\] (3.35) 7. _The_ \(S^{*}_{\mathbb{Q}}\) _radius is the smallest positive real root of the equation_ \[2-\sqrt{2}+(u-\sqrt{2}u+v-\sqrt{2}v)r-(6+\sqrt{2}+\sqrt{2}uv)r^{2}-(7 u+\sqrt{2}u+6v)r^{3}-(10-\sqrt{2}\] \[+6uv)r^{4}-(9u-\sqrt{2}u+3v-\sqrt{2}v)r^{5}-(2-\sqrt{2}+2uv-\sqrt {2}uv)r^{6}-(u-\sqrt{2}u)r^{7}=0.\] (3.36) 8. _The_ \(S^{*}_{R}\) _radius is the smallest positive real root of the equation_ \[3-2\sqrt{2}+(2u-2\sqrt{2}u+2v-2\sqrt{2}v)r-(5+2\sqrt{2}-uv+2\sqrt {2}uv)r^{2}-(6u+2\sqrt{2}u+6v)r^{3}\] \[-(11-2\sqrt{2}+6uv)r^{4}-(10u-2\sqrt{2}u+4v-2\sqrt{2}v)r^{5}-(3-2 \sqrt{2}+3uv-2\sqrt{2}uv)r^{6}\] \[-(2u-2\sqrt{2}u)r^{7}=0.\] (3.37) _._ 2. _The_ \(S^{*}_{N_{e}}\) _radius is the smallest positive real root of the equation_ \[2-(u+v)r-(22+4uv)r^{2}-(25u+18v)r^{3}-(38+18uv)r^{4}-(35u+17v)r^{5}\] \[-(14+14uv)r^{6}-11ur^{7}=0.\] (3.38) _(x) The_ \(S^{*}_{SG}\) _radius is the smallest positive real root of the equation_ \[1-e+(2u+2v)r+(9+7e+3uv+euv)r^{2}+(10u+8eu+6v+6ev)r^{3}+(11+13e+6uv\] \[+6euv)r^{4}+(10u+12eu+4v+6ev)r^{5}+(3+5e+3uv+5euv)r^{6}+(2u+4eu)r ^{7}=0. \tag{3.39}\] Proof.: Let \(f\in\mathcal{K}^{3}_{b,c}\). Further, choose the function \(g\in\mathcal{A}\) such that \[\left|\frac{f(z)}{g(z)}-1\right|<1\text{ and }\operatorname{Re}\frac{g(z)(1-z^{2} )}{z}>0.\] Define the functions \(p_{1},p_{2}:\mathbb{D}\to\mathbb{C}\) as \(p_{1}(z)=g(z)(1-z^{2})/z\) and \(p_{2}(z)=g(z)/f(z)\). Clearly, \(p_{1}\in\mathcal{P}_{c}\) and \(p_{2}\in\mathcal{P}_{2c-3b}(1/2)\). Moreover, \[f(z)=\frac{g(z)}{p_{2}(z)}=\frac{zp_{1}(z)}{(1-z^{2})p_{2}(z)}\] and \[\frac{zf^{\prime}(z)}{f(z)}=\frac{zp_{1}^{\prime}(z)}{p_{1}(z)}-\frac{zp_{2}^ {\prime}(z)}{p_{2}(z)}+\frac{1+z^{2}}{1-z^{2}}. \tag{3.40}\] From (3.11), the following inequalities readily follows: \[\left|\frac{zp_{1}^{\prime}(z)}{p_{1}(z)}\right|\leq\frac{r}{1-r^{2}}\frac{vr ^{2}+v+4r}{r^{2}+vr+1}\quad\text{and}\quad\left|\frac{zp_{2}^{\prime}(z)}{p_{ 2}(z)}\right|\leq\frac{r}{1-r^{2}}\frac{ur^{2}+u+2r}{ur+1}. 
\tag{3.41}\] Thus, combining (3.40) and (3.41) yields the disk \[\left|\frac{zf^{\prime}(z)}{f(z)}-\frac{1+r^{4}}{1-r^{4}}\right|\leq\frac{(u +v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}}{(1+ru)(1+r^{2}+rv)( 1-r^{4})}. \tag{3.42}\] From above inequality, it follows that \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)\geq\frac{1-(7+uv)r^ {2}-(8u+6v)r^{3}-(9+6uv)r^{4}-2(4u+v)r^{5}-(1+uv)r^{6}}{(1-r^{4})(1+ru)(1+r^{2} +rv)}. \tag{3.43}\] 1. Set \(x(r):=1-(u+v)r-(15+3uv)r^{2}-(17u+12v)r^{3}-(17+12uv)r^{4}-(15u+3v)r^{5}-(1+uv) r^{6}+ur^{7}\). Observe that \(x(0)=1>0\) and \(x(1)=-16(2+2u+uv+v)<0\) and thus the intermediate value theorem shows that a root of the equation (3.30) lies in \((0,1)\), denoted by \(\rho_{1}\). When \(1/2<a\leq 3/2\), in view of [19, Lemma 2.2], the disc (3.42) is contained in the parabolic region \(\{w\in\mathbb{C}:|w-1|<\operatorname{Re}w\}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad+(2+2uv)r^{6}+ur^{7}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{2}.\] Equivalently, \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)>\left|\frac{zf^{\prime} (z)}{f(z)}-1\right|\] for \(0<r\leq\rho_{1}\). Thus, the radius of parabolic starlikeness for the class \(\mathcal{K}^{3}_{b,c}\) is the number \(\rho_{1}\). 2. For \(0\leq\alpha<1\), let \(\rho_{2}\in(0,1)\) be the smallest positive real root of the equation (3.31). Thus, in view of (3.43), we get \[\operatorname{Re}\left(\frac{zf^{\prime}(z)}{f(z)}\right)>\alpha\] whenever \(0<r\leq\rho_{2}\). 3. The number \(\rho_{3}\) is the smallest real root of the equation in (3.32). When \(2\sqrt{2}/3\leq a<\sqrt{2}\), in view of [1, Lemma 2.2], the disc (3.42) lies in the lemniscate region \(\{w\in\mathbb{C}:|w^{2}-1|<1\}\) if \[\frac{(u+v)r+(8+2uv)r^{2}+(8u+5v)r^{3}+(4+4uv)r^{4}+3ur^{5}}{(1+ru)(1+r^{2}+rv )(1-r^{2})}\leq\sqrt{2}-1.\] Hence, for \(0<r\leq\rho_{3}\), we have \[\left|\left(\frac{zf^{\prime}(z)}{f(z)}\right)^{2}-1\right|<1.\] the radius of lemniscate starlikeness for the class \(\mathcal{K}^{3}_{b,c}\) is the number \(\rho_{3}\). 4. The number \(\rho_{4}\) is the smallest real root of the equation in (3.33). When \(1/e<a\leq(e+1/e)/2\), in view of [13, Lemma 2.2], \(f\in S^{*}_{e}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\] \[\frac{+(2+2uv)r^{6}+ur^{7}}{(1+ru)(1+r^{2}+rv)(1-r^{4})}\leq\frac{1 +r^{4}}{1-r^{4}}-\frac{1}{e}.\] Thus, the disc (3.42) is contained in the region \(\{w\in\mathbb{C}:|\log w|<1\}\) for \(0<r\leq\rho_{4}\). 5. The number \(\rho_{5}\) is the smallest real root of the equation in (3.34). Using [20, Lemma 2.5], the function \(f\in S^{*}_{c}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\] \[+(2+2uv)r^{6}+ur^{7}\leq\frac{1+r^{4}}{1-r^{4}}-\frac{1}{3}.\] Thereby, the disc (3.42) lies inside \(\phi_{c}(\mathbb{D})\), where \(\phi_{c}(z)=1+(4/3)z+(2/3)z^{2}\), if \(0<r\leq\rho_{5}\). 6. The number \(\rho_{6}\) is the smallest real root of the equation in (3.35). When \(1-\sin 1<a\leq 1+\sin 1\), an application of [5, Lemma 3.3] gives that the function \(f\in S^{*}_{\sin}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\] \[+(2+2uv)r^{6}+ur^{7}\leq\sin 1-\frac{2r^{4}}{1-r^{4}}.\] Hence, the disc in (3.42) is contained in the region \(\phi_{s}(\mathbb{D})\), where \(\phi_{s}(z)=1+\sin z\), for \(0<r\leq\rho_{6}\). * The number \(\rho_{7}\) is the smallest real root of the equation in (3.36). 
Considering \(\sqrt{2}-1<a<\sqrt{2}+1\) and using [7, Lemma 2.1], the disc (3.42) is contained in the region \(\{w\in\mathbb{C}:2|w|>|w^{2}-1|\}\) provided \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\\ +(2+2uv)r^{6}+ur^{7}\\ \leq\frac{1+r^{4}}{1-r^{4}}+1-\sqrt{2}.\] For \(0<r\leq\rho_{7}\), the function \(f\) satisfies \[2\left|\frac{zf^{\prime}(z)}{f(z)}\right|>\left|\left(\frac{zf^{\prime}(z)}{f( z)}\right)^{2}-1\right|.\] * The number \(\rho_{8}\) is the smallest real root of the equation in (3.37). when \(2(\sqrt{2}-1)<a\leq\sqrt{2}\), by [9, Lemma 2.2] the function \(f\in S_{R}^{*}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\\ +(2+2uv)r^{6}+ur^{7}\\ \leq\frac{1+r^{4}}{1-r^{4}}+2-2\sqrt{2}.\] Note that for \(0<r\leq\rho_{8}\), the disc (3.42) is contained in the region \(\phi_{0}(\mathbb{D})\), where \(\phi_{0}(z):=1+(z/k)((k+z)/(k-z))\) and \(k=1+\sqrt{2}\). * Let \(\rho_{9}\) denote the smallest real root of the equation (3.38). When \(1\leq a<5/3\), in view of [22, Lemma 2.2], \(f\in S_{Ne}^{*}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\\ +(2+2uv)r^{6}+ur^{7}\\ \leq\frac{5}{3}-\frac{1+r^{4}}{1-r^{4}}.\] Thus, the disc (3.42) is contained in the region \(\phi_{Ne}(\mathbb{D})\) for \(0<r\leq\rho_{9}\), where \(\phi_{Ne}(z)=1+z-z^{3}/3\). * Let \(\rho_{10}\) denote the smallest real root of the equation (3.39). Applying [8, Lemma 2.2] for \(1\leq a<2e/(1+e)\) gives \(f\in S_{SG}^{*}\) if \[(u+v)r+(8+2uv)r^{2}+(9u+6v)r^{3}+(10+6uv)r^{4}+(9u+3v)r^{5}\\ +(2+2uv)r^{6}+ur^{7}\\ \leq\frac{2e}{1+e}-\frac{1+r^{4}}{1-r^{4}}.\] Hence, the disc (3.42) is contained in the region \(\phi_{SG}(\mathbb{D})\) for \(0<r\leq\rho_{10}\), where \(\phi_{SG}(z)=2/(1+e^{-z})\). ## 4. Conclusion Three classes of analytic functions satisfying some conditions involving the ratio \(f/g\) were introduced. Several sharp radii estimates were determined for these classes by constraining the second coefficients of the functions. Various generalizations of the existing work in the same field are also discussed. The technique used in this paper can be imitated in finding radius estimates for various classes of analytic functions involving fixed second coefficient.
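Although the radii above are characterized implicitly as smallest positive roots of polynomial equations, they are easy to evaluate numerically. The short Python sketch below (an added illustration, not part of the paper) recovers, for example, the \(S^{*}_{L}\) radius of Theorem 3.1 at \(b=-1\) (so \(\tilde{n}=2\)) from equation (3.3), reproducing the value \(\rho_{3}\approx 0.171573\) quoted for the sharpness illustration of Figure 1.

```python
import numpy as np

def smallest_positive_root(coeffs_ascending):
    """Smallest positive real root of sum_k coeffs[k] * r**k."""
    roots = np.polynomial.polynomial.polyroots(coeffs_ascending)
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[real > 0].min()  # assumes at least one positive real root

s2 = np.sqrt(2.0)
n = 2.0  # n~ = |2b| with b = -1

# Eq. (3.3): (1-sqrt2) + (2-sqrt2)n r + 6 r^2 + (2+sqrt2)n r^3 + (1+sqrt2) r^4 = 0
coeffs = [1 - s2, (2 - s2) * n, 6.0, (2 + s2) * n, 1 + s2]
rho3 = smallest_positive_root(coeffs)
print(f"rho_3 = {rho3:.6f}")  # ~0.171573, the S*_L radius of K^1_b at b = -1
```

The same routine applies verbatim to the other radius equations in Theorems 3.1, 3.3 and 3.5 once their coefficients are entered.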
2305.15719
Efficient Neural Music Generation
Recent progress in music generation has been remarkably advanced by the state-of-the-art MusicLM, which comprises a hierarchy of three LMs, respectively, for semantic, coarse acoustic, and fine acoustic modelings. Yet, sampling with the MusicLM requires processing through these LMs one by one to obtain the fine-grained acoustic tokens, making it computationally expensive and prohibitive for a real-time generation. Efficient music generation with a quality on par with MusicLM remains a significant challenge. In this paper, we present MeLoDy (M for music; L for LM; D for diffusion), an LM-guided diffusion model that generates music audios of state-of-the-art quality meanwhile reducing 95.7% or 99.6% forward passes in MusicLM, respectively, for sampling 10s or 30s music. MeLoDy inherits the highest-level LM from MusicLM for semantic modeling, and applies a novel dual-path diffusion (DPD) model and an audio VAE-GAN to efficiently decode the conditioning semantic tokens into waveform. DPD is proposed to simultaneously model the coarse and fine acoustics by incorporating the semantic information into segments of latents effectively via cross-attention at each denoising step. Our experimental results suggest the superiority of MeLoDy, not only in its practical advantages on sampling speed and infinitely continuable generation, but also in its state-of-the-art musicality, audio quality, and text correlation. Our samples are available at https://Efficient-MeLoDy.github.io/.
Max W. Y. Lam, Qiao Tian, Tang Li, Zongyu Yin, Siyuan Feng, Ming Tu, Yuliang Ji, Rui Xia, Mingbo Ma, Xuchen Song, Jitong Chen, Yuping Wang, Yuxuan Wang
2023-05-25T05:02:35Z
http://arxiv.org/abs/2305.15719v1
# Efficient Neural Music Generation ###### Abstract Recent progress in music generation has been remarkably advanced by the state-of-the-art MusicLM, which comprises a hierarchy of three LMs, respectively, for semantic, coarse acoustic, and fine acoustic modelings. Yet, sampling with the MusicLM requires processing through these LMs one by one to obtain the fine-grained acoustic tokens, making it computationally expensive and prohibitive for a real-time generation. Efficient music generation with a quality on par with MusicLM remains a significant challenge. In this paper, we present **MeLoDy** (**M** for music; **L** for LM; **D** for diffusion), an LM-guided diffusion model that generates music audios of state-of-the-art quality meanwhile reducing 95.7% or 99.6% forward passes in MusicLM, respectively, for sampling 10s or 30s music. MeLoDy inherits the highest-level LM from MusicLM for semantic modeling, and applies a novel dual-path diffusion (DPD) model and an audio VAE-GAN to efficiently decode the conditioning semantic tokens into waveform. DPD is proposed to simultaneously model the coarse and fine acoustics by incorporating the semantic information into segments of latents effectively via cross-attention at each denoising step. Our experimental results suggest the superiority of MeLoDy, not only in its practical advantages on sampling speed and infinitely continuable generation, but also in its state-of-the-art musicality, audio quality, and text correlation. Our samples are available at [https://Efficient-MeLoDy.github.io/](https://Efficient-MeLoDy.github.io/). ## 1 Introduction Music is an art composed of harmony, melody, and rhythm that permeates every aspect of human life. With the blossoming of deep generative models [1; 2; 3], music generation has drawn much attention in recent years [4; 5; 6]. As a prominent class of generative models, language models (LMs) [7; 8] showed extraordinary modeling capability in modeling complex relationships across long-term contexts [9; 10; 11]. In light of this, AudioLM [3] and many follow-up works [5; 12; 13; 14] successfully applied LMs to audio synthesis. Concurrent to the LM-based approaches, diffusion probabilistic models (DPMs) [1; 15; 16], as another competitive class of generative models [2; 17], have also demonstrated exceptional abilities in synthesizing speech [18; 19; 20], sounds [21; 22] and music [6; 23]. However, generating music from free-form text remains challenging as the permissible music descriptions can be very diverse and relate to any of the genres, instruments, tempo, scenarios, or even some subjective feelings. Conventional text-to-music generation models are listed in Table 1, where both MusicLM [5] and Noise2Music [6] were trained on large-scale music datasets and demonstrated the state-of-the-art (SOTA) generative performances with high fidelity and adherence to various aspects of text prompts. Yet, the success of these two methods comes with large computational costs, which would be a serious impediment to their practicalities. In comparison, Mousai [23] building upon DPMs made efficient samplings of high-quality music possible. Nevertheless, the number of their demonstrated cases was comparatively small and showed limited in-sample dynamics. Aiming for a feasible music creation tool, a high efficiency of the generative model is essential since it facilitates interactive creation with human feedback being taken into account as in [24]. 
While LMs and DPMs both showed promising results, we believe the relevant question is not whether one should be preferred over another, but whether we can leverage both approaches with respect to their individual advantages, e.g., [25]. After analyzing the success of MusicLM, we leverage the highest-level LM in MusicLM, termed the _semantic LM_, to model the semantic structure of music, determining the overall arrangement of melody, rhythm, dynamics, timbre, and tempo. Conditional on this semantic LM, we exploit the non-autoregressive nature of DPMs to model the acoustics efficiently and effectively with the help of a successful sampling acceleration technique [26]. All in all, in this paper, we introduce several novelties that constitute our main contributions: 1. We present **MeLoDy** (**M** for music; **L** for LM; **D** for diffusion), an LM-guided diffusion model that generates music of competitive quality while reducing 95.7% and 99.6% iterations of MusicLM to sample 10s and 30s music, being faster than real-time on a V100 GPU. 2. We propose the novel dual-path diffusion (DPD) models to efficiently model coarse and fine acoustic information simultaneously with a particular semantic conditioning strategy. 3. We design an effective sampling scheme for DPD, which improves the generation quality over the previous sampling method in [23] proposed for this class of LDMs. 4. We reveal a successful audio VAE-GAN that effectively learns continuous latent representations, and is capable of synthesizing audios of competitive quality together with DPD.

## 2 Related Work

**Audio Generation.** Apart from the generation models shown in Table 1, there are also music generation models [28; 29] that can generate high-quality music samples at high speed, yet they cannot accept free-form text conditions and can only be trained to specialize in single-genre music, e.g., techno music in [29]. There are also some successful music generators in the industry, e.g., Mubert [30] and Riffusion [31], yet, as analyzed in [5], they struggled to compete with MusicLM in handling free-form text prompts. In a more general scope of audio synthesis, some promising text-to-audio synthesizers [12; 21; 22] trained with AudioSet [32] have also demonstrated the ability to generate music from free-form text, but their musicality is limited. AudioLM [3] unconditionally continued piano audios with promising fidelity. Parallel to this work, SoundStorm [33] greatly accelerated AudioLM with a non-autoregressive decoding scheme [34], such that the acoustic LM can be decoded in 27 forward passes. In comparison, neglecting the individual cost of networks, MeLoDy takes 5 to 20 forward passes to generate acoustics of high fidelity, as discussed in Section 5.

**Network Architecture.** The architecture designed for our proposed DPD was inspired by the dual-path networks used in the context of audio separation, where Luo et al. [35] initiated the idea of segmentation-based dual-path processing and triggered a number of follow-up works achieving state-of-the-art results [36; 37; 38; 39; 40]. Noticing that the objective in diffusion models can indeed be viewed as a special case of source separation, this kind of dual-path architecture effectively provides a basis for simultaneous coarse-and-fine acoustic modeling, as sketched below.
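To illustrate the segmentation-based dual-path idea of [35] in its simplest form (a generic sketch under assumed toy settings, not MeLoDy's exact blocks), a long sequence is folded into segments so that one path models fine, within-segment structure and the other models coarse, across-segment structure:

```python
import numpy as np

def segment(x, K):
    """Chunk an (L, D) sequence into (S, K, D) segments, zero-padding the tail."""
    L, D = x.shape
    S = int(np.ceil(L / K))
    pad = np.zeros((S * K - L, D))
    return np.concatenate([x, pad]).reshape(S, K, D)

def dual_path_block(segs, intra_fn, inter_fn):
    """One dual-path block: process within segments, then across segments."""
    S, K, D = segs.shape
    fine = intra_fn(segs.reshape(S * K, D)).reshape(S, K, D)      # fine path
    coarse = inter_fn(fine.transpose(1, 0, 2).reshape(K * S, D))  # coarse path
    return coarse.reshape(K, S, D).transpose(1, 0, 2)

x = np.random.default_rng(0).standard_normal((1000, 16))
segs = segment(x, K=50)
# Identity/tanh stand in for the learned intra-/inter-segment layers
out = dual_path_block(segs, intra_fn=np.tanh, inter_fn=lambda h: h)
print(segs.shape, out.shape)  # (20, 50, 16) (20, 50, 16)
```

In actual dual-path networks the `intra_fn` and `inter_fn` placeholders are learned layers (e.g., recurrent or attention blocks), and several such blocks are stacked.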
| **Model** | **Prompts** | **Training Data** | **AC** | **FR** | **VT** | **MP** |
| --- | --- | --- | --- | --- | --- | --- |
| Moüsai [23] | Text | 2.5k hours of music | ✓ | ✓ | ✗ | ✗ |
| MusicLM [5] | Text, Melody | 280k hours of music | ✓ | ✗ | ✓ | ✗ |
| Noise2Music [6] | Text | 340k hours of music | ✗ | ✗ | ✓ | ✗ |
| **MeLoDy** (Ours) | Text, Audio | 257k hours of music | ✓ | ✓ | ✓ | ✓ |

Table 1: A comparison of MeLoDy with conventional text-to-music generation models in the literature. We use **AC** to denote whether audio continuation is supported, **FR** to denote whether the sampling is faster than real-time on a V100 GPU, **VT** to denote whether the model has been tested and demonstrated using various types of text prompts including instruments, genres, and long-form rich descriptions, and **MP** to denote whether the evaluation was done by music producers.

## 3 Background on Audio Language Modeling

This section provides the preliminaries that serve as the basis for our model. In particular, we briefly describe the audio language modeling framework used in MusicLM.

### 3.1 Audio Language Modeling with MusicLM

MusicLM [5] mainly follows the audio language modeling framework presented in AudioLM [3], where audio synthesis is viewed as a language modeling task over a hierarchy of coarse-to-fine audio tokens. In AudioLM, there are two kinds of tokenization for representing different scopes of audio: * **Semantic Tokenization**: K-means over representations from SSL, e.g., w2v-BERT [41]; * **Acoustic Tokenization**: Neural audio codec, e.g., SoundStream [42]. To better handle the hierarchical structure of the acoustic tokens, AudioLM further separates the modeling of acoustic tokens into coarse and fine stages. In total, AudioLM defines three LM tasks: (1) semantic modeling, (2) coarse acoustic modeling, and (3) fine acoustic modeling. We generally define the sequence of conditioning tokens as \(\mathbf{c}_{1:T_{\text{val}}}:=[\mathbf{c}_{1},\dots,\mathbf{c}_{T_{\text{val}}}]\) and the sequence of target tokens as \(\mathbf{u}_{1:T_{\text{tr}}}:=[\mathbf{u}_{1},\dots,\mathbf{u}_{T_{\text{tr}}}]\). In each modeling task, a Transformer-decoder language model parameterized by \(\theta\) is tasked to solve the following autoregressive modeling problem: \[p_{\theta}(\mathbf{u}_{1:T_{\text{tr}}}|\mathbf{c}_{1:T_{\text{val}}})=\prod_{j=1}^{T_{\text{tr}}}p_{\theta}(\mathbf{u}_{j}|[\mathbf{c}_{1},\dots,\mathbf{c}_{T_{\text{val}}},\mathbf{u}_{1},\dots,\mathbf{u}_{j-1}]), \tag{1}\] where the conditioning tokens are concatenated to the target tokens as prefixes. In AudioLM, semantic modeling takes no condition; coarse acoustic modeling takes the semantic tokens as conditions; fine acoustic modeling takes the coarse acoustic tokens as conditions. The three corresponding LMs can be trained in parallel with the ground-truth tokens, but need to be sampled sequentially for inference.

#### 3.1.1 Joint Tokenization of Music and Text with MuLan and RVQ

To maintain the merit of audio-only training, MusicLM relies on MuLan [43], which is a two-tower, joint audio-text embedding model that can be individually trained with large-scale music data and weakly-associated, free-form text annotations. The MuLan model is pre-trained to project the music audio and its corresponding text description into the same embedding space such that the associated embeddings can be close to each other.
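To ground these preliminaries, the toy sketch below illustrates the prefix-conditioned factorization in Eq. (1): conditioning tokens are simply prepended, and target tokens are decoded one at a time (here `next_token_logits` is a dummy stand-in for a Transformer decoder, not MusicLM's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 1024  # toy vocabulary size for semantic/acoustic tokens

def next_token_logits(context):
    # Placeholder for a Transformer-decoder forward pass p_theta(u_j | context).
    # A real model would attend over the whole prefix; here we just hash it.
    h = (sum(context) * 2654435761) % VOCAB
    logits = np.zeros(VOCAB)
    logits[h] = 5.0  # peaky but not fully deterministic toy distribution
    return logits

def sample_lm(conditioning_tokens, num_target_tokens, temperature=1.0):
    """Decode u_1..u_T autoregressively with conditioning prefixes, per Eq. (1)."""
    context = list(conditioning_tokens)
    targets = []
    for _ in range(num_target_tokens):
        logits = next_token_logits(context) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        u_j = rng.choice(VOCAB, p=probs)
        context.append(u_j)   # the sampled token joins the context
        targets.append(u_j)
    return targets

print(sample_lm(conditioning_tokens=[3, 141, 59], num_target_tokens=8))
```

This sequential dependence is exactly why sampling the full MusicLM hierarchy is expensive, motivating MeLoDy's diffusion-based acoustic modeling.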
In MusicLM, the MuLan embeddings of music and text are tokenized using a separately learned residual vector quantization (RVQ) [42]. Different from AudioLM, MusicLM employs the MuLan tokens as additional prefixing tokens, as in Eq. (1), for the semantic modeling and the coarse acoustic modeling. During training, the audio is first fed to the MuLan music tower to obtain the music embedding. Then, an RVQ is applied to the music embedding, resulting in the ground-truth MuLan tokens for conditioning the semantic LM and the coarse acoustic LM. To generate music from a text prompt, the text embedding obtained from the MuLan text tower is passed to the same RVQ and is discretized into the inference-time MuLan tokens. Based on the prefixing MuLan tokens, the semantic tokens, coarse acoustic tokens, and fine acoustic tokens are subsequently computed to generate high-fidelity music audio adhering to the text prompt.

## 4 Model Description

The overall training and sampling pipelines of MeLoDy are shown in Figure 1, where we have three modules for representation learning: (1) MuLan, (2) Wav2Vec2-Conformer, and (3) audio VAE, and two generative models: a language model (LM) and a dual-path diffusion (DPD) model, respectively, for semantic modeling and acoustic modeling. In the same spirit as MusicLM, we leverage an LM to model the semantic structure of music for its promising capability of modeling complex relationships across long-term contexts [9; 10; 11]. Similar to MusicLM, we pre-train a MuLan model to obtain the conditioning tokens. For semantic tokenization, we opt to use the Wav2Vec2-Conformer model, which follows the same architecture as Wav2Vec2 [44] but employs the Conformer blocks [45] in place of the Transformer blocks. The remainder of this section presents our newly proposed DPD model and the audio VAE-GAN used with the DPD model, while the other modules overlapping with MusicLM are described in Appendix B regarding the training and implementation details.

### 4.1 Dual-Path Diffusion: Angle-Parameterized Continuous-Time Latent Diffusion Models

The proposed dual-path diffusion (DPD) model is a variant of diffusion probabilistic models (DPMs) [15; 46; 1; 47] in continuous time [47; 48; 49; 16]. Instead of directly operating on the raw data \(\mathbf{x}\sim p_{\text{data}}(\mathbf{x})\), with reference to the latent diffusion models (LDMs) [2], we consider a low-dimensional latent representation \(\mathbf{z}=\mathcal{E}_{\phi}(\mathbf{x})\), where \(\phi\) parameterizes a pre-trained autoencoder that enables reconstruction of the raw data from the latent: \(\mathbf{x}\approx\mathcal{D}_{\phi}(\mathbf{z})\). Here, we use \(\mathcal{E}_{\phi}\) to denote the encoder, and \(\mathcal{D}_{\phi}\) to denote the decoder. By working on a low-dimensional latent space, the computational burden of DPMs can be significantly relieved [2]. We present our audio autoencoder in Section 4.2, which is tailored for DPMs and performed most stably in our experiments. In DPD, we consider a Gaussian diffusion process \(\mathbf{z}_{t}\) that is fully specified by two strictly positive scalar-valued, continuously differentiable functions \(\alpha_{t},\sigma_{t}\) [16]: \(q(\mathbf{z}_{t}|\mathbf{z})=\mathcal{N}(\mathbf{z}_{t};\alpha_{t}\mathbf{z},\sigma_{t}^{2}\mathbf{I})\) for any \(t\in[0,1]\). In light of [48], we define \(\alpha_{t}:=\cos(\pi t/2)\) and \(\sigma_{t}:=\sin(\pi t/2)\) to benefit from some nice trigonometric properties, i.e., \(\sigma_{t}=\sqrt{1-\alpha_{t}^{2}}\) (a.k.a. variance-preserving [16]).
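As a small, self-contained illustration (a sketch under the definitions above, not the authors' code), the forward process \(q(\mathbf{z}_{t}|\mathbf{z})\) with \(\alpha_{t}=\cos(\pi t/2)\) and \(\sigma_{t}=\sin(\pi t/2)\) can be simulated directly; the toy latent shape is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse(z, t):
    """Draw z_t ~ q(z_t | z) = N(cos(pi t/2) z, sin(pi t/2)^2 I), t in [0, 1]."""
    alpha_t = np.cos(np.pi * t / 2.0)
    sigma_t = np.sin(np.pi * t / 2.0)
    eps = rng.standard_normal(z.shape)
    return alpha_t * z + sigma_t * eps

z = rng.standard_normal((250, 64))  # toy latents: L frames x D dims
for t in (0.0, 0.5, 1.0):
    z_t = diffuse(z, t)
    # Variance-preserving: E[z_t^2] stays ~1 when E[z^2] = 1
    print(f"t = {t:.1f}: mean square = {np.mean(z_t**2):.3f}")
```

At \(t=0\) the latent is untouched, while at \(t=1\) it is pure Gaussian noise; the second moment remains approximately one throughout, which is the variance-preserving property.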
By this definition, \(\mathbf{z}_{t}\) can be elegantly re-parameterized in terms of angles \(\delta\):

\[\mathbf{z}_{\delta}=\cos(\delta)\mathbf{z}+\sin(\delta)\mathbf{\epsilon}\ \ \ \text{for any}\ \ \delta\in[0,\pi/2],\ \ \ \mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{2}\]

Note that \(\mathbf{z}_{\delta}\) gets noisier as \(\delta\) increases from \(0\) to \(\pi/2\), which defines the forward diffusion process. To generate samples, we use a \(\theta\)-parameterized variational model \(p_{\theta}(\mathbf{z}_{\delta-\omega}|\mathbf{z}_{\delta})\) to invert the diffusion process by enabling running backward in angle with \(0<\omega\leq\delta\). Based on this model, we can sample \(\mathbf{z}\) from \(\mathbf{z}_{\pi/2}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) with \(T\) sampling steps, by discretizing \(\pi/2\) into \(T\) segments as follows:

\[p_{\theta}(\mathbf{z}|\mathbf{z}_{\pi/2})=\int_{\mathbf{z}_{\delta_{1:T-1}}}\prod_{t=1}^{T}p_{\theta}(\mathbf{z}_{\delta_{t}-\omega_{t}}|\mathbf{z}_{\delta_{t}})\,d\mathbf{z}_{\delta_{1:T-1}},\ \ \ \delta_{t}=\begin{cases}\frac{\pi}{2}-\sum_{i=t+1}^{T}\omega_{i},&1\leq t<T;\\ \frac{\pi}{2},&t=T,\end{cases} \tag{3}\]

where the _angle schedule_, denoted by \(\omega_{1},\dots,\omega_{T}\), satisfies \(\sum_{t=1}^{T}\omega_{t}=\pi/2\). Schneider et al. [23] proposed a uniform angle schedule: \(\omega_{t}=\frac{\pi}{2T}\) for all \(t\). As revealed in previous scheduling methods [50; 51] for DPMs, taking larger steps at the beginning of the sampling followed by smaller steps could improve the quality of samples. Following this strategy, we design a new linear angle schedule, which empirically gives more stable and higher-quality results, and is written as

\[\omega_{t}=\frac{\pi}{6T}+\frac{2\pi t}{3T(T+1)}. \tag{4}\]

We extensively compare this linear angle schedule with the uniform one in [23] in Appendix D.

#### 4.1.1 Multi-Chunk Velocity Prediction for Long-Context Generation

For model training, similar to the setting in [23] for long-context generation, the neural network is tasked to predict a multi-chunk target \(\mathbf{v}_{\text{tgt}}\) that comprises \(M\) chunks of velocities, each having a different noise scale. Formally speaking, given that \(\mathbf{z},\mathbf{z}_{\delta},\mathbf{\epsilon}\in\mathbb{R}^{L\times D}\) with \(L\) representing the length of audio latents and \(D\) representing the latent dimensions, we define \(\mathbf{v}_{\text{tgt}}:=\mathbf{v}_{1}\oplus\dots\oplus\mathbf{v}_{M}\), where

\[\mathbf{v}_{m}:=\cos(\delta_{m})\mathbf{\epsilon}[L_{m-1}:L_{m},:]-\sin(\delta_{m})\mathbf{z}[L_{m-1}:L_{m},:],\ \ \ L_{m}:=\left\lfloor\frac{mL}{M}\right\rfloor. \tag{5}\]

Figure 1: The training and sampling pipelines of MeLoDy

Here, we use the NumPy slicing syntax (\(0\) as the first index) to locate the \(m\)-th chunk, and we draw \(\delta_{m}\sim\text{Uniform}[0,\pi/2]\) for each chunk at each training step to determine the noise scale. To learn \(\theta\), we use the mean squared error (MSE) loss in [1, 48]:

\[\mathcal{L}_{\text{diff}}:=\mathbb{E}_{\mathbf{z},\boldsymbol{\epsilon},\delta_{1},\dots,\delta_{M}}\left[\left\|\mathbf{v}_{\text{tgt}}-\hat{\mathbf{v}}_{\theta}(\mathbf{z}_{\text{noisy}};\mathbf{c})\right\|_{2}^{2}\right], \tag{6}\]
\[\mathbf{z}_{\text{noisy}}:=\cos(\delta_{m})\mathbf{z}[L_{m-1}:L_{m},:]+\sin(\delta_{m})\boldsymbol{\epsilon}[L_{m-1}:L_{m},:], \tag{7}\]

where \(\mathbf{c}\) generally denotes the collection of conditions used for the velocity prediction.
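As a concrete illustration of the objective above, here is a minimal NumPy sketch (ours, not the authors' code) of the chunk boundaries \(L_m\), the velocity targets of Eq. (5), the noisy inputs of Eq. (7), and the linear angle schedule of Eq. (4); the assert verifies that the schedule angles sum to \(\pi/2\).

```python
import numpy as np

def linear_angle_schedule(T):
    """Linear angle schedule of Eq. (4); the T angles sum to pi/2."""
    t = np.arange(1, T + 1)
    return np.pi / (6 * T) + 2 * np.pi * t / (3 * T * (T + 1))

def multi_chunk_targets(z, eps, deltas):
    """Per-chunk noisy inputs (Eq. (7)) and velocity targets (Eq. (5)).
    z, eps: (L, D) latents and Gaussian noise; deltas: M chunk angles."""
    L, M = z.shape[0], len(deltas)
    bounds = [m * L // M for m in range(M + 1)]       # L_m = floor(mL/M)
    z_noisy, v_tgt = np.empty_like(z), np.empty_like(z)
    for m, d in enumerate(deltas):
        lo, hi = bounds[m], bounds[m + 1]
        z_noisy[lo:hi] = np.cos(d) * z[lo:hi] + np.sin(d) * eps[lo:hi]
        v_tgt[lo:hi] = np.cos(d) * eps[lo:hi] - np.sin(d) * z[lo:hi]
    return z_noisy, v_tgt

T = 20
assert np.isclose(linear_angle_schedule(T).sum(), np.pi / 2)
rng = np.random.default_rng(0)
z, eps = rng.standard_normal((2500, 16)), rng.standard_normal((2500, 16))
z_noisy, v_tgt = multi_chunk_targets(z, eps, rng.uniform(0, np.pi / 2, 4))
print(z_noisy.shape, v_tgt.shape)
```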
In MeLoDy, as illustrated in Figure 1, we propose to use the semantic tokens \(\mathbf{u}_{1},\dots,\mathbf{u}_{T_{\text{ST}}}\), which are obtained from the SSL model during training and generated by the LM at inference time, to condition the DPD model. In our experiments, we find that the stability of generation can be significantly improved if we use token-based discrete conditions to control the semantics of the music and let the diffusion model learn the embedding vector for each token itself. Additionally, to assist the multi-chunk prediction, we append an angle vector to the condition that represents the angles drawn for the \(M\) chunks:

\[\mathbf{c}:=\left\{\mathbf{u}_{1},\dots,\mathbf{u}_{T_{\text{ST}}},\boldsymbol{\delta}\right\},\quad\boldsymbol{\delta}:=\left[\delta_{1}\right]_{r=1}^{L_{1}}\oplus\dots\oplus\left[\delta_{M}\right]_{r=1}^{L_{M}}\in\mathbb{R}^{L} \tag{8}\]

where \(\left[a\right]_{r=1}^{B}\) denotes the operation of repeating a scalar \(a\) for \(B\) times to make a \(B\)-length vector. Given a well-trained velocity model, for sampling we apply the trigonometric identities to the DDIM sampling algorithm [26] (see Appendix A) and obtain a simplified update rule:

\[\mathbf{z}_{\delta_{t}-\omega_{t}}=\cos(\omega_{t})\mathbf{z}_{\delta_{t}}-\sin(\omega_{t})\hat{\mathbf{v}}_{\theta}(\mathbf{z}_{\delta_{t}};\mathbf{c}), \tag{9}\]

by which, using the angle schedule in Eq. (4) and running from \(t=T\) to \(t=1\), we obtain a sample of \(\mathbf{z}\).

#### 4.1.2 Dual-Path Modeling for Efficient and Effective Velocity Prediction

Next, we present how \(\hat{\mathbf{v}}_{\theta}\) takes in the noisy latent and the conditions and efficiently incorporates the semantic tokens into the coarse processing path for effective velocity prediction. As a highlight of this work, we modify the dual-path technique borrowed from audio separation [35, 37, 38], and propose a novel architecture for efficient, simultaneous coarse and fine acoustic modeling, as shown in Figure 2. This architecture comprises several critical modules, which we present one by one below. To begin with, we describe how the conditions are processed in DPD (the middle part in Figure 2):

**Encoding Angle Vector** First, we encode \(\boldsymbol{\delta}\in\mathbb{R}^{L}\), which records the frame-level noise scales of the latents. Instead of using the classical positional encoding [1], we use a Slerp-like spherical interpolation [52] between two learnable vectors \(\mathbf{e}_{\text{start}},\mathbf{e}_{\text{end}}\in\mathbb{R}^{256}\) based on broadcast multiplications \(\otimes\):

\[\mathbf{E}_{\boldsymbol{\delta}}:=\text{MLP}\left(\cos(\boldsymbol{\delta})\otimes\mathbf{e}_{\text{start}}+\sin(\boldsymbol{\delta})\otimes\mathbf{e}_{\text{end}}\right)\in\mathbb{R}^{L\times D_{\text{hid}}}, \tag{10}\]

where \(\text{MLP}(\mathbf{x}):=\text{RMSNorm}(\text{GELU}(\mathbf{x}\mathbf{W}_{1}+\mathbf{b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2})\) projects an arbitrary input \(\mathbf{x}\in\mathbb{R}^{D_{\text{in}}}\) to \(\mathbb{R}^{D_{\text{hid}}}\) using RMSNorm [53] and GELU activation [54]. Here, \(D_{\text{hid}}\) is the hidden dimension defined for the model, and \(\mathbf{W}_{1}\in\mathbb{R}^{D_{\text{in}}\times D_{\text{hid}}}\), \(\mathbf{W}_{2}\in\mathbb{R}^{D_{\text{hid}}\times D_{\text{hid}}}\), \(\mathbf{b}_{1},\mathbf{b}_{2}\in\mathbb{R}^{D_{\text{hid}}}\) are the learnable parameters.
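Assuming a trained velocity model, the sampler of Eq. (9) reduces to a few lines; the sketch below uses the linear angle schedule of Eq. (4), and the placeholder `v_model` stands in for the conditional DPD network \(\hat{\mathbf{v}}_{\theta}(\cdot;\mathbf{c})\).

```python
import numpy as np

def sample_latents(v_model, shape, T=20, rng=None):
    """Reverse process of Eq. (9): z_{delta - omega} = cos(omega) z_delta
    - sin(omega) v_hat, run from t = T down to t = 1 with the linear
    angle schedule of Eq. (4)."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(1, T + 1)
    omega = np.pi / (6 * T) + 2 * np.pi * t / (3 * T * (T + 1))
    z = rng.standard_normal(shape)            # z_{pi/2} ~ N(0, I)
    for step in range(T - 1, -1, -1):
        z = np.cos(omega[step]) * z - np.sin(omega[step]) * v_model(z)
    return z

# With a dummy velocity model the loop runs end to end and returns (L, D).
z = sample_latents(lambda z: np.zeros_like(z), shape=(2500, 16), T=20)
print(z.shape)
```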
Figure 2: The proposed dual-path diffusion (DPD) model

**Encoding Semantic Tokens** The remaining conditions are the discrete tokens representing semantic information, \(\mathbf{u}_{1},\ldots,\mathbf{u}_{T_{\text{ST}}}\). Following the typical approach to embedding natural languages [8], we directly use a lookup table of vectors to map any token \(\mathbf{u}_{t}\in\{1,\ldots,V_{\text{ST}}\}\) into a real-valued vector \(E(\mathbf{u}_{t})\in\mathbb{R}^{D_{\text{hid}}}\), where \(V_{\text{ST}}\) denotes the vocabulary size of the semantic tokens, i.e., the number of clusters in k-means for Wav2Vec2-Conformer. By stacking the vectors along the time axis and applying an MLP block, we obtain \(\mathbf{E}_{\text{ST}}:=\text{MLP}\left(\left[E(\mathbf{u}_{1}),\ldots,E(\mathbf{u}_{T_{\text{ST}}})\right]\right)\in\mathbb{R}^{T_{\text{ST}}\times D_{\text{hid}}}\).

Next, we show how the network input (i.e., \(\mathbf{z}_{\text{noisy}}\) at training time, or \(\mathbf{z}_{\delta_{t}}\) at inference time) is processed given the condition embeddings. We use \(\mathbf{z}_{\text{noisy}}\) as the input in the explanation below, since \(\mathbf{z}_{\delta_{t}}\) is simply the special case in which all chunks share the same noise scale. The input \(\mathbf{z}_{\text{noisy}}\) is first linearly transformed and added to the angle embedding of the same shape: \(\mathbf{H}:=\text{RMSNorm}\left(\mathbf{z}_{\text{noisy}}\mathbf{W}_{\text{in}}+\mathbf{E}_{\boldsymbol{\delta}}\right),\) where \(\mathbf{W}_{\text{in}}\in\mathbb{R}^{D\times D_{\text{hid}}}\) is learnable. We then perform segmentation for dual-path processing.

**Segmentation** As shown in Figure 3(a), the segmentation module divides a 2-D input into \(S\) half-overlapping segments, each of length \(K\), represented by a 3-D tensor \(\mathbb{H}:=\left[\mathbf{0},\mathbf{H}_{1},\ldots,\mathbf{H}_{S},\mathbf{0}\right]\in\mathbb{R}^{S\times K\times D_{\text{hid}}}\), where \(\mathbf{H}_{s}:=\mathbf{H}\left[\frac{(s-1)K}{2}:\frac{(s-1)K}{2}+K,:\right],\) and \(\mathbb{H}\) is zero-padded such that we have \(S=\left\lceil\frac{2L}{K}\right\rceil+1\). With a segment size \(K\approx\sqrt{L}\), the length for sequence processing becomes sub-linear (\(\mathcal{O}(\sqrt{L})\)) as opposed to tackling the whole sequence (\(\mathcal{O}(L)\)). This greatly reduces the difficulty of learning very long sequences and permits MeLoDy to use higher-frequency latents.
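A hedged NumPy sketch of this segmentation and of the overlap-and-add used later to invert it; the zero padding follows the description above, and with \(L=2500\) and \(K=64\) it reproduces the \(S=80\) segments quoted in Section 5. Plain overlap-and-add doubles every interior frame (each is covered by two half-overlapping segments), a constant factor that a real implementation has to normalize away; the final assert makes this explicit.

```python
import numpy as np

def segment(H, K):
    """Split (L, D) into half-overlapping segments of length K (hop K/2),
    zero-padding so every frame is covered by exactly two segments."""
    L, D = H.shape
    hop = K // 2
    S = int(np.ceil(L / hop)) + 1
    padded = np.zeros(((S - 1) * hop + K, D))
    padded[hop:hop + L] = H            # leading zero half-segment
    return np.stack([padded[s * hop: s * hop + K] for s in range(S)])

def overlap_add(segments, L):
    """Inverse of segment(): overlap-and-add back to an (L, D) matrix."""
    S, K, D = segments.shape
    hop = K // 2
    out = np.zeros(((S - 1) * hop + K, D))
    for s in range(S):
        out[s * hop: s * hop + K] += segments[s]
    return out[hop:hop + L]

H = np.random.default_rng(0).standard_normal((2500, 768))
segs = segment(H, K=64)
print(segs.shape)                      # (80, 64, 768)
# Each interior frame is covered twice, so plain overlap-add doubles it.
assert np.allclose(overlap_add(segs, 2500), 2 * H)
```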
**Dual-Path Blocks** After the segmentation, we obtain a 3-D tensor input for \(N\) dual-path blocks, each of which exhibits the architecture shown on the rightmost side of Figure 2. The input to the \(i\)-th dual-path block is denoted as \(\mathbb{H}^{(i)}\), and we have \(\mathbb{H}^{(1)}:=\mathbb{H}\). Each block contains two stages corresponding to coarse-path (i.e., inter-segment) and fine-path (i.e., intra-segment) processing, respectively. Similar to the observations in [37, 38], we find it superior to use an attention-based network for coarse-path processing and a bi-directional RNN for fine-path processing. The goal of fine acoustic modeling is to better reconstruct the fine details from the roughly determined audio structure [3]. At a finer scope, only the nearby elements matter and contain the most information needed for refinement, as supported by the modeling perspectives in neural vocoding [55, 56]. Specifically, we employ the Roformer network [57] for coarse-path processing, where we use a self-attention layer followed by a cross-attention layer conditioned on \(\mathbf{E}_{\text{ST}}\) with rotary positional embedding. On the other hand, we use a stack of 2-layer simple recurrent units (SRUs) [58] for fine-path processing. Feature-wise linear modulation (FiLM) [59] is applied to the output of the SRUs to assist the denoising with the angle embedding \(\mathbf{E}_{\boldsymbol{\delta}}\) and the pooled \(\mathbf{E}_{\text{ST}}\). Each of these processing stages is detailed below.

**Coarse-Path Processing** In a dual-path block, we first process the coarse path corresponding to the vertical axis shown in Figure 3(a), in which the columns are processed in parallel:

\[\mathbb{H}^{(i)}_{\text{c-out}}:=\text{RepeatSegments}\left(\left[\text{Roformer}\left(\text{MergeSegments}\left(\mathbb{H}^{(i)}\right)[:,k,:]\right),k=0,\ldots,K^{(i)}_{\text{MS}}-1\right]\right), \tag{11}\]

where the coarse-path output \(\mathbb{H}^{(i)}_{\text{c-out}}\in\mathbb{R}^{S\times K\times D_{\text{hid}}}\) has the same shape as \(\mathbb{H}^{(i)}\), and \(\text{MergeSegments}(\cdot)\) and \(\text{RepeatSegments}(\cdot)\) are the operations that, respectively, compress and expand the segments horizontally to aggregate the information within a segment for a coarser scale of inter-segment processing.

Figure 3: Diagrams for visually understanding the operations over the 3-D segments

Note that, without the merging and repeating operations, the vertical axis is simply a sequence formed by skipping \(K/2\) elements in \(\mathbf{H}\), which does not really capture the desired coarse information. The merging is done by averaging groups of \(2^{\min\{i,N-i+1\}-1}\) adjacent columns with zero padding and a half stride, such that \(K^{(i)}_{\text{MS}}=\left\lceil\frac{K}{2^{\min\{i,N-i+1\}-1}}\right\rceil\). The upper part of Figure 3(b) illustrates the case of \(i=2\). Similar to [38], our definition of \(K^{(i)}_{\text{MS}}\) changes the width of the 3-D tensor with the block index \(i\) in a sandglass style, as we have the shortest segments at the middle block and the longest segments at the first and the last blocks. To match the original length, a repeating operation is performed after the Roformer, as shown in the lower part of Figure 3(b).

**Fine-Path Processing** We then obtain the fine-path input \(\mathbb{H}^{(i)}_{\text{f-in}}:=\text{RMSNorm}\left(\mathbb{H}^{(i)}+\mathbb{H}^{(i)}_{\text{c-out}}\right),\) which is fed to a two-layer SRU by processing the rows illustrated in Figure 3(a) in parallel:

\[\mathbb{H}^{(i)}_{\text{f-out}}:=\left[\text{FiLM}\left(\text{SRU}\left(\mathbb{H}^{(i)}_{\text{f-in}}[s,:,:]\right),\ \mathbf{E}_{\boldsymbol{\delta}}\left[\left\lfloor\tfrac{sL}{S}\right\rfloor,:\right]+\frac{1}{T_{\text{ST}}}\sum_{t=0}^{T_{\text{ST}}-1}\mathbf{E}_{\text{ST}}[t,:]\right),\ s=0,\dots,S-1\right], \tag{12}\]

where \(\text{FiLM}(\mathbf{x},\mathbf{m}):=\text{MLP}_{3}\left((\mathbf{x}\otimes\text{MLP}_{1}(\mathbf{m}))+\text{MLP}_{2}(\mathbf{m})\right)\) for an arbitrary input \(\mathbf{x}\) and modulation condition \(\mathbf{m}\), and \(\otimes\) is the operation of broadcast multiplication. Following this, we have the input for the next dual-path block: \(\mathbb{H}^{(i+1)}:=\text{RMSNorm}\left(\mathbb{H}^{(i)}_{\text{f-in}}+\mathbb{H}^{(i)}_{\text{f-out}}\right).\) After recursively processing through \(N\) dual-path blocks, the 3-D tensor is transformed back to a 2-D matrix using an overlap-and-add method [35].
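To make the wiring concrete, below is a schematic NumPy sketch of the FiLM modulation defined above and of one dual-path block; the Roformer and SRU are reduced to placeholder callables (`coarse_fn`, `fine_fn`), and the merge/repeat operations and conditioning are omitted, so the sketch only illustrates how inter-segment and intra-segment processing alternate with residual RMSNorm connections.

```python
import numpy as np

def rms_norm(x, eps=1e-8):
    return x / np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)

def film(x, m, mlp1, mlp2, mlp3):
    """FiLM(x, m) = MLP3((x * MLP1(m)) + MLP2(m)); m modulates x feature-wise."""
    return mlp3(x * mlp1(m) + mlp2(m))

def dual_path_block(H, coarse_fn, fine_fn):
    """H: (S, K, D) segments. The coarse path mixes information across the S
    segments (columns of Figure 3(a)); the fine path runs along each K-length
    segment (rows), with residual connections and RMSNorm in between."""
    S, K, D = H.shape
    # Coarse (inter-segment): process each of the K columns across segments.
    H_c = np.stack([coarse_fn(H[:, k, :]) for k in range(K)], axis=1)
    H_f_in = rms_norm(H + H_c)
    # Fine (intra-segment): process each of the S rows along the segment.
    H_f = np.stack([fine_fn(H_f_in[s]) for s in range(S)], axis=0)
    return rms_norm(H_f_in + H_f)

H = np.random.default_rng(0).standard_normal((80, 64, 16))
out = dual_path_block(H, coarse_fn=lambda c: c, fine_fn=lambda r: r)
print(out.shape)                                    # (80, 64, 16)
m = np.ones(16)                                     # a dummy modulation vector
print(film(H[0], m, lambda v: v, lambda v: 0 * v, lambda v: v).shape)
```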
Finally, the predicted velocity is obtained as follows:

\[\hat{\mathbf{v}}_{\theta}(\mathbf{z}_{\text{noisy}};\mathbf{c}):=\text{RMSNorm}\left(\text{OverlapAdd}\left(\mathbb{H}^{(N+1)}\right)\right)\mathbf{W}_{\text{out}}, \tag{13}\]

where \(\mathbf{W}_{\text{out}}\in\mathbb{R}^{D_{\text{hid}}\times D}\) is learnable. We present more details of our implementation in Appendix B.

### Audio VAE-GANs for Latent Representation Learning

To avoid learning arbitrarily high-variance latent representations, Rombach et al. [2] examined a KL-regularized image autoencoder for latent diffusion models (LDMs) and demonstrated extraordinary stability in generating high-quality images [60], igniting a series of follow-up works [61]. Such an autoencoder imposes a KL penalty on the encoder outputs in a way similar to VAEs [62; 63], but, different from the classical VAEs, it is adversarially trained as in generative adversarial networks (GANs) [64]. In this paper, this class of autoencoders is referred to as the VAE-GAN. Although VAE-GANs have been successfully applied to image generation, there is still a lack of comparably successful methods for the autoencoding of audio waveforms. In this work, we propose a similarly trained audio VAE-GAN, which empirically showed remarkable stability when applied to our DPD model in comparison to the VQ-VAEs commonly used in [12; 21; 65]. Specifically, the audio VAE-GAN is trained to reconstruct 24kHz audio with a striding factor of 96, resulting in a 250Hz latent sequence. The architecture of the decoder is the same as that in HiFi-GAN [66]. For the encoder, we basically replace the up-sampling modules in the decoder with convolution-based down-sampling modules, while the other modules stay the same. For adversarial training, we use the multi-period discriminators in [66] and the multi-resolution spectrogram discriminators in [67]. The training details are further discussed in Appendix B. To match the normal range of targets for diffusion models [1; 2], we map the encoder outputs to \([-1,1]\) by \(\mathbf{z}_{(i,j)}:=\min\left\{\max\left\{\mathbf{z}_{(i,j)}/3,-1\right\},1\right\}\) for all \(i,j\), where the subscript \((i,j)\) denotes the value in the \(i\)-th row and \(j\)-th column; in practice, the divisor of \(3\) clips only extreme values, which occupy \(<0.1\%\) of the entries.

### Music Inpainting, Music Continuation and Music Prompting with MeLoDy

We show that the proposed MeLoDy supports interpolation (i.e., audio inpainting) and extrapolation (i.e., audio continuation) by manipulating the sampling noise. Notably, diffusion models have been successfully used for effective audio inpainting [21; 22]. Yet, audio continuation has been an obstacle for diffusion models due to their non-autoregressive nature. Besides audio continuation, based on MuLan, MeLoDy also supports music prompting to generate music of a similar style, as shown in Figure 1. Examples of music inpainting, music continuation, and music prompting are shown on our demo page. We present the algorithms for these functionalities in Appendix C.

## 5 Experiments

### Experimental Setup

**Data Preparation** As shown in Table 1, MeLoDy was trained on 257k hours of music data (6.4M 24kHz audios), which were filtered with [27] to focus on non-vocal music. Additionally, inspired by the text augmentation in [6], we enriched the tag-based texts to generate music captions by asking ChatGPT [68].
This music description pool is used for the training of our 195.3M MuLan, where we randomly paired each audio with either the generated caption or its respective tags. In this way, we robustly improve the model's capability of handling free-form text.

**Semantic LM** For semantic modeling, we trained a 429.5M LLaMA [69] with 24 layers, 8 heads, and 2048 hidden dimensions, which has a comparable number of parameters to that of MusicLM [5]. For conditioning, we set up the MuLan RVQ using 12 1024-sized codebooks, resulting in 12 prefixing tokens. The training targets were 10s semantic tokens, obtained by discretizing the 25Hz embeddings from a 199.5M Wav2Vec2-Conformer with 1024-center k-means.

**Dual-Path Diffusion** For the DPD model, we set the hidden dimension to \(D_{\text{hid}}=768\) and the block number to \(N=8\), resulting in 296.6M parameters. For the input chunking strategy, we divide the 10s training inputs of fixed length \(L=2500\) into \(M=4\) parts. For segmentation, we used a segment size of \(K=64\) (i.e., each segment is 256ms long), leading to \(S=80\) segments. In addition, we applied classifier-free guidance [70] to DPD to improve the correspondence between samples and conditions. During training, the cross-attention to semantic tokens is randomly replaced by self-attention with a probability of \(0.1\). For sampling, the predicted velocity is a linear combination of the conditional and unconditional velocity predictions. For all of our generations, a scale of 2.5 was used for classifier-free guidance.

**Audio VAE-GAN** For the audio VAE-GAN, we used a hop size of 96, resulting in 250Hz latent sequences for encoding 24kHz music audio. The latent dimension is \(D=16\), thus we have a total compression rate of 6\(\times\). The hidden channels used in the encoder were 256, whereas those used in the decoder were 768. The audio VAE-GAN in total contains 100.1M parameters.

### Performance Analysis

**Objective Metrics** We use the VGGish-based [71] Fréchet audio distance (FAD) [72] between the generated audios and the reference audios from MusicCaps [5] as a rough measure of generation fidelity.2 To measure text correlation, we use the MuLan cycle consistency (MCC) [5], which calculates the cosine similarity between text and audio embeddings using a pre-trained MuLan.3

Footnote 2: Note that MeLoDy was mainly trained with non-vocal music data, so its sample distribution could not fit the reference one as well as in [5, 6], since about 76% of the audios in MusicCaps contain either vocals or speech.

Footnote 3: Since our MuLan model was trained with a different dataset, our MCC results cannot be compared to [5, 6].

**Inference Speed** We first evaluate the sampling efficiency of our proposed MeLoDy. As DPD permits using different numbers of sampling steps depending on our needs, we report its generation speed in Table 2. Surprisingly, MeLoDy steadily achieved a higher MCC score than that of the reference set, even taking only 5 sampling steps. This means that (i) the MuLan model determined that our generated samples were more correlated to the MusicCaps captions than the reference audios, and (ii) the proposed DPD is capable of consistently completing the MuLan cycle at significantly lower costs than the nested LMs in [5].
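The linear combination referred to above is not spelled out in the text; the sketch below assumes the common classifier-free guidance convention of extrapolating from the unconditional toward the conditional prediction, with the quoted scale of 2.5.

```python
import numpy as np

def guided_velocity(v_cond, v_uncond, scale=2.5):
    """Classifier-free guidance: extrapolate from the unconditional velocity
    toward the conditional one; scale=2.5 matches the setting quoted above."""
    return v_uncond + scale * (v_cond - v_uncond)

rng = np.random.default_rng(0)
v_c, v_u = rng.standard_normal((2500, 16)), rng.standard_normal((2500, 16))
print(guided_velocity(v_c, v_u).shape)   # (2500, 16)
```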
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Steps (\(T\))** & **Speed on CPU (\(\uparrow\))** & **Speed on GPU (\(\uparrow\))** & **FAD (\(\downarrow\))** & **MCC (\(\uparrow\))** \\ \hline (MusicCaps) & - & - & - & 0.43 \\ \hline 5 & **1472Hz (0.06\(\times\))** & **181.1kHz (7.5\(\times\))** & 7.23 & 0.49 \\ 10 & 893Hz (0.04\(\times\)) & 104.8kHz (4.4\(\times\)) & 5.93 & 0.52 \\ 20 & 498Hz (0.02\(\times\)) & 56.9kHz (2.4\(\times\)) & **5.41** & **0.53** \\ \hline \hline \end{tabular} \end{table} Table 2: The speed and the quality of our proposed MeLoDy on a CPU (Intel Xeon Platinum 8260 CPU @ 2.40GHz) or a GPU (NVIDIA Tesla V100) using different numbers of sampling steps.

**Comparisons with SOTA models** We evaluate the performance of MeLoDy by comparing it to MusicLM [5] and Noise2Music [6], which were both trained on large-scale music datasets and demonstrated SOTA results for a wide range of text prompts. To conduct fair comparisons, we used the same text prompts as in their demos (70 samples from MusicLM; 41 samples from Noise2Music),4 and asked seven music producers to select the better of a pair of samples or to vote for a tie (both win) in terms of musicality, audio quality, and text correlation. In total, we conducted 777 comparisons and collected 1,554 ratings. We detail the evaluation protocol in Appendix F. Table 3 shows the comparison results, where each category of ratings is separated into two columns, representing the comparison against MusicLM (MLM) or Noise2Music (N2M), respectively. Overall, MeLoDy consistently achieved comparable performance (all winning proportions fall into [0.4, 0.6]) in musicality and text correlation to MusicLM and Noise2Music. Regarding audio quality, MeLoDy outperformed MusicLM (\(p<0.05\)) and Noise2Music (\(p<0.01\)), where the \(p\)-values were calculated using the Wilcoxon signed-rank test. We note that, to sample 10s and 30s music, MeLoDy only takes 4.32% and 0.41% of the NFEs of MusicLM, and 10.4% and 29.6% of the NFEs of Noise2Music, respectively.

Footnote 4: All samples for evaluation are available at [https://Efficient-MeLoDy.github.io/](https://Efficient-MeLoDy.github.io/). Note that our samples were not cherry-picked, whereas the samples we compared against were cherry-picked [6], constituting very strong baselines.

**Diversity Analysis** Diffusion models are distinguished for their high diversity [25]. We conduct an additional experiment to study the diversity and validity of MeLoDy's generations given the same open-ended text prompt, e.g., one describing feelings or scenarios. The sampled results are shown on our demo page, in which we obtained samples with diverse combinations of instruments and textures.

**Ablation Studies** We also study ablations of two aspects of the proposed method. In Appendix D, we compared the uniform angle schedule in [23] and the linear one proposed in DPD using the MCC metric and case-by-case qualitative analysis. It turns out that our proposed schedule tends to induce fewer acoustic issues when taking a small number of sampling steps. In Appendix E, we showed that the proposed dual-path architecture outperformed other architectures [23; 31] used for LDMs in terms of signal-to-noise ratio (SNR) improvements using a subset of the training data.

## 6 Discussion

**Limitation** We acknowledge the limitations of our proposed MeLoDy. To avoid any disruption caused by unnatural-sounding vocals, our training data was prepared to mostly contain non-vocal music only, which may limit the range of effective prompts for MeLoDy.
Besides, the training corpus we used was unbalanced and slightly biased towards pop and classical music. Lastly, as we trained the LM and DPD on 10s segments, the dynamics of longer generations may be limited.

**Broader Impact** We believe our work has huge potential to grow into a music creation tool for music producers, content creators, or even ordinary users to seamlessly express their creative pursuits with a low entry barrier. MeLoDy also facilitates an interactive creation process, as in Midjourney [24], that takes human feedback into account. For more precise tuning of MeLoDy toward a musical style, the LoRA technique [73] can potentially be applied to MeLoDy, as in Stable Diffusion [60].

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**NFE** (\(\downarrow\))} & \multicolumn{2}{c}{**Musicality** (\(\uparrow\))} & \multicolumn{2}{c}{**Quality** (\(\uparrow\))} & \multicolumn{2}{c}{**Text Corr.** (\(\uparrow\))} \\ \cline{3-8} & & MLM & N2M & MLM & N2M & MLM & N2M \\ \hline MusicLM [5] & \((25+200+400)T\) & **0.541** & - & 0.465 & - & **0.548** & - \\ Noise2Music [6] & \(1000+800+800\) & - & **0.555** & - & 0.436 & - & **0.572** \\ \hline **MeLoDy** (20 steps) & \(25T+20\) & 0.459 & 0.445 & **0.535** & **0.564** & 0.452 & 0.428 \\ \hline \hline \end{tabular} \end{table} Table 3: The comparison of MeLoDy with the SOTA text-to-music generation models. **NFE** is the number of function evaluations [48] for generating \(T\)-second audio. **Musicality**, **Quality**, and **Text Corr.** are the winning proportions in terms of musicality, quality, and text correlation, respectively.
2302.09397
Evaluation of Linear Implicit Quantized State System method for analyzing mission performance of power systems
The Linear Implicit Quantized State System (LIQSS) method has been evaluated for suitability in modeling and simulation of long-duration mission profiles of Naval power systems, which are typically characterized by stiff, nonlinear differential-algebraic equations. A reference electromechanical system consists of an electric machine connected to a torque source on the shaft end and to an electric grid at its electrical terminals. The system is highly non-linear and has widely varying rate constants; at a typical steady state operating point, the electrical and electromechanical time constants differ by three orders of magnitude, being 3.2 ms and 2.7 s respectively. Two important characteristics of the simulation, accuracy and computational intensity, both depend on the quantization size of the system state variables. At a quantization size of about 1 percent of a variable's maximum value, results from the LIQSS1 method differed by less than 1 percent from results computed by well-known continuous system state space methods. The computational efficiency of the LIQSS1 method increased logarithmically with increasing quantization size, without significant loss of accuracy, up to some particular quantization size, beyond which the error increased rapidly. For the particular system under study, a sweet spot was found at a particular quantum size that yielded both high computational efficiency and good accuracy.
Navid Gholizadeh, Joseph M. Hood, Roger Dougal
2023-02-18T18:12:34Z
http://arxiv.org/abs/2302.09397v1
Evaluation of Linear Implicit Quantized State System method for analyzing mission performance of power systems

###### Abstract

The Linear Implicit Quantized State System (LIQSS) method has been evaluated for suitability in modeling and simulation of long-duration mission profiles of Naval power systems, which are typically characterized by stiff, non-linear differential-algebraic equations. A reference electromechanical system consists of an electric machine connected to a torque source on the shaft end and to an electric grid at its electrical terminals. The system is highly non-linear and has widely varying rate constants; at a typical steady state operating point, the electrical and electromechanical time constants differ by three orders of magnitude--being 3.2 ms and 2.7 s, respectively. Two important characteristics of the simulation--accuracy and computational intensity--both depend on the quantization size of the system state variables. At a quantization size of about 1% of a variable's maximum value, results from the LIQSS1 method differed by less than 1% from results computed by well-known continuous-system state-space methods. The computational efficiency of the LIQSS1 method increased logarithmically with increasing quantization size, without significant loss of accuracy, up to some particular quantization size, beyond which the error increased rapidly. For the particular system under study, a "sweet spot" was found at a particular quantum size that yielded both high computational efficiency and good accuracy.

Stiff systems, QDEVS, QSS, LIQSS, power system

+ Footnote †: University of South Carolina, 301 Main Street, Columbia, SC 29208, USA.

## Definitions

**Reference system**: An electromechanical machine driven by a torque source and coupled to an ideal electrical power bus (i.e. zero-impedance voltage source)

**Reference model**: The set of differential-algebraic equations that describe the time-domain behavior of the _Reference System_ as formulated in a reference frame rotating synchronously with the electric grid according to the Park transformation.

**Reference solution**: The solution of the _Reference Model_ obtained by applying the Euler method

**State update intensity**: The number of updates to any quantized state variable per unit of time in the simulation frame

**Pointwise error (PE)**: At any time instant, the difference between the value \(y_{i}\) of a state variable that is computed by the LIQSS method resampled at the time step of the reference simulation and the value \(q_{i}\) computed by a known-good reference simulation, i.e., \(PE=y_{i}-q_{i}\)

**Time average normalized error (TANE)**: The square root of the mean value over time of the squared PE resampled at the time step of the reference simulation and normalized by the dynamic range of \(y_{i}\) during the period of interest
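A minimal NumPy sketch of these two error metrics, assuming a zero-order-hold resampling of the event-based LIQSS trajectory onto the reference time grid (function and variable names are ours, not the paper's):

```python
import numpy as np

def resample_zoh(t_events, x_events, t_ref):
    """Resample a piecewise-constant QSS trajectory onto the reference grid
    by zero-order hold (the value set at the most recent event)."""
    idx = np.searchsorted(t_events, t_ref, side="right") - 1
    return x_events[np.clip(idx, 0, len(x_events) - 1)]

def pointwise_error(y, q):
    """PE: LIQSS value (resampled on the reference grid) minus reference value."""
    return y - q

def tane(y, q):
    """TANE: RMS of the PE, normalized by the dynamic range of y."""
    pe = pointwise_error(y, q)
    return np.sqrt(np.mean(pe ** 2)) / (np.max(y) - np.min(y))

t_ref = np.linspace(0.0, 1.0, 10_001)                    # 1e-4 s reference grid
q = np.exp(-t_ref)                                       # reference trajectory
t_ev = np.array([0.0, 0.3, 0.7]); x_ev = np.array([1.0, 0.7, 0.5])
y = resample_zoh(t_ev, x_ev, t_ref)                      # LIQSS-style trajectory
print(tane(y, q))
```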
In the QSS family of methods, the classical time-sliced integration of \(\dot{\mathbf{x}}(t)=f(\mathbf{x}(t),t)\) is replaced by the solution of the state-quantized system

\[\dot{\mathbf{x}}(t)=f(\mathbf{q}(t),t)\]

where \(\mathbf{q}\) is the quantized state vector that follows piecewise constant trajectories and is related to the state vector \(\mathbf{x}\) by the quantum size \(\Delta Q\). The literature [6, 7] defines the structure and implementation of atomic DEVS models and general-purpose simulators for QSS systems. The QSS approach guarantees a bounded error [8], so analytically stable systems cannot become numerically unstable when being simulated by a fully coupled QSS algorithm [7]. Several variations of QSS offer different features. The simplest formulation, QSS1, developed in Zeigler et al. [5] and Kofman and Junco [8], relies on explicit integration and uses first-order estimates of state derivatives to predict the time at which the continuous state \(x_{i}(t)\) will increase or decrease by the amount \(\Delta Q\) (quantization step size) from the current quantized value \(q_{i}(t)\) to the next higher or lower quantized value. Although QSS1 has some advantages, like being easy to implement, its disadvantage is that it uses a first-order approximation of the state trajectory to calculate the time to the next event; to get accurate results, \(\Delta Q\) has to be quite small, which produces a large number of steps. QSS2 [2] and QSS3 [9] use more accurate second- and third-order approximations, respectively, for the state trajectory; however, the computational cost grows with the square root and cubic root of the desired accuracy. Because we are interested in simulating realistic, stiff power systems that include both fast electrical dynamics and slow mechanical dynamics, we require a QSS method that can handle stiff systems.
This requirement eliminates QSS1, QSS2, and QSS3 because these methods create fictitious high-frequency oscillations [1], which in turn generate large numbers of steps that are costly in computation time and memory size, even when a system is nominally in steady state. We chose instead to use the LIQSS methods, which were specifically developed to address the concurrent existence of slow and fast dynamics inherent in stiff systems [1]. LIQSS methods combine classic implicit integration techniques into the QSS methods. Similar to the way that several variations of QSS methods were developed, so also were variations of LIQSS, such as LIQSS1, LIQSS2, and LIQSS3. These perform first-, second-, and third-order approximations, respectively [1]. The LIQSS2 [1, 10], MLIQSS2 [1], and LIQSS3 methods all offer improvements in performance and stability over the original LIQSS. Despite the benefits of LIQSS2 or 3, the simplicity of LIQSS1 [1, 10] compelled us to use it in this study, where our focus is on how computational intensity scales with quantization size for non-linear systems of order higher than 2 (second-order systems were already reported in Kofman [2]). Furthermore, in future work, we intend to report the performance of the combination of LIQSS1 with the latency insertion method (LIM) [4], or QDL [3], when solving stiff non-linear systems. Not only will LIQSS1 make it easier to implement the necessary models, but we also anticipate that using the first-order method will make it easier to distinguish latency effects from integration effects. If latency methods usefully improve simulator performance for first-order methods, then extensions can later be made to higher-order variations of LIQSS, perhaps with additional gains in performance and stability.

## Reference electric power system

The reference system, shown in Figure 1, has three major components--a prime mover, a synchronous machine (which can act as either a generator or a motor depending on the direction of power flow), and an AC power grid. Models of the prime mover and the grid are simplified--the torque source is an ideal time-dependent source with zero inertia, and the power system is represented as an ideal three-phase sinusoidal AC voltage source, with zero impedance and constant frequency--but these simplifications do not limit the general applicability of our results because the machine model still entails the solution of non-ideal network equations. Our particular reference system was chosen because it is of widespread interest, and because it also demonstrates the suitability of the QSS method to efficiently solve systems that are characterized by coupled fast and slow dynamics.

Figure 1: Synchronous generator connected to an infinite bus.

## Reference model

The model of the electric machine is formulated in the synchronous reference frame to eliminate the periodic sinusoidal variations of all voltages and currents. This use of the standard Park transformation [11] maximizes the value of the QSS method for the analysis of AC systems. Although QSS can be used to simulate systems having arbitrary waveforms, the sinusoidal voltage and current oscillations inherent in an AC power system, if not factored out by a method such as the Park transformation, would require rapid state updates that would obviate any benefits offered by the QSS method. A common model of the synchronous generator, when formulated in the synchronous reference frame of the Park transformation, is represented by the set of equivalent circuits shown in Figure 2.
For the system of our analysis, the basis frequency is 50 Hz. The seventh-order set of nonlinear equations that describe the dynamics of the transformed circuit is given in Equations (4)-(10), and the algebraic constraints that apply to the network solution are defined by Equations (11) and (12). Figure 2 and the following equations follow [12]. The equations that describe these circuits are as follows

\[\frac{d}{dt}\psi_{d}=V_{d}(t)-R_{s}i_{d}+\omega_{r}\psi_{q} \tag{4}\]
\[\frac{d}{dt}\psi_{q}=V_{q}(t)-R_{s}i_{q}-\omega_{r}\psi_{d} \tag{5}\]
\[\frac{d}{dt}\psi_{F}=e_{d}-i_{F}R_{F} \tag{6}\]
\[\frac{d}{dt}\psi_{D}=-\,i_{D}R_{D} \tag{7}\]
\[\frac{d}{dt}\psi_{Q}=-\,i_{Q}R_{Q} \tag{8}\]
\[\frac{d}{dt}\omega_{r}=\frac{n}{J}\left(i_{q}\psi_{d}-i_{d}\psi_{q}-T_{m}\right) \tag{9}\]
\[\frac{d}{dt}\theta=\omega_{r}-\omega_{b} \tag{10}\]
\[\begin{bmatrix}i_{dr}\\ i_{F}\\ i_{D}\end{bmatrix}=\begin{bmatrix}L_{md}+L_{L}&L_{md}&L_{md}\\ L_{md}&L_{F}+L_{md}&L_{md}\\ L_{md}&L_{md}&L_{D}+L_{md}\end{bmatrix}^{-1}\cdot\begin{bmatrix}\psi_{dr}\\ \psi_{F}\\ \psi_{D}\end{bmatrix} \tag{11}\]
\[\begin{bmatrix}i_{qr}\\ i_{Q}\end{bmatrix}=\begin{bmatrix}L_{mq}+L_{L}&L_{mq}\\ L_{mq}&L_{Q}+L_{mq}\end{bmatrix}^{-1}\cdot\begin{bmatrix}\psi_{q}\\ \psi_{Q}\end{bmatrix} \tag{12}\]

where the terms in these equations are defined as follows:

\(\psi_{d},\psi_{q}\): direct and quadrature stator fluxes in the d, q equivalent circuits, respectively
\(\psi_{D},\psi_{Q}\): direct and quadrature damper fluxes
\(V_{d}(t),V_{q}(t)\): direct and quadrature components of the stator terminal voltages, respectively
\(R_{s}\): series resistance in both d, q stator equivalent circuits
\(\omega_{r},\omega_{b}\): rotor speed in rad/s and base frequency (\(2\pi f\)), respectively, with f = 50 Hz
\(i_{d},i_{q}\): direct and quadrature components of currents in the d, q stator equivalent circuits
\(\psi_{F},i_{F}\): stator field flux and field current, respectively
\(e_{d}\): direct component of the field voltage
\(R_{F}\): field resistance
\(i_{D},i_{Q}\): direct and quadrature internal equivalent currents
\(R_{D},R_{Q}\): d and q components of the equivalent circuit resistance, respectively
\(i_{dr},i_{qr}\): d- and q-axis rotor currents
\(\theta\): rotor angle relative to the synchronous reference frame

Figure 2: Direct, quadrature, and mechanical equivalent circuits of the synchronous machine.

Two solutions to the _reference model_, when operated through the _reference scenario_, were developed--a reference solution and a solution by the LIQSS method. The reference solution was obtained by applying Euler forward integration to the state-space equations, using MATLAB as the tool. A fixed time step of \(10^{-4}\) s was chosen for the explicit Euler solution so that all eigenvalues would be stable over the entire operating range of the system. The second solution was obtained using the LIQSS method as described next. The code and the model parameters for both solutions are publicly available in a GitHub repository at the URL [https://github.com/UofSC-QDEVS/LIQSS_On_NonlinearSys](https://github.com/UofSC-QDEVS/LIQSS_On_NonlinearSys).
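For concreteness, the following is a hedged NumPy sketch of the right-hand side of Equations (4)-(12) together with the forward-Euler reference stepping; the parameter names and the `inputs` callable (returning \(V_d\), \(V_q\), \(e_d\), and \(T_m\) at time \(t\)) are our own conventions, and the dummy values below are placeholders only; the actual model parameters are in the repository linked above.

```python
import numpy as np

def machine_rhs(x, t, p):
    """State derivatives for the seventh-order machine model, Eqs. (4)-(10),
    with currents recovered from fluxes via the constraints (11)-(12).
    x = [psi_d, psi_q, psi_F, psi_D, psi_Q, w_r, theta]."""
    psi_d, psi_q, psi_F, psi_D, psi_Q, w_r, theta = x
    Ld = np.array([[p["Lmd"] + p["LL"], p["Lmd"], p["Lmd"]],
                   [p["Lmd"], p["LF"] + p["Lmd"], p["Lmd"]],
                   [p["Lmd"], p["Lmd"], p["LD"] + p["Lmd"]]])
    Lq = np.array([[p["Lmq"] + p["LL"], p["Lmq"]],
                   [p["Lmq"], p["LQ"] + p["Lmq"]]])
    i_d, i_F, i_D = np.linalg.solve(Ld, [psi_d, psi_F, psi_D])   # Eq. (11)
    i_q, i_Q = np.linalg.solve(Lq, [psi_q, psi_Q])               # Eq. (12)
    V_d, V_q, e_d, T_m = p["inputs"](t)
    return np.array([
        V_d - p["Rs"] * i_d + w_r * psi_q,                       # Eq. (4)
        V_q - p["Rs"] * i_q - w_r * psi_d,                       # Eq. (5)
        e_d - i_F * p["RF"],                                     # Eq. (6)
        -i_D * p["RD"],                                          # Eq. (7)
        -i_Q * p["RQ"],                                          # Eq. (8)
        p["n"] / p["J"] * (i_q * psi_d - i_d * psi_q - T_m),     # Eq. (9)
        w_r - p["wb"],                                           # Eq. (10)
    ])

def euler(rhs, x0, t_grid, p):
    """Forward-Euler reference solution with a fixed step (1e-4 s in the paper)."""
    xs = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        xs.append(xs[-1] + (t1 - t0) * rhs(xs[-1], t0, p))
    return np.array(xs)

# Dummy parameters purely to exercise the code path (not the paper's values):
p = dict(Lmd=1.0, Lmq=0.8, LL=0.1, LF=0.2, LD=0.15, LQ=0.12, Rs=0.01,
         RF=0.02, RD=0.03, RQ=0.03, n=1.0, J=100.0, wb=100 * np.pi,
         inputs=lambda t: (0.0, 0.0, 0.0, 0.0))
x0 = [0, 0, 0, 0, 0, 100 * np.pi, 0]
print(euler(machine_rhs, x0, np.linspace(0, 0.01, 101), p)[-1])
```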
## Reference scenario

The reference system was exercised through the following operating scenario to produce all of the data reported here. The scenario starts with the synchronous generator spinning in steady state at 3000 r/min in synchronism with the sinusoidal grid voltage of 20 \(\mathrm{kV_{rms}}\) line-to-line. The machine produces an open-circuit voltage equal in magnitude and phase to that of the grid. Since the phase angle (\(\delta\)) between the vector of the power source voltage and the vector of the stator voltage is zero, no current flows between the machine and the electric grid, so neither real (or "active") nor imaginary (or "reactive") power flows between the two, and zero torque is required to maintain the rotational speed. This steady state condition continues for the first 15 s. Beginning at \(t=15\) s, the torque applied by the prime mover to the shaft of the machine begins to ramp up. The torque ramp continues until \(t=20\) s, at which time the torque has reached 25% of the machine's rated torque. At 25% of rated torque, the phase of the stator voltage leads that of the grid voltage, and the machine drives 83 MW (real power, or active power) and 13 MVAR (imaginary power, or reactive power) into the grid.

## Implementation of the LIQSS1 model

The LIQSS1 model was formulated based on the specification in Di Pietro et al. [1] and Migoni and Kofman [10]. According to this method, any QSS atom computes the time at which it will reach its next state, and that is the time when the atom is next updated, unless an earlier update is required because an input has changed. Figure 3 shows the seven QSS atoms used in modeling the synchronous machine. In this implementation, every input to any atom comes from an output that has been quantized. The lines with direction arrows indicate that the output of one atom is conveyed to the input of another atom. Since the system is tightly coupled, many components have bidirectional arrows. Iteration between the atoms represents a sort of relaxation process. Bidirectional arrows indicate that a state update of either atom requires an update of the other. As an example, following the string of just one arrow, any update to the output of \(\psi_{dr}\) requires \(\omega_{r}\) to update the time at which it expects to reach its next quantum level. If the update of \(\psi_{dr}\) results in a change of the quantized state of \(\omega_{r}\), then the \(\theta\) atom must update the estimate of its time to the next quantum transition, and that will feed back to require a next update of \(\psi_{dr}\). To start the solution, the "next update" time of each atom is initialized to infinity. Then, each atom computes its own next update time--the time at which it should arrive at its next higher or lower quantum state. The atom with the closest (smallest) update time is then updated. The occurrence of this update flags the atoms connected to it to again update their own times to the next quantum state, and the loop continues. A flowchart of the process is shown in Hood and Dougal [3].

Figure 3: Atoms with lines showing which atoms trigger the updates in the other ones.
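A minimal sketch of the asynchronous update loop just described, written as plain explicit QSS1 for clarity (LIQSS1 additionally chooses each quantized value implicitly to suppress the chattering of explicit QSS1 on stiff systems; that refinement, and hysteresis on direction reversals, are omitted here, and all names are ours):

```python
import numpy as np

def next_time(t, x, q, dx, dQ):
    """First-order estimate of when x, moving at rate dx, reaches the next
    quantum level q + dQ or q - dQ."""
    if dx > 0:
        return t + (q + dQ - x) / dx
    if dx < 0:
        return t + (q - dQ - x) / dx
    return np.inf

def qss1(f, x0, dQ, t_end, deps):
    """Asynchronous event loop over n coupled atoms. f(q) returns the
    derivative vector from the quantized states; deps[i] lists the atoms
    whose derivatives depend on state i (atom i itself is always included)."""
    n = len(x0)
    t, x, q = 0.0, np.array(x0, float), np.array(x0, float)
    t_last, dx = np.zeros(n), f(np.array(x0, float))
    t_next = np.array([next_time(0.0, x[i], q[i], dx[i], dQ[i]) for i in range(n)])
    events = []
    while True:
        i = int(np.argmin(t_next))            # atom with the earliest transition
        t = t_next[i]
        if t >= t_end:
            return events
        for j in {i, *deps[i]}:               # advance affected internal states
            x[j] += dx[j] * (t - t_last[j])
            t_last[j] = t
        q[i] = x[i]                           # atom i lands on its new level
        dx = f(q)                             # re-evaluate derivatives
        for j in {i, *deps[i]}:               # reschedule i and its dependents
            t_next[j] = next_time(t, x[j], q[j], dx[j], dQ[j])
        events.append((t, i, q[i]))

# Two weakly coupled decaying states; atom 0 depends on state 1.
f = lambda q: np.array([-q[0] + 0.1 * q[1], -0.5 * q[1]])
print(len(qss1(f, x0=[1.0, 1.0], dQ=[1e-3, 1e-3], t_end=5.0, deps=[[], [0]])))
```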
## LIQSS performance

The performance of the QSS method is shown in comparison to the performance of the reference method in a series of plots. Each plot shows the trajectory of a state variable and the number of updates of the computational atom for that state variable. All states in the system use the same quantum size (\(\Delta\)Q) of \(10^{-4}\) Wb, except the machine rotor speed (\(\omega_{r}\)), for which we choose a quantum size 1/10 as large (\(10^{-5}\) rad/s), since the dynamic range of the rotor speed is much less than that of the system fluxes. We have not established a mathematically rigorous methodology for choosing the best quantum size. Our initial method was to choose a quantum size that is roughly 0.015%-0.1% of the total expected deviation (the absolute value of the range of the quantity in the reference simulation) of the quantized state variable. One objective of the work reported in this paper was to investigate the relationship between quantum size and error. Understanding this relationship could lead to a more rigorous methodology for choosing quantum size based on a desired error bound.

Figures 4-6 show the accuracy with which the LIQSS method tracks the reference method. The figures also show that atoms update asynchronously; over any particular period of time, each individual atom experiences a different number of updates. The cumulative number of updates for a particular atom is shown by the red lines in the following charts, where one can observe that any particular line reaches a different number at the end of the simulation period. An advantage of the QSS method over the reference method can be noted as the system reaches the new steady state condition and the rate of atom updates markedly decreases. For example, in Figure 4, during the interval from 0 to 15 s, while the system is in steady state, there are very few state updates. Then, during the period from 15 to 35 s, while the machine is accelerating and other states of the system are changing, the slope of the red line--representing the cumulative number of updates--is large. Then, finally, after about 35 s, as the system approaches a new steady state, the update rate is again small. This aspect of the QSS method allows the simulation model to advance rapidly during steady state conditions. Furthermore, the cumulative number of updates depends on the chosen quantum size: a smaller quantum causes the system to require more frequent updates and hence the simulation advances slowly through time, while a larger quantum--up to a point--requires fewer updates and hence the simulation can advance in larger time increments. The effect of quantum size on simulation speed will be explored more fully in the next section.

Figure 4: Rotor d-axis flux. The flux computed by the QSS method is nearly identical to that computed by the reference method, so the two lines are nearly indistinguishable. The cumulative count of \(\Psi_{dr}\) atom updates shows little activity prior to the torque ramp, higher activity during the torque ramp, and a return to little activity as the new steady state is attained.

Figure 5: Rotor q-axis flux. The values computed by the QSS method and the reference method are nearly indistinguishable.

Figure 6: Field flux, showing good agreement between both computing methods and a total number of QSS atom updates that is smaller than the counts for the d- and q-axis fluxes.

Figure 7: Rotor angle. The QSS update rate shows interesting behavior, with faster rates associated with the beginning and ending of the torque ramp.

The trajectory of the rotor angle (\(\theta\)), as shown in Figure 7, is particularly interesting in that it shows how the count of this atom's updates increases immediately after the start of the torque ramp, then tapers off while the torque slew rate is constant during the interval between 15 and 20 s, then increases again at 20 s when the torque stops slewing, and finally becomes small again as the rotor angle reaches its final steady state value.
Figure 8 shows the trajectory of the rotor speed, which always remains very close to 100 \(\pi\) rad/s but shows structure near the beginning and ending of the torque ramp and a small speed increase during the torque ramp, as required to advance the rotor angle. In a later section of this paper, we will describe how both the accuracy and the error of the rotor speed vary with the choice of quantum size. The variables plotted in Figures 9 and 10 are functions of quantized states and therefore do not have unique update rates.

Figure 8: Rotor speed trajectory and update rates.

Figure 9: Comparisons of d- and q-axis rotor currents computed by the QSS and reference methods.

Figure 10: Comparisons of d- and q-axis voltages computed by the QSS and reference methods.

## Accuracy and error analysis

Kofman [13] proved that for linear time-invariant systems, the global error in the QSS method can be bounded by a constant proportional to the quantum size. However, our reference system is highly non-linear, so it is interesting to explore the behavior of the error with quantum size. The effect of quantization size on simulation accuracy is shown in Figures 11-13, where several system variables are plotted for several different quantum sizes, with enhanced detail during particular time periods. A larger quantum size results in both a lower update rate and a higher error amplitude compared to a situation with a smaller quantum size. In each case, the quoted quantum size applies to all of the state variables except the rotor speed, for which the quantum is 1/10 that of the other variables. In each plot, the green trajectory corresponds to the largest quantum size (\(\Delta Q\!=\!10^{-3}\)), while blue and red correspond to smaller sizes (\(\Delta Q\!=\!10^{-4}\) and \(\Delta Q\!=\!10^{-5}\), respectively). The rotor speed calculated with the largest quantum size has the largest error in comparison to the reference solution. This is evident in both Figure 12--in which resolution is increased near the apex of the speed trajectory--and Figure 13, where higher resolution shows a slightly oscillating speed trajectory after the torque ramp. These high-frequency oscillations are inherent to the QSS method, and the amplitude of these oscillations increases as the quantum size increases. Although there appears to be a predictable relationship between error and quantum size over a certain range of quantum sizes, it is not known how to specify the appropriate quantum size for a desired error for any particular QSS model description. The sensitivity data provided in this paper have been empirically determined from simulation of this specific system with specific component parameters. Although these empirical data do provide useful insight into the relationship between quantum size and error, they do not solve the problem for the general case. Figures 14 and 15 describe the same behavior as was shown in Figures 11-13, but from the error perspective.

Figure 13: Zoom-in plots that show details of the steady-state rotor speed using different quantization sizes \(\Delta\)Q = \(10^{-5}\), \(\Delta\)Q = \(10^{-4}\), and \(\Delta\)Q = \(10^{-3}\) versus the Euler reference solution. The oscillations have very small amplitude.

Figure 14: Error between the reference solution and the QSS solution of the rotor d-axis flux for several different quantization sizes \(\Delta\)Q = \(10^{-2}\), \(\Delta\)Q = 8.86 \(\times\)\(10^{-4}\), and \(\Delta\)Q = \(10^{-6}\).
Figure 15: Rotor speed updates versus relative error when the rotor speed quantum is set to \(\Delta\)Q = \(10^{-7}\) and the quantization size of the rest of the system variables is \(\Delta\)Q = \(10^{-4}\): possibly unnecessarily high precision with little benefit in error reduction.

Figure 14 shows how the pointwise absolute error--the difference between the QSS simulation and the reference simulation--varies over a particular half-second interval. The time variation of the error in the state variable \(\Psi_{\text{dr}}\) is shown for several different quantum sizes ranging from \(\Delta Q\!=\!10^{-6}\) to \(\Delta Q\!=\!10^{-2}\) per unit. Clearly, a bigger quantum size produces a bigger error, but the relationship was not linear. Before the torque ramps up, the three different quantizations all produce negligible errors. After the torque starts to ramp up, the number of updates starts to grow, and the models using larger quantization sizes produce larger errors. Although a small quantization size does improve the simulation accuracy (smaller error amplitude), it also causes a larger model update rate. So, if computing speed is important, and if, say, 1% error is tolerable, one might choose a large quantum size like \(\Delta Q\!=\!10^{-2}\) to achieve the requisite computing speed. To generate the data shown in Figure 15, we quantized the rotor speed at \(\Delta Q\!=\!10^{-7}\) and the other state variables at \(\Delta Q\!=\!10^{-4}\). This was an experiment to see if choosing a relatively small quantization size for some particular state of interest, but leaving the other quanta larger, would produce fewer updates for the whole system and still a small error for the particular state of interest. Here, the error is calculated according to Equation (1). The error is not invariant to the chosen system parameters, and it is unknown how a different simulation scenario (i.e. a different set of model parameters) would affect the error. Figure 15 shows that the rotor speed state experienced a high number of updates, but the error was not reduced compared to choosing \(\Delta Q\!=\!10^{-4}\) for the whole system, which produced the same output with a smaller number of updates. So, we suggest keeping the whole system at a unified quantum size when the state variables have comparable magnitudes--except those that are derivatives of other states--instead of choosing any particular quantum size very small and the rest of the quantum sizes relatively larger. Figure 16 shows the maximum error among all system variables at any time during the simulation interval. The graph is plotted on a logarithmic scale as a function of the quantum size, which was varied from \(\Delta Q\!=\!10^{-6}\) to \(\Delta Q\!=\!10^{-2}\) Wb. Also plotted is the corresponding sum of all updates of all atoms over the entire simulation. For small quantum sizes of \(\Delta Q\!=\!10^{-6}\) to \(\Delta Q\!=\!10^{-5}\) Wb, the error is very small and independent of quantum size. A logarithmic scale is used to emphasize that for very small quantum sizes (\(\Delta Q\!<\!10^{-5}\) Wb), a decrease in quantum size does not improve the accuracy, but it does impose a penalty on computational intensity (simulation update rate); the simulation takes longer to advance through time with no benefit in error reduction. A sweet spot is evident at a quantum size between \(\Delta Q\!=\!10^{-5}\) and \(\Delta Q\!=\!10^{-4}\), where the computational intensity has become relatively low while the error also remains low.
Above a quantum size of \(10^{-4}\), the error increases rapidly, but without concomitant reductions in computational intensity. Although the QSS method does accurately track the reference solution, the method inherently exhibits single-quantum oscillations. These oscillations are evident in Figure 17, which shows the direct-axis rotor flux at high resolution just near the onset of the torque ramp at 15 s. The amplitudes of these high-frequency oscillations decrease as the quantum size is reduced, but the oscillations are always present. Each oscillation represents an update event, so reducing the oscillation size comes at the expense of computation time. Figures 17 and 18 show these high-frequency oscillations at quantum sizes of \(\Delta Q=10^{-4}\) and \(\Delta Q=10^{-5}\), respectively. This behavior is inherent to the QSS method and cannot be avoided.[5]

Figure 16: Maximum error over all atoms for different quantum sizes. The total number of updates decreases as the system is simulated with bigger quantum sizes, at the expense of increasing the error.

Figure 17: High-resolution plot showing the ripples in the QSS solution of the rotor d-axis flux \(\Psi_{\text{dr}}\) with a quantum size of \(\Delta\text{Q}=10^{-4}\).

## 5 Conclusion

The performance of the LIQSS1 method for analyzing the dynamics of power networks has been characterized by the simulation of a reference system. Uniform quantization of the system state variables at 0.01% was found to yield accuracy within 0.4% of that achieved with a conventional state-space solution, but with a significant advantage in computational intensity, especially for systems that operate for long times in a quasi-steady state. Since the LIQSS method enables the user to individually set the quantum size of each state, we evaluated the performance as a function of quantization size. When the system was simulated using one uniform quantization size for all states, the total number of state updates decreased as the quantum size increased, but above a quantization size of about \(10^{-4}\), further increases in the quantization size did not significantly reduce the computational cost, while they did decrease the simulation accuracy. When the quantization size for a single state of interest was set smaller than the uniform quantization size of the other states, refining the quantization size of that particular state variable did not necessarily improve the error of that state, and it did logarithmically increase the state update intensity of the system. Our observations of the effects of quantization size are limited by the particular system that was studied; other systems, especially those having state variables that are widely different in magnitude, may behave differently.

## Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported in part by the US Office of Naval Research under grant N00014-16-1-2956.

## ORCID iD

Roger A Dougal [https://orcid.org/0000-0001-6152-1799](https://orcid.org/0000-0001-6152-1799)
2308.03187
A Combinatorial Hopf Algebra on Partition Diagrams
We introduce a Combinatorial Hopf Algebra (CHA) with bases indexed by the partition diagrams indexing the bases for partition algebras. By analogy with the operation $H_{\alpha} H_{\beta} = H_{\alpha \cdot \beta}$ for the complete homogeneous basis of the CHA $\textsf{NSym}$ given by concatenating compositions $\alpha$ and $\beta$, we mimic this multiplication rule by setting $\textsf{H}_{\pi} \textsf{H}_{\rho} = \textsf{H}_{\pi \otimes \rho}$ for partition diagrams $\pi$ and $\rho$ and for the horizontal concatenation $\pi \otimes \rho$ of $\pi$ and $\rho$. This gives rise to a free, graded algebra $\textsf{ParSym}$, which we endow with a CHA structure by lifting the CHA structure of $\textsf{NSym}$ using an analogue, for partition diagrams, of near-concatenations of integer compositions. Unlike the Hopf algebra $\textsf{NCSym}$ on set partitions, the new CHA $\textsf{ParSym}$ projects onto $\textsf{NSym}$ in a natural way via a ``forgetful'' morphism analogous to the projection of $\textsf{NSym}$ onto its commutative counterpart $\textsf{Sym}$. We prove, using the Boolean transform for the sequence $(B_{2n} : n \in \mathbb{N})$ of even-indexed Bell numbers, an analogue of Comtet's generating function for the sequence counting irreducible permutations, yielding a formula for the number of generators in each degree for $\textsf{ParSym}$, and we prove, using a sign-reversing involution, an evaluation for the antipode for $\textsf{ParSym}$. An advantage of our CHA being defined on partition diagrams in full generality, in contrast to a previously defined Hopf algebra on uniform block permutations, is given by how the coproduct operation we have defined for $\textsf{ParSym}$ is such that the usual diagram subalgebras of partition algebras naturally give rise to Hopf subalgebras of $\textsf{ParSym}$ by restricting the indexing sets of the graded components to diagrams of a specified form.
John M. Campbell
2023-08-06T18:38:01Z
http://arxiv.org/abs/2308.03187v2
# A Combinatorial Hopf Algebra on Partition Diagrams

###### Abstract

We introduce a Combinatorial Hopf Algebra (CHA) with bases indexed by the partition diagrams indexing the bases for partition algebras. By analogy with the operation \(H_{\alpha}H_{\beta}=H_{\alpha\cdot\beta}\) for the complete homogeneous basis of the CHA \(\mathsf{NSym}\) given by concatenating compositions \(\alpha\) and \(\beta\), we mimic this multiplication rule by setting \(\mathsf{H}_{\pi}\mathsf{H}_{\rho}=\mathsf{H}_{\pi\otimes\rho}\) for partition diagrams \(\pi\) and \(\rho\) and for the horizontal concatenation \(\pi\otimes\rho\) of \(\pi\) and \(\rho\). This gives rise to a free, graded algebra \(\mathsf{ParSym}\), which we endow with a CHA structure by lifting the CHA structure of \(\mathsf{NSym}\) using an analogue, for partition diagrams, of near-concatenations of integer compositions. Unlike the Hopf algebra \(\mathsf{NCSym}\) on set partitions, the new CHA \(\mathsf{ParSym}\) projects onto \(\mathsf{NSym}\) in a natural way via a "forgetful" morphism analogous to the projection of \(\mathsf{NSym}\) onto its commutative counterpart \(\mathsf{Sym}\). We prove, using the Boolean transform for the sequence \((B_{2n}:n\in\mathbb{N})\) of even-indexed Bell numbers, an analogue of Comtet's generating function for the sequence counting irreducible permutations, yielding a formula for the number of generators in each degree for \(\mathsf{ParSym}\), and we prove, using a sign-reversing involution, an evaluation for the antipode for \(\mathsf{ParSym}\). An advantage of our CHA being defined on partition diagrams in full generality, in contrast to a previously defined Hopf algebra on uniform block permutations, is given by how the coproduct operation we have defined for \(\mathsf{ParSym}\) is such that the usual diagram subalgebras of partition algebras naturally give rise to _Hopf_ subalgebras of \(\mathsf{ParSym}\) by restricting the indexing sets of the graded components to diagrams of a specified form, as with perfect matchings, partial permutations, planar diagrams, etc.

_Keywords:_ Combinatorial Hopf Algebra; partition diagram; set partition; noncommutative symmetric function; antipode; Boolean transform; free algebra; diagram algebra

_MSC:_ 16T30, 05E05, 16T05

## 1 Introduction

The Hopf algebra \(\mathsf{NSym}\) of noncommutative symmetric functions introduced in [30] continues to be applied in important ways within combinatorics and many other areas. The underlying algebra of the bialgebra \(\mathsf{NSym}\) is such that \[\mathsf{NSym}=\Bbbk\langle H_{1},H_{2},\ldots\rangle, \tag{1}\] providing a noncommutative companion to \[\mathsf{Sym}=\Bbbk[h_{1},h_{2},\ldots]. \tag{2}\] The free algebra structure indicated in (1) naturally gives rise to the complete homogeneous basis \[\{H_{\alpha}:\alpha\in\mathcal{C}\} \tag{3}\] of \(\mathsf{NSym}\), subject to the multiplication rule \[H_{\alpha}H_{\beta}=H_{\alpha\cdot\beta}, \tag{4}\] letting \(\mathcal{C}\) denote the set of all integer compositions, and letting \(\alpha\cdot\beta=(\alpha_{1}\), \(\alpha_{2}\), \(\ldots\), \(\alpha_{\ell(\alpha)}\), \(\beta_{1}\), \(\beta_{2}\), \(\ldots\), \(\beta_{\ell(\beta)})\) denote the concatenation of \(\alpha=(\alpha_{1}\), \(\alpha_{2}\), \(\ldots\), \(\alpha_{\ell(\alpha)})\) and \(\beta=(\beta_{1}\), \(\beta_{2}\), \(\ldots\), \(\beta_{\ell(\beta)})\).
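As a quick illustration of the multiplication rule (4), the following sketch (our own illustration, not part of the paper) encodes \(H\)-basis elements by their indexing compositions, stored as Python tuples, so that multiplication is literally concatenation.

```python
def h(*alpha):
    """The basis element H_alpha, as a dict {composition: coefficient}."""
    return {tuple(alpha): 1}

def mul(u, v):
    """Bilinear extension of H_alpha H_beta = H_{alpha . beta}, cf. (4)."""
    out = {}
    for a, ca in u.items():
        for b, cb in v.items():
            out[a + b] = out.get(a + b, 0) + ca * cb
    return out

print(mul(h(2, 1), h(3)))  # {(2, 1, 3): 1}, i.e. H_{(2,1)} H_{(3)} = H_{(2,1,3)}
```

The same dictionary encoding extends verbatim to the algebra on partition diagrams constructed below, with compositions replaced by encodings of diagrams and concatenation replaced by the horizontal concatenation \(\otimes\).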
Given a class of combinatorial objects, if we construct an algebra with bases indexed by such objects according to a multiplication rule given by a concatenation or concatenation-type operation, and without further conditions being imposed on the multiplication, this gives rise to a graded algebra structure closely related to (1), and motivates the construction of new Hopf algebras satisfying the axioms for Combinatorial Hopf Algebras (CHAs), as introduced by Aguiar, Bergeron, and Sottile [2]. We introduce a CHA, which we denote as \(\mathsf{ParSym}\), with bases indexed by the combinatorial objects indexing the bases of partition algebras and with a multiplication rule defined by analogy with (4). In this regard, the horizontal concatenation operation \(\otimes\) on partition diagrams is a natural operation to use to form a weight function for a graded algebra with bases indexed by partition diagrams. Our use of the \(\otimes\) operation on the basis elements of partition algebras is also inspired by the uses of this operation in the context of the character theory of partition algebras [33], and by how product operations for CHAs on graphs and on many other combinational objects are often defined via concatenation or closely related operations, as in with the disjoint union of graphs. Hopf algebras and partition algebras play important roles in many different areas within algebraic combinatorics and physics. However, it appears that partition algebras have not previously been endowed with any Hopf algebra or bialgebra structures. The foregoing considerations motivate the problem of introducing a Hopf algebra structure on the combinatorial objects indexing the bases of partition algebras. Aguiar and Orellana [4] have introduced a Hopf algebra on uniform block permutations, which are special cases of partition diagrams, and this motivates the construction of a Hopf algebra on partition diagrams in full generality. The Aguiar-Orellana Hopf algebra being free also motivates our constructing, as below, a free Hopf algebra on partition diagrams in full generality. It seems that past references related to the Aguiar-Orellana algebra [4], including [18, 21, 24, 25, 26, 41, 43], have not concerned Hopf algebras on partition diagrams in full generality. There is a great amount of literature on Hopf algebras on families of graphs; see [4, 6, 11, 22, 27, 28, 32, 37, 38, 40, 47, 51, 52] and many related references on CHAs on graphs and graph-like objects. This past literature motivates the problem of constructing a CHA on partition diagrams, which form a naturally occurring family of simple graphs that are often used within statistical mechanics and within representation theory. The coproduct operation for partition diagrams that we introduce depends on the labeling system for partition diagrams together with a binary operation \(\bullet\) that provides an analogue of near-concatenation for partition diagrams, as opposed to integer compositions. This is in contrast to comultiplication operations for previously studied Hopf algebras on graphs, such as the Hopf algebra \(\mathscr{G}\) on finite graphs introduced in [48] and later studied in references such as [35]. The Hopf algebra of diagrams due to Duchamp et al. 
[22] is not related to diagram algebras or partition algebras, but is based on a family of combinatorial objects that are defined and denoted in something of a similar way relative to partition diagrams, which further motivates the interest in constructing a Hopf algebra with bases indexed by partition diagrams. See also the work on Hopf algebras on dissection diagrams due to Dupont [23] and Mammez [42]. In Section 1.2 below, we consider how our new CHA \(\mathsf{ParSym}\) relates to the Hopf algebra \(\mathsf{NCSym}\) [13], the bases of which are indexed by set partitions in a similar way, relative to the bases of \(\mathsf{ParSym}\), and Section 1.2 also highlights some of the main points of interest concerning the new Hopf algebra \(\mathsf{ParSym}\), in relation to \(\mathsf{NCSym}\) and otherwise. Beforehand, we briefly review preliminaries on partition diagrams, as in Section 1.1 below. The main, nonintroductory sections of our article are summarized as below.

- In Section 2, we define \(\mathsf{ParSym}\) as a graded algebra, determine an irreducible generating set for \(\mathsf{ParSym}\), and determine the number of generators in each degree for \(\mathsf{ParSym}\).
- In Section 3, we introduce a CHA structure on \(\mathsf{ParSym}\), and we prove a graph-theoretic property concerning an analogue of near-concatenation, to construct a CHA projection morphism from \(\mathsf{ParSym}\) to \(\mathsf{NSym}\), analogous to the projection of \(\mathsf{NSym}\) onto \(\mathsf{Sym}\). We introduce an analogue of the \(E\)-generators of \(\mathsf{NSym}\) to define an antipode antihomomorphism on \(\mathsf{ParSym}\), and we prove that the required antipode axioms are satisfied, using a sign-reversing involution. Our CHA morphism from \(\mathsf{ParSym}\) to \(\mathsf{NSym}\) allows us to evaluate the unique CHA morphism from \(\mathsf{ParSym}\) to \(\mathsf{QSym}\).
- In Section 4, we show how each of the families of partition diagrams associated with what may be regarded as the main or most important subalgebras of \(\mathbb{C}A_{k}(n)\) naturally gives rise to a Hopf subalgebra of \(\mathsf{ParSym}\).
- In Section 5, we conclude with a number of further research areas to explore related to the CHA \(\mathsf{ParSym}\) introduced in this article.

### Partition diagrams

For a set partition \(\pi\) of \(\{1,2,\ldots,k,1^{\prime},2^{\prime},\ldots,k^{\prime}\}\), we denote \(\pi\) as a graph \(G\) by arranging the elements of \(\{1,2,\ldots,k\}\) into a top row and arranging the members of \(\{1^{\prime},2^{\prime},\ldots,k^{\prime}\}\) into a bottom row, with the members of \(\pi\) forming the connected components of \(G\). We consider any two graphs \(G\) and \(G^{\prime}\) on \(\{1\), \(2\), \(\ldots\), \(k\), \(1^{\prime}\), \(2^{\prime}\), \(\ldots\), \(k^{\prime}\}\) to be equivalent if the components of \(G\) and \(G^{\prime}\) are the same, and we may identify \(\pi\) with any graph \(G\) equivalent to \(\pi\). We may refer to \(G\) or its equivalence class as a _partition diagram_, and it may equivalently be denoted as a set partition.

**Example 1**.: The partition diagram corresponding to the set partition \(\{\{5^{\prime}\), \(5\}\), \(\{4^{\prime}\}\), \(\{3^{\prime}\), \(1\), \(2\), \(3\), \(4\}\), \(\{2^{\prime}\}\), \(\{1^{\prime}\}\}\) may be illustrated as below.

\[\text{(two-row graph whose connected components are the blocks above; picture omitted)} \tag{5}\]

The set of all partition diagrams of order \(k\) may be denoted as \(A_{k}\), and this is typically endowed with a monoid structure.
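To experiment with partition diagrams concretely, one minimal encoding (a convention of ours, not the paper's) stores a diagram of order \(k\) as its blocks over \(\{1,\ldots,k,-1,\ldots,-k\}\), with \(-i\) standing for the primed vertex \(i^{\prime}\); the horizontal concatenation \(\otimes\) discussed below then just shifts the labels of the right factor.

```python
# Example 1, {{5',5},{4'},{3',1,2,3,4},{2'},{1'}}, in this encoding:
example = (5, [{5, -5}, {-4}, {1, 2, 3, 4, -3}, {-2}, {-1}])

def tensor(d1, d2):
    """Horizontal concatenation d1 (x) d2: place d2 to the right of d1,
    shifting its top labels by k1 and its bottom labels by -k1."""
    (k1, blocks1), (k2, blocks2) = d1, d2
    shift = lambda v: v + k1 if v > 0 else v - k1
    return (k1 + k2, blocks1 + [{shift(v) for v in b} for b in blocks2])

d = (1, [{1, -1}])   # the order-one diagram {{1, 1'}}
print(tensor(d, d))  # (2, [{1, -1}, {2, -2}])
```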
The partition algebra \(\mathbb{C}A_{k}(n)\) is closely related to the monoid \(A_{k}\), and we refer to [34] for background material on and definitions concerning \(\mathbb{C}A_{k}(n)\). The algebra \(\mathbb{C}A_{k}(n)\) is spanned by a diagram basis given by the underlying set of \(A_{k}\), so that the dimension of \(\mathbb{C}A_{k}(n)\) is equal to \(B_{2k}\), i.e., the Bell number indexed by \(2k\), where the Bell number \(B_{m}\) is equal to the number of set partitions of \(\{1,2,\ldots,m\}\). The multiplicative operations for both \(A_{k}\) and \(\mathbb{C}A_{k}(n)\) are defined using the _vertical_ concatenation \(d_{1}*d_{2}\) of partition diagrams, as opposed to the horizontal concatenation operation \(\otimes\) referred to as above. The product \(d_{1}*d_{2}\) is obtained by placing \(d_{1}\) on top of \(d_{2}\) in such a way so that the bottom nodes of \(d_{1}\) overlap with the top nodes of \(d_{2}\). The underlying multiplicative operation for \(\mathbb{C}A_{k}(n)\) may be defined by removing the middle row of \(d_{1}*d_{2}\), in a way that preserves the relation of topmost nodes in \(d_{1}*d_{2}\) being in the same component as bottommost nodes, so that \(n^{\ell}\) times the resultant diagram is equal to the product \(d_{1}d_{2}\) in \(\mathbb{C}A_{k}(n)\), where \(\ell\) denotes the number of components removed from the middle row of \(d_{1}*d_{2}\), and where \(n\) denotes a complex parameter. To form a graded algebra or graded ring structure based on a multiplicative operation on partition diagrams, it would be appropriate to use the horizontal concatenation of partition diagrams. Following Halverson's work in [33], by letting \(d_{1}\) and \(d_{2}\) denote partition diagrams that are, respectively, of orders \(k_{1}\) and \(k_{2}\), we may let \(d_{1}\otimes d_{2}\) denote the partition diagram of order \(k_{1}+k_{2}\) obtained by placing \(d_{2}\) to the right of \(d_{1}\). The partition algebra \(\mathbb{C}A_{k}(n)\) naturally arises in the field of statistical mechanics via the Schur-Weyl duality associated with the centralizer algebra \[\operatorname{End}_{S_{n}}\left(V^{\otimes k}\right)\cong\mathbb{C}A_{k}(n), \tag{6}\] for an \(n\)-dimensional vector space \(V\), and where \(S_{n}\) acts on \(V^{\otimes k}\) diagonally, as a subgroup of \(\operatorname{GL}_{n}(\mathbb{C})\). The interdisciplinary interest surrounding the construction of a Hopf algebra on the combinatorial objects indexing the bases of centralizer algebras of the form \(\operatorname{End}_{S_{n}}\left(V^{\otimes k}\right)\) is motivated by past research on Schur-Weyl duality and Hopf algebras, as in the work of Benkart and Witherspoon [9] and the work of Novelli, Patras, and Thibon [44]. ### Relationship with NCSym It appears that no coproduct operation has previously been defined on partition algebras or partition diagrams. However, there have been a number of previously introduced Hopf algebras with bases indexed by set partitions or closely related combinatorial objects, which further motivates the interest in the new Hopf algebra \(\operatorname{ParSym}\). What is typically meant by the Hopf algebra on set partitions refers to \(\operatorname{NCSym}\) (cf. [45]), the Hopf structure for which was introduced by Bergeron et al. in [13]. 
One might think that the Hopf algebras \(\mathsf{NCSym}\) and \(\mathsf{ParSym}\) would be closely related, since both of these Hopf algebras have bases indexed by families of set partitions and since both of these Hopf algebras contain \(\mathsf{NSym}\) as a Hopf subalgebra. However, our methods, results, and constructions differ greatly from those of [13], and this is summarized below. Our CHA \(\mathsf{ParSym}\) naturally projects, via a CHA morphism, onto \(\mathsf{NSym}\), but \(\mathsf{NCSym}\) does not seem to project onto \(\mathsf{NSym}\), at least in any natural or useful or combinatorially significant way, with reference to [13, 15], and with a particular regard toward Theorem 4.9 in [13]. The coproduct operation on \(\mathsf{NCSym}\) introduced in [13] is completely different from the coproduct for \(\mathsf{ParSym}\) that we introduce, which, arguably, gives us a more natural lifting of \(\mathsf{NSym}\) in terms of how \(\mathsf{ParSym}\) projects onto \(\mathsf{NSym}\), compared to the comultiplication operation \[\Delta(\mathbf{m}_{A})=\sum_{S\subseteq[\ell(A)]}\mathbf{m}_{A_{S}}\otimes\mathbf{m}_{A_{S^{c}}}, \tag{7}\] referring to [13] for details as to the notation in (7). The bases of \(\mathsf{ParSym}\) are indexed by families of simple graphs denoting partition diagrams and indexing the bases for partition algebras, and our work is directly motivated by combinatorial and representation-theoretic properties associated with partition diagrams and partition algebras, whereas partition algebras, partition diagrams, graphs, etc., are not considered in [13]. Our explicit evaluation of the antipode \(S_{\mathsf{ParSym}}\colon\mathsf{ParSym}\to\mathsf{ParSym}\) that we introduce requires the construction of a new, elementary-like basis, but the construction of such a basis is not required in [13], and the antipode for \(\mathsf{NCSym}\) is not given explicitly in [13]. Finally, our lifting of properties associated with integer compositions from \(\mathsf{NSym}\) so as to be applicable in \(\mathsf{ParSym}\) requires the concept of a \(\bullet\)-decomposition given in this article, using an analogue, for partition diagrams, of near-concatenation, but this kind of approach is not involved in [13]. The Aguiar-Orellana Hopf algebra on uniform block permutations [4] contains \(\mathsf{NCSym}\) as a Hopf subalgebra, so much of the above commentary contrasting \(\mathsf{ParSym}\) and \(\mathsf{NCSym}\) similarly applies with regard to the relationship between \(\mathsf{ParSym}\) and the Hopf algebra from [4]. We encourage the interested reader to consider past references that have been influenced by or otherwise reference [13] and that also motivate our interest in \(\mathsf{ParSym}\), including [1, 5, 12, 14, 15, 50].

## 2 Irreducible partition diagrams

We define \[\mathsf{ParSym}_{i}=\operatorname{span}_{\Bbbk}\{\mathsf{H}_{\pi}:\pi\in A_{i}\}, \tag{8}\] for \(i\in\mathbb{N}_{0}\), where an expression of the form \(\mathsf{H}_{\pi}\), for a partition diagram \(\pi\) that is irreducible according to Definition 1 below, may be seen as a variable, by analogy with the \(H\)-generators for \(\mathsf{NSym}\). We adopt the convention whereby \(A_{0}\) consists of the unique "empty partition diagram" \(\varnothing\) without any nodes. By direct analogy with (4), we define \[\mathsf{H}_{\pi}\mathsf{H}_{\rho}=\mathsf{H}_{\pi\otimes\rho} \tag{9}\] for partition diagrams \(\pi\) and \(\rho\).
We define \[\mathsf{ParSym}:=\bigoplus_{i\in\mathbb{N}_{0}}\mathsf{ParSym}_{i}, \tag{10}\] and we endow the direct sum of \(\Bbbk\)-spans in (10) with the operation defined in (9), extended linearly, yielding an associative operation on \(\mathsf{ParSym}\). We let the morphism \(\eta\colon\Bbbk\to\mathsf{ParSym}\) be such that \[\eta\left(1_{\Bbbk}\right)=\mathsf{H}_{\varnothing}, \tag{11}\] giving a unit morphism that gives \(\mathsf{ParSym}\) the structure of an associative \(\Bbbk\)-algebra.

**Definition 1**.: For a partition diagram \(\pi\), let \(\pi\) be referred to as being \(\otimes\)_-irreducible_ if it cannot be expressed as \(\rho^{(1)}\otimes\rho^{(2)}\) for nonempty partition diagrams \(\rho^{(1)}\) and \(\rho^{(2)}\).

We are to later refer to the concept of \(\bullet\)-irreducibility, in contrast to Definition 1, for an analogue \(\bullet\) of near-concatenations for integer compositions. For the sake of convenience, we may refer to \(\otimes\)-irreducibility as irreducibility, depending on the context.

**Example 2**.: We may verify that the irreducible diagrams in \(\mathsf{ParSym}_{2}\) are as below (pictures of the eleven diagrams omitted).

_Remark 1_.: We adopt the convention whereby a given partition diagram denoted as a graph is written in such a way so that any of its edges are within the rectangular formation, including the borders, given by the upper and lower nodes of the graph, which are arranged horizontally, as in the partition diagram illustrations we have previously provided.

Let \(\pi\) be a partition diagram of order \(k\). We find that \(\pi\) may be expressed as the concatenation of two nonempty diagrams if and only if there is a natural number \(j\in\{1,2,\ldots,k-1\}\) such that \(\pi\) may be drawn in such a way so that no edge of \(\pi\) crosses a vertical line that is placed between \(j\) and \(j+1\) and between \(j^{\prime}\) and \((j+1)^{\prime}\). We refer to a vertical line of this form as a _separating line_, and we may refer to there being a _separation_ between \(j\) and \(j+1\) and between \(j^{\prime}\) and \((j+1)^{\prime}\) in \(\pi\) if the situation described in the preceding sentence holds.

**Lemma 1**.: _Let \(\pi\) be a partition diagram. Then \(\pi\) can be uniquely written in the form_ \[\pi=\pi^{(1)}\otimes\pi^{(2)}\otimes\cdots\otimes\pi^{(n)},\] _with \(n\in\mathbb{N}\), and where \(\pi^{(1)}\), \(\pi^{(2)}\), \(\ldots\), \(\pi^{(n)}\) are \(\otimes\)-irreducible diagrams._

Proof.: A partition diagram is non-irreducible if and only if it has at least one separation. By taking all possible separations for a given partition diagram, this leads us to construct a bijection between non-irreducible partition diagrams \(\rho\) of order \(k\) and ordered tuples of the form \[(\pi^{\alpha_{1}},\pi^{\alpha_{2}},\ldots,\pi^{\alpha_{\ell(\alpha)}})\,, \tag{12}\] where \(\pi^{\alpha_{i}}\), for a given index \(i\), is an irreducible partition diagram of order \(\alpha_{i}\), and where \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{\ell(\alpha)})\) is an integer composition of \(k\) such that \(\alpha\neq(k)\). Explicitly, the separations for a given diagram \(\rho\) partition \(\rho\) in such a way so that \[\rho=\pi^{\alpha_{1}}\otimes\pi^{\alpha_{2}}\otimes\cdots\otimes\pi^{\alpha_{\ell(\alpha)}}, \tag{13}\] so we let \(\rho\) be mapped to (12), with the surjectivity being immediate from the condition that \(\alpha\neq(k)\).
For domain elements \(\rho^{(1)}=\pi^{\alpha_{1}}\otimes\pi^{\alpha_{2}}\otimes\cdots\otimes\pi^{\alpha_{\ell(\alpha)}}\) and \(\rho^{(2)}=\mu^{\beta_{1}}\otimes\mu^{\beta_{2}}\otimes\cdots\otimes\mu^{\beta_{\ell(\beta)}}\) written as \(\otimes\)-products of nonempty, irreducible diagrams, with at least two factors in each case, the equality of the images of these domain elements gives us that \(\rho^{(1)}=\rho^{(2)}\) in an immediate fashion, by comparing the lengths and entries of the corresponding tuples and by appealing to the irreducibility of the entries.

**Theorem 1**.: _Let \(a_{k}\) denote the number of irreducible diagrams in \(A_{k}\) for a positive integer \(k\). Then the recursion_ \[a_{k}=B_{2k}-\sum_{\begin{subarray}{c}\alpha\vDash k\\ \alpha\neq(k)\end{subarray}}a_{\alpha_{1}}a_{\alpha_{2}}\cdots a_{\alpha_{\ell(\alpha)}} \tag{14}\] _holds, with \(a_{1}=2\), where the sum is over integer compositions \(\alpha\) of \(k\)._

Proof.: By Lemma 1, the number of irreducible diagrams in \(A_{k}\) is equal to \(B_{2k}\) minus the cardinality of the set of all possible tuples that are of the form indicated in (12) and that are subject to the given conditions. This gives us an equivalent version of (14).

**Example 3**.: According to the recursion and the initial condition specified in Theorem 1, we find that \[(a_{k}:k\in\mathbb{N})=(2,11,151,3267,96663,3663123,171131871,\ldots), \tag{15}\] noting that the diagrams corresponding to the \(a_{2}=11\) evaluation are shown in Example 2. The integer sequence in (15) is not currently included in the On-Line Encyclopedia of Integer Sequences, and neither of the integers 3663123 and 171131871 shown in (15) is currently in the OEIS. This strongly suggests that the free algebra structure indicated in Theorem 2 is original.

A _free \(\Bbbk\)-algebra_ is of the form \[\Bbbk\langle X\rangle=\bigoplus_{w\in X^{*}}\Bbbk w, \tag{16}\] where the multiplicative operation is given by concatenation of words in the free monoid \(X^{*}\), and where this operation is extended linearly, and where \(\Bbbk w\) denotes the free \(\Bbbk\)-module on the singleton set \(\{w\}\).

**Theorem 2**.: _As a \(\Bbbk\)-algebra, \(\mathsf{ParSym}\) is the free \(\Bbbk\)-algebra with \(a_{k}\) generators in each degree, for \(a_{k}\) as in Theorem 1._

Proof.: From (10), we may obtain an algebra isomorphism with an algebra of the form indicated in (16), by expressing the basis elements in the graded components shown in (10) using irreducible diagrams, in the following manner. According to Lemma 1, for a partition diagram \(\rho\), by taking all possible separating lines for \(\rho\), this gives us how \(\rho\) may be expressed in a unique way as a concatenation of irreducible diagrams. With respect to the notation in (16), we set \(X\) to be the set of all irreducible diagrams. So, by identifying the concatenation in (13) with the word \(\pi^{\alpha_{1}}\pi^{\alpha_{2}}\cdots\pi^{\alpha_{\ell(\alpha)}}\) in \(X^{*}\), this gives us an algebra isomorphism of the desired form.
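The recursion (14) is easy to check numerically. The sketch below (our own illustration; the signed-label encoding of diagrams is the convention introduced in the sketch of Section 1.1) computes \(a_{k}\) by peeling off the first \(\otimes\)-irreducible factor of a reducible diagram, which gives \(B_{2k}=\sum_{j=1}^{k}a_{j}B_{2(k-j)}\), an equivalent form of (14); it then verifies \(a_{2}=11\) by brute force over the \(B_{4}=15\) diagrams of order \(2\).

```python
from functools import lru_cache

def bell(n):
    """Bell number B_n, via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

@lru_cache(maxsize=None)
def a(k):
    """Number of tensor-irreducible diagrams of order k (Theorem 1)."""
    return bell(2 * k) - sum(a(j) * bell(2 * (k - j)) for j in range(1, k))

print([a(k) for k in range(1, 8)])
# [2, 11, 151, 3267, 96663, 3663123, 171131871] -- matches (15)

def set_partitions(elems):
    """All set partitions of a list, as lists of sets."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield [{first}] + part

def is_irreducible(k, blocks):
    """No separating line between j and j+1 (Definition 1, Lemma 1);
    vertices are 1..k on top and -1..-k on the bottom."""
    for j in range(1, k):
        left = set(range(1, j + 1)) | set(range(-j, 0))
        if all(b <= left or b.isdisjoint(left) for b in blocks):
            return False
    return True

print(sum(is_irreducible(2, p) for p in set_partitions([1, 2, -1, -2])))
# 11 -- in agreement with a(2)
```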
**Definition 2**.: For a sequence \((\mathtt{a}_{n}:n\in\mathbb{N})\), the _Boolean transform_ \((\mathtt{b}_{n}:n\in\mathbb{N})\) for the \(\mathtt{a}\)-sequence may be defined so that (cf. [3]) \[\sum_{n=1}^{\infty}\mathtt{b}_{n}x^{n}:=1-\frac{1}{1+\sum_{n=1}^{\infty}\mathtt{a}_{n}x^{n}}. \tag{17}\]

From the definition in (17), a combinatorial argument involving the Cauchy product of the generating functions involved can be used to prove the equivalent definition of the Boolean transform indicated below [3]: \[\mathtt{a}_{n}=\sum_{\alpha\vDash n}\mathtt{b}_{\alpha_{1}}\mathtt{b}_{\alpha_{2}}\cdots\mathtt{b}_{\alpha_{\ell(\alpha)}}. \tag{18}\] So, by rewriting (18) so that \[\mathtt{b}_{n}=\mathtt{a}_{n}-\sum_{\begin{subarray}{c}\alpha\vDash n\\ \alpha\neq(n)\end{subarray}}\mathtt{b}_{\alpha_{1}}\mathtt{b}_{\alpha_{2}}\cdots\mathtt{b}_{\alpha_{\ell(\alpha)}}, \tag{19}\] we find that the recursion in Theorem 1 is of the form indicated in (19), leading us toward the generating function highlighted in Theorem 3 below. This gives us an analogue for partition diagrams of Comtet's formula for enumerating irreducible permutations [20] (cf. [29, 36]).

**Theorem 3**.: _The generating function for the sequence \((a_{k}:k\in\mathbb{N})\) given in Theorem 1 satisfies_ \[\sum_{k=1}^{\infty}a_{k}x^{k}=1-\frac{1}{1+\sum_{k=1}^{\infty}B_{2k}x^{k}}.\]

Proof.: Since \((a_{k}:k\in\mathbb{N})\) is the Boolean transform of \((B_{2k}:k\in\mathbb{N})\), the desired generating function identity is immediate from the equivalence of (17) and (19).

## 3 A CHA on partition diagrams

For a coalgebra \(C\) over \(\Bbbk\), the coproduct morphism \(\Delta\colon C\to C\otimes C\) satisfies the coassociativity axiom, and for a bialgebra, the operation \(\Delta\) would be compatible with the multiplicative operation. So, to endow \(\mathsf{ParSym}\) with a bialgebra structure, we would need to define a coproduct operation \(\Delta\) such that \[\Delta(\mathsf{H}_{\pi}\mathsf{H}_{\rho})=\Delta(\mathsf{H}_{\pi})\Delta(\mathsf{H}_{\rho}) \tag{20}\] for irreducible generators \(\mathsf{H}_{\pi}\) and \(\mathsf{H}_{\rho}\) in \(\mathsf{ParSym}\), with \[\Delta\left(\mathsf{H}_{\pi^{(1)}\otimes\pi^{(2)}\otimes\cdots\otimes\pi^{(\ell)}}\right)=\Delta\left(\mathsf{H}_{\pi^{(1)}}\mathsf{H}_{\pi^{(2)}}\cdots\mathsf{H}_{\pi^{(\ell)}}\right)=\Delta\left(\mathsf{H}_{\pi^{(1)}}\right)\Delta\left(\mathsf{H}_{\pi^{(2)}}\right)\cdots\Delta\left(\mathsf{H}_{\pi^{(\ell)}}\right)\] for \(\otimes\)-irreducible partition diagrams \(\pi^{(1)}\), \(\pi^{(2)}\), \(\ldots\), \(\pi^{(\ell)}\). This leads us to consider what would be appropriate as a way of lifting the expansion formula \[\Delta\left(H_{n}\right)=H_{0}\otimes H_{n}+H_{1}\otimes H_{n-1}+\cdots+H_{n}\otimes H_{0} \tag{21}\] for coproducts of generators in \(\mathsf{NSym}\). Since the generators of \(\mathsf{ParSym}\) are indexed by graphs, this leads us to consider how the expansion in (21) could be reformulated and lifted in a graph-theoretic way. The generator \(H_{n}\) may be rewritten so as to be indexed by an integer composition, in accordance with (3), writing \(H_{n}=H_{(n)}\).
By denoting compositions as composition tableaux, we may let this generator be denoted as a single row of \(n\) boxes:
\[\text{(composition tableau of }(n)\text{; picture omitted)} \tag{22}\]
We may rewrite the index in (22) as a path graph with \(n\) nodes, by analogy with Ferrers diagrams and by analogy with our notation for partition diagrams as in (5):
\[\text{(path graph with }n\text{ nodes; picture omitted)} \tag{23}\]

### Near-concatenations of partition diagrams

The _near-concatenation_ of compositions \(\alpha\) and \(\beta\), which we may denote as \(\alpha\bullet\beta\), is given by the composition \((\alpha_{1}\), \(\alpha_{2}\), \(\ldots\), \(\alpha_{\ell(\alpha)-1}\), \(\alpha_{\ell(\alpha)}+\beta_{1}\), \(\beta_{2}\), \(\beta_{3}\), \(\ldots\), \(\beta_{\ell(\beta)})\) and naturally arises in the context of noncommutative symmetric functions, as in with the multiplication rule for the ribbon basis of \(\mathsf{NSym}\). Grinberg [31] has explored the free algebra structure given by endowing the dual of \(\mathsf{NSym}\) with an operation that may be defined via the near-concatenation of compositions, and this has inspired us to reformulate \(\mathsf{NSym}\) in a related way in terms of \(\bullet\). According to the notation in (23), the binary operation \(\bullet\) has the effect of joining concatenated path graphs to form a path graph, by adding an edge appropriately. If we take the singleton set consisting of \(H_{(1)}\), we may redefine \(\mathsf{NSym}\) as the underlying algebra of a free structure generated by this singleton set with the multiplicative operations \(\bullet\) and \(\circ\), where \(\circ\) denotes the usual multiplication operation for \(\mathsf{NSym}\).

**Example 4**.: We may rewrite the complete homogeneous basis element \(H_{(3,1,4)}\) as \[H_{1}\bullet H_{1}\bullet H_{1}\circ H_{1}\circ H_{1}\bullet H_{1}\bullet H_{1}\bullet H_{1} \tag{24}\] or as the corresponding \(\circ\)-product of path graphs with \(3\), \(1\), and \(4\) nodes (picture omitted). (25)

**Definition 3**.: For nonempty partition diagrams \(\pi\) and \(\rho\), we let \(\pi\bullet\rho\) denote the partition diagram obtained from the horizontal concatenation \(\pi\otimes\rho\) by adding an edge incident with the rightmost bottom node of \(\pi\) and the leftmost bottom node of \(\rho\), and we set \(\varnothing\bullet\pi=\pi\bullet\varnothing=\pi\), writing \(\mathsf{H}_{\pi}\bullet\mathsf{H}_{\rho}=\mathsf{H}_{\pi\bullet\rho}\).

**Definition 4**.: For an \(\otimes\)-irreducible partition diagram \(\pi\), we set \[\Delta(\mathsf{H}_{\pi})=\sum_{G_{1},G_{2}}\mathsf{H}_{G_{1}}\otimes\mathsf{H}_{G_{2}}, \tag{26}\] where the sum in (26) is over all \(\otimes\)-irreducible partition diagrams \(G_{1}\) and \(G_{2}\) such that \(\mathsf{H}_{G_{1}}\bullet\mathsf{H}_{G_{2}}=\mathsf{H}_{\pi}\). We let \(\Delta\) be compatible with the multiplicative operation of \(\mathsf{ParSym}\), i.e., so that (20) holds, and we let \(\Delta\) be extended linearly, so as to obtain a morphism \(\Delta\colon\mathsf{ParSym}\to\mathsf{ParSym}\otimes\mathsf{ParSym}\).

**Example 6**.: According to the definition of \(\Delta\) for \(\mathsf{ParSym}\), we obtain a coproduct expansion of the form indicated in (26) (diagrams omitted; see Example 9 below for the image of this expansion under the projection \(\chi\)).

In our constructing a bialgebra structure for \(\mathsf{ParSym}\), it would be appropriate to prove the coassociativity of the operation given in Definition 4.

**Lemma 2**.: _The operation \(\bullet\) is associative._

Proof.: For diagrams \(\pi^{(1)}\), \(\pi^{(2)}\), and \(\pi^{(3)}\), by placing \(\pi^{(3)}\) to the right of \(\pi^{(2)}\) and joining these diagrams with a bottom edge according to the definition of \(\bullet\), and by then placing \(\pi^{(1)}\) to the left of the resultant configuration and again adding a joining edge at the bottom according to \(\bullet\), we obtain the product \(\pi^{(1)}\bullet(\pi^{(2)}\bullet\pi^{(3)})\), which is the same as the configuration obtained by taking \(\pi^{(1)}\) and placing \(\pi^{(2)}\) to the right of it and joining the diagrams according to \(\bullet\), and by then placing \(\pi^{(3)}\) to the right and again adding a bottom edge according to \(\bullet\).
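Under the encoding from the earlier sketches, the operation \(\bullet\) of Definition 3 (as reconstructed above) amounts to merging the two blocks that contain the adjoining bottom vertices. A minimal sketch, assuming that encoding:

```python
def tensor(d1, d2):
    """Horizontal concatenation, as in the sketch of Section 1.1."""
    (k1, b1), (k2, b2) = d1, d2
    shift = lambda v: v + k1 if v > 0 else v - k1
    return (k1 + k2, b1 + [{shift(v) for v in b} for b in b2])

def bullet(d1, d2):
    """pi . rho: pi (x) rho plus an edge joining the rightmost bottom
    vertex of pi (label -k1) to the leftmost bottom vertex of rho
    (label -(k1+1)), i.e. the two incident blocks are merged."""
    k1 = d1[0]
    k, blocks = tensor(d1, d2)
    join = [b for b in blocks if -k1 in b or -(k1 + 1) in b]
    rest = [b for b in blocks if b not in join]
    return (k, rest + [set().union(*join)])

# Repeatedly bullet-ing the order-one diagram with two singleton blocks
# (chosen here purely for illustration) strings the bottom row into a
# single block, by analogy with the path graphs of (23):
d = (1, [{1}, {-1}])
print(bullet(bullet(d, d), d))
# (3, [{1}, {2}, {3}, {-1, -2, -3}])
```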
**Theorem 4**.: _The operation \(\Delta\) is coassociative._

Proof.: Starting with (26), we proceed to apply \(\operatorname{id}\otimes\Delta\), writing \[(\operatorname{id}\otimes\Delta)\left(\Delta\mathsf{H}_{\pi}\right)=\sum_{G_{1}}\mathsf{H}_{G_{1}}\otimes\left(\sum_{G_{3},G_{4}}\mathsf{H}_{G_{3}}\otimes\mathsf{H}_{G_{4}}\right), \tag{27}\] where \(G_{1}\) is as before, and where the inner sum, in accordance with Definition 4, is over all \(\otimes\)-irreducible partition diagrams \(G_{3}\) and \(G_{4}\) such that \(\mathsf{H}_{G_{3}}\bullet\mathsf{H}_{G_{4}}=\mathsf{H}_{G_{2}}\), letting \(G_{2}\) be as before. So, we may rewrite (27) so that \[(\operatorname{id}\otimes\Delta)\left(\Delta\mathsf{H}_{\pi}\right)=\sum_{G_{1},G_{3},G_{4}}\mathsf{H}_{G_{1}}\otimes\mathsf{H}_{G_{3}}\otimes\mathsf{H}_{G_{4}}, \tag{28}\] where the sum in (28) is over all \(\otimes\)-irreducible partition diagrams \(G_{1}\), \(G_{3}\), and \(G_{4}\) such that \(\mathsf{H}_{G_{1}}\bullet(\mathsf{H}_{G_{3}}\bullet\mathsf{H}_{G_{4}})\) equals \(\mathsf{H}_{\pi}\). By Lemma 2, the sum in (28) is the same as the corresponding sum over all \(\otimes\)-irreducible partition diagrams \(G_{1}\), \(G_{3}\), and \(G_{4}\) such that \((\mathsf{H}_{G_{1}}\bullet\mathsf{H}_{G_{3}})\bullet\mathsf{H}_{G_{4}}=\mathsf{H}_{\pi}\). So, by a symmetric argument, relative to our derivation of (28), by applying \(\Delta\otimes\operatorname{id}\) to \(\Delta\mathsf{H}_{\pi}\), we obtain the same sum in (28).

By defining a counit morphism \(\varepsilon\colon\mathsf{ParSym}\to\Bbbk\) so that \(\varepsilon(\mathsf{H}_{\varnothing})=1_{\Bbbk}\) and \(\varepsilon(\mathsf{H}_{\pi})=0\) for a nonempty partition diagram \(\pi\), from the coassociativity of \(\Delta\), we may obtain a bialgebra structure on \(\mathsf{ParSym}\). The following result concerning the relationship between \(\bullet\) and \(\otimes\) will be useful in our constructing an antipode map for \(\mathsf{ParSym}\).

**Theorem 5**.: _Let \(\rho\) and \(\pi\) be two partition diagrams. Then \(\rho\bullet\pi\) is \(\otimes\)-irreducible if and only if \(\rho\) and \(\pi\) are \(\otimes\)-irreducible._

Proof.: \((\Longrightarrow)\) Suppose that \(\rho\bullet\pi\) is \(\otimes\)-irreducible. It could then not be the case that \(\rho\) has a separation, since this would result in a separation in \(\rho\bullet\pi\), and the same applies with respect to \(\pi\). \((\Longleftarrow)\) Suppose that \(\rho\) and \(\pi\) are \(\otimes\)-irreducible. By placing \(\pi\) to the right of \(\rho\), and then forming an isthmus joining the bottom right node of \(\rho\) and the bottom left node of \(\pi\), any graph equivalent to \(\rho\bullet\pi\) could not have a separating line between the nodes corresponding to \(\rho\) and the nodes corresponding to \(\pi\). So, there are no separations in \(\rho\) and no separations in \(\pi\), and there could not be a separation between the nodes corresponding to \(\rho\) and the nodes corresponding to \(\pi\), so we may conclude that \(\rho\bullet\pi\) is \(\otimes\)-irreducible.

While the result highlighted as Theorem 6 below is to be used in a key way in our construction of a projection morphism from \(\mathsf{ParSym}\) to \(\mathsf{NSym}\), this result is of interest in its own right, since it gives us a way of formalizing how partition diagrams can be "broken up" into components analogous to the entries of an integer composition, in a way suggested by (24) and (25). We may define \(\bullet\)-irreducibility by direct analogy with Definition 1.
Our proof of the following result is also useful in relation to the material on diagram subalgebras in Section 4 below. **Theorem 6**.: _For a diagram \(\pi\) that is nonempty, there is a unique decomposition_ \[\pi=\theta^{(1)}\bullet\theta^{(2)}\bullet\cdots\bullet\theta^{(m)} \tag{29}\] _into diagrams that are \(\bullet\)-irreducible, and where \(m=m(\pi)\) is a fixed statistic depending on \(\pi\)._ Proof.: As above, we adopt the notational convention indicated in Remark 1. Let \(i^{\prime}\) and \((i+1)^{\prime}\) be consecutive vertices in the bottom row of \(\pi\), and suppose that these bottom nodes are in the same connected component of \(\pi\). For a fixed pair \((i^{\prime},(i+1)^{\prime})\) of bottom vertices of the specified form, if it is possible to apply a construction of the following form, while maintaining the connected components of \(\pi\), then we let this construction be applied: If \(i^{\prime}\) and \((i+1)^{\prime}\) are non-adjacent, then we add the edge \(\{i^{\prime},(i+1)^{\prime}\}\), and if \(i\) and \(i^{\prime}\) are in the same component but non-adjacent, then we add the edge \(\{i,i^{\prime}\}\), and if \(i+1\) and \((i+1)^{\prime}\) are in the same component but non-adjacent, then we add the edge \(\{i+1,(i+1)^{\prime}\}\), and, if possible, we then remove or rewrite edges so that \(i\) and \(i+1\) are not adjacent and so that no edge partly or fully appears _strictly_ within the area given by the rectangle formed by the vertices in \(\{i,i+1,i^{\prime},(i+1)^{\prime}\}\), and this does not include the unique edge incident with \(i^{\prime}\) and \((i+1)^{\prime}\) and does not include the possibility of an edge incident with \(i\) and \(i^{\prime}\) and does not include the possibility of an edge incident with \(i+1\) and \((i+1)^{\prime}\). For the same fixed pair \((i^{\prime},(i+1)^{\prime})\) satisfying all of the specified conditions, the edge \(\{i^{\prime},(i+1)^{\prime}\}\) is an isthmus, by construction. By this construction, any edge \(e\) removed is such that \(e\) is either incident with both \(i\) and \(i+1\) or such that \(e\) partly or fully touches the strict rectangular area specified. So, for a pair \((j^{\prime},(j+1)^{\prime})\neq(i^{\prime},(i+1)^{\prime})\) of bottom nodes satisfying the same conditions as before, we let it be understood that we have already formed, as above, the isthmus \(\{i^{\prime},(i+1)^{\prime}\}\), in such a way so that \(i\) and \(i+1\) are not adjacent, and in such a way so that the strict rectangular region corresponding to \(\{i,i+1,i^{\prime},(i+1)^{\prime}\}\) is "empty" in the sense specified as above. 
By then, if necessary, and according to the above procedure, adding the edge \(\{j^{\prime},(j+1)^{\prime}\}\) and/or the edge \(\{j,j^{\prime}\}\) and/or the edge \(\{j+1,(j+1)^{\prime}\}\), and, if necessary, removing a possible edge \(\{j,j+1\}\) or any edges partly or wholly touching the strict rectangular area corresponding to \(\{j,j+1,j^{\prime},(j+1)^{\prime}\}\), this would have no effect on the following, even for the "borderline" or extremal cases whereby \(i+1=j\) or \(j+1=i\):

(i) The edge \(\{i^{\prime},(i+1)^{\prime}\}\) being an isthmus;
(ii) The vertices \(i\) and \(i+1\) being non-adjacent;
(iii) The vertices \(i\) and \(i^{\prime}\) being adjacent if these vertices happen to be in the same component;
(iv) The vertices \(i+1\) and \((i+1)^{\prime}\) being adjacent if these vertices happen to be in the same component; and
(v) The emptiness of the strict rectangular region corresponding to \(\{i,i+1,i^{\prime},(i+1)^{\prime}\}\).

So, after applying our construction with respect to \((i^{\prime},(i+1)^{\prime})\), we have shown that the application of our construction to another pair \((j^{\prime},(j+1)^{\prime})\) satisfying the same conditions would have no effect on how our construction was initially applied to form an isthmus \(\{i^{\prime},(i+1)^{\prime}\}\) such that the listed properties hold. So, inductively, we may apply our construction over and over, with each such application having no effect on the previous applications. So, we repeat our edge addition/removal process wherever possible to the nonempty diagram \(\pi\), and, from such successive applications, we may express \(\pi\) as in (29), where each application of the \(\bullet\) operation in (29) has the effect of adding an edge of the form \(\{k^{\prime},(k+1)^{\prime}\}\), for a pair \((k^{\prime},(k+1)^{\prime})\) of bottom nodes satisfying the above specified conditions in \(\pi\), and where each factor on the right of (29) is a nonempty diagram. Each factor in (29) is irreducible in the sense that it cannot be the case that \(\theta^{(i)}=\theta^{(j_{1})}\bullet\theta^{(j_{2})}\) for nonempty diagrams \(\theta^{(j_{1})}\) and \(\theta^{(j_{2})}\), because, otherwise, the diagrams \(\theta^{(j_{1})}\) and \(\theta^{(j_{2})}\) would have already been included in the above \(\bullet\)-decomposition, since the edge addition/removal process described above was applied wherever possible and since each application of this process is independent of any previous such applications. Given a decomposition \[\pi=\gamma^{(1)}\bullet\gamma^{(2)}\bullet\cdots\bullet\gamma^{(m)}, \tag{30}\] for \(\bullet\)-irreducible and nonempty diagrams of the form \(\gamma^{(i)}\), we can show, as follows, that the right-hand sides of (29) and (30) agree: Again, the application of \(\bullet\) has the effect of adding an edge \(\{i^{\prime},(i+1)^{\prime}\}\) such that \((i^{\prime},(i+1)^{\prime})\) satisfies the above indicated conditions, so that we may, inductively, compare such edges corresponding to (29) and (30). So, there is a unique \(\bullet\)-decomposition of \(\pi\) of the desired form.

Apart from our applying Theorem 6 to prove Theorem 9, Theorem 6 is also of interest in terms of how it can be used to formalize and shed light on how the \(\bullet\) operation in Definition 3 is appropriate and natural as a lifting of near-concatenation.
For example, one might think that there could be advantages of using a variant of \(\bullet\) given by, say, joining top and bottom edges, letting \(\odot\) be such that \(\pi\odot\rho\) is the graph obtained by adding an edge incident with the top right node of \(\pi\) and the top left node of \(\rho\), and by adding an edge incident with the bottom right node of \(\pi\) and the bottom left node of \(\rho\). However, this operation \(\odot\) would not provide the uniqueness property indicated in Theorem 6, as shown in Example 7 below.

**Example 7**.: The uniqueness property in Theorem 6 may be illustrated by evaluating the \(\bullet\)-products in contrast to the non-uniqueness suggested by the \(\odot\)-products shown below (diagrams omitted).

Recall that the antipode of \(\mathsf{NSym}\) satisfies \[S_{\mathsf{NSym}}(H_{n})=(-1)^{n}E_{n} \tag{34}\] for the elementary generators \(E_{n}\) of \(\mathsf{NSym}\), so that \[\nabla\left(\left(S_{\mathsf{NSym}}\otimes\operatorname{id}\right)\left(\Delta H_{n}\right)\right)=\sum_{i+j=n}S_{\mathsf{NSym}}(H_{i})H_{j}=\eta_{\mathsf{NSym}}\left(\varepsilon_{\mathsf{NSym}}\left(H_{n}\right)\right),\] and a symmetric argument may be used to complete a proof that the following diagram commutes, for \(\mathcal{H}=\mathsf{NSym}\).

\[\text{(the usual antipode diagram for a Hopf algebra }\mathcal{H}\text{; picture omitted)} \tag{35}\]

The antipode antihomomorphism for \(\mathsf{NSym}\) may be defined in an equivalent way so that \[S(H_{n})=\sum_{\alpha}(-1)^{\ell(\alpha)}H_{\alpha_{1}}H_{\alpha_{2}}\cdots H_{\alpha_{\ell(\alpha)}}, \tag{36}\] where the sum is over all tuples \(\alpha\) of positive integers such that \(H_{\alpha_{1}}\bullet H_{\alpha_{2}}\bullet\cdots\bullet H_{\alpha_{\ell(\alpha)}}=H_{(n)}\). Writing \(S_{\mathsf{ParSym}}=S\), we set \(S(\mathsf{H}_{\varnothing})=\mathsf{H}_{\varnothing}\), and, by direct analogy with (36), for a nonempty, irreducible partition diagram \(\pi\), we define \[S(\mathsf{H}_{\pi})=\sum(-1)^{\ell}\mathsf{H}_{\rho^{(1)}}\mathsf{H}_{\rho^{(2)}}\cdots\mathsf{H}_{\rho^{(\ell)}}, \tag{37}\] where the sum in (37) is over all possible tuples \((\rho^{(1)},\rho^{(2)},\ldots,\rho^{(\ell)})\) of partition diagrams such that \(\mathsf{H}_{\rho^{(1)}}\bullet\mathsf{H}_{\rho^{(2)}}\bullet\cdots\bullet\mathsf{H}_{\rho^{(\ell)}}=\mathsf{H}_{\pi}\). By Theorem 5, the \(\otimes\)-irreducibility of each \(\rho\)-factor follows from the \(\otimes\)-irreducibility of \(\pi\). We extend the mapping in (37) linearly and so as to obtain an _antimorphism_ of algebras, with \(S(\mathsf{H}_{\pi}\mathsf{H}_{\rho})=S(\mathsf{H}_{\rho})S(\mathsf{H}_{\pi})\) for irreducible diagrams \(\pi\) and \(\rho\).

**Example 8**.: In view of the coproduct expansion in Example 6 and the commutative diagram shown in (35), we consider the application of \(S\otimes\operatorname{id}\) to the right-hand side of the equality in Example 6 (the resulting expansion is omitted).

To evaluate the algebra antimorphism \(S\), one way of going about this is through the use of _Takeuchi's formula_ [49], by analogy with bijective methods employed by Benedetti and Sagan [8] through the use of Takeuchi's formula. Following [8] (cf. [49]), if \(H\) is a bialgebra that is connected and graded, then \(H\) is a Hopf algebra, with an antipode given as follows. Let the projection map \(\pi\colon H\to H\) be extended linearly so that \(\pi\) restricted to the graded component \(H_{n}\) is the zero map if \(n=0\) and is the identity map for \(n\geq 1\). With this setup, the antipode \(S\) is such that \[S=\sum_{k\geq 0}(-1)^{k}\nabla^{k-1}\pi^{\otimes k}\Delta^{k-1}, \tag{38}\] with the understanding that \(\nabla^{-1}=u\) and \(\Delta^{-1}=\epsilon\), and where \(\nabla\) denotes the multiplicative operation for \(H\).
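Formula (36) is also easy to test directly. In the composition encoding used earlier (our own illustration), the following sketch computes \(S(H_{n})\) as the signed sum over all compositions of \(n\) (these are exactly the tuples \(\alpha\) with \(H_{\alpha_{1}}\bullet\cdots\bullet H_{\alpha_{\ell(\alpha)}}=H_{(n)}\)) and checks the antipode axiom \(\sum_{i+j=n}S(H_{i})H_{j}=\eta(\varepsilon(H_{n}))\), before we return to (38).

```python
def compositions(n):
    """All compositions of n, one per subset of the n-1 cut points."""
    if n == 0:
        return [()]
    out = []
    for bits in range(1 << (n - 1)):
        comp, part = [], 1
        for i in range(n - 1):
            if bits >> i & 1:
                comp.append(part)
                part = 1
            else:
                part += 1
        comp.append(part)
        out.append(tuple(comp))
    return out

def antipode_h(n):
    """S(H_n): signed sum over compositions alpha of n, cf. (36)."""
    s = {}
    for alpha in compositions(n):
        s[alpha] = s.get(alpha, 0) + (-1) ** len(alpha)
    return s

def mul(u, v):
    """H_alpha H_beta = H_{alpha . beta}, extended bilinearly."""
    out = {}
    for x, cx in u.items():
        for y, cy in v.items():
            out[x + y] = out.get(x + y, 0) + cx * cy
    return out

def check(n):
    """Sum over i+j=n of S(H_i) H_j; should vanish for n >= 1."""
    total = {}
    for i in range(n + 1):
        rest = {(n - i,): 1} if n - i else {(): 1}
        for k, c in mul(antipode_h(i), rest).items():
            total[k] = total.get(k, 0) + c
    return {k: c for k, c in total.items() if c}

print(check(0))  # {(): 1}
print(check(3))  # {} -- vanishes, as the antipode axiom requires
```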
The formulation in (38) of Takeuchi's formula, as applied to the graded, connected bialgebra \(\mathsf{ParSym}\) and to \(\mathsf{H}_{\pi}\) for an irreducible partition diagram \(\pi\), can be used to obtain (37), giving us that (37), extended linearly and as an antimorphism, does indeed give us an antipode such that the diagram in (35) commutes. An alternative, bijective approach toward proving this result is given below.

**Theorem 7**.: _The diagram in (35) commutes, for the specified antimorphism \(S\)._

Proof.: Let \(\pi\) denote a nonempty, \(\otimes\)-irreducible diagram. From (26), by applying \(S\otimes\operatorname{id}\) to both sides of this equality, we obtain \[\left(S\otimes\operatorname{id}\right)\left(\Delta\mathsf{H}_{\pi}\right)=\sum_{G_{1},G_{2}}S\left(\mathsf{H}_{G_{1}}\right)\otimes\mathsf{H}_{G_{2}}=\sum_{G_{1},G_{2}}\left(\sum(-1)^{\ell}\mathsf{H}_{\rho^{(1)}}\mathsf{H}_{\rho^{(2)}}\cdots\mathsf{H}_{\rho^{(\ell)}}\right)\otimes\mathsf{H}_{G_{2}}, \tag{39}\] where the inner sum in (39) is such that: If \(G_{1}\) is empty, then \(S\left(\mathsf{H}_{\varnothing}\right)=\mathsf{H}_{\varnothing}\), so that the inner sum, in this case, is indexed by the empty set, and if \(G_{1}\) is nonempty, then the inner sum is over all possible tuples \((\rho^{(1)},\rho^{(2)},\ldots,\rho^{(\ell)})\) of nonempty partition diagrams that are irreducible with respect to \(\otimes\), by Theorem 5, and that are such that \(\mathsf{H}_{\rho^{(1)}}\bullet\mathsf{H}_{\rho^{(2)}}\bullet\cdots\bullet\mathsf{H}_{\rho^{(\ell)}}=\mathsf{H}_{G_{1}}\), again for \(\otimes\)-irreducible partition diagrams \(G_{1}\) and \(G_{2}\) such that \(\mathsf{H}_{G_{1}}\bullet\mathsf{H}_{G_{2}}=\mathsf{H}_{\pi}\). Applying the multiplicative operation \(\nabla=\nabla_{\mathsf{ParSym}}\) from \(\mathsf{ParSym}\otimes\mathsf{ParSym}\) to \(\mathsf{ParSym}\), we may write \[\nabla\left(\left(S\otimes\operatorname{id}\right)\left(\Delta\mathsf{H}_{\pi}\right)\right)=\sum_{G_{1},G_{2}}\sum(-1)^{\ell}\mathsf{H}_{\rho^{(1)}}\mathsf{H}_{\rho^{(2)}}\cdots\mathsf{H}_{\rho^{(\ell)}}\mathsf{H}_{G_{2}}, \tag{40}\] where the inner sum in (40) is as before. This gives rise to a sign-reversing involution, as indicated below. According to the index set for the double sum in (40), we fix \(G_{2}\), and, for this fixed graph, we then associate to it a tuple \((\rho^{(1)},\,\rho^{(2)},\,\ldots,\,\rho^{(\ell)})\) of nonempty partition diagrams that are irreducible with respect to \(\otimes\), by Theorem 5, and that are such that \(\mathsf{H}_{\rho^{(1)}}\bullet\mathsf{H}_{\rho^{(2)}}\bullet\cdots\bullet\mathsf{H}_{\rho^{(\ell)}}\bullet\mathsf{H}_{G_{2}}=\mathsf{H}_{\pi}\), so that \[(-1)^{\ell}\mathsf{H}_{\rho^{(1)}}\mathsf{H}_{\rho^{(2)}}\cdots\mathsf{H}_{\rho^{(\ell)}}\mathsf{H}_{G_{2}} \tag{41}\] is a given term appearing in (40). If \(G_{2}\) is nonempty, then we let the term indicated in (41) be mapped to \[(-1)^{\ell+1}\left(\mathsf{H}_{\rho^{(1)}}\mathsf{H}_{\rho^{(2)}}\cdots\mathsf{H}_{\rho^{(\ell)}}\mathsf{H}_{\rho^{(\ell+1)}}\right)\mathsf{H}_{G_{3}}, \tag{42}\] where \(\rho^{(\ell+1)}=G_{2}\) and where \(G_{3}=\varnothing\), giving a term appearing in (40). Conversely, for terms of (40) such that \(G_{2}=\varnothing\), we let such terms be mapped in such a way so that a term of the form indicated in (42) gets mapped back to (41). This gives us a sign-reversing involution that gives us that (40) vanishes if \(\pi\) is nonempty.
If \(\pi\) is empty, then \(\Delta\mathsf{H}_{\varnothing}=\mathsf{H}_{\varnothing}\otimes\mathsf{H}_{\varnothing}\), so that the application of \(S\otimes\operatorname{id}\) again gives \(\mathsf{H}_{\varnothing}\otimes\mathsf{H}_{\varnothing}\), so that the application of \(\nabla\) then gives \(1_{\mathsf{ParSym}}\). So, we have shown that \[\nabla\left(\left(S\otimes\operatorname{id}\right)\left(\Delta\mathsf{H}_{\pi}\right)\right)=\begin{cases}1_{\mathsf{ParSym}}&\text{if }\pi=\varnothing,\\ 0&\text{if }\pi\neq\varnothing,\end{cases}=\eta\left(\varepsilon\left(\mathsf{H}_{\pi}\right)\right),\] as desired. A symmetric argument gives us the same evaluation for \(\nabla((\operatorname{id}\otimes S)(\Delta\mathsf{H}_{\pi}))\). So, we have shown that the desired diagram commutes, with respect to generators \(\mathsf{H}_{\pi}\), for an \(\otimes\)-irreducible diagram \(\pi\). Using the property that \(S\) is defined as an antimorphism, together with the morphisms \(\Delta\), \(\nabla\), etc., we may obtain that the aforementioned diagram commutes in full generality.

Theorem 7 gives us that \(\mathsf{ParSym}\), with its specified morphisms, is a Hopf algebra. If we consider the antipode relation for \(\mathsf{NSym}\) shown in (34) and compare it to the definition of \(S=S_{\mathsf{ParSym}}\) in (37), this motivates the construction of an elementary-like basis for \(\mathsf{ParSym}\), as below.

**Definition 5**.: Let \(\pi\) be an irreducible partition diagram of order \(n\). We define \[\mathsf{E}_{\pi}=\sum(-1)^{\ell+n}\mathsf{H}_{\rho^{(1)}}\mathsf{H}_{\rho^{(2)}}\cdots\mathsf{H}_{\rho^{(\ell)}}, \tag{43}\] where the sum in (43) is over the same index set in (37). For irreducible partition diagrams \(\pi^{(1)}\), \(\pi^{(2)}\), \(\ldots\), \(\pi^{(p)}\), we then set \(\mathsf{E}_{\pi^{(1)}\otimes\pi^{(2)}\otimes\cdots\otimes\pi^{(p)}}=\mathsf{E}_{\pi^{(1)}}\mathsf{E}_{\pi^{(2)}}\cdots\mathsf{E}_{\pi^{(p)}}\).

**Theorem 8**.: _The family \(\{\mathsf{E}_{\pi}:\pi\text{ is a partition diagram of order }n\}\) is a basis of \(\mathsf{ParSym}_{n}\)._

Proof.: The antipode of a graded, connected Hopf algebra is always bijective. So, the set of expressions of the form \(S(\mathsf{H}_{\pi})\), for irreducible partition diagrams \(\pi\), freely generates \(\mathsf{ParSym}\), which is equivalent to the desired result.

### The character of \(\mathsf{ParSym}\)

A _Combinatorial Hopf Algebra_, according to Aguiar, Bergeron, and Sottile [2], is a graded, connected Hopf \(\Bbbk\)-algebra \(\mathscr{H}\) together with a multiplicative, linear map of the form \(\zeta\colon\mathscr{H}\to\Bbbk\) referred to as the _character_ of \(\mathscr{H}\). The direct sum decomposition of \(\mathsf{ParSym}\) indicated in (10) is such that:

1. For homogeneous generators \(\mathsf{H}_{\pi}\in\mathsf{ParSym}_{i}\) and \(\mathsf{H}_{\rho}\in\mathsf{ParSym}_{j}\), the product \(\mathsf{H}_{\pi}\mathsf{H}_{\rho}\) is in \(\mathsf{ParSym}_{i+j}\), according to the horizontal concatenation operation indicated in (9) together with \(\mathsf{ParSym}_{i+j}\) being spanned by all \(\mathsf{H}\)-basis elements indexed by partition diagrams of order \(i+j\);
2. For a homogeneous generator \(\mathsf{H}_{\pi}\in\mathsf{ParSym}_{n}\), the coproduct \(\Delta(\mathsf{H}_{\pi})\) is in \(\bigoplus_{i+j=n}\mathsf{ParSym}_{i}\otimes\mathsf{ParSym}_{j}\), according to Definition 4.

We thus obtain a graded Hopf algebra structure on \(\mathsf{ParSym}\).
Since \(\mathsf{ParSym}_{0}=\operatorname{span}_{\Bbbk}\{\mathsf{H}_{\varnothing}\}\), and since \(\mathsf{H}_{\varnothing}\) may be identified with \(1_{\Bbbk}\) in the sense indicated in (11), the graded component \(\mathsf{ParSym}_{0}\) may be identified with \(\Bbbk\), giving us that \(\mathsf{ParSym}\) is connected as a graded Hopf algebra. Following the seminal reference on CHAs [2], along with further references as in [35] and [39, §2.2], a multiplicative linear map \(\zeta\) from a polynomial \(\Bbbk\)-algebra to \(\Bbbk\) or from a \(\Bbbk\)-algebra on power series to \(\Bbbk\) is, canonically, given by setting one variable to \(1\) and the remaining variables to \(0\). In particular, for the canonical character \(\zeta\colon\Bbbk[x_{1},x_{2},\ldots]\to\Bbbk\), this is given by setting \(\zeta(x_{1})=1\) and \(\zeta(x_{i})=0\) for \(i\neq 1\) and by extending these relations linearly and multiplicatively. So, we may set \[\zeta_{\mathsf{NSym}}(H_{\alpha})=\begin{cases}1&\text{if $\alpha=(1)$ or $\alpha=()$},\\ 0&\text{otherwise},\end{cases} \tag{44}\] for a composition \(\alpha\). By direct analogy with (44), we set \[\zeta_{\mathsf{ParSym}}(\mathsf{H}_{\pi})=\begin{cases}1&\text{if $\pi$ is the order-one diagram appearing in (49) below or $\pi=\varnothing$},\\ 0&\text{otherwise},\end{cases} \tag{45}\] giving us a CHA structure for \(\mathsf{ParSym}\).

### Relationship with \(\mathsf{NSym}\)

A _combinatorial Hopf morphism_ [39, §2.2] from a CHA \((\mathscr{H}_{1},\zeta_{1})\) to a CHA \((\mathscr{H}_{2},\zeta_{2})\) is a Hopf algebra morphism \(\Phi\colon\mathscr{H}_{1}\to\mathscr{H}_{2}\) such that \(\zeta_{1}=\zeta_{2}\circ\Phi\). A Hopf algebra homomorphism \(\Phi\colon\mathscr{H}_{1}\to\mathscr{H}_{2}\) is such that \(\Phi\) is an algebra homomorphism such that \[\Delta_{2}\circ\Phi=(\Phi\otimes\Phi)\circ\Delta_{1} \tag{46}\] and \[\varepsilon_{2}\circ\Phi=\varepsilon_{1}, \tag{47}\] with (46) and (47) yielding a coalgebra morphism, where \(\Delta_{1}\) and \(\Delta_{2}\) respectively denote the coproduct operations for \(\mathscr{H}_{1}\) and \(\mathscr{H}_{2}\), and where \(\varepsilon_{1}\) and \(\varepsilon_{2}\) denote the counit morphisms of \(\mathscr{H}_{1}\) and \(\mathscr{H}_{2}\). Define \[\Phi\colon\mathsf{NSym}\to\mathsf{ParSym} \tag{48}\] so that \[\Phi(H_{n})=\mathsf{H}_{\pi}, \tag{49}\] where the partition diagram \(\pi\) on the right of (49) is of order \(n\), being given by the \(n\)-fold \(\bullet\)-product of the order-one diagram from (45) (picture omitted), with (49) extended linearly and multiplicatively. The mapping in (48) is an injective, combinatorial Hopf morphism, noting that the equality \(\zeta_{\mathsf{NSym}}=\zeta_{\mathsf{ParSym}}\circ\Phi\) follows in a direct way from (44) and (45). As below, we introduce a projection morphism from \(\mathsf{ParSym}\) to \(\mathsf{NSym}\). Let the statistic \(m=m(\pi)\) defined in Theorem 6 be extended so that \(m(\varnothing)=0\). For an \(\otimes\)-irreducible partition diagram \(\pi\), let \[\chi\colon\mathsf{ParSym}\to\mathsf{NSym} \tag{50}\] map \(\mathsf{H}_{\pi}\) to the complete homogeneous basis element \(H_{m(\pi)}\), recalling that \(m(\pi)\) is a well-defined value according to the uniqueness property given in Theorem 6. We then let \(\chi\) be extended linearly and so as to be compatible with (9).
**Example 9**.: Returning to Example 6, if we were to replace each expression of the form \(\mathsf{H}_{\pi}\) in Example 6 with \(\chi(\mathsf{H}_{\pi})\), the left-hand side of the equality in Example 6 would yield \(\Delta H_{2}\), and the right-hand side of the equality in Example 6 would yield \(H_{2}\otimes H_{0}+H_{1}\otimes H_{1}+H_{0}\otimes H_{2}\). This illustrates (46) holding, with respect to \(\chi\). Following [13, 46], we note that much of the research concerning both \(\mathsf{NSym}\) and \(\mathsf{NCSym}\) is motivated by the problem of developing a better understanding of the relationship between \(\mathsf{NSym}\) and \(\mathsf{NCSym}\). This motivates the problem of constructing a Hopf algebra that both contains and projects onto \(\mathsf{NSym}\) in natural and combinatorially inspired ways, in the hope of elucidating the aforementioned problem, with the use of combinatorial structures and properties related to \(\mathsf{NCSym}\). Since the bases of \(\mathsf{NCSym}\) are indexed by set partitions, this motivates our construction of a Hopf algebra \(\mathsf{ParSym}\) that has bases indexed by partition diagrams and that contains and projects onto \(\mathsf{NSym}\). As we recently discussed [17], it is often more expedient to work with analogues of objects from \(\mathsf{Sym}\) in the noncommutative setting given by \(\mathsf{NSym}\); this motivates our lifting \(\mathsf{ParSym}\) of \(\mathsf{NSym}\) and the study of this lifting in relation to \(\mathsf{NCSym}\). **Theorem 9**.: _The mapping in (50) is a combinatorial Hopf morphism._ Proof.: The mapping \(\chi\) is defined to be linear and to preserve the multiplicative operation for \(\mathsf{ParSym}\), and we find that \(\chi(\mathsf{H}_{\varnothing})=H_{0}\), so as to provide an algebra morphism. According to Definition 4, we may obtain that: \[(\chi\otimes\chi)\circ\Delta\mathsf{H}_{\pi}=\sum_{G_{1},G_{2}}\chi\left( \mathsf{H}_{G_{1}}\right)\otimes\chi\left(\mathsf{H}_{G_{2}}\right). \tag{51}\] According to Theorem 6, we may let \(\mathsf{H}_{G_{1}}\) and \(\mathsf{H}_{G_{2}}\), respectively, be uniquely decomposed as \(\bullet\)-products of nontrivial and \(\bullet\)-irreducible \(\mathsf{H}\)-generators of lengths \(\ell_{1}\) and \(\ell_{2}\), so that \(\mathsf{H}_{\pi}\) is uniquely decomposable as a \(\bullet\)-product of nontrivial and \(\bullet\)-irreducible \(\mathsf{H}\)-generators of length \(\ell_{1}+\ell_{2}\), which, for fixed \(\pi\), we denote as a fixed value \(\ell=m(\pi)\). Furthermore, and again by Theorem 6, for the same fixed value \(\ell\), and for a given value \(\ell_{1}\) in \(\{0,1,\ldots,\ell\}\), there is exactly one graph \(G_{1}\) such that \(\mathsf{H}_{\pi}=\mathsf{H}_{G_{1}}\bullet\mathsf{H}_{G_{2}}\) and such that \(\chi\left(\mathsf{H}_{G_{1}}\right)=H_{\ell_{1}}\), namely, the graph \(G_{1}\) such that \(\mathsf{H}_{G_{1}}\) is equal to the \(\bullet\)-product of the first \(\ell_{1}\) factors in the unique \(\bullet\)-decomposition of \(\mathsf{H}_{\pi}\); this gives us that \(\mathsf{H}_{G_{2}}\) equals the \(\bullet\)-product of the remaining factors in the same \(\bullet\)-decomposition of \(\mathsf{H}_{\pi}\), according to our characterization of \(\bullet\)-decompositions, as in Theorem 7. 
So, from (51), we may obtain that \[\left(\chi\otimes\chi\right)\left(\Delta\mathsf{H}_{\pi}\right) =\sum_{\ell_{1}+\ell_{2}=\ell}H_{\ell_{1}}\otimes H_{\ell_{2}}=\Delta\left(\chi\left(\mathsf{H}_{\pi}\right)\right).\] To obtain that \(\varepsilon_{\mathsf{NSym}}\circ\chi=\varepsilon_{\mathsf{ParSym}}\), we begin by letting \(\pi\) again be \(\otimes\)-irreducible, with \(\ell\) as before, so that we may write \[\varepsilon_{\mathsf{NSym}}\left(\chi\left(\mathsf{H}_{\pi}\right)\right) =\varepsilon_{\mathsf{NSym}}\left(H_{\ell}\right)=\begin{cases}1&\text{if $\ell=0$,}\\ 0&\text{otherwise,}\end{cases}=\begin{cases}1&\text{if $\pi=\varnothing$,}\\ 0&\text{otherwise.}\end{cases}\] By again letting \(\pi\) and \(\ell\) be as before, we may find that \[\zeta_{\mathsf{NSym}}\left(\chi\left(\mathsf{H}_{\pi}\right)\right) =\zeta_{\mathsf{NSym}}\left(H_{\ell}\right)=\begin{cases}1&\text{if $\ell\in\{0,1\}$,}\\ 0&\text{otherwise,}\end{cases}=\begin{cases}1&\text{if $\pi\in\{\varnothing,\pi_{1}\}$,}\\ 0&\text{otherwise,}\end{cases}\] so that \(\zeta_{\mathsf{ParSym}}=\zeta_{\mathsf{NSym}}\circ\chi\), as desired. ### Relationship with \(\mathsf{QSym}\) A fundamental result in the study of CHAs due to Aguiar, Bergeron, and Sottile [2], described as _beautiful_ in [7] due to the way it relates a quasisymmetric function to each element in a given CHA, may be formulated as in Theorem 10 below, where \(\mathsf{QSym}\) denotes the Hopf algebra dual to \(\mathsf{NSym}\), and where the monomial basis \(\{M_{\alpha}:\alpha\in\mathcal{C}\}\) is dual to (3), according to the pairing \(\langle\cdot,\cdot\rangle\colon\mathsf{NSym}\times\mathsf{QSym}\to\Bbbk\) such that \(\langle H_{\alpha},M_{\beta}\rangle=\delta_{\alpha,\beta}\). The character \(\zeta_{\mathsf{QSym}}\) is such that \(\zeta_{\mathsf{QSym}}(M_{\alpha})\) agrees with (44). **Theorem 10**.: _[2, Theorem 4.1] For a CHA \((\mathscr{H},\zeta)\), there is a unique morphism of CHAs from \((\mathscr{H},\zeta)\) to \((\mathsf{QSym},\zeta_{\mathsf{QSym}})\)._ Since we have proved that \(\chi\colon\mathsf{ParSym}\to\mathsf{NSym}\) is a combinatorial Hopf morphism, and since the proof of Theorem 4.1 from [2] provides a way of constructing a combinatorial Hopf morphism from \(\mathsf{NSym}\) to \(\mathsf{QSym}\), by composing the latter CHA morphism with the former, we obtain the unique CHA morphism from \(\mathsf{ParSym}\) to \(\mathsf{QSym}\). ## 4 Diagram subalgebras and Hopf subalgebras A _diagram algebra_ may be broadly understood to refer to a subalgebra of \(\mathbb{C}A_{k}(n)\). By taking subalgebras of the underlying algebra of the bialgebra \(\mathsf{ParSym}\) obtained by restricting the diagrams allowed in index sets of the form indicated in (8), e.g., according to a given family of diagram algebras, we obtain CHA structures worthy of study in relation to \(\mathsf{ParSym}\). The work of Colmenarejo et al. in [19] was devoted to the lifting of combinatorial properties associated with the symmetric group algebra \(\mathbb{C}S_{k}\), as a subalgebra of \(\mathbb{C}A_{k}(n)\), to diagram subalgebras of \(\mathbb{C}A_{k}(n)\) apart from \(\mathbb{C}S_{k}\), and this leads us to consider how the partition diagrams indexing the bases of these subalgebras could lead to _Hopf_ subalgebras of \(\mathsf{ParSym}\), as opposed to _diagram_ subalgebras of \(\mathbb{C}A_{k}(n)\). 
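The families of diagrams considered below (permuting diagrams, matchings, perfect matchings, and partial permutations) are each cut out by a simple predicate on blocks. As an illustrative companion, reusing the block encoding of the earlier snippet (all names are ours; planarity, which depends on how edges cross, is omitted from this sketch):

```
def propagation_number(pi):
    """Number of blocks containing at least one top and at least one bottom vertex."""
    return sum(1 for b in pi
               if any(s == 't' for _, s in b) and any(s == 'b' for _, s in b))

def is_matching(pi):
    return all(len(b) <= 2 for b in pi)

def is_perfect_matching(pi):
    return all(len(b) == 2 for b in pi)

def is_partial_permutation(pi):
    return all(len(b) == 1 or (len(b) == 2 and {s for _, s in b} == {'t', 'b'})
               for b in pi)

def is_permuting(pi):
    return is_perfect_matching(pi) and is_partial_permutation(pi)

# the permuting diagram of the transposition (1 2): blocks {1,2'} and {2,1'}
Sigma = [{(1, 't'), (2, 'b')}, {(2, 't'), (1, 'b')}]
print(propagation_number(Sigma), is_permuting(Sigma))  # 2 True
```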
For each of the diagram subalgebras considered in [19], with reference to Table 1 in [19], we consider, as below, a corresponding subalgebra of \(\mathsf{ParSym}\). A remarkable property of our new Hopf algebra \(\mathsf{ParSym}\) is given by how each of the families of partition diagrams associated with what may be considered as the main or most important diagram subalgebras [19] naturally gives rise to a Hopf subalgebra of \(\mathsf{ParSym}\). This nicely illustrates how the morphisms \(\Delta\) and \(S\) we have defined on \(\mathsf{ParSym}\) are natural, in terms of lifting combinatorial objects and properties associated with both \(\mathbb{C}A_{k}(n)\) and \(\mathsf{NSym}\). The _propagation number_ of a partition diagram \(\pi\) refers to the number of components in \(\pi\) that contain at least one upper vertex and at least one lower vertex. So, there is a natural correspondence between the partition diagrams in \(A_{i}\) of propagation number \(i\) and the permutations in the symmetric group \(S_{i}\). If we take \(S_{i}\) as a submonoid of \(A_{i}\), define \(\Bbbk\)-spaces obtained by replacing the index set \(A_{i}\) in (8) with \(S_{i}\), and then form a graded subalgebra of (10), subject to the same multiplication rule in (9), we are led to consider the effect of our coproduct operation for \(\mathsf{ParSym}\), restricted to the graded algebra on permutations we have formed. For a permutation \(p\) written as a permuting diagram \(\{\{1,p(1)^{\prime}\},\{2,p(2)^{\prime}\},\ldots,\{k,p(k)^{\prime}\}\}\), by writing this diagram as a concatenation of \(\otimes\)-irreducible diagrams, the resultant factors are given by irreducible permuting diagrams, and these factors are primitive, since no two bottom nodes of a permuting diagram can be in the same component; hence we obtain closure under \(\Delta\), and an equivalent argument yields closure with respect to the antipode \(S\). The subalgebra of \(\mathsf{ParSym}\) spanned by permutations is not isomorphic to the Malvenuto-Reutenauer Hopf algebra of permutations, which involves the shuffle product of permutations, as opposed to the concatenation of permutation diagrams, for the definition of its product operation. The number of generators in each degree for the subalgebra spanned by permutations we have defined is given by the Boolean transform of the sequence of factorials, indexed in the OEIS as A003319, and this sequence has often arisen in the context of Hopf algebras and Hopf monoids. This motivates the exploration of diagram superalgebras of \(\mathbb{C}S_{k}\) in \(\mathbb{C}A_{k}(n)\), in relation to \(\mathsf{ParSym}\). _Planar_ diagrams refer to partition diagrams that may be drawn as planar graphs, i.e., without edge crossings, and the dimension of the planar subalgebra of \(\mathbb{C}A_{k}(n)\) is \(\frac{1}{2k+1}\binom{4k}{2k}\). It seems that the Boolean transform for the corresponding integer sequence, giving \(\frac{1}{4n-1}\binom{4n}{2n}\), has not been considered in the context of Hopf algebras. This motivates the following result. For the sake of convenience, we may write, as below, the expression \(\pi\) in place of \(\mathsf{H}_{\pi}\). **Theorem 11**.: _The graded subalgebra of \(\mathsf{ParSym}\) spanned by planar diagrams forms a Hopf subalgebra._ Proof.: Let \(\pi\) and \(\rho\) be planar diagrams. Since \(\pi\) and \(\rho\) do not have edge crossings, placing \(\rho\) to the right of \(\pi\) would not result in an edge crossing, so \(\pi\otimes\rho\) is planar. 
Letting \(\pi\) be as before, suppose that \(\pi\) may be expressed so that \(\pi=\rho^{(1)}\bullet\rho^{(2)}\) for nonempty diagrams \(\rho^{(1)}\) and \(\rho^{(2)}\). We may add, if necessary, an edge in \(\pi\) incident with the lower right vertex of \(\rho^{(1)}\) and the lower left vertex of \(\rho^{(2)}\). Since this added edge is on the border of \(\pi\), this would not result in any crossing edges. Mimicking our proof of Theorem 6, we may remove edges from \(\pi\) in such a way that \(\pi\) may be formed by adding an isthmus incident with the lower right node of \(\rho^{(1)}\) and the lower left node of \(\rho^{(2)}\), and the removal of any edges would not result in any crossing edges. Since \(\pi\) is equivalent to the diagram given by taking \(\rho^{(1)}\), placing \(\rho^{(2)}\) to the right of \(\rho^{(1)}\), and adding an edge at the border, and since this process would not create any edge crossings, we may conclude, by the planarity of \(\pi\), that there are no edge crossings in \(\rho^{(1)}\) and no edge crossings in \(\rho^{(2)}\). So, by expanding \(\Delta\pi\) according to Definition 4, we would find that each resultant term \(\mathsf{H}_{G_{1}}\otimes\mathsf{H}_{G_{2}}\) would be such that \(G_{1}\) and \(G_{2}\) are necessarily planar as diagrams. So, we obtain closure with respect to \(\Delta\), and an equivalent argument gives us closure with respect to \(S\). The specified closure properties yield the desired result. Following the ordering as to how diagram subalgebras are introduced in [19], we proceed to define a _matching_ as a partition diagram \(\pi\) such that all blocks in \(\pi\) are of size at most two. The dimension of the corresponding subalgebra of \(\mathbb{C}A_{k}(n)\), i.e., the Rook-Brauer algebra of order \(k\), is \(\sum_{i=0}^{k}\binom{2k}{2i}(2i-1)!!\). It seems that the Boolean transform of the corresponding integer sequence has not previously been considered, which motivates the following result. **Theorem 12**.: _The graded subalgebra of \(\mathsf{ParSym}\) spanned by matchings forms a Hopf subalgebra._ Proof.: For matchings \(\pi\) and \(\rho\), placing \(\rho\) to the right of \(\pi\) does not alter the cardinalities of the blocks in \(\pi\) or the blocks in \(\rho\), so the concatenation \(\pi\otimes\rho\) is such that each block is of size one or two. Letting \(\pi\) be as before, suppose that \(\pi=\rho^{(1)}\bullet\rho^{(2)}\). By definition of the operation denoted as \(\bullet\), the lower right vertex of \(\rho^{(1)}\) is in the same component as the lower left vertex of \(\rho^{(2)}\), in the diagram \(\pi\). Since \(\pi\) is a matching, we may deduce that the aforementioned component consists of two vertices. So, with regard to our proof of Theorem 6, we would not have to remove any edges to form an isthmus. So, the diagram \(\pi\) may be obtained by taking \(\rho^{(1)}\), placing \(\rho^{(2)}\) beside \(\rho^{(1)}\), and then forming an isthmus at the border that forms a connected component of size two. This process would not result in any connected components of size greater than two, so \(\rho^{(1)}\) and \(\rho^{(2)}\) are matchings. Mimicking a line of reasoning from our proof of Theorem 11, we obtain closure with respect to \(\Delta\) and \(S\). A _perfect matching_ is a matching such that each block is of size two. The diagram subalgebra of \(\mathbb{C}A_{k}(n)\) spanned/indexed by perfect matchings is the famous Brauer algebra, which is of dimension \((2k-1)!!\). 
The Boolean transform, in this case, agrees with the OEIS sequence A000698, which is associated with many enumerative interpretations. **Theorem 13**.: _The graded subalgebra of \(\mathsf{ParSym}\) spanned by perfect matchings forms a Hopf subalgebra._ Proof.: We may mimic the \(\otimes\)-closure argument in our proof of Theorem 12 to demonstrate the desired \(\otimes\)-closure property for perfect matchings. Now, for a _perfect_ matching \(\pi\), we assume, by way of contradiction, that there exist nonempty diagrams such that \(\pi=\rho^{(1)}\bullet\rho^{(2)}\). Again, by definition of the operation \(\bullet\), the lower right vertex of \(\rho^{(1)}\) is in the same component as the lower left vertex of \(\rho^{(2)}\), in \(\pi\). So, since each connected component in \(\pi\) is of size two, we may deduce that there is a component in \(\pi\) consisting entirely of the lower right vertex of \(\rho^{(1)}\) and the lower left vertex of \(\rho^{(2)}\). Furthermore, we may deduce that the upper right vertex of \(\rho^{(1)}\) cannot be in the same component as the upper left vertex of \(\rho^{(2)}\), in \(\pi\), because, otherwise, since each component of \(\pi\) is of size two, these two upper vertices would form an edge, say \(\{i,i+1\}\), but then both \(\{i,i+1\}\) and \(\{i^{\prime},(i+1)^{\prime}\}\) would be components in \(\pi\), which is impossible, since \(\bullet\) would only have the effect of adding a bottom isthmus forming a component of size two, and this could not have any effect in terms of adding an upper edge. So, by the assumption that \(\rho^{(1)}\) and \(\rho^{(2)}\) are nonempty, each of \(\rho^{(1)}\) and \(\rho^{(2)}\) has at least two nodes; but since a given partition diagram has an even number of nodes, there would be an odd number of nodes "available" in \(\rho^{(1)}\) apart from the lower right node that would be adjacent, in \(\pi\), with the lower left node of \(\rho^{(2)}\), and we could not form a perfect matching from this odd number of "remaining" nodes. So, for a perfect matching \(\pi\), we may deduce that \(\pi\) is primitive in \(\mathsf{ParSym}\), so that we obtain closure with respect to \(\Delta\). An equivalent argument applies to closure with respect to \(S\). A _partial permutation_ is a partition diagram \(\pi\) such that each block of \(\pi\) is of size one or two and such that each block that is of size two in \(\pi\) is propagating, i.e., so that each such block has at least one upper vertex and at least one lower vertex. The diagram subalgebra of \(\mathbb{C}A_{k}(n)\) spanned or indexed by partial permutations is of dimension \(\sum_{i=0}^{k}\binom{k}{i}^{2}i!\), and it seems that the Boolean transform for such expressions has not previously been considered. **Theorem 14**.: _The graded subalgebra of \(\mathsf{ParSym}\) spanned by partial permutations forms a Hopf subalgebra._ Proof.: This may be proved in effectively the same way as in the case of permuting diagrams. Mimicking our above proof for perfect matchings, we may obtain corresponding closure properties with respect to planar perfect matchings, which are associated with Temperley-Lieb algebras, and similarly with respect to the other major diagram algebras, as with the Motzkin algebra and the planar rook algebra. ## 5 Conclusion We conclude by briefly describing some future areas of research related to \(\mathsf{ParSym}\). 
If we were to construct a CHA generated by \(\otimes\)-irreducible partition diagrams, but with a _commutative_ operation given by the disjoint union of graphs in place of the noncommutative \(\otimes\) operation, then this would give rise to a free commutative algebra and an analogue of partition diagrams, given by the equivalence classes obtained by identifying two partition diagrams whenever one may be obtained from the other by permuting the positions of \(\otimes\)-irreducible factors. We encourage the study of this free commutative algebra and its relation to Schur-Weyl duality. Given how partition diagrams are derived using the centralizer algebra in (6), what would be the appropriate analogue of this Schur-Weyl duality corresponding to the "commutative" version of partition diagrams we have suggested? By analogy with how the stack-sorting map was lifted from permutations to partition diagrams in [16], how could the shuffle product of permutations be lifted to partition diagrams, to form a new CHA on partition diagrams, but with a shuffle-type product being used in place of concatenation? This would be of interest in terms of the problem of lifting the Malvenuto-Reutenauer Hopf algebra of permutations, with the use of partition diagrams. Instead of constructing a CHA via the horizontal concatenation of partition diagrams, as above, how could a subalgebra of \(\mathbb{C}A_{k}(n)\), or a related structure such as the rook monoid algebra, that does not have a group algebra structure be endowed with a Hopf algebra structure, i.e., with a vertically defined diagram multiplication operation, by analogy with the Hopf algebra structure on the group algebra \(\mathbb{C}S_{k}\)? What is the Hopf algebra \(\mathsf{ParSym}^{*}\) that is dual to \(\mathsf{ParSym}\), i.e., how can its product and coproduct operations be defined in an explicit, combinatorial way, by analogy with how the CHA \(\mathsf{QSym}\) of quasisymmetric functions is dual to \(\mathsf{NSym}\)? Partition algebras are considered to have two distinguished bases: the diagram basis and the orbit basis. If we think of a given \(\mathsf{H}\)-basis element \(\mathsf{H}_{\pi}\) as being in correspondence with the diagram basis element \(d_{\pi}\), then one may construct an analogue of the orbit basis for \(\mathsf{ParSym}\). How could this orbit-like basis be applied in relation to the CHA structure on \(\mathsf{ParSym}\)? We have introduced liftings of the \(H\)- and \(E\)-bases of \(\mathsf{NSym}\), namely the \(\mathsf{H}\)- and \(\mathsf{E}\)-bases of \(\mathsf{ParSym}\). How could the other distinguished bases of \(\mathsf{NSym}\) be lifted to \(\mathsf{ParSym}\) in a way that is applicable to the study of the structure of partition algebras? The statistic \(m=m(\pi)\) involved in Theorem 6 may be of interest in its own right, in view of its uses, as above, in the context of the study of the structure of \(\mathsf{ParSym}\). How can \(m(\pi)\) be computed in an efficient way, in view of the algorithmic nature of our proof of Theorem 6, and what interesting properties can be derived from or associated with the number of diagrams \(\pi\) of a fixed order \(n\) such that \(m(\pi)=k\) for fixed \(k\)? Recall that we have offered a reformulation as to how \(\mathsf{NSym}\) may be defined or constructed, as in Section 3.1. 
This is of interest due to how, in contrast to the definition of \(\mathsf{NSym}\) as a free algebra with one generator in _each_ positive integer degree, our reformulation of \(\mathsf{NSym}\) is such that it is generated by a _single_ element, subject to the two operations \(\circ\) and \(\bullet\) indicated in Section 3.1. It seems that \(\mathsf{NSym}\) has not previously been considered in this way, i.e., as being freely generated by a _single_ object. How could this be explored in a category-theoretic way, with regard to how \(\mathsf{NSym}\) is universal in the category of CHAs, and in regard to the connection between \(\mathsf{NSym}\) and Grothendieck rings for finitely generated projective representations [10]? It appears that, endowed with the products \(\otimes\) and \(\bullet\), \(\mathsf{ParSym}\) has the structure of a matching associative algebra, according to the definition of this term given in [53], so that, for partition diagrams \(x\), \(y\), and \(z\), we have that \[(x\bullet y)\bullet z =x\bullet(y\bullet z),\] \[(x\bullet y)\otimes z =x\bullet(y\otimes z),\] \[(x\otimes y)\bullet z =x\otimes(y\bullet z),\] \[(x\otimes y)\otimes z =x\otimes(y\otimes z).\] It seems that \(\mathsf{ParSym}\) is freely generated by partition diagrams that are both \(\otimes\)- and \(\bullet\)-irreducible, and we encourage the exploration of this. ### Acknowledgements The author gratefully acknowledges support from a Killam Postdoctoral Fellowship. ## Competing interests statement The author has no competing interests to declare.
2305.12076
Accelerated DC Algorithms for the Asymmetric Eigenvalue Complementarity Problem
We are interested in solving the Asymmetric Eigenvalue Complementarity Problem (AEiCP) by accelerated Difference-of-Convex (DC) algorithms. Two novel hybrid accelerated DCA: the Hybrid DCA with Line search and Inertial force (HDCA-LI) and the Hybrid DCA with Nesterov's extrapolation and Inertial force (HDCA-NI), are established. We proposed three DC programming formulations of AEiCP based on Difference-of-Convex-Sums-of-Squares (DC-SOS) decomposition techniques, and applied the classical DCA and 6 accelerated variants (BDCA with exact and inexact line search, ADCA, InDCA, HDCA-LI and HDCA-NI) to the three DC formulations. Numerical simulations of 7 DCA-type methods against state-of-the-art optimization solvers IPOPT, KNITRO and FILTERSD, are reported.
Yi-Shuai Niu
2023-05-20T03:24:49Z
http://arxiv.org/abs/2305.12076v1
# Accelerated dc algorithms for the asymmetric eigenvalue complementarity problem ###### Abstract. We are interested in solving the Asymmetric Eigenvalue Complementarity Problem (AEiCP) by accelerated Difference-of-Convex (DC) algorithms. Two novel hybrid accelerated DCA: the Hybrid DCA with Line search and Inertial force (HDCA-LI) and the Hybrid DCA with Nesterov's extrapolation and Inertial force (HDCA-NI), are established. We proposed three DC programming formulations of AEiCP based on Difference-of-Convex-Sums-of-Squares (DC-SOS) decomposition techniques, and applied the classical DCA and 6 accelerated variants (BDCA with exact and inexact line search, ADCA, InDCA, HDCA-LI and HDCA-NI) to the three DC formulations. Numerical simulations of 7 DCA-type methods against state-of-the-art optimization solvers IPOPT, KNITRO and FILTERSD, are reported. Key words and phrases:Accelerated DC algorithms, asymmetric eigenvalue complementarity problem, difference-of-convex-sums-of-square decomposition 2020 Mathematics Subject Classification: Primary 65F15, 90C33, 90C30; Secondary 90C26, 93B60, 47A75 The author was supported by the Natural Science Foundation of China (Grant No: 11601327). ## 1. Introduction The Asymmetric Eigenvalue Complementarity Problem (AEiCP) consists of finding complementary eigenvectors \(x\in\mathbb{R}^{n}\setminus\{0\}\) and complementary eigenvalues \(\lambda\in\mathbb{R}\) such that (AEiCP) \[\begin{cases}w=\lambda Bx-Ax,\\ x^{\top}w=0,\\ 0\neq x\geq 0,w\geq 0.\end{cases}\] where \(x^{\top}\) is the transpose of \(x\), \(A\in\mathbb{R}^{n\times n}\) is an asymmetric real matrix, and \(B\) is a _positive definite_ (PD) matrix (not necessarily symmetric). If \(A\) and \(B\) are both symmetric, then the problem is called the _symmetric EiCP_ (SEiCP). Throughout the paper, we will denote the solution set of problem (AEiCP) (resp. SEiCP) by \(\text{AEiCP}(A,B)\) (resp. \(\text{SEiCP}(A,B)\)). The Eigenvalue Complementarity Problem appeared in the study of static equilibrium states of mechanical systems with unilateral friction [38], and has also found interest in the Spectral Theory of Graphs [12, 41]. By [19, Theorem 2.1], (AEiCP) can be expressed as a finite variational inequality on the unit simplex \(\Omega:=\{x\in\mathbb{R}^{n}:e^{\top}x=1,x\geq 0\}\), which involves finding \(\bar{x}\in\Omega\) satisfying: \[\left\langle\left(\frac{\bar{x}^{\top}A\bar{x}}{\bar{x}^{\top}B\bar{x}}B-A\right)\bar{x},x-\bar{x}\right\rangle \geq 0,\quad\forall x\in\Omega.\] If \(B\) is a _strictly copositive_ (SC) matrix (i.e., \(x^{\top}Bx>0\) for all \(x\geq 0\) and \(x\neq 0\)), then the variational inequality is guaranteed to have a solution [10]. Hence, (AEiCP) has a solution since \(B\in\mathrm{PD}\) (a special case of \(B\in\mathrm{SC}\)). Moreover, (AEiCP) has at most \(n2^{n-1}\) distinct \(\lambda\)-solutions [39, Proposition 3], and it has a positive complementary eigenvalue if \(A\) is a copositive matrix (i.e., \(x^{\top}Ax\geq 0,\forall x\geq 0\)) and \(-A\) is an \(R_{0}\) matrix (i.e., \(M\in R_{0}\) if and only if \([x\geq 0,Mx\geq 0,x^{\top}Mx=0]\Rightarrow x=0\)) [19, Theorem 2.2]. In particular, when both \(A,B\in\mathrm{PD}\), all complementary eigenvalues of (AEiCP) are positive. 
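For readers who wish to experiment, the defining conditions of (AEiCP) are straightforward to test numerically; the following is a minimal sketch (the function name and tolerance are our own choices, not from the paper).

```
import numpy as np

def is_aeicp_solution(A, B, x, lam, tol=1e-8):
    """Check w = lam*B x - A x, x >= 0, w >= 0, x^T w = 0 and x != 0."""
    w = lam * (B @ x) - A @ x
    return bool(np.all(x >= -tol) and np.all(w >= -tol)
                and abs(x @ w) <= tol and np.linalg.norm(x) > tol)

# sanity check: with A = B = I, any nonnegative x != 0 solves AEiCP with lam = 1
print(is_aeicp_solution(np.eye(2), np.eye(2), np.array([0.5, 0.5]), 1.0))  # True
```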
If \(A\) is not \(\mathrm{PD}\), then we can find some \(\mu>0\) such that \(A+\mu B\in\mathrm{PD}\) and, as shown in [25, Theorem 2.1], we have: \[(x,\lambda)\in\mathrm{EiCP}(A,B)\Leftrightarrow(x,\lambda+\mu)\in\mathrm{EiCP }(A+\mu B,B).\] Without loss of generality, throughout the paper, we assume that: **Hypothesis 1.1**.: Both \(A\) and \(B\) are \(\mathrm{PD}\) matrices for (AEiCP). In fact, solving the SEiCP is much easier than the AEiCP. For the SEiCP, there are three well-known equivalent formulations (see e.g., [25]), namely the Rayleigh quotient formulation (RP) \[\max\left\{\frac{x^{\top}Ax}{x^{\top}Bx}:x\in\Omega\right\},\] the logarithmic formulation (LnP) \[\max\left\{\ln(x^{\top}Ax)-\ln(x^{\top}Bx):x\in\Omega\right\},\] and the quadratic formulation (QP) \[\max\{x^{\top}Ax:x^{\top}Bx\leq 1,x\geq 0\}.\] These formulations are equivalent to SEiCP in the sense that if \(\bar{x}\) is a stationary point of (QP), (RP) or (LnP), then \[(\bar{x},(\bar{x}^{\top}A\bar{x})/(\bar{x}^{\top}B\bar{x}))\in\mathrm{SEiCP}( A,B).\] However, they are no longer equivalent to (AEiCP). A simple and convincing counterexample is given below: Let \(n=2\), \(B=I_{2}\) and \[A=\begin{bmatrix}-1&1\\ -2&2\end{bmatrix}.\] Then, the vector \(\bar{x}=[0,1]^{\top}\) is a stationary point of (QP), (RP) and (LnP), but \[(\bar{x},(\bar{x}^{\top}A\bar{x})/(\bar{x}^{\top}B\bar{x}))=([0,1]^{\top},2) \notin\mathrm{AEiCP}(A,B)\] because \[\bar{w}=(\bar{x}^{\top}A\bar{x})/(\bar{x}^{\top}B\bar{x})B\bar{x}-A\bar{x}= \begin{bmatrix}-1\\ 0\end{bmatrix}\not\geq 0.\] In fact, \(([0,1]^{\top},2)\in\mathrm{SEiCP}(A+A^{\top},B+B^{\top})\), but \(\mathrm{SEiCP}(A+A^{\top},B+B^{\top})\) and \(\mathrm{AEiCP}(A,B)\) are not equivalent problems. Hence, we cannot obtain a solution of (AEiCP) by searching for a stationary point of (QP), (RP) or (LnP). Concerning solution algorithms for (AEiCP), a number of algorithms have been designed, such as the Enumerative method [19, 11], the Semismooth Newton method [1], a Descent algorithm for minimizing the regularized gap function [7], Difference-of-Convex (DC) algorithms [28, 26, 27, 25], a Splitting algorithm [15], an Active-set method [6], a Projected-Gradient algorithm [17], and the Alternating Direction Method of Multipliers (ADMM) [16]. Most of these methods can be applied to both AEiCP and SEiCP. However, despite the theoretical ability of the Enumerative method to always find a solution to (AEiCP), its practical application often proves computationally demanding or even intractable for larger instances within reasonable CPU time. The other algorithms have a different drawback: they attempt to solve nonconvex optimization formulations using local optimization approaches, which may fail for many instances of (AEiCP), particularly when solution accuracy is in demand and at least one of the matrices \(A\) and \(B\) is ill-conditioned. In this paper, we will focus on solving three nonlinear programming (NLP) formulations of (AEiCP) via several accelerated DCA. First, we present in Section 2 the classical DCA and three accelerated variants of DCA (BDCA [3, 29], ADCA [36] and InDCA [9, 43]) for solving the convex constrained DC programming problem with polynomial objective function over a closed convex set. Then, the spotlight is on establishing two novel hybrid accelerated DCA, namely the _Hybrid DCA with Line search and Inertial force_ (HDCA-LI) and the _Hybrid DCA with Nesterov's extrapolation and Inertial force_ (HDCA-NI). A rigorous convergence analysis for both HDCAs is furnished. In Section 3, we delve into establishing three DC programming formulations for (AEiCP) by leveraging the DC-SOS decomposition techniques [23], acclaimed for generating superior-quality DC decompositions of polynomials. Following this, we discuss in Sections 4 and 5 the application of the aforementioned accelerated variants of DCA in solving the three DC formulations of (AEiCP). Finally, in Section 6, we report numerical simulations comparing the proposed 7 DCA-type algorithms with cutting-edge NLP solvers, namely IPOPT, KNITRO, and FILTERSD, and demonstrate their numerical performance on some large-scale and ill-conditioned instances. 
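Before moving on, the counterexample above is easy to confirm numerically; a quick NumPy sketch (our own):

```
import numpy as np

A = np.array([[-1.0, 1.0], [-2.0, 2.0]])
B = np.eye(2)
x_bar = np.array([0.0, 1.0])                     # stationary point of (QP), (RP), (LnP)
lam = (x_bar @ A @ x_bar) / (x_bar @ B @ x_bar)  # = 2.0
w_bar = lam * (B @ x_bar) - A @ x_bar            # = [-1, 0]: violates w >= 0
print(lam, w_bar, bool(np.all(w_bar >= 0)))      # 2.0 [-1.  0.] False
```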
## 2. Accelerated DC Algorithms Consider the convex constrained DC (Difference-of-Convex) program defined by (P) \[\min_{x\in\mathcal{C}}\{f(x):=g(x)-h(x)\},\] where \(\mathcal{C}\) is a nonempty closed and convex subset of a finite dimensional Euclidean space endowed with an inner product \(\langle\cdot,\cdot\rangle\) and an induced norm \(\|\cdot\|\). Both \(g\) and \(h\) are convex polynomials such that \(g\) (resp. \(h\)) is \(\rho_{g}\)-convex with \(\rho_{g}\geq 0\) (resp. \(\rho_{h}\)-convex with \(\rho_{h}\geq 0\)) over \(\mathcal{C}\), i.e., \(g(\cdot)-\frac{\rho_{g}}{2}\|\cdot\|^{2}\) (resp. \(h(\cdot)-\frac{\rho_{h}}{2}\|\cdot\|^{2}\)) is convex over \(\mathcal{C}\). If \(\rho_{g}>0\) (resp. \(\rho_{h}>0\)), then \(g\) (resp. \(h\)) is called \(\rho_{g}\)-strongly (resp. \(\rho_{h}\)-strongly) convex over \(\mathcal{C}\). In this section, we briefly introduce the standard DCA and three existing accelerated variants of DCA, namely BDCA, ADCA, and InDCA, for solving (P). Following that, we propose two novel hybrid accelerated DCAs: the _Hybrid DCA with Line search and Inertial force_ (HDCA-LI) and the _Hybrid DCA with Nesterov's extrapolation and Inertial force_ (HDCA-NI). We also provide their convergence analysis. Some characteristics of these DCA-type algorithms are summarized in Table 1, where the columns Linesearch, Inertial, and Nesterov indicate whether the algorithm requires a line search, a heavy-ball inertial force, and Nesterov's extrapolation, respectively. The Monotone column indicates the monotonicity of the generated sequence \(\{f(x^{k})\}\). \begin{table} \begin{tabular}{l|c c c c} \hline Algorithm & Linesearch & Inertial & Nesterov & Monotone \\ \hline DCA [32] & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) \\ BDCA [29] & \(\checkmark\) & \(\times\) & \(\times\) & \(\checkmark\) \\ ADCA [36] & \(\times\) & \(\times\) & \(\checkmark\) & \(\times\) \\ InDCA [9, 43] & \(\times\) & \(\checkmark\) & \(\times\) & \(\times\) \\ HDCA-LI & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) \\ HDCA-NI & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\times\) \\ \hline \end{tabular} \end{table} Table 1. Some characteristics of the DCA-type algorithms. ### Dca One of the most renowned algorithms for solving (P) is called DCA, which was first introduced by Pham Dinh Tao in 1985 as an extension of the subgradient method [35], and has been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994 (see [32, 33, 34, 22] and the references therein). DCA applied to (P) consists of constructing a sequence \(\{x^{k}\}\) by solving the convex subproblems: (DCA) \[\boxed{x^{k+1}\in\operatorname{argmin}\{g(x)-\langle x,\nabla h(x^{k})\rangle: x\in\mathcal{C}\},}\] that is, minimizing the convex majorant (cf. surrogate) of the DC function \(f\) at the iterate \(x^{k}\), in the form \(g(x)-[h(x^{k})+\langle\nabla h(x^{k}),x-x^{k}\rangle]\), obtained by linearizing \(h\) at \(x^{k}\). DCA is a descent method enjoying the following convergence properties: **Theorem 2.1** (Convergence theorem of DCA for (P), see e.g., [32, 24, 29]).: _Let \(\{x^{k}\}\) be a well-defined sequence generated by DCA for problem (P) from an initial point \(x^{0}\in\mathbb{R}^{n}\). 
If \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) and the sequence \(\{x^{k}\}\) is bounded, then_ * _(monotonicity and convergence of_ \(\{f(x^{k})\}\)_) the sequence_ \(\{f(x^{k})\}\) _is non-increasing and convergent._ * _(sufficiently descent property)_ \[f(x^{k})-f(x^{k+1})\geq\frac{\rho_{g}+\rho_{h}}{2}\|x^{k}-x^{k+1}\|^{2}, \quad\forall k\geq 1.\] * _(square summable property)_ _if_ \(\rho_{g}+\rho_{h}>0\)_, then_ \[\sum_{k\geq 0}\|x^{k+1}-x^{k}\|^{2}<\infty.\] * _(subsequential convergence of_ \(\{x^{k}\}\)_) any cluster point_ \(x^{*}\) _of the sequence_ \(\{x^{k}\}\) _is a critical point of_ (P)_, i.e.,_ \(\nabla h(x^{*})\in\partial(g+\chi_{\mathcal{C}})(x^{*})\)_._ * _(convergence of_ \(\{x^{k}\}\)_) if_ \(f\) _is a KL function,_ \(\rho_{g}+\rho_{h}>0\)_,_ \(h\) _has locally Lipschitz continuous gradient over_ \(\mathcal{C}\)_, and_ \(\mathcal{C}\) _is a semi-algebraic set, then the sequence_ \(\{x^{k}\}\) _is convergent._ _Remark 2.2_.: * DCA is a monotone algorithm in the sense that the sequence \(\{f(x^{k})\}\) is non-increasing. * The symbol \(\partial\) denotes the subdifferential of a convex function [40], and \(\chi_{\mathcal{C}}\) is the indicator function defined as \(\chi_{\mathcal{C}}(x)=0\) if \(x\in\mathcal{C}\) and \(\chi_{\mathcal{C}}(x)=\infty\) otherwise. * \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) does not necessarily imply that the sequence \(\{x^{k}\}\) is bounded. For instance, consider \(g(x)=e^{x}\), \(h(x)=0\) and \(\mathcal{C}=\mathbb{R}\). Clearly, \(\inf\{f(x):x\in\mathcal{C}\}=0>-\infty\). Then, DCA starting from any initial point \(x^{0}\in\mathbb{R}\) will generate a sequence \(\{x^{k}\}\) that tends to \(\infty.\) Conversely, it is obvious that the existence of a bounded sequence \(\{x^{k}\}\) does not imply that \(\inf_{x\in\mathcal{C}}f(x)>-\infty.\) * Without the assumption \(\inf_{x\in\mathcal{C}}f(x)>-\infty\), the convergence of \(\{f(x^{k})\}\) may fail; without the boundedness assumption on \(\{x^{k}\}\), a cluster point of \(\{x^{k}\}\) may not exist; and without the assumption \(\rho_{g}+\rho_{h}>0\), the property \(\|x^{k+1}-x^{k}\|\to 0\) may fail, so that a limit point of \(\{x^{k}\}\) may not be a critical point of (P) (see the corresponding counterexamples in [24]). * The strong convexity assumption on \(g\) or \(h\) is not restrictive and can be easily guaranteed in practice, since we can introduce the regularity term \(\frac{\rho}{2}\|x\|^{2}\) (for any \(\rho>0\)) into \(g\) and \(h\) to get a satisfactory DC decomposition as \[\underbrace{(g(x)+\frac{\rho}{2}\|x\|^{2})}_{=\tilde{g}(x)}-\underbrace{(h(x )+\frac{\rho}{2}\|x\|^{2})}_{=\tilde{h}(x)},\] where both \(\tilde{g}\) and \(\tilde{h}\) are \(\rho\)-strongly convex. * The KL (Kurdyka-Łojasiewicz) assumption plays an important role in guaranteeing the convergence of the whole sequence \(\{x^{k}\}\), which is an immediate consequence of [24, Theorem 5] (see also [21]). A KL function is a function satisfying the well-known KL property (see e.g., [24, Definition 3]), which is ubiquitous in optimization and its applications; e.g., semialgebraic, subanalytic, log and exp functions are KL functions (see [20, 5, 4] and the references therein). In particular, if \(\mathcal{C}\) is a closed convex semi-algebraic set (i.e., \(\mathcal{C}\) is described by polynomial equations and inequalities) and the objective function \(f\) has a polynomial DC decomposition \(g-h\) where \(g\) and \(h\) are \(\rho_{g}\)- and \(\rho_{h}\)-convex polynomials over \(\mathcal{C}\) with \(\rho_{g}+\rho_{h}>0\), then the convergence of \(\{x^{k}\}\) is guaranteed. 
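To make the scheme concrete, the following is a minimal Python sketch of DCA on a toy instance; the instance, the DC decomposition, and all names below are our own illustrative choices and are not taken from the paper. Since \(g(x)=\frac{\rho}{2}\|x\|^{2}\), the subproblem has the closed form \(x^{k+1}=\mathrm{Proj}_{\mathcal{C}}(\nabla h(x^{k})/\rho)\).

```
import numpy as np

# Toy DC program (our own choice, for illustration only):
# minimize f(x) = -0.5*||M x||^2 over the box C = [0,1]^n, written as f = g - h
# with g(x) = 0.5*rho*||x||^2 (rho-strongly convex) and
#      h(x) = 0.5*rho*||x||^2 + 0.5*||M x||^2 (convex).
rng = np.random.default_rng(0)
n, rho = 5, 1.0
M = rng.standard_normal((n, n))
grad_h = lambda x: rho * x + M.T @ (M @ x)
proj_C = lambda y: np.clip(y, 0.0, 1.0)    # projection onto the box C

x = rng.random(n)
for k in range(200):
    # convex subproblem argmin{ g(x) - <x, grad h(x^k)> : x in C }
    x_new = proj_C(grad_h(x) / rho)
    if np.linalg.norm(x_new - x) < 1e-10:  # iterates have stabilized
        break
    x = x_new
print(k, x)  # converges to a critical point of the toy DC program
```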
### Bdca BDCA is an accelerated DCA obtained by introducing an (exact or inexact) line search into DCA. It was first proposed by Artacho et al. in 2018 [3] for unconstrained smooth DC programs with the Armijo line search, then extended by Niu et al. in 2019 [29] to general convex constrained smooth and nonsmooth DC programs with both exact and inexact line searches. The general idea of BDCA in [29] is to introduce a line search along the _DC descent direction_ (a feasible and descent direction generated by two consecutive iterates of DCA) \(d^{k}:=z^{k}-x^{k}\) to find a better candidate \(x^{k+1}\), where \[z^{k}\in\operatorname{argmin}\{g(x)-\langle y^{k},x\rangle:x\in\mathcal{C}\},\quad y^{k}\in \partial h(x^{k}).\] The line search can be performed either exactly (\(=\)) or inexactly (\(\approx\)) to find an optimal solution or an approximate solution of the line search problem \[\alpha_{k}=\operatorname{or}\approx\operatorname{argmin}\{f(z^{k}+\alpha d^{ k}):z^{k}+\alpha d^{k}\in\mathcal{C},\alpha\geq 0\}\] with \(f(z^{k}+\alpha_{k}d^{k})\leq f(z^{k})\). Then we update \(x^{k+1}\) by \[x^{k+1}=z^{k}+\alpha_{k}d^{k}.\] BDCA for problem (P) is summarized in Algorithm 1. ``` 0:\(x^{0}\in\mathbb{R}^{n}\), \(\bar{\alpha}>0\); 1:for\(k=0,1,\ldots\)do 2:\(y^{k}\leftarrow\nabla h(x^{k})\); 3:\(z^{k}\in\operatorname*{argmin}\{g(x)-\langle y^{k},x\rangle:x\in\mathcal{C}\}\); 4:\(d^{k}\gets z^{k}-x^{k}\); 5:if\(\mathcal{A}(z^{k})\subset\mathcal{A}(x^{k})\) and \(\langle\nabla f(z^{k}),d^{k}\rangle<0\)then 6:\(x^{k+1}\leftarrow\operatorname*{LineSearch}(z^{k},d^{k},\bar{\alpha})\); 7:else 8:\(x^{k+1}\gets z^{k}\); 9:endif 10:endfor ``` **Algorithm 1** BDCA for (P) Here, \(\mathcal{A}(x)\) denotes the active set of the constraints of \(\mathcal{C}\) at \(x\). Note that \(d^{k}\) is not guaranteed to be a feasible and descent direction at \(z^{k}\) when \(\mathcal{C}\) is not a polyhedral set. In fact, it was proven in [29] that \(d^{k}\) is a 'potentially' descent direction of \(f\) at \(z^{k}\) such that * if \(f^{\prime}(z^{k};d^{k})<0\), then \(d^{k}\) is a descent direction of \(f\) at \(z^{k}\); * if \(\mathcal{C}\) is polyhedral, then \(\mathcal{A}(z^{k})\subset\mathcal{A}(x^{k})\) is a necessary and sufficient condition for \(d^{k}\) to be a feasible direction at \(z^{k}\); * if \(\mathcal{C}\) is not polyhedral, then \(\mathcal{A}(z^{k})\subset\mathcal{A}(x^{k})\) is just a necessary (but not always sufficient) condition for \(d^{k}\) to be a feasible direction at \(z^{k}\). * The line search procedure LineSearch(\(z^{k}\),\(d^{k}\),\(\bar{\alpha}\)) in line 6 can be either exact or inexact, depending on the problem size and structure. Generally speaking, finding an exact solution for \(\alpha_{k}\) is computationally expensive or intractable when the problem size is too large or the objective function \(f\) is too complicated. In this case, we suggest finding an approximate solution for \(\alpha_{k}\) using an inexact line search procedure such as the Armijo-type, Goldstein-type, or Wolfe-type rules [30, Chapter 3]. The third parameter \(\bar{\alpha}\) denotes a given upper bound for the line search stepsize, so that \(\alpha_{k}\in[0,\bar{\alpha}]\) verifies \(f(x^{k+1})\leq f(z^{k})\) with \[x^{k+1}=z^{k}+\alpha_{k}d^{k},\quad k=0,1,2,\ldots.\] Note that for an exact line search with an unbounded set \(\mathcal{C}\), if \(\bar{\alpha}\) is omitted (i.e., \(\bar{\alpha}=\infty\)), then the sequence \(\{\alpha_{k}\}\) may be unbounded. Consequently, we impose \(\bar{\alpha}\) to ensure the boundedness of the sequence \(\{\alpha_{k}\}\), which is crucial for the well-definedness of the sequence \(\{x^{k}\}\) and the convergence of the algorithm BDCA. See successful examples of BDCA with inexact Armijo-type line search in [29] and with exact line search in [44] for the higher-order moment portfolio optimization problem, as well as BDCA with exact line search in [25] for the symmetric eigenvalue complementarity problem. 
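On the same toy instance as in the DCA sketch above, the effect of the line search can be illustrated with an Armijo-type backtracking rule; this is a hedged sketch in which, for simplicity, we test feasibility of the trial point directly instead of the active-set condition \(\mathcal{A}(z^{k})\subset\mathcal{A}(x^{k})\) of Algorithm 1 (every name below is our own).

```
import numpy as np

rng = np.random.default_rng(0)
n, rho = 5, 1.0
M = rng.standard_normal((n, n))
f = lambda x: -0.5 * np.linalg.norm(M @ x) ** 2
grad_h = lambda x: rho * x + M.T @ (M @ x)
proj_C = lambda y: np.clip(y, 0.0, 1.0)

def line_search(z, d, alpha_bar=10.0, eta=1e-4, tau=0.5):
    """Armijo-type backtracking from alpha_bar, keeping z + a*d inside C."""
    a = alpha_bar
    while a > 1e-12:
        trial = z + a * d
        if np.all(trial >= 0.0) and np.all(trial <= 1.0) \
                and f(trial) <= f(z) - eta * a * (d @ d):
            return a
        a *= tau
    return 0.0   # fall back to the plain DCA point z^k

x = rng.random(n)
for k in range(200):
    z = proj_C(grad_h(x) / rho)        # DCA step
    d = z - x                          # DC descent direction d^k
    x_new = z + line_search(z, d) * d  # line-search acceleration
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
```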
BDCA enjoys the following convergence theorem: **Theorem 2.3** (Convergence theorem of BDCA Algorithm 1, see [24, 29]).: _Let \(\{x^{k}\}\) be a well-defined and bounded sequence generated by BDCA Algorithm 1 for problem (P) from an initial point \(x^{0}\in\mathbb{R}^{n}\). If \(\rho_{g}+\rho_{h}>0\) and the solution set of (P) is non-empty, then_ * _(monotonicity and convergence of_ \(\{f(x^{k})\}\)_) the sequence_ \(\{f(x^{k})\}\) _is non-increasing and convergent._ * _(convergence of_ \(\{\|x^{k}-z^{k}\|\}\) _and_ \(\{\|x^{k}-x^{k-1}\|\}\)_)_ \[\|x^{k}-z^{k}\|\xrightarrow{k\to\infty}0\quad\text{ and }\quad\|x^{k}-x^{k-1}\| \xrightarrow{k\to\infty}0.\] * _(subsequential convergence of \(\{x^{k}\}\)) any cluster point of the sequence \(\{x^{k}\}\) is a critical point of (P)._ * _(convergence of \(\{x^{k}\}\)) furthermore, if \(f\) is a KL function, \(\mathcal{C}\) is a semi-algebraic set, and \(h\) has locally Lipschitz continuous gradient over \(\mathcal{C}\), then the sequence \(\{x^{k}\}\) is convergent._ _Remark 2.4_.: BDCA, like the standard DCA, is a _descent method_. Theorem 2.3 is an immediate consequence of the general convergence theorem of BDCA for convex constrained nonsmooth DC programs established in [29]. Remark 2.2 for the standard DCA applies here as well. ### Adca ADCA is an accelerated DCA obtained by introducing Nesterov's acceleration into DCA; it was introduced by Phan et al. in 2018 [36]. The basic idea of ADCA is to compute \(\nabla h(v^{k})\) instead of \(\nabla h(x^{k})\) at iteration \(k\), where \(v^{k}\) is a more promising point than \(x^{k}\) in the sense that \(v^{k}\) is better than one of the last \(q\) iterates \(\{x^{k-q},\ldots,x^{k}\}\) in terms of the objective function, i.e., \[f(v^{k})\leq\max_{t=\max\{0,k-q\},\ldots,k}f(x^{t}). \tag{2.1}\] The candidate \(v^{k}\) is computed by Nesterov's extrapolation: \[v^{k}=x^{k}+\frac{\theta_{k}-1}{\theta_{k+1}}\left(x^{k}-x^{k-1}\right),\] where \[\theta_{0}=1,\quad\theta_{k+1}=\frac{1+\sqrt{1+4\theta_{k}^{2}}}{2},\quad \forall k\geq 0.\] If the condition (2.1) is not satisfied, then we set \(v^{k}=x^{k}\) as in DCA. ADCA applied to problem (P) is described below: ``` 0:\(x^{0}\in\mathbb{R}^{n}\), \(q\in\mathbb{N}\); 1:Initialization:\(x^{-1}=x^{0}\), \(\theta_{0}=1\); 2:for\(k=0,1,\ldots\)do 3:\(\theta_{k+1}\leftarrow\frac{1+\sqrt{1+4\theta_{k}^{2}}}{2}\); 4:\(v^{k}\gets x^{k}+\frac{\theta_{k}-1}{\theta_{k+1}}\left(x^{k}-x^{k-1} \right);\) 5:if\(f(v^{k})>\max\{f(x^{t}):t=\max\{0,k-q\},\ldots,k\}\)then 6:\(v^{k}\gets x^{k}\); 7:endif 8:\(x^{k+1}\in\operatorname*{argmin}\{g(x)-\langle x,\nabla h(v^{k})\rangle:x\in \mathcal{C}\}\); 9:endfor ``` **Algorithm 2** ADCA for (P) _Remark 2.5_.: ADCA is a _non-monotone_ algorithm due to the introduction of Nesterov's extrapolation. It acts as a descent method when \(q=0\); in this case, we choose the better candidate, in terms of the objective value, between \(v^{k}\) and \(x^{k}\) to compute \(x^{k+1}\). If \(q>0\), then ADCA can increase the objective function and consequently escape from a potentially bad local minimum. A higher value of \(q\) increases the chance of escaping bad local minima. 
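A minimal sketch of the extrapolation step and the test (2.1), again on the toy instance from the DCA sketch (all names and the choice \(q=2\) are ours):

```
import numpy as np

rng = np.random.default_rng(0)
n, rho, q = 5, 1.0, 2
M = rng.standard_normal((n, n))
f = lambda x: -0.5 * np.linalg.norm(M @ x) ** 2
grad_h = lambda x: rho * x + M.T @ (M @ x)
proj_C = lambda y: np.clip(y, 0.0, 1.0)

x_prev = x = rng.random(n)
theta = 1.0
hist = [f(x)]                                 # objective values of past iterates
for k in range(200):
    theta_next = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
    v = x + (theta - 1) / theta_next * (x - x_prev)   # Nesterov extrapolation
    theta = theta_next
    if f(v) > max(hist[-q - 1:]):             # test (2.1) failed: fall back to x^k
        v = x
    x_prev, x = x, proj_C(grad_h(v) / rho)    # gradient of h taken at v^k
    hist.append(f(x))
```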
ADCA enjoys the following convergence theorem: **Theorem 2.6** (Convergence theorem of ADCA Algorithm 2, see [36]).: _Let \(\{x^{k}\}\) be the sequence generated by ADCA Algorithm 2 for problem (P) from any initial point \(x^{0}\in\mathbb{R}^{n}\) and with \(q=0\). If \(\rho_{g}+\rho_{h}>0\), \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) and the sequence \(\{x^{k}\}\) is bounded, then any cluster point of \(\{x^{k}\}\) is a critical point of (P)._ ### InDCA Inertial DCA (InDCA) is an accelerated DCA obtained by incorporating a momentum (heavy-ball type inertial force) term into the standard DCA. This method was first introduced by de Oliveira et al. in 2019 [9], and a refined version (RInDCA) was established by Niu et al. in [43] with an enlarged inertial stepsize for faster convergence. The basic idea of InDCA is to introduce an inertial force \(\gamma(x^{k}-x^{k-1})\) (for some inertial stepsize \(\gamma>0\)) into \(y^{k}\in\partial h(x^{k})\) to get the next convex subproblem: \[x^{k+1}\in\operatorname{argmin}\{g(x)-\langle y^{k}+\gamma(x^{k}-x^{k-1}),x \rangle:x\in\mathcal{C}\},\quad y^{k}\in\partial h(x^{k}).\] InDCA applied to problem (P) is described below: ``` 0:\(x^{0}\in\mathbb{R}^{n}\); 1:Initialization:\(x^{-1}=x^{0}\), \(\gamma\in[0,\frac{\rho_{g}+\rho_{h}}{2})\); 2:for\(k=0,1,\ldots\)do 3:\(x^{k+1}\in\operatorname{argmin}\{g(x)-\langle\nabla h(x^{k})+\gamma(x^{k}-x^ {k-1}),x\rangle:x\in\mathcal{C}\}\); 4:endfor ``` **Algorithm 3** InDCA for (P) _Remark 2.7_.: Compared to the stepsize interval \([0,\rho_{h}/2)\) for \(\gamma\) suggested in [9], the enlarged stepsize interval \([0,(\rho_{g}+\rho_{h})/2)\) was proven to be adequate for the convergence of InDCA [43]. In practice, a larger stepsize leads to faster convergence. _Remark 2.8_.: Like ADCA, the sequence \(\{f(x^{k})\}\) generated by InDCA is not necessarily monotone. Numerical comparisons between ADCA and InDCA have been performed on Image Denoising [43] and Nonnegative Matrix Factorization [37]. Both experiments showed that ADCA outperformed InDCA with the smaller stepsize in \([0,\rho_{h}/2)\). However, InDCA with an enlarged stepsize can outperform ADCA (in both speed and quality) on some instances of the Image Denoising dataset (see [43]). 
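The inertial step is a one-line change to the DCA sketch above; in the toy instance, \(\rho_{g}=\rho\) and \(\rho_{h}\geq\rho\), so any \(\gamma<\rho\) lies inside the enlarged interval \([0,(\rho_{g}+\rho_{h})/2)\) (a hedged sketch; all names are ours):

```
import numpy as np

rng = np.random.default_rng(0)
n, rho = 5, 1.0
M = rng.standard_normal((n, n))
grad_h = lambda x: rho * x + M.T @ (M @ x)
proj_C = lambda y: np.clip(y, 0.0, 1.0)

gamma = 0.9 * rho                  # inertial stepsize, safely below (rho_g+rho_h)/2
x_prev = x = rng.random(n)
for k in range(200):
    y = grad_h(x) + gamma * (x - x_prev)  # gradient plus heavy-ball inertial force
    x_prev, x = x, proj_C(y / rho)        # closed-form convex subproblem
```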
The convergence of InDCA for problem (P) is described below: **Theorem 2.9** (Convergence theorem of InDCA Algorithm 3, see [43, 9]).: _Let \(\{x^{k}\}\) be the sequence generated by InDCA Algorithm 3 for problem (P) from any initial point \(x^{0}\in\mathbb{R}^{n}\). If \(\gamma\in[0,\frac{\rho_{g}+\rho_{h}}{2})\), \(\rho_{g}+\rho_{h}>0\), \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) and the sequence \(\{x^{k}\}\) is bounded, then_ * _(sufficiently descent property)_ \[f(x^{k+1})+\frac{\rho_{g}+\rho_{h}-\gamma}{2}\|x^{k+1}-x^{k}\|^{2}\leq f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2}\|x^{k}-x^{k-1}\|^{2}-\frac{\rho_{g}+\rho_{h}-2\gamma}{2}\|x^{k}-x^{k-1}\|^{2}.\] * _(convergence of_ \(\{\|x^{k}-x^{k-1}\|\}\)_)_ \(\|x^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0\)_._ * _(subsequential convergence of_ \(\{x^{k}\}\)_) any cluster point of the sequence_ \(\{x^{k}\}\) _is a critical point of (_P_)._ * _(convergence of_ \(\{x^{k}\}\)_) furthermore, if_ \(f\) _is a KL function and_ \(\mathcal{C}\) _is a semi-algebraic set, then the sequence_ \(\{x^{k}\}\) _is convergent._ ### Hdca We can combine the line search, the heavy-ball inertial force, and Nesterov's extrapolation with DCA to obtain enhanced hybrid accelerated DCA algorithms. Here, we propose two hybrid methods: the _Hybrid DCA with Line search and Inertial force_ (HDCA-LI) and the _Hybrid DCA with Nesterov's extrapolation and Inertial force_ (HDCA-NI). The rationale for these combinations is that the inertial force accelerates the gradient \(\nabla h(x^{k})\) by adding the term \(\gamma(x^{k}-x^{k-1})\), while both the line search and Nesterov's extrapolation play a similar role by accelerating \(x^{k}\) using a 'potentially' better candidate of the form \(x^{k}+\beta_{k}(x^{k}-x^{k-1})\) for some \(\beta_{k}\geq 0\). Hence, by combining the inertial force with either a line search or Nesterov's extrapolation, we may enhance the acceleration of DCA. **HDCA-LI**. The Hybrid DCA with Line search and Inertial force accelerations (HDCA-LI) is described below: ``` 0:\(x^{0}\in\mathbb{R}^{n}\), \(\bar{\alpha}>0\); 1:Initialization:\(x^{-1}=x^{0}\), \(\gamma\in[0,\frac{\rho_{g}+\rho_{h}}{1+(1+\bar{\alpha})^{2}})\); 2:for\(k=0,1,\ldots\)do 3:\(z^{k}\in\operatorname*{argmin}\{g(x)-\langle\nabla h(x^{k})+\gamma(x^{k}-x^{ k-1}),x\rangle:x\in\mathcal{C}\}\); 4:\(d^{k}\gets z^{k}-x^{k}\); 5:if\(\mathcal{A}(z^{k})\subset\mathcal{A}(x^{k})\) and \(\langle\nabla f(z^{k}),d^{k}\rangle<0\)then 6:\(x^{k+1}\leftarrow\operatorname*{LineSearch}(z^{k},d^{k},\bar{\alpha})\); 7:else 8:\(x^{k+1}\gets z^{k}\); 9:endif 10:endfor ``` **Algorithm 4** HDCA-LI for (P) Some comments on HDCA-LI include: * HDCA-LI is _non-monotone_ if \(\gamma\neq 0\), and it reduces to BDCA if \(\gamma=0\). * The upper bound of the inertial stepsize \(\gamma\) is set as \((\rho_{g}+\rho_{h})/(1+(1+\bar{\alpha})^{2})\), differing from \(\rho_{h}/2\) in InDCA [9] and \((\rho_{g}+\rho_{h})/2\) in its refined version [43]. The computation of this upper bound will be discussed in the convergence analysis of HDCA-LI (Theorem 2.10). This upper bound also reveals a trade-off between the inertial stepsize and the line search stepsize: a larger upper bound for the line search stepsize leads to a smaller upper bound for the inertial stepsize. Now, we establish the convergence theorem of HDCA-LI (similar to InDCA [43] and BDCA [29]) based on a Lyapunov analysis. The reader is also referred to [24] for a general framework for establishing the convergence analysis of DCA-type algorithms. **Theorem 2.10** (Convergence theorem of HDCA-LI Algorithm 4).: _Let \(\{x^{k}\}\) be the sequence generated by HDCA-LI Algorithm 4 for problem (P) from any initial point \(x^{0}\in\mathbb{R}^{n}\). 
Suppose that \(\rho_{g}+\rho_{h}>0\), \(\gamma\in[0,\frac{\rho_{g}+\rho_{h}}{1+(1+\bar{\alpha})^{2}})\), \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) and the sequence \(\{x^{k}\}\) is bounded. Let_ \[E_{k}:=f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2(1+\bar{\alpha})^{2}}\|x^{k}- x^{k-1}\|^{2},\forall k=1,2,\ldots.\] _Then_ * _(sufficiently descent property)_ \[E_{k+1}\leq E_{k}-\frac{\rho_{g}+\rho_{h}-\gamma(1+(1+\bar{\alpha})^{2})}{2(1 +\bar{\alpha})^{2}}\|x^{k}-x^{k-1}\|^{2},\forall k=1,2,\ldots.\] * _(convergence of_ \(\{\|x^{k}-x^{k-1}\|\}\) _and_ \(\{\|d^{k}\|\}\)_)_ \[\|x^{k}-x^{k-1}\|\to 0\quad\text{and}\quad\|d^{k}\|\to 0\text{ as }k\to\infty.\] * _(subsequential convergence of_ \(\{x^{k}\}\)_) any cluster point_ \(x^{*}\) _of the sequence_ \(\{x^{k}\}\) _is a critical point of (_P_) _(i.e.,_ \(\nabla h(x^{*})\in\nabla g(x^{*})+N_{\mathcal{C}}(x^{*})\)_)._ Proof.: **(sufficiently descent property):** By the first order optimality condition to the convex problem \[z^{k}\in\operatorname{argmin}\{g(x)-\langle\nabla h(x^{k})+\gamma(x^{k}-x^{k- 1}),x\rangle:x\in\mathcal{C}\},\] we get \[\nabla h(x^{k})+\gamma(x^{k}-x^{k-1})\in\partial(g+\chi_{\mathcal{C}})(z^{k}) =\nabla g(z^{k})+N_{\mathcal{C}}(z^{k}),\] where \(N_{\mathcal{C}}(z^{k})\) stands for the normal cone of \(\mathcal{C}\) at \(z^{k}\). Hence, \[\nabla h(x^{k})+\gamma(x^{k}-x^{k-1})-\nabla g(z^{k})\in N_{\mathcal{C}}(z^{k})\] implies that \[\langle\nabla h(x^{k})+\gamma(x^{k}-x^{k-1})-\nabla g(z^{k}),x^{k}-z^{k} \rangle\leq 0. \tag{2.2}\] Then by the \(\rho_{g}\)-convexity of \(g\), we get \[g(x^{k}) \geq g(z^{k})+\langle\nabla g(z^{k}),x^{k}-z^{k}\rangle+\frac{\rho_{g}}{ 2}\|z^{k}-x^{k}\|^{2}\] \[\stackrel{{\eqref{eq:def_def_def_def}}}{{\geq}} g(z^{k})+\langle\nabla h(x^{k})+\gamma(x^{k}-x^{k-1}),x^{k}-z^{k}\rangle+ \frac{\rho_{g}}{2}\|z^{k}-x^{k}\|^{2},\] that is \[g(x^{k})\geq g(z^{k})+\langle\nabla h(x^{k})+\gamma(x^{k}-x^{k-1}),x^{k}-z^{k }\rangle+\frac{\rho_{g}}{2}\|z^{k}-x^{k}\|^{2}. \tag{2.3}\] On the other hand, it follows from the \(\rho_{h}\)-convexity of \(h\) that \[h(z^{k})\geq h(x^{k})+\langle\nabla h(x^{k}),z^{k}-x^{k}\rangle+\frac{\rho_{h }}{2}\|z^{k}-x^{k}\|^{2}. \tag{2.4}\] Summing (2.3) and (2.4), we get \[f(x^{k})\geq f(z^{k})+\gamma\langle x^{k}-x^{k-1},x^{k}-z^{k}\rangle+\frac{ \rho_{g}+\rho_{h}}{2}\|z^{k}-x^{k}\|^{2}. \tag{2.5}\] By applying \(\langle x^{k}-x^{k-1},x^{k}-z^{k}\rangle\geq-(\|x^{k}-x^{k-1}\|^{2}+\|x^{k}-z^{ k}\|^{2})/2\) to (2.5), \[f(z^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2}\|z^{k}-x^{k}\|^{2}\leq f(x^{k})+ \frac{\gamma}{2}\|x^{k}-x^{k-1}\|^{2}. \tag{2.6}\] The (exact or inexact) line search procedure ensures that \(f(z^{k})\geq f(x^{k+1})\) and \[x^{k+1}=z^{k}+\alpha_{k}(z^{k}-x^{k}),\forall k=1,2,\dots, \tag{2.7}\] under the assumption that \(\alpha_{k}\in[0,\bar{\alpha}]\). Then \[\|x^{k+1}-x^{k}\|\stackrel{{\eqref{eq:def_def_def}}}{{=}}\|z^{k} +\alpha_{k}(z^{k}-x^{k})-x^{k}\|=(1+\alpha_{k})\|z^{k}-x^{k}\|\leq(1+\bar{ \alpha})\|z^{k}-x^{k}\|.\] It follows from \(\|x^{k+1}-x^{k}\|\leq(1+\bar{\alpha})\|z^{k}-x^{k}\|\), \(f(z^{k})\geq f(x^{k+1})\) and \(\rho_{g}+\rho_{h}>\gamma\) that \[f(z^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2}\|z^{k}-x^{k}\|^{2}\geq f(x^{k+1})+ \frac{\rho_{g}+\rho_{h}-\gamma}{2(1+\bar{\alpha})^{2}}\|x^{k+1}-x^{k}\|^{2}. 
\tag{2.8}\] Therefore, we get from (2.6) and (2.8) that \[f(x^{k+1})+\frac{\rho_{g}+\rho_{h}-\gamma}{2(1+\bar{\alpha})^{2}}\|x ^{k+1}-x^{k}\|^{2}\leq f(x^{k})+\frac{\gamma}{2}\|x^{k}-x^{k-1}\|^{2}\] \[= f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2(1+\bar{\alpha})^{2}} \|x^{k}-x^{k-1}\|^{2}-\frac{\rho_{g}+\rho_{h}-\gamma(1+(1+\bar{\alpha})^{2})}{2(1+\bar{ \alpha})^{2}}\|x^{k}-x^{k-1}\|^{2}.\] Taking \(E_{k}=f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2(1+\bar{\alpha})^{2}}\|x^{k}- x^{k-1}\|^{2}\), we then obtain \[\boxed{E_{k+1}\leq E_{k}-\frac{\rho_{g}+\rho_{h}-\gamma(1+(1+\bar{\alpha})^{2} )}{2(1+\bar{\alpha})^{2}}\|x^{k}-x^{k-1}\|^{2},\forall k=1,2,\dots.} \tag{2.9}\] **(convergence of \(\{\|x^{k}-x^{k-1}\|\}\)):** For all \(\gamma\in[0,\frac{\rho_{g}+\rho_{h}}{1+(1+\bar{\alpha})^{2}})\), we have \[\frac{\rho_{g}+\rho_{h}-\gamma(1+(1+\bar{\alpha})^{2})}{2(1+\bar{\alpha})^{2} }>0.\] Then, it follows from (2.9) that the sequence \(\{E_{k}\}\) is non-increasing. The assumption \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) and the inequality \(E_{k}=f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma}{2(1+\bar{\alpha})^{2}}\|x^{k}- x^{k-1}\|^{2}\geq f(x^{k})\) ensure that the sequence \(\{E_{k}\}\) is bounded below. Consequently, being non-increasing and bounded below, the sequence \(\{E_{k}\}\) is convergent. Let \(E_{k}\to E^{*}\) as \(k\to\infty\). Summing (2.9) for \(k\) from \(1\) to \(\infty\), we get \[\sum_{k=1}^{\infty}\|x^{k}-x^{k-1}\|^{2}\leq\frac{2(1+\bar{\alpha})^{2}}{\rho _{g}+\rho_{h}-\gamma(1+(1+\bar{\alpha})^{2})}(E_{1}-E^{*})<\infty.\] Therefore, \[\boxed{\|x^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0.}\] **(convergence of \(\{\|d^{k}\|\}\)):** It follows immediately from \[0\leq\|d^{k}\|=\|z^{k}-x^{k}\|\leq(1+\alpha_{k})\|z^{k}-x^{k}\|=\|x^{k+1}-x^{k} \|\xrightarrow{k\to\infty}0\] that \[\boxed{\|d^{k}\|\xrightarrow{k\to\infty}0.}\] **(subsequential convergence of \(\{x^{k}\}\)):** The boundedness of the sequence \(\{x^{k}\}\) implies that the set of its cluster points is non-empty, and for any cluster point \(x^{*}\) there exists a convergent subsequence, denoted by \(\{x^{k_{j}}\}_{j\in\mathbb{N}}\subset\mathcal{C}\), converging to \(x^{*}\). The closedness of \(\mathcal{C}\) indicates that the limit point \(x^{*}\) belongs to \(\mathcal{C}\). Then, we get from \(x^{k_{j}}\to x^{*}\), \(\|z^{k}-x^{k}\|\xrightarrow{k\to\infty}0\) and \(\|x^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0\) that \[z^{k_{j}}\xrightarrow{j\to\infty}x^{*}\quad\text{ and }\quad x^{k_{j}-1} \xrightarrow{j\to\infty}x^{*}.\] It follows from the first order optimality condition of the convex subproblem \[\nabla h(x^{k})+\gamma(x^{k}-x^{k-1})\in\nabla g(z^{k})+N_{\mathcal{C}}(z^{k}),\] the closedness of the graph of \(\partial\chi_{\mathcal{C}}=N_{\mathcal{C}}\), and the continuity of \(\nabla g\) and \(\nabla h\) that \[\lim_{j\to\infty}[\nabla h(x^{k_{j}})+\gamma(x^{k_{j}}-x^{k_{j}-1})]\in\nabla g (\lim_{j\to\infty}z^{k_{j}})+N_{\mathcal{C}}(\lim_{j\to\infty}z^{k_{j}}).\] That is, \[\boxed{\nabla h(x^{*})\in\nabla g(x^{*})+N_{\mathcal{C}}(x^{*}).}\] Hence, any cluster point of the sequence \(\{x^{k}\}\) is a critical point of (P). _Remark 2.11_.: The sequential convergence of the sequence \(\{x^{k}\}\) can also be established under certain regularity conditions, notably the _Łojasiewicz subgradient inequality_ or the _Kurdyka-Łojasiewicz (KL) property_. Gratifyingly, both of these conditions are naturally met for the DC program (P) with polynomial convex components \(g\) and \(h\), as well as a semi-algebraic convex set \(\mathcal{C}\). 
For specific examples illustrating the establishment of the convergence of \(\{x^{k}\}\) and the rate of convergence of both \(\{f(x^{k})\}\) and \(\{x^{k}\}\), the reader is directed to [24, Theorem 2, Lemma 3, Theorem 7, Theorem 8]. Here, we admit these results (i.e., the convergence of \(\{x^{k}\}\) and the rate of convergence of \(\{f(x^{k})\}\) and \(\{x^{k}\}\) under the KL property) and omit their proofs. **HDCA-NI**. The _Hybrid DCA with Nesterov's extrapolation and Inertial force accelerations_ (HDCA-NI) is described in Algorithm 5. ``` 0:\(x^{0}\in\mathbb{R}^{n}\), \(q\in\mathbb{N}\), \(\bar{\beta}\in(0,1)\); 1:Initialization:\(x^{-1}=x^{0}\), \(\theta_{0}=1\), \(\beta_{0}=0\), \(\delta=(1-\bar{\beta}^{2})(\rho_{g}+\rho_{h})/4\); 2:for\(k=0,1,\ldots\)do 3:\(\theta_{k+1}\leftarrow(1+\sqrt{1+4\theta_{k}^{2}})/2\); 4:\(\beta_{k}\leftarrow(\theta_{k}-1)/\theta_{k+1}\); 5:if\(\beta_{k}>\bar{\beta}\)then 6:\(\theta_{k}\gets 1\); 7:\(\beta_{k}\leftarrow\bar{\beta}\); 8:endif 9:\(v^{k}\gets x^{k}+\beta_{k}\left(x^{k}-x^{k-1}\right)\); 10:\(\gamma_{k}\in[0,((\rho_{g}+\rho_{h})(1-\beta_{k}^{2})-4\delta)/(3-\beta_{k}^ {2})]\); 11:if\(v^{k}\notin\mathcal{C}\) or \(f(v^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}>\max\{f(x ^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=\max\{0,k- q\},\ldots,k\}\)then 12:\(v^{k}\gets x^{k}\); 13:endif 14:\(x^{k+1}\in\operatorname*{argmin}\{g(x)-\langle\nabla h(v^{k})+\gamma_{k}(x^{k }-x^{k-1}),x\rangle:x\in\mathcal{C}\}\); 15:endfor ``` **Algorithm 5** HDCA-NI for (P) It should be noted that if \(\beta_{k}>\bar{\beta}\), we reset \(\theta_{k}=1\) to ensure that the sequence \(\{\beta_{k}\}_{k}\subset[0,\bar{\beta}]\) for any provided upper bound \(\bar{\beta}<1\) (typically chosen close to \(1\), such as \(\bar{\beta}=0.9\)). The inertial stepsize \(\gamma_{k}\) can be any value within \([0,\frac{(\rho_{g}+\rho_{h})(1-\beta_{k}^{2})-4\delta}{3-\beta_{k}^{2}}]\) and is suggested to be taken equal to \(\frac{(\rho_{g}+\rho_{h})(1-\beta_{k}^{2})-4\delta}{3-\beta_{k}^{2}}\) for a stronger inertial-force acceleration. The convergence analysis of HDCA-NI is established in a manner analogous to that of HDCA-LI (see Theorem 2.10) and ADCA [36], as below: **Theorem 2.12** (Convergence theorem of HDCA-NI Algorithm 5).: _Let \(\{x^{k}\}\) be the sequence generated by HDCA-NI Algorithm 5 for problem (P) from any initial point \(x^{0}\in\mathbb{R}^{n}\). Let_ \[\begin{cases}\phi(k):=\operatorname{argmin}\{\|x^{t}-x^{t-1}\|^{2}:t=k,\ldots,k+q \},\\ c_{k}:=\frac{(\beta_{k}^{2}-3)\gamma_{k}+(1-\beta_{k}^{2})(\rho_{g}+\rho_{h})}{4}, \\ E_{k}:=\max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2 }:t=\max\{0,k-q\},\ldots,k\},\end{cases}\] _for all \(k\in\mathbb{N}\). Suppose that \(\rho_{g}+\rho_{h}>0\), \(\bar{\beta}\in(0,1)\), \(\delta=(1-\bar{\beta}^{2})(\rho_{g}+\rho_{h})/4>0\), \(\inf\{f(x):x\in\mathcal{C}\}>-\infty\) and the sequence \(\{x^{k}\}\) is bounded. Then_ * _(sufficiently descent property)_ \[E_{k+1+q}\leq E_{k}-\delta\|x^{\phi(k)}-x^{\phi(k)-1}\|^{2},\forall k=0,1,2,\ldots.\] * _(convergence of_ \(\{\|x^{\phi(k)}-x^{\phi(k)-1}\|\}\) _and_ \(\{\|v^{\phi(k)}-x^{\phi(k)-1}\|\}\)_)_ \[\|x^{\phi(k)}-x^{\phi(k)-1}\|\xrightarrow{k\to\infty}0\quad\text{and}\quad \|v^{\phi(k)}-x^{\phi(k)-1}\|\xrightarrow{k\to\infty}0.\] * _(subsequential convergence of_ \(\{x^{k}\}\)_) Let_ \(q=0\)_. 
_Then any cluster point_ \(x^{*}\) _of the sequence_ \(\{x^{k}\}\) _is a critical point of (P) (i.e.,_ \(\nabla h(x^{*})\in\nabla g(x^{*})+N_{\mathcal{C}}(x^{*})\)_)._

Proof.: **(sufficient descent property):** By the first-order optimality condition of the convex subproblem \[x^{k+1}\in\operatorname{argmin}\{g(x)-\langle\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1}),x\rangle:x\in\mathcal{C}\},\] we get \[\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1})\in\nabla g(x^{k+1})+N_{\mathcal{C}}(x^{k+1}).\] Hence, \[\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1})-\nabla g(x^{k+1})\in N_{\mathcal{C}}(x^{k+1}),\] which implies that \[\langle\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1})-\nabla g(x^{k+1}),v^{k}-x^{k+1}\rangle\leq 0, \tag{2.10}\] where \(v^{k},x^{k+1}\in\mathcal{C}\) by their definitions. Then, by the \(\rho_{g}\)-convexity of \(g\) over \(\mathcal{C}\) and \(v^{k},x^{k+1}\in\mathcal{C}\), we get \[g(v^{k})\geq g(x^{k+1})+\langle\nabla g(x^{k+1}),v^{k}-x^{k+1}\rangle+\frac{\rho_{g}}{2}\|v^{k}-x^{k+1}\|^{2}\overset{(2.10)}{\geq}g(x^{k+1})+\langle\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1}),v^{k}-x^{k+1}\rangle+\frac{\rho_{g}}{2}\|v^{k}-x^{k+1}\|^{2},\] that is, \[g(v^{k})\geq g(x^{k+1})+\langle\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1}),v^{k}-x^{k+1}\rangle+\frac{\rho_{g}}{2}\|v^{k}-x^{k+1}\|^{2}. \tag{2.11}\] On the other hand, it follows from the \(\rho_{h}\)-convexity of \(h\) over \(\mathcal{C}\) that \[h(x^{k+1})\geq h(v^{k})+\langle\nabla h(v^{k}),x^{k+1}-v^{k}\rangle+\frac{\rho_{h}}{2}\|v^{k}-x^{k+1}\|^{2}. \tag{2.12}\] Summing (2.11) and (2.12), we get \[f(v^{k})\geq f(x^{k+1})+\gamma_{k}\langle x^{k}-x^{k-1},v^{k}-x^{k+1}\rangle+\frac{\rho_{g}+\rho_{h}}{2}\|v^{k}-x^{k+1}\|^{2}. \tag{2.13}\] By applying \(\langle x^{k}-x^{k-1},v^{k}-x^{k+1}\rangle\geq-(\|x^{k}-x^{k-1}\|^{2}+\|v^{k}-x^{k+1}\|^{2})/2\) to (2.13), we obtain \[f(x^{k+1})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{2}\|v^{k}-x^{k+1}\|^{2}\leq f(v^{k})+\frac{\gamma_{k}}{2}\|x^{k}-x^{k-1}\|^{2},\forall k=1,\ldots. \tag{2.14}\] Let \(\beta_{k}=\frac{\theta_{k}-1}{\theta_{k+1}}\). As \(v^{k}\) can take either \(x^{k}\) or \(x^{k}+\beta_{k}(x^{k}-x^{k-1})\), we have \[\|v^{k}-x^{k+1}\|^{2}=\begin{cases}\|x^{k}-x^{k+1}\|^{2},&\text{if $v^{k}=x^{k}$;}\\ \|x^{k}-x^{k+1}+\beta_{k}(x^{k}-x^{k-1})\|^{2},&\text{if $v^{k}=x^{k}+\beta_{k}(x^{k}-x^{k-1})$.}\end{cases}\] We get from the inequalities \[\|x^{k}-x^{k+1}+\beta_{k}(x^{k}-x^{k-1})\|^{2}\geq\frac{1}{2}\|x^{k}-x^{k+1}\|^{2}-\|\beta_{k}(x^{k}-x^{k-1})\|^{2}\] and \[\|x^{k}-x^{k+1}\|^{2}\geq\frac{1}{2}\|x^{k}-x^{k+1}\|^{2}-\|\beta_{k}(x^{k}-x^{k-1})\|^{2}\] that \[\|v^{k}-x^{k+1}\|^{2}\geq\frac{1}{2}\|x^{k}-x^{k+1}\|^{2}-\beta_{k}^{2}\|x^{k}-x^{k-1}\|^{2}. \tag{2.15}\] It follows from (2.14), (2.15) and \(\rho_{g}+\rho_{h}-\gamma_{k}>0\) (see Lemma 2.15-(i)) that \[f(x^{k+1})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k+1}-x^{k}\|^{2}\leq f(v^{k})+\left(\frac{\gamma_{k}}{2}-\frac{\beta_{k}^{2}(\rho_{g}+\rho_{h}-\gamma_{k})}{4}\right)\|x^{k}-x^{k-1}\|^{2}\] \[= f(v^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}-c_{k}\|x^{k}-x^{k-1}\|^{2},\] where \[c_{k}:=\frac{(\beta_{k}^{2}-3)\gamma_{k}+(1-\beta_{k}^{2})(\rho_{g}+\rho_{h})}{4}\geq\delta>0 \tag{2.16}\] due to Lemma 2.15-(ii).
Observing that \(E_{k}\geq f(v^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}\) for all \(k=0,1,2,\ldots\) (see Lemma 2.15-(v)) and \(c_{k}\geq\delta\), we get for all \(k=0,1,2,\ldots\), \[f(x^{k+1})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k+1}-x^{k}\|^{2}\leq E_{k}-\delta\|x^{k}-x^{k-1}\|^{2}. \tag{2.17}\] Now, we can prove by induction that for all \(t=0,\ldots,q\), the next inequality holds: \[f(x^{k+1+t})+\frac{\rho_{g}+\rho_{h}-\gamma_{k+t}}{4}\|x^{k+1+t}-x^{k+t}\|^{2}\leq E_{k}-\delta\|x^{k+t}-x^{k+t-1}\|^{2}. \tag{2.18}\] First, it follows from (2.17) that the claim holds for \(t=0\). Suppose that it holds for \(t=0,\ldots,p-1\) with \(1\leq p\leq q\). Then, we get from \(\delta>0\) that \[f(x^{k+1+t})+\frac{\rho_{g}+\rho_{h}-\gamma_{k+t}}{4}\|x^{k+1+t}-x^{k+t}\|^{2}\leq E_{k},\forall t=0,\ldots,p-1. \tag{2.19}\] Replacing \(k\) by \(k+p\) in (2.17) yields \[f(x^{k+1+p})+\frac{\rho_{g}+\rho_{h}-\gamma_{k+p}}{4}\|x^{k+1+p}-x^{k+p}\|^{2}\leq E_{k+p}-\delta\|x^{k+p}-x^{k+p-1}\|^{2}\] \[\leq E_{k}-\delta\|x^{k+p}-x^{k+p-1}\|^{2},\] where the second inequality follows from \(E_{k+p}\leq E_{k}\) for \(1\leq p\leq q\), which is guaranteed by (2.19) and Lemma 2.15-(vi). Hence, by induction, (2.18) holds for all \(t=0,\ldots,q\). Therefore, \[E_{k+1+q} = \max\{f(x^{k+1+t})+\frac{\rho_{g}+\rho_{h}-\gamma_{k+t}}{4}\|x^{k+1+t}-x^{k+t}\|^{2}:t=0,\ldots,q\}\] \[\overset{(2.18)}{\leq} E_{k}-\delta\min\big{\{}\|x^{k+t}-x^{k+t-1}\|^{2}:t=0,\ldots,q\big{\}}\] \[= E_{k}-\delta\|x^{\phi(k)}-x^{\phi(k)-1}\|^{2}.\] This is the required inequality \[\boxed{E_{k+1+q}\leq E_{k}-\delta\|x^{\phi(k)}-x^{\phi(k)-1}\|^{2},\forall k=0,1,2,\ldots.} \tag{2.20}\]

**(convergence of \(\{\|x^{\phi(k)}-x^{\phi(k)-1}\|\}\)):** Summing (2.20) for \(k\) from \(0\) to \(N\) (with \(N\geq q+1\)), we get \[\sum_{k=0}^{N}\|x^{\phi(k)}-x^{\phi(k)-1}\|^{2} \leq \frac{1}{\delta}\sum_{t=0}^{q}(E_{t}-E_{N+t+1})\leq \frac{q+1}{\delta}(E_{q}-\inf\{f(x):x\in\mathcal{C}\}),\] where the second inequality is derived from \(E_{t}\leq E_{q},\forall t=0,\ldots,q\) (see Lemma 2.15-(iv)) and \(E_{k}\geq\inf\{f(x):x\in\mathcal{C}\}>-\infty,\forall k=1,2,\ldots\) (see Lemma 2.15-(iii)). Letting \(N\to\infty\), we obtain \[\sum_{k=1}^{\infty}\|x^{\phi(k)}-x^{\phi(k)-1}\|^{2}\leq\frac{q+1}{\delta}(E_{q}-\inf\{f(x):x\in\mathcal{C}\})<\infty.\] Therefore, \[\boxed{\|x^{\phi(k)}-x^{\phi(k)-1}\|\xrightarrow{k\to\infty}0.} \tag{2.21}\]

**(convergence of \(\{\|v^{\phi(k)}-x^{\phi(k)-1}\|\}\)):** By the definition of \(v^{k}\), we have \[\|v^{k}-x^{k-1}\|^{2}=\begin{cases}\|x^{k}-x^{k-1}\|^{2},&\text{if $v^{k}=x^{k}$};\\ (1+\beta_{k})^{2}\,\|x^{k}-x^{k-1}\|^{2},&\text{if $v^{k}=x^{k}+\beta_{k}(x^{k}-x^{k-1})$}.\end{cases} \tag{2.22}\] Moreover, \[\left(1+\beta_{k}\right)^{2}\leq(1+\bar{\beta})^{2}<4,\quad\forall k=0,1,2,\ldots. \tag{2.23}\] It then follows from (2.21), (2.22) and (2.23) that \[\boxed{\|v^{\phi(k)}-x^{\phi(k)-1}\|\xrightarrow{k\to\infty}0.}\]

**(subsequential convergence of \(\{x^{k}\}\)):** Let \(q=0\).
Then we have \[\phi(k)=k\quad\text{ and }\quad E_{k}=f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}.\] Hence, the sufficient descent property established above becomes \[E_{k+1}\leq E_{k}-\delta\|x^{k}-x^{k-1}\|^{2},\quad k=0,1,2,\ldots,\] and we have \[\|x^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0\quad\text{ and }\quad\|v^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0.\] The boundedness of the sequence \(\{x^{k}\}_{k\geq 1}\subset\mathcal{C}\) implies that its set of cluster points is nonempty. The closedness of \(\mathcal{C}\) ensures that all cluster points of \(\{x^{k}\}\) belong to \(\mathcal{C}\). Then, for any cluster point \(x^{*}\) of the sequence \(\{x^{k}\}\), there exists a convergent subsequence, denoted by \(\{x^{k_{j}}\}_{j\in\mathbb{N}}\), converging to \(x^{*}\). We get from \(x^{k_{j}}\xrightarrow{j\to\infty}x^{*}\), \(\|x^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0\) and \(\|v^{k}-x^{k-1}\|\xrightarrow{k\to\infty}0\) that \[x^{k_{j}-1}\xrightarrow{j\to\infty}x^{*},\quad x^{k_{j}+1}\xrightarrow{j\to\infty}x^{*}\text{ and }v^{k_{j}}\xrightarrow{j\to\infty}x^{*}.\] It follows from the first-order optimality condition of the convex subproblem \[\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1})\in\nabla g(x^{k+1})+N_{\mathcal{C}}(x^{k+1}),\] the closedness of the graph of \(N_{\mathcal{C}}\), the continuity of \(\nabla g\) and \(\nabla h\), and the boundedness of \(\gamma_{k}\) (\(0\leq\gamma_{k}<\rho_{g}+\rho_{h}\)) that \[\lim_{j\to\infty}\nabla h(v^{k_{j}})+\gamma_{k_{j}}(x^{k_{j}}-x^{k_{j}-1})\in\nabla g(\lim_{j\to\infty}x^{k_{j}+1})+N_{\mathcal{C}}(\lim_{j\to\infty}x^{k_{j}+1}).\] That is, \[\boxed{\nabla h(x^{*})\in\nabla g(x^{*})+N_{\mathcal{C}}(x^{*}).}\] Hence, any cluster point \(x^{*}\) of \(\{x^{k}\}\) is a critical point of (P).

_Remark 2.13_.: The convergence of \(\{x^{k}\}\) and the rate of convergence for both \(\{f(x^{k})\}\) and \(\{x^{k}\}\) under the Kurdyka-Lojasiewicz property can be established in a similar way as in [24]. Therefore, we admit these results and omit their discussion.

_Remark 2.14_.: We only establish the subsequential convergence of \(\{x^{k}\}\) for the case \(q=0\). In practice, however, the sequence \(\{x^{k}\}\) appears to converge for \(q>0\) as well, and often benefits from better acceleration and superior computed solutions compared with the case \(q=0\), although the convergence analysis for \(q>0\) remains an open challenge. Therefore, a pragmatic suggestion is to initially set \(q>0\) to take advantage of better acceleration, then switch to \(q=0\) after some iterations to ensure convergence. For example, we can switch to \(q=0\) when the condition \(\beta_{k}>\bar{\beta}\) (with \(\bar{\beta}=0.9\)) is met for the first time.

The next lemma is required in the proof of Theorem 2.12:

**Lemma 2.15**.: _Under the assumptions of Theorem 2.12, we have_
1. \(\gamma_{k}<\rho_{g}+\rho_{h}\) _for all_ \(k=0,1,2,\ldots\)_._
2. \(c_{k}\geq\delta>0\) _for all_ \(k=0,1,2,\ldots\)_._
3. \(E_{k}\geq\inf\{f(x):x\in\mathcal{C}\}>-\infty,\) _for all_ \(k=1,2,\ldots\)_._
4. \(E_{k}\leq E_{q}\)_, for all_ \(k=0,\ldots,q\)_._
5. \(E_{k}\geq f(v^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2},\) _for all_ \(k=0,1,2,\ldots\)_._
6. _Suppose that_ \(E_{k}\geq f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}\) _for all_ \(t=k+1,\ldots,k+p\) _with_ \(1\leq p\leq q\)_.
Then_ \(E_{k+p}\leq E_{k}\)_._

Proof.: (i) For all \(k=0,1,2,\ldots\), we get from the definition of \(\delta=\frac{(1-\bar{\beta}^{2})(\rho_{g}+\rho_{h})}{4}\) and the inequalities \(0\leq\beta_{k}\leq\bar{\beta}<1\) that \[\gamma_{k}\leq\frac{(\rho_{g}+\rho_{h})(1-\beta_{k}^{2})-4\delta}{3-\beta_{k}^{2}}=\frac{(\rho_{g}+\rho_{h})(\bar{\beta}^{2}-\beta_{k}^{2})}{3-\beta_{k}^{2}}<\frac{(\rho_{g}+\rho_{h})(1-\beta_{k}^{2})}{3-\beta_{k}^{2}}<\rho_{g}+\rho_{h}.\]
(ii) For all \(k=0,1,2,\ldots\), we get from the definition of \(c_{k}\) that \[c_{k}=\frac{(\beta_{k}^{2}-3)\gamma_{k}+(1-\beta_{k}^{2})(\rho_{g}+\rho_{h})}{4}.\] Then, taking any \(\gamma_{k}\in[0,\frac{(\rho_{g}+\rho_{h})(1-\beta_{k}^{2})-4\delta}{3-\beta_{k}^{2}}]\) and bearing in mind that \(0\leq\beta_{k}\leq\bar{\beta}<1\), we have \[\frac{(\beta_{k}^{2}-3)\gamma_{k}+(1-\beta_{k}^{2})(\rho_{g}+\rho_{h})}{4}\geq\frac{(\beta_{k}^{2}-3)\frac{(\rho_{g}+\rho_{h})(1-\beta_{k}^{2})-4\delta}{3-\beta_{k}^{2}}+(1-\beta_{k}^{2})(\rho_{g}+\rho_{h})}{4}=\delta>0.\] Hence, \[c_{k}\geq\delta>0,\quad k=0,1,2,\ldots.\]
(iii) For all \(k=1,2,\ldots\), we have \[E_{k}= \max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=\max\{0,k-q\},\ldots,k\}\] \[\geq \max\{f(x^{t}):t=\max\{0,k-q\},\ldots,k\}\] \[\geq \inf\{f(x):x\in\mathcal{C}\}>-\infty.\] Note that \(k=0\) is not considered because \(x^{0}\) may not be a point in \(\mathcal{C}\); hence the second-to-last inequality may not hold when \(k=0\).
(iv) For all \(k=0,\ldots,q\), we have \[E_{q}= \max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=0,\ldots,q\}\] \[\geq \max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=0,\ldots,k\}=E_{k}.\]
(v) There are two possible cases for \(v^{k}\): \(\bullet\)\(v^{k}=x^{k}\), then \[f(v^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}=f(x^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}\leq E_{k}.\] \(\bullet\)\(v^{k}=x^{k}+\frac{\theta_{k}-1}{\theta_{k+1}}\left(x^{k}-x^{k-1}\right)\), which occurs only when \(f(v^{k})+\frac{\rho_{g}+\rho_{h}-\gamma_{k}}{4}\|x^{k}-x^{k-1}\|^{2}\leq E_{k}\) (line 11 of Algorithm 5).
(vi) For any \(p=1,\ldots,q\), we get from the definition of \(E_{k+p}\) that \[E_{k+p}= \max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=\max\{0,k+p-q\},\ldots,k+p\}\] \[\leq \max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=\max\{0,k-q\},\ldots,k+p\}\] \[= \max\{E_{k},\max\{f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}:t=k+1,\ldots,k+p\}\}=E_{k},\] where the last equality is due to the assumption that \(E_{k}\geq f(x^{t})+\frac{\rho_{g}+\rho_{h}-\gamma_{t}}{4}\|x^{t}-x^{t-1}\|^{2}\) for all \(t=k+1,\ldots,k+p\).

## 3. DC formulations for (AEiCP)

In this section, we will present three equivalent DC formulations for (AEiCP) using a novel DC decomposition technique for polynomials, the difference-of-convex-sums-of-squares (DC-SOS) decomposition, introduced in [23]. This entails representing any polynomial as a difference of convex sums-of-squares polynomials.

### First DC formulation

Consider the nonlinear programming (NLP) formulation of (AEiCP) presented in [19, 28] as (NLP1) \[0=\min\{f_{1}(x,y,w,z):=\|y-zx\|^{2}+x^{\top}w:(x,y,w,z)\in\mathcal{C}_{1}\},\] where \[\mathcal{C}_{1}:=\{(x,y,w,z)\in\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}:w=Bx-Ay,e^{\top}x=1,e^{\top}y=z\}\] and \(e\) denotes the vector of ones.
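As a quick sanity check of this formulation, the following minimal sketch (ours, not from [19, 28]; it assumes only NumPy, and the instance construction mirrors the random dataset used later in Section 6) evaluates \(f_{1}\) at a point satisfying the equality constraints of \(\mathcal{C}_{1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Asymmetric positive definite A = T + mu*I_n (as in the RAND dataset of Section 6)
T = rng.uniform(-1.0, 1.0, (n, n))
mu = abs(min(0.0, np.linalg.eigvalsh(T + T.T).min())) + 1.0
A = T + mu * np.eye(n)
B = 10.0 * np.eye(n)  # a simple positive definite choice for B

def f1(x, y, w, z):
    """Objective of (NLP1): ||y - z*x||^2 + x^T w."""
    return np.sum((y - z * x) ** 2) + x @ w

# A point satisfying the equality constraints of C_1; note that the sign
# constraint w >= 0 may fail at a random point, so this only probes f1:
x = rng.random(n); x /= x.sum()   # e^T x = 1, x >= 0
y = rng.random(n)                 # y >= 0
z = y.sum()                       # e^T y = z
w = B @ x - A @ y                 # w = B x - A y

print(f1(x, y, w, z))  # f1 = 0 at a global solution of (NLP1)
```

At a feasible point, \(f_{1}\geq 0\), and \(f_{1}=0\) exactly at the global solutions characterized next.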
The gradient of \(f_{1}\) is computed by \[\begin{cases}\nabla_{x}f_{1}(x,y,w,z)=2z(zx-y)+w,\\ \nabla_{y}f_{1}(x,y,w,z)=2(y-zx),\\ \nabla_{w}f_{1}(x,y,w,z)=x,\\ \nabla_{z}f_{1}(x,y,w,z)=2x^{\top}(zx-y).\end{cases} \tag{3.1}\] It is known in [19, Theorem 3.1] that for any global optimal solution \((\bar{x},\bar{y},\bar{w},\bar{z})\) of (NLP1) with zero optimal value, we have \(\bar{y}=\bar{z}\bar{x}\) and \[(\bar{x},\frac{1}{\bar{z}})\in\text{AEiCP}(A,B).\] Unlike the SEiCP, a stationary point of (NLP1) is not necessarily a solution of \(\text{AEiCP}(A,B)\). A further discussion in [19, Theorem 3.2] shows that a stationary point of (NLP1) is a solution of \(\text{AEiCP}(A,B)\) if and only if the Lagrange multipliers associated with the linear equalities \(e^{\top}x=1\) and \(e^{\top}y=z\) equal \(0\). A DC-SOS decomposition for \(\|y-zx\|^{2}\) is given by \(g(x,y,z)-h(x,y,z)\) where \[\begin{cases}g(x,y,z)=&\|y\|^{2}+\frac{((z+1)^{2}+\|y-x\|^{2})^{2}+((z-1)^{2}+\|y+x\|^{2})^{2}}{16}+\frac{(z^{2}+\|x\|^{2})^{2}}{2},\\ h(x,y,z)=&\frac{((z+1)^{2}+\|y+x\|^{2})^{2}+((z-1)^{2}+\|y-x\|^{2})^{2}}{16}+\frac{z^{4}+\|x\|^{4}}{2},\end{cases} \tag{3.2}\] and a DC-SOS decomposition for \(x^{\top}w\) reads \[x^{\top}w=\frac{\|x+w\|^{2}}{4}-\frac{\|x-w\|^{2}}{4}.\] Hence, \(f_{1}\) has a DC-SOS decomposition \(G_{1}-H_{1}\) where \[\begin{cases}G_{1}(x,y,w,z)=&\|y\|^{2}+\frac{((z+1)^{2}+\|y-x\|^{2})^{2}+((z-1)^{2}+\|y+x\|^{2})^{2}}{16}+\frac{(z^{2}+\|x\|^{2})^{2}}{2}+\frac{\|x+w\|^{2}}{4},\\ H_{1}(x,y,w,z)=&\frac{((z+1)^{2}+\|y+x\|^{2})^{2}+((z-1)^{2}+\|y-x\|^{2})^{2}}{16}+\frac{z^{4}+\|x\|^{4}}{2}+\frac{\|x-w\|^{2}}{4}.\end{cases} \tag{3.3}\] Thus, problem (NLP1) has a DC formulation as (DCP1) \[0=\min\{G_{1}(x,y,w,z)-H_{1}(x,y,w,z):(x,y,w,z)\in\mathcal{C}_{1}\}.\] The gradient of \(H_{1}\) is computed by \[\begin{cases}\nabla_{x}H_{1}(x,y,w,z)=\frac{((z+1)^{2}+\|y+x\|^{2})(x+y)}{4}+\frac{((z-1)^{2}+\|y-x\|^{2})(x-y)}{4}+\frac{x-w}{2}+2\|x\|^{2}x,\\ \nabla_{y}H_{1}(x,y,w,z)=\frac{((z+1)^{2}+\|y+x\|^{2})(x+y)}{4}-\frac{((z-1)^{2}+\|y-x\|^{2})(x-y)}{4},\\ \nabla_{w}H_{1}(x,y,w,z)=\frac{w-x}{2},\\ \nabla_{z}H_{1}(x,y,w,z)=\frac{((z+1)^{2}+\|y+x\|^{2})(z+1)}{4}+\frac{((z-1)^{2}+\|y-x\|^{2})(z-1)}{4}+2z^{3}.\end{cases} \tag{3.4}\]

### Second DC formulation

Replacing \(w\) by \(Bx-Ay\) in (NLP1), we get another NLP formulation (NLP2) \[0=\min\{f_{2}(x,y,z):=\|y-zx\|^{2}+x^{\top}(Bx-Ay):(x,y,z)\in\mathcal{C}_{2}\},\] where \[\mathcal{C}_{2}:=\{(x,y,z)\in\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}:Bx-Ay\geq 0,e^{\top}x=1,e^{\top}y=z\}.\] The gradient of \(f_{2}\) is computed by \[\begin{cases}\nabla_{x}f_{2}(x,y,z)=2z(zx-y)+(B+B^{\top})x-Ay,\\ \nabla_{y}f_{2}(x,y,z)=2(y-zx)-A^{\top}x,\\ \nabla_{z}f_{2}(x,y,z)=2x^{\top}(zx-y).\end{cases} \tag{3.5}\] By virtue of the equivalence between (NLP1) and (NLP2), it follows immediately from [19, Theorem 3.1] that for any global optimal solution \((\bar{x},\bar{y},\bar{z})\) of (NLP2) with zero optimal value, we have \(\bar{y}=\bar{z}\bar{x}\) and \[(\bar{x},\frac{1}{\bar{z}})\in\text{AEiCP}(A,B).\] A DC-SOS decomposition for \(x^{\top}Ay\) reads \[x^{\top}Ay=\frac{\|x+Ay\|^{2}}{4}-\frac{\|x-Ay\|^{2}}{4}, \tag{3.6}\] a DC-SOS decomposition for \(\|y-zx\|^{2}\) is given in (3.2), and \(x^{\top}Bx\) is convex since \(B\in\text{PD}\).
Hence, \(f_{2}\) has a DC-SOS decomposition \(G_{2}-H_{2}\) where \[\begin{cases}G_{2}(x,y,z)=&\|y\|^{2}+\frac{((z+1)^{2}+\|y-x\|^{2})^{2}+((z-1)^{2}+\|y+x\|^{2})^{2}}{16}+\frac{(z^{2}+\|x\|^{2})^{2}}{2}+x^{\top}Bx+\frac{\|x-Ay\|^{2}}{4},\\ H_{2}(x,y,z)=&\frac{((z+1)^{2}+\|y+x\|^{2})^{2}+((z-1)^{2}+\|y-x\|^{2})^{2}}{16}+\frac{z^{4}+\|x\|^{4}}{2}+\frac{\|x+Ay\|^{2}}{4}.\end{cases} \tag{3.7}\] Then problem (NLP2) has a DC formulation as (DCP2) \[0=\min\{G_{2}(x,y,z)-H_{2}(x,y,z):(x,y,z)\in\mathcal{C}_{2}\}.\] The gradient of \(H_{2}\) is computed by \[\begin{cases}\nabla_{x}H_{2}(x,y,z)&=\frac{((z+1)^{2}+\|y+x\|^{2})(x+y)}{4}+\frac{((z-1)^{2}+\|y-x\|^{2})(x-y)}{4}+\frac{x+Ay}{2}+2\|x\|^{2}x,\\ \nabla_{y}H_{2}(x,y,z)&=\frac{((z+1)^{2}+\|y+x\|^{2})(x+y)}{4}-\frac{((z-1)^{2}+\|y-x\|^{2})(x-y)}{4}+\frac{A^{\top}(Ay+x)}{2},\\ \nabla_{z}H_{2}(x,y,z)&=\frac{((z+1)^{2}+\|y+x\|^{2})(z+1)}{4}+\frac{((z-1)^{2}+\|y-x\|^{2})(z-1)}{4}+2z^{3}.\end{cases} \tag{3.8}\]

### Third DC formulation

An NLP formulation proposed in [28] reads (NLP3) \[0=\min\{f_{3}(x,y,w):=\|y\|^{2}+x^{\top}w-\frac{(x^{\top}y)^{2}}{\|x\|^{2}}:(x,y,w)\in\mathcal{C}_{3}\},\] where \[\mathcal{C}_{3}:=\{(x,y,w)\in\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}^{n}\times\mathbb{R}_{+}^{n}:w=Bx-Ay,e^{\top}x=1\}.\] The gradient of \(f_{3}\) is computed by \[\begin{cases}\nabla_{x}f_{3}(x,y,w)=w-2\frac{x^{\top}y}{\|x\|^{2}}y+2\frac{(x^{\top}y)^{2}}{\|x\|^{4}}x,\\ \nabla_{y}f_{3}(x,y,w)=2y-2\frac{x^{\top}y}{\|x\|^{2}}x,\\ \nabla_{w}f_{3}(x,y,w)=x.\end{cases} \tag{3.9}\]

**Theorem 3.1**.: _Under Hypothesis 1.1, let \((\bar{x},\bar{y},\bar{w})\) be a global optimal solution of (NLP3) with zero optimal value, then_ \[(\bar{x},\frac{\|\bar{x}\|}{\|\bar{y}\|})\in\text{AEiCP}(A,B).\]

Proof.: Let \((\bar{x},\bar{y},\bar{w})\) be a global optimal solution of (NLP3) with zero optimal value; we have \[\begin{cases}\|\bar{y}\|^{2}+\bar{x}^{\top}\bar{w}-\frac{(\bar{x}^{\top}\bar{y})^{2}}{\|\bar{x}\|^{2}}=0,\\ \bar{w}=B\bar{x}-A\bar{y}\geq 0,e^{\top}\bar{x}=1,\bar{x}\geq 0,\bar{y}\geq 0.\end{cases} \tag{3.10}\] Then \[0=\|\bar{y}\|^{2}+\bar{x}^{\top}\bar{w}-\frac{(\bar{x}^{\top}\bar{y})^{2}}{\|\bar{x}\|^{2}}\geq\|\bar{y}\|^{2}+\bar{x}^{\top}\bar{w}-\frac{\|\bar{x}\|^{2}\|\bar{y}\|^{2}}{\|\bar{x}\|^{2}}=\bar{x}^{\top}\bar{w}\geq 0,\] where the first inequality is due to the Cauchy-Schwarz inequality \(|\bar{x}^{\top}\bar{y}|\leq\|\bar{x}\|\|\bar{y}\|\), and the last inequality is due to \(\bar{x}\geq 0\) and \(\bar{w}\geq 0\). Hence, \[\bar{x}^{\top}\bar{w}=0\quad\text{ and }\quad|\bar{x}^{\top}\bar{y}|=\|\bar{x}\|\|\bar{y}\|. \tag{3.11}\] It follows from \(|\bar{x}^{\top}\bar{y}|=\|\bar{x}\|\|\bar{y}\|,\bar{x}\geq 0\), \(\bar{y}\geq 0\) and \(e^{\top}\bar{x}=1\) that there exists a positive scalar \(\bar{z}>0\) such that \[\bar{y}=\bar{z}\bar{x},\quad\bar{x}\neq 0. \tag{3.12}\] Therefore, \[\bar{w}=B\bar{x}-A\bar{y}=B\bar{x}-\bar{z}A\bar{x}\geq 0. \tag{3.13}\] Combining (3.10), (3.11) and (3.13), we conclude that \[(\bar{x},\frac{1}{\bar{z}})\in\text{AEiCP}(A,B).\] \(\bar{z}\) is obtained by substituting (3.12) into \(|\bar{x}^{\top}\bar{y}|=\|\bar{x}\|\|\bar{y}\|\), which gives \(\bar{z}=\|\bar{y}\|/\|\bar{x}\|\).

Consider the objective function of (NLP3): \[f_{3}(x,y,w)=\|y\|^{2}+x^{\top}w-\frac{(x^{\top}y)^{2}}{\|x\|^{2}}.\] Let \(\varphi(x,y):=(x^{\top}y)^{2}/\|x\|^{2}\).
Then \(\varphi\) is a smooth non-convex function over \(\Omega\times\mathbb{R}^{n}_{+}\), with its gradient and Hessian computed by \[\nabla_{x}\varphi(x,y)=2\frac{x^{\top}y}{\|x\|^{2}}y-2\frac{(x^{\top}y)^{2}}{\|x\|^{4}}x,\quad\nabla_{y}\varphi(x,y)=2\frac{x^{\top}y}{\|x\|^{2}}x,\] and \[\nabla^{2}\varphi(x,y)=\begin{bmatrix}\nabla_{x,x}^{2}\varphi(x,y)&\nabla_{x,y}^{2}\varphi(x,y)\\ \nabla_{y,x}^{2}\varphi(x,y)&\nabla_{y,y}^{2}\varphi(x,y)\end{bmatrix},\] where \[\begin{cases}\nabla_{x,x}^{2}\varphi(x,y)=\frac{2}{\|x\|^{2}}(yy^{\top})+\frac{8(x^{\top}y)^{2}}{\|x\|^{6}}(xx^{\top})-\frac{2(x^{\top}y)^{2}}{\|x\|^{4}}I_{n}-\frac{4x^{\top}y}{\|x\|^{4}}(xy^{\top}+yx^{\top}),\\ \nabla_{x,y}^{2}\varphi(x,y)=(\nabla_{y,x}^{2}\varphi(x,y))^{\top}=\frac{2}{\|x\|^{2}}(xy^{\top})+\frac{2x^{\top}y}{\|x\|^{2}}I_{n}-\frac{4(x^{\top}y)}{\|x\|^{4}}(xx^{\top}),\\ \nabla_{y,y}^{2}\varphi(x,y)=\frac{2}{\|x\|^{2}}(xx^{\top}).\end{cases}\] The next proposition gives a DC decomposition for \(\varphi(x,y)\).

**Proposition 3.2**.: _The function \(\varphi(x,y)=(x^{\top}y)^{2}/\|x\|^{2}\) has a DC decomposition over \(\mathcal{C}_{3}\) as_ \[\varphi(x,y)=\Big{[}\frac{\eta}{2}\|(x,y)\|^{2}+\varphi(x,y)\Big{]}-\frac{\eta}{2}\|(x,y)\|^{2},\] _for any_ \[\eta\geq 3.2+20nM^{2}, \tag{3.14}\] _where \(M\) is computed by solving the linear program_ \[M=\max\{e^{\top}y:(x,y,w)\in\mathcal{C}_{3}\}. \tag{3.15}\]

Proof.: We can compute a large enough \(\eta\) by estimating an upper bound of the spectral radius of \(\nabla^{2}\varphi(x,y)\) over \(\Omega\times\mathbb{R}^{n}_{+}\). Note that \((y^{\top}x)\) is the only possible nonzero eigenvalue of the rank-one matrix \(xy^{\top}\), with associated eigenvector \(x\), and that \(\|xy^{\top}\|=\|x\|\|y\|\). Then \[\|\nabla^{2}_{x,x}\varphi(x,y)\|\leq 20\frac{\|y\|^{2}}{\|x\|^{2}},\quad\|\nabla^{2}_{x,y}\varphi(x,y)\|\leq 8\frac{\|y\|}{\|x\|},\quad\|\nabla^{2}_{y,y}\varphi(x,y)\|=2.\] Hence \[\rho(\nabla^{2}\varphi(x,y)) =\|\nabla^{2}\varphi(x,y)\|\] \[\leq\sqrt{\|\nabla^{2}_{x,x}\varphi(x,y)\|^{2}+2\|\nabla^{2}_{x,y}\varphi(x,y)\|^{2}+\|\nabla^{2}_{y,y}\varphi(x,y)\|^{2}}\] \[\leq 20\frac{\|y\|^{2}}{\|x\|^{2}}+3.2.\] The term \(\|y\|/\|x\|\) over \(\mathcal{C}_{3}\) is upper bounded by \[\frac{\|y\|}{\|x\|}\leq\frac{e^{\top}y}{\|x\|}\leq\sqrt{n}e^{\top}y\leq\sqrt{n}\max_{(x,y,w)\in\mathcal{C}_{3}}e^{\top}y.\] Then, taking any \[\eta\geq 3.2+20n\left(\max_{(x,y,w)\in\mathcal{C}_{3}}e^{\top}y\right)^{2},\] we get the desired DC decomposition for \(\varphi(x,y)\) over \(\mathcal{C}_{3}\).

We derive from Proposition 3.2 a DC decomposition for \(f_{3}\) as \(G_{3}-H_{3}\): \[\begin{cases}G_{3}(x,y,w)=(1+\frac{\eta}{2})\|y\|^{2}+\frac{\eta}{2}\|x\|^{2}+\frac{\|x+w\|^{2}}{4},\\ H_{3}(x,y,w)=\frac{\eta}{2}\|(x,y)\|^{2}+\frac{(x^{\top}y)^{2}}{\|x\|^{2}}+\frac{\|x-w\|^{2}}{4}.\end{cases} \tag{3.16}\] Then problem (NLP3) has a DC formulation as (DCP3) \[0=\min\{G_{3}(x,y,w)-H_{3}(x,y,w):(x,y,w)\in\mathcal{C}_{3}\}.\] The gradient of \(H_{3}\) is computed by \[\begin{cases}\nabla_{x}H_{3}(x,y,w)&=\eta x+2\frac{x^{\top}y}{\|x\|^{2}}y-2\frac{(x^{\top}y)^{2}}{\|x\|^{4}}x+\frac{(x-w)}{2},\\ \nabla_{y}H_{3}(x,y,w)&=\eta y+2\frac{x^{\top}y}{\|x\|^{2}}x,\\ \nabla_{w}H_{3}(x,y,w)&=\frac{w-x}{2}.\end{cases} \tag{3.17}\]

## 4. BDCA and DCA for solving (AEiCP)

The DC formulations proposed in the previous section, namely (DCP1), (DCP2) and (DCP3), belong to the convex constrained DC program (P). In this section, we will discuss how to apply BDCA and DCA for solving these DC formulations.
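At a high level, all three specializations below share the same iteration skeleton; the following sketch (ours; the callables `solve_subproblem`, `grad_f`, `active_set` and `line_search` are problem-specific placeholders with hypothetical names, to be instantiated as described in the rest of this section) records it once:

```python
import numpy as np

def bdca(x0, solve_subproblem, grad_f, active_set, line_search=None, max_iter=200):
    """Generic BDCA iteration (sketch). With line_search=None it is plain DCA.

    solve_subproblem(X): argmin{ G(V) - <V, grad_H(X)> : V in C }
    active_set(X):       set of active indices of the constraint set at X
    line_search(V, D):   a point V + alpha*D with the objective decreased
    """
    X = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        V = solve_subproblem(X)   # DCA step (line 3 of Algorithm 6 below)
        D = V - X                 # candidate direction (line 5)
        boost = (line_search is not None
                 and active_set(V) <= active_set(X)  # A(V) subset of A(X) (line 6)
                 and grad_f(V) @ D < 0)              # descent direction at V
        X = line_search(V, D) if boost else V        # boosted step (line 7) or plain V
    return X
```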
The standard DCA is regarded as a special case of BDCA without line search. The solution methods for the convex subproblems required in these DC algorithms, and the line search procedures (exact and inexact) required in BDCA, will be discussed.

### BDCA and DCA for (DCP1)

BDCA for (DCP1) is summarized in Algorithm 6.

```
0: \(x^{0}\in\Omega\), \(y^{0}\in\mathbb{R}_{+}^{n}\), \(\bar{\alpha}>0\);
1: initialize \(X^{0}\leftarrow(x^{0},y^{0},Bx^{0}-Ay^{0},e^{\top}y^{0})\);
2: for \(k=0,1,\ldots\) do
3:   solve (CP1) to get a solution \(V^{k}\);
4:   initialize \(X^{k+1}\leftarrow V^{k}\);
5:   set \(D^{k}\leftarrow V^{k}-X^{k}\);
6:   if \(\mathcal{A}_{1}(V^{k})\subset\mathcal{A}_{1}(X^{k})\) and \(\langle\nabla f_{1}(V^{k}),D^{k}\rangle<0\) then
7:     set \(X^{k+1}\leftarrow\)LineSearch\((V^{k},D^{k},\bar{\alpha})\);
8:   end if
9: end for
```
**Algorithm 6** BDCA for (DCP1)

Some comments are given below:

\(\bullet\) DCA for (DCP1) is just BDCA without lines 5 to 8.

\(\bullet\) The convex subproblem in line 3 is defined by (CP1) \[V^{k}\in\operatorname*{argmin}_{(x,y,w,z)\in\mathcal{C}_{1}}\{G_{1}(x,y,w,z)-\langle(x,y,w,z),\nabla H_{1}(x^{k},y^{k},w^{k},z^{k})\rangle\},\] where \(G_{1}\), \(H_{1}\) and \(\nabla H_{1}\) are given in (3.3) and (3.4), respectively. We can of course introduce a strongly convex quadratic term \(\frac{\rho}{2}\|(x,y,w,z)\|^{2}\) (for any \(\rho>0\)) into both \(G_{1}\) and \(H_{1}\) to ensure that the DC components are \(\rho\)-strongly convex. The problem (CP1) can obviously be solved via many first- and second-order nonlinear optimization approaches, such as gradient-type, Newton-type, and interior-point methods. Some NLP solvers are available, such as IPOPT, KNITRO, FILTERSD, CVX and MATLAB FMINCON, but these solvers may not be efficient when dealing with large-scale and ill-conditioned instances. To enhance the efficiency of solving the subproblem, especially for better handling large-scale cases, we propose reformulating (CP1) as a quadratic programming (QP) problem, which can be addressed using more efficient QP or Second-Order Cone Programming (SOCP) solvers such as MOSEK [2], GUROBI [31] and CPLEX [14]. The QP formulation is presented below.

**QP formulation for (CP1):** By introducing the additional variable \[u:=(u_{x},u_{z},u_{z+1},u_{z-1},u_{y+x},u_{y-x})\in\mathbb{R}^{6} \tag{4.1}\] and the six associated quadratic constraints \[\mathcal{Q}\mathcal{C}_{1}=\begin{cases}\|x\|^{2}\leq u_{x},\\ \|z\|^{2}\leq u_{z},\\ (z+1)^{2}\leq u_{z+1},\\ (z-1)^{2}\leq u_{z-1},\\ \|y+x\|^{2}\leq u_{y+x},\\ \|y-x\|^{2}\leq u_{y-x},\end{cases} \tag{4.2}\] (CP1) is formulated as the quadratic program: (QP1) \[\min_{(x,y,w,z,u)\in\mathcal{C}_{1}\cap\mathcal{Q}\mathcal{C}_{1}}\bar{G}_{1}(x,y,w,u)-\langle(x,y,w,z),\nabla H_{1}(x^{k},y^{k},w^{k},z^{k})\rangle,\] where \[\bar{G}_{1}(x,y,w,u):=\|y\|^{2}+\frac{(u_{z+1}+u_{y-x})^{2}+(u_{z-1}+u_{y+x})^{2}}{16}+\frac{(u_{z}+u_{x})^{2}}{2}+\frac{\|x+w\|^{2}}{4}\] is derived from \(G_{1}\) by replacing the corresponding squares with the terms of \(u\) in (4.2). For the strongly convex DC formulation with the additional term \(\frac{\rho}{2}\|(x,y,w,z)\|^{2}\) in both \(G_{1}\) and \(H_{1}\), its convex subproblem has a similar QP formulation: \[\min_{(x,y,w,z,u)\in\mathcal{C}_{1}\cap\mathcal{Q}\mathcal{C}_{1}}\bar{G}_{1}(x,y,w,u)+\frac{\rho}{2}\|(x,y,w,z)\|^{2}-\langle(x,y,w,z),\xi^{k}\rangle, \tag{4.3}\] where \(\xi^{k}=\rho(x^{k},y^{k},w^{k},z^{k})+\nabla H_{1}(x^{k},y^{k},w^{k},z^{k})\).
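For prototyping purposes, (QP1) can be written almost verbatim in a convex modeling tool. The sketch below is ours and assumes the Python package cvxpy (the experiments of Section 6 instead use MATLAB with a QP/SOCP solver); `solve_qp1` and `gradH1` are hypothetical names:

```python
import cvxpy as cp

def solve_qp1(A, B, gradH1):
    """Solve one (QP1) subproblem (sketch). gradH1 = (hx, hy, hw, hz) is
    the gradient of H1 at the current iterate (x^k, y^k, w^k, z^k)."""
    hx, hy, hw, hz = gradH1
    n = len(hx)
    x = cp.Variable(n, nonneg=True)
    y = cp.Variable(n, nonneg=True)
    w = cp.Variable(n, nonneg=True)
    z = cp.Variable(nonneg=True)
    ux, uz, uzp, uzm, uypx, uymx = (cp.Variable() for _ in range(6))
    constraints = [
        w == B @ x - A @ y, cp.sum(x) == 1, cp.sum(y) == z,   # C_1
        cp.sum_squares(x) <= ux, cp.square(z) <= uz,          # QC_1
        cp.square(z + 1) <= uzp, cp.square(z - 1) <= uzm,
        cp.sum_squares(y + x) <= uypx, cp.sum_squares(y - x) <= uymx,
    ]
    G1bar = (cp.sum_squares(y)
             + (cp.square(uzp + uymx) + cp.square(uzm + uypx)) / 16
             + cp.square(uz + ux) / 2
             + cp.sum_squares(x + w) / 4)
    linear = hx @ x + hy @ y + hw @ w + hz * z
    cp.Problem(cp.Minimize(G1bar - linear), constraints).solve()  # e.g. solver=cp.MOSEK
    return x.value, y.value, w.value, z.value
```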
Note that the constraint \(\mathcal{C}_{1}\cap\mathcal{Q}\mathcal{C}_{1}\) and the function \(\bar{G}_{1}\) in (QP1) and (4.3) do not depend on the iteration \(k\). Hence, we can generate them before starting the first iteration. The equivalence between (CP1) and (QP1) is established below:

**Theorem 4.1**.: _Let \((\bar{x},\bar{y},\bar{w},\bar{z},\bar{u})\) be an optimal solution of (QP1), then \((\bar{x},\bar{y},\bar{w},\bar{z})\) is an optimal solution of (CP1). Conversely, let \((\bar{x},\bar{y},\bar{w},\bar{z})\) be an optimal solution of (CP1), then \((\bar{x},\bar{y},\bar{w},\bar{z},\bar{u})\) with_ \[\bar{u}=(\|\bar{x}\|^{2},\|\bar{z}\|^{2},(\bar{z}+1)^{2},(\bar{z}-1)^{2},\|\bar{y}+\bar{x}\|^{2},\|\bar{y}-\bar{x}\|^{2})\] _is an optimal solution of (QP1)._

Proof.: We just need to show that, for any optimal solution of (QP1), equality holds in all quadratic constraints in \(\mathcal{Q}\mathcal{C}_{1}\). By contradiction, let \((\bar{x},\bar{y},\bar{w},\bar{z},\bar{u})\) be an optimal solution of (QP1) such that there exists a constraint in \(\mathcal{Q}\mathcal{C}_{1}\) with strict inequality; e.g., suppose that \(\|\bar{x}\|^{2}<\bar{u}_{x}\). Then we can take \(\bar{u}_{x}=\|\bar{x}\|^{2}\) and keep the same values for all other variables, which leads to a feasible solution with a smaller value of \(\bar{G}_{1}\); this contradicts the optimality assumption.

\(\bullet\)\(\mathcal{A}_{1}(X)\) denotes the active set of \(\mathcal{C}_{1}\) at \(X:=(x,y,w,z)\), defined as \[\mathcal{A}_{1}(X)=\{i\in\{1,\ldots,3n+1\}:X_{i}=0\}.\] Since \(\mathcal{C}_{1}\) is polyhedral convex, \(\mathcal{A}_{1}(V^{k})\subset\mathcal{A}_{1}(X^{k})\) is a necessary and sufficient condition for \(D^{k}\) to be a descent direction at \(V^{k}\).

\(\bullet\)\(\text{LineSearch}(V^{k},D^{k},\bar{\alpha})\) in line 7 can be either exact or inexact. Let us denote \[V^{k}=(V^{k}_{x},V^{k}_{y},V^{k}_{w},V^{k}_{z})\quad\text{ and }\quad D^{k}=(D^{k}_{x},D^{k}_{y},D^{k}_{w},D^{k}_{z}).\]

**Exact line search:** We can simplify \[f_{1}(V^{k}+\alpha D^{k})=\frac{a_{1}}{4}\alpha^{4}+\frac{a_{2}}{3}\alpha^{3}+\frac{a_{3}}{2}\alpha^{2}+a_{4}\alpha+a_{5},\] where \[\begin{cases}a_{1}=&4(D_{z}^{k})^{2}\|D_{x}^{k}\|^{2},\\ a_{2}=&-6\langle D_{z}^{k}D_{x}^{k},D_{y}^{k}-D_{z}^{k}V_{x}^{k}-V_{z}^{k}D_{x}^{k}\rangle,\\ a_{3}=&2\left(\|D_{y}^{k}-D_{z}^{k}V_{x}^{k}-V_{z}^{k}D_{x}^{k}\|^{2}+\langle D_{w}^{k},D_{x}^{k}\rangle-2\langle D_{z}^{k}D_{x}^{k},V_{y}^{k}-V_{z}^{k}V_{x}^{k}\rangle\right),\\ a_{4}=&2\langle V_{y}^{k}-V_{z}^{k}V_{x}^{k},D_{y}^{k}-D_{z}^{k}V_{x}^{k}-V_{z}^{k}D_{x}^{k}\rangle+\langle V_{w}^{k},D_{x}^{k}\rangle+\langle V_{x}^{k},D_{w}^{k}\rangle,\\ a_{5}=&\langle V_{x}^{k},V_{w}^{k}\rangle+\|V_{y}^{k}-V_{z}^{k}V_{x}^{k}\|^{2}.\end{cases} \tag{4.4}\] The exact line search (with upper bounded stepsize \(\bar{\alpha}\)) at \(V^{k}\) along \(D^{k}\) is to solve \[\alpha_{k}=\operatorname{argmin}\{f_{1}(V^{k}+\alpha D^{k}):0\leq\alpha\leq\bar{\alpha}\}.
\tag{4.5}\] This problem is equivalent to \[\alpha_{k}=\operatorname{argmin}\{f_{1}(V^{k}+\alpha D^{k}):\alpha\in\{0,\min\{\bar{\alpha}_{k},\bar{\alpha}\}\}\cup\mathcal{Z}\}, \tag{4.6}\] where \[\bar{\alpha}_{k}=\min\left\{-(V^{k})_{i}/(D^{k})_{i},i\in\mathcal{I}^{k}\right\}\text{ with }\mathcal{I}^{k}=\{i\in\{1,\ldots,3n+1\}:(D^{k})_{i}<0\} \tag{4.7}\] with the convention that \(\min\emptyset=\infty\), where \((D^{k})_{i}\) is the \(i\)-th component of the vector \(D^{k}\), and \(\mathcal{Z}\) is the set of all real roots of the cubic polynomial \[q(\alpha)=\frac{\mathrm{d}f_{1}(V^{k}+\alpha D^{k})}{\mathrm{d}\alpha}=a_{1}\ \alpha^{3}+a_{2}\ \alpha^{2}+a_{3}\ \alpha+a_{4}.\] Note that \(\mathcal{Z}\) has at most 3 distinct real roots, which can all be computed by the Cardano-Tartaglia formula. Hence problem (4.6) can be explicitly solved by computing all real roots of \(q(\alpha)\) and checking the values of \(f_{1}(V^{k}+\alpha D^{k})\) at at most five values of \(\alpha\) (the up to three real roots of \(q(\alpha)\), \(0\), and \(\min\{\bar{\alpha}_{k},\bar{\alpha}\}\)).

**Inexact line search:** The Armijo-type inexact line search is a standard backtracking procedure, where the initial stepsize \(\alpha\) is suggested to be \(\min\{\bar{\alpha}_{k},\bar{\alpha}\}\) with \(\bar{\alpha}_{k}\) defined in (4.7); a minimal code sketch is given at the end of Section 4.3.

### BDCA and DCA for (DCP2)

BDCA for (DCP2) is very similar to Algorithm 6. We outline the differences as follows:

\(\bullet\) The initialization in line 1 of Algorithm 6 is changed to \[X^{0}\leftarrow(x^{0},y^{0},e^{\top}y^{0}).\]

\(\bullet\) The convex subproblem in line 3 of Algorithm 6 is changed to (CP2) \[V^{k}\in\operatorname*{argmin}_{(x,y,z)\in\mathcal{C}_{2}}\{G_{2}(x,y,z)-\langle(x,y,z),\nabla H_{2}(x^{k},y^{k},z^{k})\rangle\},\] which has a similar structure to (CP1), so it can also be solved by the QP approach. A QP formulation for (CP2) is given by (QP2) \[\min\{\bar{G}_{2}(x,y,u)-\langle(x,y,z),\nabla H_{2}(x^{k},y^{k},z^{k})\rangle:(x,y,z,u)\in\mathcal{C}_{2}\cap\mathcal{Q}\mathcal{C}_{1}\},\] where \(u\) is defined in (4.1), \(\mathcal{Q}\mathcal{C}_{1}\) is given by (4.2), and \[\bar{G}_{2}(x,y,u)=\|y\|^{2}+\frac{(u_{z+1}+u_{y-x})^{2}+(u_{z-1}+u_{y+x})^{2}}{16}+\frac{(u_{z}+u_{x})^{2}}{2}+x^{\top}Bx+\frac{\|x-Ay\|^{2}}{4}.\] A similar QP formulation for the strongly convex DC decomposition, obtained by adding \(\frac{\rho}{2}\|(x,y,z)\|^{2}\) into \(G_{2}\) and \(H_{2}\), can be established accordingly. The equivalence between (CP2) and (QP2) is described in Theorem 4.2, whose proof shares similarities with that of Theorem 4.1 and is therefore omitted.

**Theorem 4.2**.: _Let \((\bar{x},\bar{y},\bar{z},\bar{u})\) be an optimal solution of (QP2), then \((\bar{x},\bar{y},\bar{z})\) is an optimal solution of (CP2). Conversely, let \((\bar{x},\bar{y},\bar{z})\) be an optimal solution of (CP2), then \((\bar{x},\bar{y},\bar{z},\bar{u})\) with_ \[\bar{u}=(\|\bar{x}\|^{2},\|\bar{z}\|^{2},(\bar{z}+1)^{2},(\bar{z}-1)^{2},\|\bar{y}+\bar{x}\|^{2},\|\bar{y}-\bar{x}\|^{2})\] _is an optimal solution of (QP2)._

\(\bullet\) The two conditions checked in line 6 of Algorithm 6 are changed to \[\mathcal{A}_{2}(V^{k})\subset\mathcal{A}_{2}(X^{k})\quad\text{and}\quad\langle\nabla f_{2}(V^{k}),D^{k}\rangle<0,\] where the active set \(\mathcal{A}_{2}(X)\) of \(\mathcal{C}_{2}\) at \(X:=(x,y,z)\) is defined by \[\mathcal{A}_{2}(X):=\{i\in\{1,\ldots,2n+1\}:X_{i}=0\}\cup\{2n+1+i:(Bx-Ay)_{i}=0,i\in\{1,\ldots,n\}\}.\]

\(\bullet\) LineSearch\((V^{k},D^{k},\bar{\alpha})\) will be either exact or inexact.
The exact line search is similarly computed as follows: Let \(V^{k}=(V^{k}_{x},V^{k}_{y},V^{k}_{z})\) and \(D^{k}=(D^{k}_{x},D^{k}_{y},D^{k}_{z})\). Then \[\alpha_{k}=\text{argmin}_{\alpha}\{f_{2}(V^{k}+\alpha D^{k}):\alpha\in\{0,\min\{\bar{\alpha},\bar{\alpha}_{k}\}\}\cup\mathcal{Z}\}, \tag{4.8}\] where \[\bar{\alpha}_{k}=\min\left\{-(V^{k})_{i}/(D^{k})_{i},\forall i\in\mathcal{I}^{k},-(BV^{k}_{x}-AV^{k}_{y})_{j}/(BD^{k}_{x}-AD^{k}_{y})_{j},\forall j\in\mathcal{J}^{k}\right\} \tag{4.9}\] with \(\mathcal{I}^{k}=\{i\in\{1,\ldots,2n+1\}:(D^{k})_{i}<0\}\) and \(\mathcal{J}^{k}=\{j\in\{1,\ldots,n\}:(BD^{k}_{x}-AD^{k}_{y})_{j}<0\}\), where the convention \(\min\emptyset=\infty\) is adopted and \(\mathcal{Z}\) is the set of all real roots of the cubic polynomial \[q(\alpha)=\frac{\text{d}f_{2}(V^{k}+\alpha D^{k})}{\text{d}\alpha}=a_{1}\ \alpha^{3}+a_{2}\ \alpha^{2}+a_{3}\ \alpha+a_{4},\] with coefficients \(a_{1},a_{2},a_{3},a_{4}\) given in (4.4) by changing \(D_{w}^{k}\) (resp. \(V_{w}^{k}\)) to \(BD_{x}^{k}-AD_{y}^{k}\) (resp. \(BV_{x}^{k}-AV_{y}^{k}\)). The Armijo-type inexact line search is the same as for (DCP1) by replacing all \(f_{1}\) with \(f_{2}\).

### BDCA and DCA for (DCP3)

BDCA for (DCP3) is also similar to Algorithm 6. The differences are summarized below:

\(\bullet\) The initialization in line 1 of Algorithm 6 is changed to \[X^{0}\leftarrow(x^{0},y^{0},Bx^{0}-Ay^{0}).\]

\(\bullet\) The convex subproblem in line 3 of Algorithm 6 is changed to (CP3) \[V^{k}\in\operatorname*{argmin}_{(x,y,w)\in\mathcal{C}_{3}}\{G_{3}(x,y,w)-\langle(x,y,w),\nabla H_{3}(x^{k},y^{k},w^{k})\rangle\},\] which is a convex QP and can be solved by efficient QP solvers such as MOSEK, GUROBI and CPLEX.

\(\bullet\) The parameter \(\eta\) required in \(G_{3}\) and \(H_{3}\) verifies the inequality \(\eta\geq 3.2+20nM^{2}\), where \(M\) is computed by solving the linear program \[M=\max\{e^{\top}y:(x,y,w)\in\mathcal{C}_{3}\}\] via a linear programming solver such as MOSEK, GUROBI or CPLEX.

\(\bullet\) The two conditions checked in line 6 of Algorithm 6 are changed to \[\mathcal{A}_{3}(V^{k})\subset\mathcal{A}_{3}(X^{k})\quad\text{and}\quad\langle\nabla f_{3}(V^{k}),D^{k}\rangle<0,\] where the active set \(\mathcal{A}_{3}(X)\) of \(\mathcal{C}_{3}\) at \(X:=(x,y,w)\) is defined by \[\mathcal{A}_{3}(X):=\{i\in\{1,\dots,3n\}:X_{i}=0\}.\]

\(\bullet\) LineSearch\((V^{k},D^{k},\bar{\alpha})\) is suggested to be the Armijo inexact line search, with \(f_{1}\) replaced by \(f_{3}\). The exact line search is too complicated and thus not recommended, due to the non-polynomial term \((x^{\top}y)^{2}/\|x\|^{2}\) in \(f_{3}\) (in fact, performing the exact line search here amounts to finding all real roots of a quintic equation, a problem without a closed-form formula by the well-known Abel-Ruffini theorem).
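As promised in Section 4.1, here is a minimal sketch of the Armijo-type inexact line search used by all three BDCA variants (ours; the sufficient-decrease constant `sigma`, the shrinking factor `tau` and the backtracking limit are illustrative defaults, not values prescribed by the text):

```python
def armijo_line_search(V, D, f, grad_f, alpha0, sigma=1e-4, tau=0.5, max_backtracks=30):
    """Armijo backtracking (sketch): shrink alpha until sufficient decrease.

    alpha0 should be min(alpha_bar_k, alpha_bar) -- cf. (4.7) and (4.9) --
    so that every trial point V + alpha*D remains feasible.
    """
    fV = f(V)
    slope = grad_f(V) @ D   # negative by the descent test in Algorithm 6
    alpha = alpha0
    for _ in range(max_backtracks):
        trial = V + alpha * D
        if f(trial) <= fV + sigma * alpha * slope:
            return trial
        alpha *= tau
    return V  # fall back to the plain DCA iterate when no decrease is found
```

In usage, `f` and `grad_f` are instantiated with \(f_{1}\), \(f_{2}\) or \(f_{3}\) and their gradients (3.1), (3.5) and (3.9), respectively.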
## 5. ADCA, InDCA and HDCA for solving (AEiCP)

Comparing the convex subproblems required in BDCA and DCA (Algorithm 1, line 2): \[z^{k}\in\operatorname*{argmin}\{g(x)-\langle x,\nabla h(x^{k})\rangle:x\in\mathcal{C}\}\] with ADCA (Algorithm 2, line 8): \[x^{k+1}\in\operatorname*{argmin}\{g(x)-\langle x,\nabla h(v^{k})\rangle:x\in\mathcal{C}\},\] InDCA (Algorithm 3, line 3): \[x^{k+1}\in\operatorname*{argmin}\{g(x)-\langle x,\nabla h(x^{k})+\gamma(x^{k}-x^{k-1})\rangle:x\in\mathcal{C}\},\] as well as HDCA-LI (Algorithm 4, line 3): \[z^{k}\in\operatorname*{argmin}\{g(x)-\langle x,\nabla h(x^{k})+\gamma(x^{k}-x^{k-1})\rangle:x\in\mathcal{C}\},\] and HDCA-NI (Algorithm 5, line 14): \[x^{k+1}\in\operatorname*{argmin}\{g(x)-\langle x,\nabla h(v^{k})+\gamma_{k}(x^{k}-x^{k-1})\rangle:x\in\mathcal{C}\},\] we can observe that the difference among these subproblems lies only in the coefficient vector of \(x\) in the scalar product \(\langle x,\cdot\rangle\). Hence, all of these subproblems can be solved using the same method previously discussed in Section 4. Moreover, in each of these DCA-type algorithms (DCA, BDCA, ADCA, InDCA, HDCA-LI and HDCA-NI), solving the convex subproblem is the most computationally demanding step. Consequently, the computational time per iteration of these algorithms should be fairly comparable. Hence, in the numerical simulations, we can focus on comparing the quality of solutions obtained by these methods for each DC formulation with a fixed number of iterations.

## 6. Numerical Simulations

In this section, we conduct numerical experiments for seven DCA-type algorithms: the classical DCA, two BDCA variants (BDCAe with exact line search and BDCAa with Armijo inexact line search), ADCA, InDCA, HDCA-LI, and HDCA-NI, to solve the three DC formulations (DCP1), (DCP2) and (DCP3) of (AEiCP). Our codes (available on GitHub1) are implemented in MATLAB 2022b and tested on a laptop equipped with 64-bit Windows 10, an i7-10870H 2.20GHz CPU, and 32 GB of RAM. Footnote 1: [https://github.com/niuyishuai/HDCA](https://github.com/niuyishuai/HDCA)

We first compare the performance of these DCA-type algorithms. Then, we compare the best-performing DCA-type algorithm with the state-of-the-art optimization solvers IPOPT v3.12.9 [42], KNITRO v11.1.0 [8] and FILTERSD v1.0 [13] on the three NLP formulations (NLP1), (NLP2) and (NLP3).

**AEiCP datasets:** Two datasets are considered.
* In the first dataset, we generate random AEiCP test problems in a similar way to [18]. The matrix \(A\) is asymmetric and positive definite, generated by \[A=T+\mu I_{n}\] where \(T\) is randomly generated with elements uniformly distributed in the interval \([-1,1]\) and \(\mu>|\min\{0,\lambda_{\min}(T+T^{\top})\}|.\) These matrices \(A\) exhibit good conditioning, as their condition numbers are less than 4. The matrix \(B\) is a symmetric, strictly diagonally dominant matrix with elements of the form \[\begin{cases}B_{i,i}=10,i=1,\ldots,n\\ B_{i,j}=-1,i=1,\ldots,n,j=i+1,\ldots,\min\{i+4,n\},\\ B_{i,j}=-1,i=1,\ldots,n,j=\max\{1,i-4\},\ldots,i-1.\end{cases}\] We generate 10 random problems for each \(n\in\{10,100,500\}\) and denote by RAND(n) the set of these 10 problems.
* In the second dataset, the matrix \(A\) is taken from the _Matrix Market_ repository NEP (Non-Hermitian Eigenvalue Problem) collection. We choose 22 asymmetric matrices with orders \(n\) ranging from 62 to 968, where \(n\) is indicated in the problem name (e.g., \(n=968\) for rdb968).
These matrices originate from various fields of real applications, and most of them are ill-conditioned (see [https://math.nist.gov/MatrixMarket](https://math.nist.gov/MatrixMarket) for more information). The matrix \(B\) is set as the identity matrix, and we transform \(A\) to PD by adding \(\mu B\) such that \(A+\mu B\in\mathrm{PD}\) with \(\mu=|\min\{0,\lambda_{\min}(A+A^{\top})\}|+1\).

**Experimental setup:** The setups for the DCA-type algorithms and the compared optimization solvers are summarized below:
* Initialization:
* For DCA-type algorithms, we set \(x^{0}=\texttt{rand}(n,1)\) and normalize \(x^{0}\) by \(x^{0}=x^{0}/\texttt{sum}(x^{0})\) to obtain a vector on the simplex \(\Omega\). Then we set \(y^{0}=\texttt{rand}(n,1)\), \(w^{0}=Bx^{0}-Ay^{0}\) and \(z^{0}=\texttt{sum}(y^{0})\). Note that all compared DCA-type algorithms use the same initial point for fairness.
* The optimization solvers IPOPT, KNITRO and FILTERSD employ the same initial point as the DCA-type algorithms.
* Other settings:
* We employ MOSEK to solve the convex subproblems and the linear program (3.15) for computing \(M\). MOSEK's termination is controlled by setting the tolerances MSK_DPAR_INTPNT_QO_TOL_REL_GAP, MSK_DPAR_INTPNT_QO_TOL_PFEAS, and MSK_DPAR_INTPNT_QO_TOL_DFEAS to \(10^{-8}\).
* The parameter \(\eta=3.2+20nM^{2}\) is used in (DCP3).
* The parameter \(q=10\) is used for both ADCA and HDCA-NI.
* An additional strongly convex regularization, \(\frac{\rho}{2}\|\cdot\|^{2}\), is introduced to each DC formulation. We set \(\rho=0.1\) by default, ensuring that \(\rho\leq\min\{\rho_{g},\rho_{h}\}\).
* The parameter \(\bar{\beta}=0.99\) for HDCA-NI, and thus \(\delta=(1-\bar{\beta}^{2})\rho/2\approx 9.95\times 10^{-4}\).
* The stepsize for the line search is upper bounded by \(\bar{\alpha}=10\).
* The stepsize for the inertial force is \(\gamma=\rho\) for InDCA, \(\gamma=2\rho/(1+(1+\bar{\alpha})^{2})\) for HDCA-LI and \(\gamma_{k}=(2\rho(1-\beta_{k}^{2})-4\delta)/(3-\beta_{k}^{2})\) for HDCA-NI.

### Numerical results of DCA-type algorithms

**Tests on the** RAND(n) **dataset:** We terminate all DCA-type algorithms (DCA, BDCAe, BDCAa, InDCA, ADCA, HDCA-LI and HDCA-NI) after a fixed number of iterations MaxIT (here 200), and evaluate each DC formulation and DCA-type algorithm by comparing the trend of the average objective value over the 10 test problems in each dataset RAND(n), where \(n=10,100,500\). The numerical results are shown in Figures 1 to 3. The left column depicts the trend of the average objective value versus the number of iterations, while the right column presents the average CPU time (in seconds) for each DCA-type algorithm. Note that the line search in HDCA-LI for (DCP1) and (DCP2) is exact, whereas the line search in HDCA-LI for (DCP3) is inexact. Next, we summarize some observations as follows:
* For the model (DCP1), we observe from Figure 1 that:
* The accelerated variants of DCA (HDCA-NI, HDCA-LI, BDCAe, BDCAa, ADCA, InDCA) consistently outperform the classical DCA.
* The hybrid method HDCA-NI yields the best numerical result in terms of the average objective value for the majority of tested cases, with the exception of the dataset RAND(500), where ADCA emerges as the best performer.
* HDCA-LI secures the second-best average objective value for the dataset RAND(10), while ADCA holds that position for the dataset RAND(100).
* For accelerated DCA without hybridization (i.e., BDCAe, BDCAa, InDCA and ADCA), it appears that ADCA outperforms BDCAe, which in turn outperforms BDCAa, while InDCA ranks as the second-worst algorithm.
* As anticipated, the average CPU time per iteration remains nearly identical for all tested DCA-type algorithms.
* For (DCP2), we once again observe in Figure 2 that all accelerated variants of DCA outshine the classical DCA in terms of the average objective value. Among the top performers, HDCA-NI, HDCA-LI and ADCA consistently stand out, followed by BDCAe and BDCAa. The performance of InDCA, however, varies significantly, as illustrated by its contrasting behavior on RAND(10) and RAND(500): it performs remarkably well on RAND(500) but merely matches the classical DCA on RAND(10).
* For (DCP3), we observe a similar result in Figure 3, with all accelerated DCA variants outperforming the classical DCA. The best result is consistently provided by HDCA-NI, followed by ADCA, HDCA-LI, BDCAe, and InDCA.

It is worth noting that the choice of parameter \(\bar{\alpha}=10\) is a conservative setting; increasing this value often leads to better numerical results for BDCAe and BDCAa. Figure 4 illustrates the impact of \(\bar{\alpha}\) on the computed objective value for BDCAa when solving (DCP3) within 200 iterations on the dataset RAND(100). We observe that the best result is achieved with \(\bar{\alpha}=200\); beyond this value, the performance of BDCAa plateaus, since \(\bar{\alpha}_{k}<200\) for all \(k\in\mathbb{N}\). Notably, this observation is consistent across all DC formulations.

Figure 1. Numerical results of DCA, BDCAe, BDCAa, ADCA, InDCA, HDCA-LI and HDCA-NI for solving (DCP1) on the test datasets RAND(n) with \(n\in\{10,100,500\}\).

**Tests on the NEP dataset:** The numerical results of the DCA-type algorithms on the NEP dataset are summarized in Tables 2 to 4 for the three DC formulations. In these tables, we adopt the following notations:
* \(\text{cond}(A)\): the condition number of the matrix \(A\);
* \(f\): the objective value; the smaller the value of \(f\), the higher the quality of the computed solution;
* \(c\): the feasibility measure of the computed solution, defined by \[c:=-\log(\|[x]_{-}\|+\|[w]_{-}\|+|w^{\top}x|),\] where \([x]_{-}=\min\{x,0\}\) and \(w=\frac{x^{\top}Ax}{x^{\top}Bx}Bx-Ax\); the larger the value of \(c\), the higher the quality of the computed solution;
* avg: the average results with respect to \(f\) and \(c\).

Figure 2. Numerical results of DCA, BDCAe, BDCAa, ADCA, InDCA, HDCA-LI and HDCA-NI for solving (DCP2) on the datasets RAND(n) with \(n\in\{10,100,500\}\).

Figure 3. Numerical results of DCA, BDCAa, ADCA, InDCA, HDCA-LI and HDCA-NI for solving (DCP3) on the datasets RAND(n) with \(n\in\{10,100,500\}\).

_Remark 6.1_.: The reasoning behind reporting both \(f\) and \(c\) is that a small value of \(f\) may not necessarily imply a large value of \(c\), especially if either the matrix \(A\) or \(B\) is ill-conditioned. For example, in (DCP1), suppose that we obtain an approximate solution \(x+\Delta x\) of the exact solution \(x\). When the value of \(f\) is small, the error \(\Delta x\) is also small. We get from the relation \[w=\frac{x^{\top}Ax}{x^{\top}Bx}Bx-Ax\]
for the exact solution \((x,w)\) that, if \(A\) is ill-conditioned and \(B\) is well-conditioned, then \(B(x+\Delta x)\approx Bx\) and \[\frac{(x+\Delta x)^{\top}A(x+\Delta x)}{(x+\Delta x)^{\top}B(x+\Delta x)}B(x+ \Delta x)-A(x+\Delta x)\approx w+\underbrace{\frac{2x^{\top}A\Delta x+\Delta x ^{\top}A\Delta x}{x^{\top}Bx}Bx-A\Delta x}_{\approx\Delta w}.\] Consequently, when dealing with an ill-conditioned \(A\), we will get a large \(A\Delta x\), indicating that \(\Delta w\) could be substantial as well. This, in turn, could result in a considerable \(\|[w+\Delta w]_{-}\|\) and ultimately lead to a small \(c\). As seen in Table 2 for (DCP1), the best average value in \(f\) is achieved by HDCA-NI, followed by HDCA-LI, InDCA, BDCAe, BDCAa, and DCA. The best average value in \(c\) is also obtained by HDCA-NI, with the subsequent ranking being HDCA-LI, DCA, InDCA, BDCAe, ADCA, and BDCAa. Similar results are observed in Table 3 for (DCP2). It is worth noting that the inertial-based methods (InDCA, \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Prob} & \multirow{2}{*}{\(\text{cond}(A)\)} & \multicolumn{2}{c|}{DCA} & \multicolumn{2}{c|}{BDCAe} & \multicolumn{2}{c|}{BDCAa} & \multicolumn{2}{c|}{ADCA} & \multicolumn{2}{c|}{InDCA} & \multicolumn{2}{c|}{HDCA-LI} & \multicolumn{2}{c}{HDCA-NI} \\ & & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) \\ \hline l/o398a & 7.58e-043 & 1.30e-04 & 1.32 & 7.86e-05 & 1.40 & 1.13e-04 & 1.34 & 1.42e-06 & 2.35 & 1.88e-06 & 1.93 & 8.87e-07 & 2.20 & 9.42e-07 & 2.21 \\ l/o4fa2a & 1.48e-03 & 4.30e-06 & 1.55 & 1.57e-06 & 1.80 & 3.26e-06 & 1.61 & 9.88e-07 & 2.20 & 8.85e-07 & 1.94 & 6.42e-07 & 2.10 & 4.06e-07 & 2.23 \\ l/o782a & 4.62e-03 & 1.98e-04 & 0.91 & 1.27e-04 & 1.14 & 1.38e-04 & 0.96 & 3.45e-06 & 2.07 & 2.77e-06 & 1.77 & 1.30e-06 & 1.89 & 3.55e-07 & 2.04 \\ l/o8m200 & 3.93e-03 & 2.16e-04 & 2.07 & 2.09e-08 & 4.27 & 3.08e-08 & -4.27 & 3.06e-08 & -2.47 & 3.06e-08 & -2.47 & 3.03e-08 & -2.47 & 4.01e-08 & -2.36 \\ d/o812 & 3.78e-04 & 4.65e-07 & 0.57 & 2.12e-05 & 1.29 & 3.79e-04 & 0.60 & 1.94e-06 & 1.44 & 2.33e-08 & 1.42 & 1.50e-06 & 1.49 & 8.21e-08 & 2.44 \\ d/o812 & 4.00e-00 & 3.56e-04 & 1.65 & 2.25e-05 & 1.52 & 3.51e-04 & 1.57e-05 & 1.51 & 1.64e-05 & 2.55 & 2.34e-06 & 2.67 & 1.17e-07 & 3.01 \\ l/o163 & 3.24e-07 & 3.06e-04 & 0.94 & 4.50e-06 & 1.92 & 2.78e-04 & 0.96 & 2.42e-06 & 2.94 & 3.47e-05 & 1.58 & 1.62e-05 & 1.68 & 2.38e-07 & 2.32 \\ ml/d164 & 2.14e-05 & 1.85e-08 & 0.18 & 4.55e-08 & 0.18 & 3.62e-08 & 0.18 & 4.56e-09 & 0.18 & 8.75e-09 & 0.24 & 3.34e-08 & 0.27 & 1.67e-08 & 1.62 \\ ml/d16b & 5.05e-09 & 5.96e-08 & 3.87 & 9.73e-08 & 3.77 & 9.72e-08 & 3.77 & 1.20e-07 & 3.57 & 6.76e-05 & 2.57 & 2.71e-08 & 5.01 & 3.29e-09 & 0.55 \\ odp400a & 8.31e-05 & 3.50e-04 & 0.10 & 2.53e-04 & 0.15 & 3.35e-04 & 0.10 & 1.17e-05 & 0.74 & 9.26e-05 & 0.41 & 4.41e-05 & 0.54 & 2.98e-08 & 1.97 \\ d/o100 & 7.78e-04 & 2.00e-08 & -1.20 & 1.65e-08 & -1.20 & 2.83e-08 & -1.20 & 2.38e-08 & -1.20 & -1.34e-08 & -1.49 & 4.43e-08 & -1.49 & 3.41e-08 & -1.55 \\ al/o200 & 7.65e-05 & 1.58e-08 & 2.31e-08 & 1.28e-08 & 2.20e-08 & 2.14e-08 & -2.51 & 2.61e-08 & -2.51 & 2.61e-08 & -2.58e-06 & 2.48e-06 & 2.91 & 1.12e-08 & -2.51 \\ rh848 & 1.35e-05 & 2.74e-06 & -1.17 & 2.83e-08 & -1.17 & 2.84e-08 & -1.17 & 3.10e-08 & -1.17 & 6.81e-08 & -2.09 & 3.88e-09 & 2.14e-08 & -1.85 \\ rh800 & 1.63e-05 & 4.01e-08 & 1.57 & 3.96e-08 & 1.57 & 3.99e-08 & -1.57 & 3.51e-08 & -1.57 & 4.37e-08 & -2.08 & 3.51e-08 & -2.08 & 9.70e-08 & -1.98 \\ rh8200 & 
8.32e-02 & 1.26e-05 & 0.78 & 1.28e-05 & 0.78 & 1.24e-05 & -0.78 & 1.22e-05 & -0.78 & 3.48e-07 & -0.71 & 2.46e-07 & -0.11 & 2.46e-07 & -0.15 \\ rh8420 & 1.16e-04 & 2.49e-06 & -1.01 & 2.47e-06 & -1.12 & 2.52e-06 & -1.10 & 2.49e-06 & -0.11 & 8.67e-09 & 0.49 & 8.44e-08 & -0.49 & 1.21e-07 & -0.55 \\ rh806 & 2.91e-01 & 9.80e-04 & 0.81 & 5.82e-04 & 9.14 & 9.86e-04 & 4.17 & 7.76e-06 & -1.03 & 3.06e-08 & -2.03 & 2.68e-06 & -2.01 & 3.41e-07 & 0.05 \\ rw136 & 1.49e-05 & 2.96e-06 & 1.48 & 2.95e-05 & 1.48 & 2.95e-05 & 1.48 & 2.77e-05 & 1.48 & 2.25e-05 & 1.68 & 5.52e-06 & 1.89 & 1.29e-06 & 2.01 \\ rw26 & 1.14e-10 & 6.52e-06 & 1.78 & 6.62e-06 & 1.78 & 7.31e-06 & 1.78 & 6.59e-06 & -1.78 & 1.95e-06 & 1.84 & 3.42e-06 & 2.13 & 1.67e-08 & 2.85 \\ td340 & 2.35e-05 & 3.50e-04 & 4.14 & 1.08 HDCA-LI, and HDCA-NI) perform poorly for (DCP3) when \(A\) is ill-conditioned, as seen in Table 4. For example, in the case tols340 where cond(\(A\))=2.35e+05, the value of \(c\) is \(-2.85\) for HDCA-NI, \(-2.90\) for InDCA, \(-0.48\) for HDCA-BI, \(4.79\) for DCA, \(4.76\) for BDCAa, and \(5.26\) for ADCA. To some extent, this can be alleviated by using a 'conservative' inertial strategy, which introduces the inertial force \(\gamma(x^{k}-x^{k-1})\) when \(k\geq 2\) instead of \(k\geq 0\). Then, we will get improvement in \(c\) as \(4.57\) for InDCA, \(5.00\) for HDCA-LI, and \(4.76\) for HDCA-NI. The numerical results of InDCA, HDCA-LI, and HDCA-NI using the conservative inertial strategy are shown in Table 5. \begin{table} \begin{tabular}{l|c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Prob} & \multirow{2}{*}{cond(\(A\))} & \multicolumn{2}{c|}{DCA} & \multicolumn{2}{c|}{BDCA} & \multicolumn{2}{c|}{ADCA} & \multicolumn{2}{c|}{InDCA} & \multicolumn{2}{c|}{HDCA-LI} & \multicolumn{2}{c}{HDCA-NI} \\ & & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) \\ \hline bfr398a & 7.58e+03 & 1.18e-02 & 0.85 & 9.35e-03 & 0.85 & 2.03e-03 & 0.86 & 1.18e-02 & 0.85 & 9.24e-03 & 0.85 & 2.02e-03 & 0.86 \\ bfr62a & 1.48e+03 & 5.22e-03 & 0.78 & 1.54e-03 & 0.79 & 7.53e-04 & 0.88 & 4.98e-03 & 0.78 & 1.54e-03 & 0.79 & 7.65e-04 & 0.87 \\ bfr82a & 4.62e+03 & 1.36e-03 & 0.88 & 1.21e-03 & 0.88 & 6.29e-04 & 0.88 & 1.34e-03 & 0.88 & 1.24e-03 & 0.88 & 6.22e-04 & 0.88 \\ bwr200 & 2.93e+03 & 1.39e-08 & -2.47 & 1.38e-08 & -2.47 & 1.69e-08 & -2.47 & 1.40e-08 & -2.47 & 1.38e-08 & -2.47 & 1.25e-08 & -2.31 \\ dwr512 & 3.72e+04 & 2.16e-05 & 0.97 & 2.15e-05 & 0.97 & 2.15e-05 & 0.97 & 2.13e-05 & 0.98 & 2.15e-05 & 0.97 & 2.13e-05 & 0.98 \\ dbr512 & 4.50e+03 & 3.74e-05 & 1.99 & 3.73e-05 & 1.99 & 3.73e-05 & 1.99 & 3.74e-05 & 1.99 & 3.78e-05 & 1.99 & 3.77e-05 & 1.99 \\ lop163 & 3.42e+07 & 1.75e-04 & 0.94 & 1.65e-04 & 0.95 & 8.51e-05 & 1.03 & 1.72e-04 & 0.95 & 1.61e-04 & 0.96 & 8.27e-05 & 1.04 \\ mhd16a & 2.41e+25 & 4.30e-09 & 0.70 & 3.58e-09 & 0.70 & 1.36e-08 & 0.70 & 2.41e-08 & 0.23 & 1.38e-09 & 0.70 & 2.70e-01 & -0.09 \\ mhd416b & 5.05e+09 & 9.95e-06 & 2.54 & 1.01e-05 & 2.54 & 1.01e-05 & 2.54 & 1.07e-05 & 2.54 & 1.04e-05 & 2.54 & 1.09e-05 & 2.54 \\ odep400a & 8.31e+05 & 9.86e-05 & 0.30 & 9.84e-05 & 0.30 & 5.28e-05 & 0.44 & 9.81e-05 & 0.30 & 9.81e-05 & 0.30 & 4.23e-05 & 0.48 \\ dim100 & 2.78e+04 & 4.58e-09 & -1.20 & 1.34e-08 & -1.20 & 4.43e-09 & -1.20 & 3.28e-09 & -1.41 & 1.51e-08 & -1.16 & 3.38e-09 & -1.51 \\ dim500 & 7.65e+05 & 1.44e-08 & -2.81 & 1.82e-08 & -2.81 & 1.28e-08 & -2.81 & 8.18e-09 & -2.82 & 2.06e-08 & -2.66 & 1.52e-08 & -2.78 \\ rts408a & 1.35e+05 & 1.70e-08 & -1.95 & 2.31e-08 & -1.95 & 1.46e-08 & -1.95 
& 2.04e-08 & -2.09 & 2.26e-08 & -1.89 & 1.43e-08 & -2.00 \\ rts480b & 1.63e+05 & 3.55e-08 & -2.01 & 2.05e-08 & -2.01 & 2.41e-08 & -2.01 & 3.11e-08 & -2.08 & 1.99e-08 & -1.95 & 1.82e-08 & -2.01 \\ rd200 & 8.32e+02 & 4.97e-07 & -0.49 & 5.00e-07 & -0.49 & 5.12e-07 & -0.49 & 7.61e-07 & -0.33 & 5.77e-07 & -0.49 & 4.79e-07 & -0.18 \\ rd450 & 1.64e+03 & 2.11e-07 & 0.86 & 2.16e-07 & -0.86 & 2.20e-07 & 0.86 & 1.95e-07 & -0.74 & 29.5e-07 & -0.85 & 9.34e-08 & -0.59 \\ rd968 & 2.91e+01 & 5.61e-06 & -0.47 & 5.62e-06 & -0.47 & 4.80e-06 & -0.44 & 5.37e-06 & -0.46 & 5.61e-06 & -0.47 & 3.48e-06 & -0.37 \\ rts136 & 1.49e+05 & 2.79e-04 & 4.84 & 2.73e-04 & 8.24e-04 & 0.23e-04 & 0.90 & 2.72e-04 & 0.84 & 2.67e-04 & 0.84 & 1.97e-04 & 0.91 \\ rts496 & 1.14e+10 & 1.85e-04 & 0.89 & 1.85e-04 & 0.89 & 1.67e-04 & 0.91 & 1.83e-04 & 0.80 & 1.81e-04 & 0.90 & 1.65e-04 & 0.92 \\ tok340 & 2.35e+05 & 4.18e-08 & 4.79 & 4.81e-08 & 4.76 & 5.42e-09 & 5.26 & 2.00e-09 & -2.90 & 1.46e-08 & -0.48 & 2.13e-09 & -2.85 \\ tols409 & 2.49e+04 & 1.15e-08 & -2.05 & 9.65e-09 & -2.05 & 2.25e-08 & -2.05 & 5.14e-09 & -2.27 & 4.66e-09 & -2.05 & 2.06e-08 & -2.31 \\ tubl00 & 2.36e+04 & 5.71e-09 & -2.67 & 4.84e-09 & -2.67 & 5.75e-09 & -2.67 & 9.87e-09 & -2.67 & 5.16e-09 & -2.67 & 3.54e-09 & -2.50 \\ \hline avg & 8.73e-04 & -0.02 & 5.86e-04 & -0.02 & 1.81e-04 & 0.02 & 8.59e-04 & -0.41 & 5.82e-04 & -0.25 & 1.81e-04 & -0.36 \\ \hline \ are summarized in Table 5, and we can observe that the average value of \(c\) is greatly improved at the cost of a slight increase in the average value of \(f\). We conclude that the best DCA-type algorithm is always given by HDCA-NI, hence in the next subsection, we will compare it with other optimization solvers. ### Numerical results of other optimization solvers In this section, we present the numerical results of the optimization solvers IPOPT, KNITRO, and FILTERSD in comparison with HDCA-NI. Our primary focus is to evaluate their performance based on the objective value \(f\), the feasibility measure \(c\), and the CPU time (in seconds). To determine the CPU time for HDCA-NI, we employ the following stopping criteria: \[|f^{k+1}-f^{k}|\leq(1+|f^{k+1}|)\varepsilon,\] where the tolerance \(\varepsilon\) is set to \(10^{-8}\), and \(f^{k}\) represents the objective value at the \(k\)-th iteration. The other solvers use their default termination settings. The numerical results for solving (NLP1),(NLP2), and (NLP3) are summarized in Tables 6 to 8, respectively. Furthermore, instead of providing the results for all 10 instances within each dataset RAND(n), we only present their averages. * For the results of (NLP1) in Table 6, we observe that IPOPT performs best in terms of average values for \(f\), \(c\), and CPU time. HDCA-NI is the second-best method, followed by KNITRO and FILTERSD. It is important to note that, in contrast to IPOPT, FILTERSD and HDCA-NI, the solver KNITRO demonstrates considerable instability and frequently encounters "LCP solver problem" issues, in ill-conditioned NEP instances. This results in significantly different outcomes in each run. Therefore, we consider the best result for FILTERSD in terms of \(f\) across three runs. Nevertheless, FILTERSD still exhibits poor performance in terms of average \(f\), \(c\), and CPU time. 
\begin{table} \begin{tabular}{l|c|c|c c|c c} \hline \hline \multirow{2}{*}{Prob} & \multirow{2}{*}{\(\text{cond}(A)\)} & \multicolumn{2}{c|}{\(\text{InDCA}\)} & \multicolumn{2}{c|}{HDCA-LI} & \multicolumn{2}{c}{HDCA-NI} \\ & & \(f\) & \(c\) & \(f\) & \(c\) & \(f\) & \(c\) \\ \hline bfw398a & 7.58e+03 & 1.18e-02 & 0.85 & 9.29e-03 & 0.85 & 2.03e-03 & 0.86 \\ bfw62a & 1.48e+03 & 5.21e-03 & 0.78 & 1.72e-03 & 0.79 & 7.54e-04 & 0.88 \\ bfw782a & 4.62e+03 & 1.36e-03 & 0.87 & 1.20e-03 & 0.88 & 6.27e-04 & 0.88 \\ bwm200 & 2.93e+03 & 1.37e-08 & -2.41 & 1.45e-08 & -2.47 & 3.25e-08 & -2.47 \\ dw512 & 3.72e+04 & 2.16e-05 & 0.97 & 2.15e-05 & 0.97 & 2.15e-05 & 0.97 \\ dwb512 & 4.50e+04 & 3.74e-05 & 1.99 & 3.76e-05 & 1.99 & 3.76e-05 & 1.99 \\ lop163 & 3.42e+07 & 1.75e-04 & 0.94 & 1.63e-04 & 0.95 & 8.48e-05 & 1.03 \\ mhd416a & 2.41e+25 & 5.16e-06 & 0.70 & 5.49e-09 & 0.70 & 1.41e-08 & 0.70 \\ mhd416b & 5.05e+04 & 1.06e-05 & 2.54 & 9.73e-06 & 2.54 & 1.04e-05 & 2.54 \\ odep400a & 8.31e+05 & 9.87e-05 & 0.30 & 9.67e-05 & 0.30 & 5.29e-05 & 0.44 \\ olm100 & 2.78e+04 & 1.74e-08 & -1.20 & 1.75e-08 & -1.20 & 1.31e-08 & -1.20 \\ olm500 & 7.65e+05 & 2.24e-08 & -2.81 & 1.75e-08 & -2.81 & 4.75e-07 & -2.81 \\ rbs68ba & 1.35e+05 & 2.78e-08 & -1.95 & 1.48e-08 & -1.95 & 1.85e-08 & -1.95 \\ rbs480b & 1.63e+05 & 2.38e-08 & -2.01 & 1.92e-08 & -2.01 & 3.15e-08 & -2.01 \\ rdb200 & 8.32e+02 & 5.04e-07 & -0.49 & 5.00e-07 & -0.49 & 5.13e-07 & -0.49 \\ rdb450 & 1.64e+03 & 1.25e-07 & -0.86 & 2.13e-07 & -0.86 & 2.17e-07 & 0.86 \\ rdb968 & 2.91e+01 & 5.61e-06 & -0.47 & 5.61e-06 & -0.47 & 4.81e-06 & -0.43 \\ rvi136 & 1.49e+05 & 2.79e-04 & 0.84 & 2.72e-04 & 0.84 & 2.02e-04 & 0.90 \\ ru496 & 1.14e+10 & 1.85e-04 & 0.89 & 1.85e-04 & 0.89 & 1.69e-04 & 0.91 \\ tds340 & 2.35e+05 & 1.50e-07 & 4.57 & 8.07e-09 & 5.06 & 5.21e-08 & 4.76 \\ tds90 & 2.49e+04 & 1.76e-08 & -2.05 & 7.84e-09 & -2.05 & 2.14e-08 & -2.05 \\ tub100 & 2.36e+04 & 5.37e-09 & -2.67 & 5.73e-09 & -2.67 & 5.49e-09 & -2.67 \\ \hline avg & 8.73e-04 & -0.03 & 5.91e-04 & -0.01 & 1.82e-04 & -0.00 \\ \hline \hline \end{tabular} \end{table} Table 5. Numerical results of InDCA, HDCA-LI and HDCA-NI using the conservative inertial strategy for solving (DCP3) on the NEP dataset with MaxIT=200. * For the results of (NLP2) in Table 7 and (NLP3) in Table 8, IPOPT remains the best solver in terms of average \(f\), \(c\), and CPU time. HDCA-NI often yields good average values for \(f\), but with poor average values for \(c\). FILTERSD consistently provides the worst results for \(f\). In conclusion, the best optimization solver for all NLP formulations of (AEiCP) among all compared methods is IPOPT. HDCA-NI is competitive with IPOPT and outperforms KNITRO and FILTERSD in terms of the objective value. ## 7. Conclusions In this paper, we established three DC programming formulations of (AEiCP) based on the DC-SOS decomposition, which are numerically solved via several accelerated DC programming approaches, including BDCA with exact and inexact line search, ADCA, InDCA and two proposed novel hybrid accelerated DCA (HDCA-LI and HDCA-NI). Numerical simulations were carried out to compare the performance of the proposed DCA-type algorithms against the cutting-edge optimization solvers (IPOPT, KNITRO and FILTERSD). Numerical results indicated noteworthy performance, even on large-scale and ill-conditioned datasets. Future work will prioritize multiple avenues for enhancement. We aim to refine these DCA-type algorithms to improve the quality of computed results in terms of \(c\). 
A particular focus will be on improving the initialization process, possibly through insightful heuristics, to increase efficiency, especially for large-scale and ill-conditioned instances. Moreover, designing a more efficient algorithm to address the QP formulations of the convex subproblems required in all DCA-type methods, instead of relying on a general QP solver, will also be a key aspect of our upcoming research efforts.

\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{c|}{HDCA-NI} & \multicolumn{3}{c|}{IPOPT} & \multicolumn{3}{c|}{KNITRO} & \multicolumn{3}{c}{FILTERSD} \\
 & \(f\) & \(c\) & CPU & \(f\) & \(c\) & CPU & \(f\) & \(c\) & CPU & \(f\) & \(c\) & CPU \\ \hline
RAND(10) & 1.40e-02 & 1.88 & 0.451 & 4.71e-03 & 6.19 & 0.022 & 9.14e-03 & 4.17 & 0.008 & 3.79e-03 & 3.17 & 0.006 \\
RAND(100) & 1.41e-03 & 1.03 & 2.793 & 4.71e-04 & 3.45 & 0.816 & 9.15e-04 & 2.32 & 1.646 & 2.81e+01 & 0.15 & 0.670 \\
RAND(500) & 1.41e-04 & 0.55 & 28.157 & 4.79e-05 & 1.50 & 5.854 & 6.61e-04 & 0.37 & 17.710 & 5.69e+03 & -0.39 & 32.241 \\ \hline
bfw398a & 1.39e-06 & 2.14 & 8.173 & 1.86e-05 & 2.88 & 0.428 & 1.82e-03 & 1.39 & 0.856 & 6.08e+01 & 1.27 & 1.839 \\
bfw62a & 6.42e-07 & 2.03 & 0.533 & 1.82e-06 & 4.54 & 0.315 & 8.43e-07 & 2.52 & 0.602 & 1.88e-07 & 2.09 & 0.659 \\
bfw782a & 1.00e-06 & 1.86 & 11.712 & 3.36e-05 & 2.65 & 0.492 & 7.20e-04 & 1.16 & 6.10e-125e+02 & 1.38 & 6.422 \\
bwm200 & 1.07e-08 & -2.45 & 0.408 & 3.98e-08 & -0.79 & 0.029 & 5.50e-06 & -2.01 & 0.885 & 3.13e+01 & -1.23 & 0.091 \\
dwa512 & 8.78e-07 & 1.74 & 4.129 & 1.168e-08 & 2.73 & 0.840 & 1.00e-04 & 1.93 & 2.908 & 8.21e+04 & 0.50 & 68.637 \\
dwb512 & 2.52e-07 & 2.91 & 4.145 & 3.36e-08 & 0.60 & 0.647 & 1.85e-05 & 2.18 & 9.078 & 3.75e-04 & 1.65 & 4.769 \\
lop163 & 6.59e-07 & 2.15 & 6.083 & 6.42e-13 & 4.99 & 0.790 & 8.13e-07 & 2.35 & 0.256 & 4.03e-04 & 0.89 & 2.824 \\
mhd416a & -12.9e-09 & 1.67 & 0.256 & 11.06e-07 & -1.11 & 0.092 & 1.75e-04 & -0.46 & 6.063 & 6.57e-04 & -0.37 & 0.787 \\
mhd416b & 2.06e-07 & 3.12 & 3.537 & 4.52e-07 & 3.42 & 0.681 & 5.10e-03 & 1.89 & 0.936 & 6.85e+01 & 1.79 & 0.522 \\
odep400a & 2.91e-07 & 1.54 & 4.077 & 3.71e-08 & 3.14 & 0.290 & 8.27e-05 & 1.78 & 1.087 & 4.16e-04 & 0.07 & 25.959 \\
olm100 & 2.82e-09 & -1.55 & 0.115 & 3.45e-09 & -0.88 & 0.022 & 1.63e-05 & -0.88 & 0.011 & 1.24e-09 & -1.41 & 0.053 \\
olm500 & -1.51e-08 & -2.91 & 3.567 & 1.62e-06 & -1.73 & 0.081 & 1.10e-03 & -2.11 & 14.042 & 1.03e-10 & -3.68 & 18.787 \\
rbs480a & 2.52e-08 & -1.92 & 2.548 & 7.03e-08 & -0.73 & 0.455 & 1.17e-03 & -0.72 & 0.6043 & 8.18e+01 & -0.85 & 6.370 \\
rbs480b & 2.40e-08 & -2.03 & 1.933 & 8.32e-08 & -1.17 & 0.719 & 1.18e-03 & -1.19 & 6.717 & 4.79e+04 & -2.01 & 25.796 \\
rdb200 & 2.81e-07 & -0.15 & 0.502 & 1.57e-08 & 0.30 & 0.125 & 6.10e-05 & -0.75 & 0.413 & 5.78e+03 & -0.79 & 0.187 \\
rdb450 & 1.33e-07 & -0.53 & 0.681 & 1.54e-08 & 0.50 & 0.055 & 5.00e-04 & 0.38 & 0.197 & 1.84e+04 & -0.99 & 1.954 \\
rdb968 & 4.46e-06 & -0.28 & 2.027 & 8.28e-08 & 1.58 & 0.324 & 3.35e-04 & 1.19 & 1.835 & 1.43e+02 & 0.70 & 11.839 \\
rw136 & 1.43e-06 & 2.00 & 4.554 & 1.54e-13 & 5.32 & 0.865 & 1.78e-07 & 2.57 & 4.571 & 2.97e-05 & 1.48 & 1.797 \\
rw496 & 3.06e-07 & 2.27 & 5.993 & 1.44e-08 & 4.85 & 0.300 & 3.00e-05 & 2.22 & 3.469 & 6.60e-06 & 1.77 & 4.325 \\
tols340 & 2.87e-06 & -0.03 & 1.918 & 6.85e-06 & -2.29 & 0.064 & 1.29e-03 & -2.80 & 4.342 & 2.09e-04 & -2.09 & 1.808 \\
tols90 & 3.51e-09 & -2.32 & 0.086 & 1.68e-07 & -2.04 & 0.036 & 4.36e-05 & -2.19 & 0.257 & 1.25e-15 & -0.34 & 0.161 \\
tub100 & 4.56e-09 & -2.64 & 0.267 & 2.55e-08 & -1.75 & 0.018 & 4.36e-07 & -1.73 & 0.149 & 4.02e-09 & -2.62 & 0.057 \\
\hline avg & 6.24e-06 & 0.29 & 3.827 & 4.32e-06 & 1.18 & 0.544 & 4.35e-04 & 0.28 & 4.304 & 3.13e+03 & -0.13 & 8.715 \\
\hline \hline \end{tabular} \end{table} Table 6. Numerical results of HDCA-NI, IPOPT, KNITRO and FILTERSD for solving (NLP1) on the RAND and NEP datasets.
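To make the ingredients above concrete, the following is a minimal sketch of a generic DCA-type loop combining the conservative inertial force \(\gamma(x^{k}-x^{k-1})\) (applied only for \(k\geq 2\)) with the relative-change stopping rule used for HDCA-NI. It is our own plain-Python illustration, not the authors' code: `step` stands for an abstract convex-subproblem solver, and `gamma = 0.5` is an arbitrary illustrative value.

```python
import numpy as np

def dca_loop(step, f, x0, gamma=0.5, eps=1e-8, max_it=200):
    # Generic DCA-type iteration (illustrative sketch only).
    # `step` solves one convex subproblem and is assumed given;
    # eps and max_it mirror the tolerance 1e-8 and MaxIT=200 above.
    x_prev = x = np.asarray(x0, dtype=float)
    f_old = f(x)
    for k in range(max_it):
        # 'Conservative' inertial force gamma*(x^k - x^{k-1}), only for k >= 2.
        y = x + gamma * (x - x_prev) if k >= 2 else x
        x_prev, x = x, step(y)
        f_new = f(x)
        # Relative-change stopping rule |f^{k+1}-f^k| <= (1+|f^{k+1}|)*eps.
        if abs(f_new - f_old) <= (1.0 + abs(f_new)) * eps:
            break
        f_old = f_new
    return x
```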
2305.04614
Reducing Onboard Processing Time for Path Planning in Dynamically Evolving Polygonal Maps
Autonomous agents face the challenge of coordinating multiple tasks (perception, motion planning, control) which are computationally expensive on a single onboard computer. To utilize the onboard processing capacity optimally, it is imperative to arrive at computationally efficient algorithms for global path planning. In this work, it is attempted to reduce the processing time for global path planning in dynamically evolving polygonal maps. In dynamic environments, maps may not remain valid for long. Hence it is of utmost importance to obtain the shortest path quickly in an ever-changing environment. To address this, an existing rapid path-finding algorithm, the Minimal Construct, was used. This algorithm discovers only a necessary portion of the Visibility Graph around obstacles and computes collision tests only for lines that seem heuristically promising. Simulations show that this algorithm finds shortest paths faster than traditional grid-based A* searches in most cases, resulting in smoother and shorter paths even in dynamic environments.
Aditya Shirwatkar, Aman Singh, Jana Ravi Kiran
2023-05-08T10:47:24Z
http://arxiv.org/abs/2305.04614v1
# Reducing Onboard Processing Time for Path Planning in Dynamically Evolving Polygonal Maps ###### Abstract Autonomous agents face the challenge of coordinating multiple tasks (perception, motion planning, control) which are computationally expensive on a single onboard computer. To utilize the onboard processing capacity optimally, it is imperative to arrive at computationally efficient algorithms for global path planning. In this work, it is attempted to reduce the processing time for global path planning in dynamically evolving polygonal maps. In dynamic environments, maps may not remain valid for long. Hence it is of utmost importance to obtain the shortest path quickly in an ever-changing environment. To address this, an existing rapid path-finding algorithm, the Minimal Construct, was used. This algorithm discovers only a necessary portion of the Visibility Graph around obstacles and computes collision tests only for lines that seem heuristically promising. Simulations show that this algorithm finds shortest paths faster than traditional grid-based A* searches in most cases, resulting in smoother and shorter paths even in dynamic environments. **Keywords:**_Path Planning, Navigation, Mobile Robots_ ## I Introduction Autonomous agents, such as drones and self-driving cars, have revolutionized many industries, from transportation to agriculture. However, these agents face the challenge of coordinating multiple tasks on a single onboard computer. These tasks include perception, motion planning, and control, which can be accomplished using software components available in the ROS framework [1]. All of these tasks are computationally expensive, so mobile robots, which operate in dynamic and unpredictable environments, require quick and accurate decision-making capabilities. One of these essential tasks for autonomous agents is global path planning, which involves finding an optimal path from a start location to a goal location while avoiding obstacles. Traditional path planning methods rely on a rectangular grid-based occupancy map, where each cell represents a small spatial unit that can be blocked, free, or unknown. The map is then searched cell-by-cell to find the shortest path using different versions of A* or Dijkstra's algorithms. These can be inefficient for large-scale environments. Moreover, these algorithms require discretization of the environment, leading to sub-optimal paths and limited robot motion. Despite this, most of the current state-of-the-art navigation software still uses these methods [2]. Polygonal maps offer several advantages over grids, which make them a promising alternative for mobile robot navigation in dynamic environments. Unlike grids, geometric arrangements have a compact memory footprint and are well-suited to represent moving obstacles. Additionally, they are not susceptible to the problems of discretization. Most significantly, polygonal maps give rise to the Visibility Graph [3], which comprises edges of the polygons in the scene and additional edges that connect pairs of polygon corners that can "see" each other. The shortest paths can be found in this graph, and once it is constructed, the A* algorithm can quickly find smooth paths of optimal length with a minimal number of line segments. This is in stark contrast to the jagged and sub-optimal paths that emerge from a grid-based approach. A visual representation is provided in Figure 1.
Although the search itself is relatively fast, constructing the Visibility Graph requires significantly more computation time. The most efficient algorithms known today require \(O(n^{2})\) operations, where \(n\) is the number of polygon edges. Therefore, the time required to construct the graph is highly dependent on the complexity of the map.

Fig. 1: The figure shows the obstacle-avoiding shortest paths in a polygonal map. The black polygonal areas represent the walls expanded by the robot's size. The Minimal Construct algorithm explores only a small fraction of the visibility graph of the map, as shown by the thin grey lines. In comparison, the jagged path found by A* search (shown in blue) in an equivalent grid is sub-optimal in length. On the other hand, the visibility graph-based Minimal Construct algorithm (shown in red) provides an optimal path with a minimal complexity of only four line segments.

Additionally, the frequent changes in the environment, such as moving objects, people, or exploration of uncharted areas, can make the graph obsolete. In such cases, the graph needs to be reconstructed or repaired, which slows down the process of finding the shortest path. The Minimal Construct algorithm [3] addresses this problem by only computing the required portion of the Visibility Graph during an A* search, rather than pre-computing it. The algorithm always begins with the smallest possible graph, which is a straight line from the start to the destination. Only if this line intersects an obstacle does it go on to update the graph. This approach reduces the computational expense of the algorithm by limiting the number of line intersection tests to only those lines that appear promising based on heuristics. The features described above make the Minimal Construct algorithm one of the best choices to reduce the computation time for path planning in dynamically evolving polygonal maps. Thus, the goal of this work is to propose a pipeline that employs a fast global path planner to reduce the time required for map re-computation as the map changes. We do this because shorter computation durations reduce the load on a single onboard CPU while simultaneously shortening reaction times. We use Minimal Construct as a global planner in dynamic environments. To track the given global plan, we use the Pure Pursuit Controller [4] as the local planner. We recompute the global path only when there is a change in the environment (e.g., an obstacle disappeared, appeared, or moved) or the current path collides with an obstacle. The overview of the approach is given in Fig. 2. We compare the results of our method with the grid-based A* algorithm for path planning, in one static environment case and four dynamically evolving environment cases. We found that the grid-based A* algorithm gives a sub-optimal path from the source to the target. On the other hand, the proposed method gives an optimal route. We also found that the map re-computation time was significantly lower in the proposed method than with the grid-based A*. ## II Methodology ### _Global Planner_ #### II-A1 Preliminaries The Visibility Graph can be built using an \(O(n^{3})\) algorithm, which examines all \(O(n^{2})\) pairs of vertices and checks if the edges between them cross any of the \(n\) line segments in the scene. However, this method necessitates a large number of line intersection computations. In contrast, the Minimal Construct technique does not seek to compute the whole Visibility Graph.
Rather, it only explores a small piece of the graph. To avoid running collision checks on line segments that do not contribute to the shortest path, the algorithm delays computing line intersection tests until they are required. The algorithm also makes use of the fact that concave corners and edges that are not tangential at both ends cannot be included in the shortest path. An edge \((v_{i},v_{j})\) is considered tangential at vertex \(v_{j}\) if both adjacent polygon corners, \(v_{j-1}\) and \(v_{j+1}\), lie on the same side of the edge \((v_{i},v_{j})\). A visual example of this is shown in Figure 3. Whenever concave corners or non-tangential edges are discovered, they are immediately discarded. #### II-A2 Minimal Construct Algorithm Consider a scenario where the environment consists of a set of non-convex polygons which are disjoint and intersect only at endpoints. Let us represent this set of polygons with \(\mathbf{S}\). The Minimal Construct algorithm discovers a graph \(\mathbf{G}=(\mathbf{V},\mathbf{E})\) during its search, where \(\mathbf{V}\) is the set of nodes (points) in the graph and \(\mathbf{E}\) is the set of edges (line segments) connecting these nodes.

Fig. 2: Overview of the approach

Fig. 3: a) An edge \((v_{i},v_{j})\) is considered tangential at vertex \(v_{j}\) if the adjacent corners of the polygon, \(v_{j-1}\) and \(v_{j+1}\), are positioned on the same side of the edge. If the edge \((v_{i},v_{j})\) is not tangential (b), or if vertex \(v_{j}\) is concave (c), then the triangle \(\delta(v_{i},v_{j-1},v_{j+1})\) is entirely visible from vertex \(v_{i}\). Consequently, the edge \((v_{i},v_{j})\) cannot be included in the shortest path.

The set of nodes in \(\mathbf{G}\), i.e., the set \(\mathbf{V}\), is a subset of the vertices of the polygons in \(\mathbf{S}\). The Minimal Construct algorithm connects pairs of vertices in \(\mathbf{V}\) with edges in \(\mathbf{E}\). Please note that the start node \(s\) and the target node \(t\) are considered two additional points in the graph. All the nodes connected to a node \(P\) through an edge are said to be in the \(neighbourhood\) of \(P\). As the algorithm explores the graph, all the nodes that it has explored are called \(closed\) \(nodes\), and all the nodes that can be explored in the next iteration step are called \(open\) \(nodes\). Please note that for a node to be open it should have at least one closed node in its neighbourhood. The closest closed node in the neighbourhood of a node \(P\) is said to be the \(parent\) of the node \(P\). All the vertices of the polygons which form an external angle greater than 180\({}^{\circ}\) are called \(convex\) \(vertices\). An edge \((v_{i},v_{j})\) is considered \(tangential\) at vertex \(v_{j}\) if the adjacent corners of the polygon, \(v_{j-1}\) and \(v_{j+1}\), are positioned on the same side of the edge. The Minimal Construct algorithm uses the A* algorithm as its basis. In order to use it, the Minimal Construct algorithm defines a parent for each node, as described previously. After the search is finished, we use the unidirectional parent relationship to extract the shortest path by following the parent pointers from the target back to the start. Algorithm 1 uses the functions PARENTOF\((v_{i},v_{j})\), SETPARENT\((v_{i},v_{j})\), and REMOVEPARENT\((v_{i})\) to set and remove parent associations between vertices. SETPARENT\((v_{i},v_{j})\) makes vertex \(v_{i}\) the parent of \(v_{j}\), whereas REMOVEPARENT\((v_{i})\) cancels the parent of \(v_{i}\).
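The tangential test defined above amounts to two orientation (cross-product) checks. Here is a minimal sketch in plain Python; the function and variable names are ours, not the paper's:

```python
import numpy as np

def side(p, q, r):
    # Sign of the 2D cross product (q - p) x (r - p): which side of the
    # directed line p -> q the point r lies on (+1 left, -1 right, 0 on it).
    return np.sign((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

def is_tangential(v_i, v_j, v_prev, v_next):
    # Edge (v_i, v_j) is tangential at v_j iff the polygon corners adjacent
    # to v_j, i.e. v_{j-1} and v_{j+1}, lie on the same side of the line.
    return side(v_i, v_j, v_prev) * side(v_i, v_j, v_next) > 0
```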
The A* algorithm employs the functions ISOPEN\((v_{i})\), CLOSE\((v_{i})\), and ISCLOSED\((v_{i})\) to keep track of the vertices that have been opened or closed during the search. Vertices are opened when they are pushed into the priority queue and closed when they are removed. During the search, we can use these functions to query and manipulate the open and closed states of vertices. Here, the pseudo-code of the Minimal Construct algorithm, as shown in Algorithm 1, is explained. To start the search, the Minimal Construct algorithm forms a graph with only two nodes, the start and the target node. It connects the two nodes with a straight edge, making them each other's neighbours. It also marks the start node as closed and sets the start node as the parent of the target node. It then opens the target node by sending it to the priority queue. Note that the algorithm closes the start node at the very beginning, as the start node is considered explored from the start, and it remains closed throughout. After this, the algorithm enters a loop where, in each iteration, it removes the vertex \(v\) with the lowest priority from the priority queue. Here, the priority of a vertex is the sum of the distance from the start node, as travelled through the graph, and the straight-line distance to the target node. After removing the vertex, the algorithm finds its parent \(u\), and the edge \((u,v)\) is subjected to the line intersection test. This is a computationally intensive task and is discussed separately in the following section.

Fig. 4: An example to showcase the working of the Minimal Construct algorithm

If the line intersection test on the edge \((u,v)\) results in no intersection with any polygon, the algorithm goes ahead with the A* expansion. The algorithm checks if the popped vertex \(v\) is the target vertex. If \(v\) is the target vertex, then the algorithm extracts the path by following the parent pointers back to the start vertex, and returns the path as the output of the algorithm. If \(v\) is not the target vertex, it closes it, meaning that it will not be expanded again during the search. It then expands the neighbors of the popped vertex \(v\). For each neighbor that has not yet been closed, the algorithm checks if it is already in the priority queue. If the neighbor is not in the priority queue, or if its estimated path cost from the start vertex is lower than its current estimated path cost, the algorithm updates its parent, path cost, and priority, and adds it to the priority queue. It updates the path cost \(g\) of the neighbor as the sum of the path cost of the popped vertex and the distance between the two vertices. It then calculates the heuristic estimate \(h\) of the distance from the neighbor to the target vertex, and finally calculates the priority \(f\) of the neighbor as the sum of its path cost \(g\) and its heuristic estimate \(h\). It then pushes the neighbor into the priority queue.
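The neighbor-relaxation step just described can be sketched as follows, using a heap-based priority queue. This is an illustrative sketch (plain Python, names are ours), not the authors' implementation:

```python
import heapq, math

def expand(v, E, g, parent, closed, target, pq):
    # Relax each neighbor w of the popped vertex v: if reaching w through v
    # is cheaper, update its parent and path cost g, recompute the priority
    # f = g + h with the straight-line heuristic, and push it onto the queue.
    for w in E[v]:
        if w in closed:
            continue
        g_new = g[v] + math.dist(v, w)
        if g_new < g.get(w, math.inf):
            parent[w] = v
            g[w] = g_new
            heapq.heappush(pq, (g_new + math.dist(w, target), w))
```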
```
FINDPARENT(v):
  minPathCost <- infinity           ▷ Init the min path cost with infinity
  u <- nil                          ▷ Init the new parent with nil
  for all neighbors v_i with (v_i, v) in E do
    if ISCLOSED(v_i) then
      if g(v_i) + |v_i - v| < minPathCost then
        minPathCost <- g(v_i) + |v_i - v|   ▷ Update min cost
        u <- v_i                            ▷ Remember v_i as new parent
      end if
    end if
  end for
  if u != nil then
    SETPARENT(u, v)                 ▷ Set u as new parent of v
    g(v) <- g(u) + |u - v|          ▷ Update path cost g
    h(v) <- |v - t|                 ▷ Update heuristic h
    f(v) <- g(v) + h(v)             ▷ Update priority f
  end if

▷ Continuation of Algorithm 1 (main loop): handling an intersected polygon p
  else                              ▷ If polygon p has intersected
    E <- E \ {(u, v)}               ▷ Discard the invalid edge (u, v)
    REMOVEPARENT(v)                 ▷ Remove parent of v
    FINDPARENT(v)                   ▷ Algorithm 2
    if !ISCLOSED(p) then            ▷ If polygon p is not closed
      CONNECTOBSTACLE(p)            ▷ Algorithm 3
      CLOSE(p)                      ▷ Close the intersected polygon
    end if
  end if
end while
return P                            ▷ Search failed. Return empty path!
```
**Algorithm 2** Find Parent

If the line intersection test on the edge between the parent \(u\) and the popped vertex \(v\) returns an intersection, meaning that the edge intersects a polygon, the algorithm proceeds to handle the intersected polygon \(\mathbf{p}\). First, it removes the edge between \(v\) and its parent \(u\) from the graph by deleting the corresponding entry from the set of edges \(\mathbf{E}\). Then, it removes the parent of \(v\) by setting it to nil. The algorithm then calls the FINDPARENT\((v)\) function, as shown in Algorithm 2, to find a new parent for \(v\). If such a parent is discovered, \(v\)'s priority \(f(v)\) and path cost \(g(v)\) are modified, and it is reinserted into the queue. This re-parenting and reinsertion of \(v\) ensures that the A* search remains complete and returns the graph to the condition it would have been in if the invalid edge \((u,v)\) had not been there. It is important to note that \(v\) may remain without a parent until it is revisited at a later stage. For each corner \(v_{i}\) in the set of vertices of the polygon \(\mathbf{p}\), the algorithm checks if the corner is convex or not. If the corner \(v_{i}\) is convex, the algorithm adds it to the set of vertices \(\mathbf{V}\). For each known vertex \(v_{j}\) in the set \(\mathbf{V}\), the algorithm checks if \(v_{i}\) and \(v_{j}\) are tangential using the \(\textbf{isTangential}(v_{i},v_{j})\) function. If \(v_{i}\) and \(v_{j}\) are tangential, the algorithm adds an edge \((v_{i},v_{j})\) to the set \(\mathbf{E}\) of edges in the graph \(\mathbf{G}\), indicating that the corners \(v_{i}\) and \(v_{j}\) are connected. The algorithm then calls the \(\textbf{FINDPARENT}(v_{i})\) function for the corner \(v_{i}\), which searches for the parent of \(v_{i}\). Finally, the algorithm repeats this process for each corner of the polygon until all corners have been processed. The algorithm then returns the updated graph \(\mathbf{G}\). The polygon is then closed, so that it is not added to the graph multiple times.
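For comparison, Algorithm 2 translates almost line-for-line into Python. The following is a sketch under the same assumptions as above (our names, not the paper's):

```python
import math

def find_parent(v, E, g, parent, closed, target):
    # Choose the closed neighbor of v giving the cheapest path cost; if one
    # exists, re-parent v and return its new priority f = g + h, else None.
    best, u = math.inf, None
    for vi in E[v]:
        if vi in closed and g[vi] + math.dist(vi, v) < best:
            best, u = g[vi] + math.dist(vi, v), vi
    if u is None:
        return None              # v stays orphaned until revisited later
    parent[v] = u
    g[v] = best
    return g[v] + math.dist(v, target)
```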
The process is repeated until either a solution is identified or the entire graph has been investigated and no solution is found. #### II-A3 Line Intersection Test The Minimal Construct algorithm uses a line intersection test function to determine whether the edge between the popped vertex \(v\) and its parent \(u\) intersects any polygon. If it intersects, the algorithm builds the graph around the intersecting polygon. Checking line intersection is a computationally intensive task, and its algorithm is discussed as follows. The line intersection algorithm finds all intersection points of the line segment with the polygon. It then divides the line segment into sub-line segments using the intersection points. For each sub-line segment, it checks if the midpoint of the sub-line segment lies inside the polygon. If the midpoint is inside the polygon, then the line intersects the polygon, and the algorithm stops there. If the midpoint is not inside the polygon, the algorithm moves to the next sub-line segment. It repeats these steps for all sub-line segments. If none of the midpoints is inside the polygon, then the line segment does not intersect the polygon. ### _Local Planner_ After generating the global plan using the Minimal Construct algorithm, the next step is to track the plan using a local planner. For this task, the Pure Pursuit controller [4] was chosen; a small sketch of its steering law is given at the end of this section. The Pure Pursuit controller is a popular choice for mobile robots due to its simplicity and effectiveness in tracking a reference path. It is a type of proportional control that tries to steer a vehicle toward a reference path by adjusting the steering angle. The controller calculates the desired steering angle based on the current position of the vehicle and the desired point, i.e., the lookahead point on the reference path. This point is calculated by finding the point on the path that is a certain distance ahead of the vehicle and aligns with the vehicle's current heading. This approach is simple and effective when the path is smooth and has gentle curves. However, when the path contains sharp turns, the Pure Pursuit controller can deviate from the reference path, which can lead to collisions with static obstacles. This is because the controller assumes that the path is continuous and smooth, but in reality, sharp turns require the vehicle to slow down and turn more sharply to follow the path. If the controller continues to follow the reference path without adjusting for the sharp turn, the vehicle may overshoot the turn and collide with obstacles. One way to mitigate this issue is to add safety measures. These can include adding obstacle detection and avoidance capabilities to the system, reducing the speed of the vehicle when sharp turns are detected, and adding a safety margin to the distance between the vehicle and obstacles. These measures can help prevent collisions and improve the safety of the system. Another pitfall is that too small a lookahead distance may cause the robot to oscillate or overshoot the path, while too large a lookahead distance may cause the robot to cut corners and deviate from the path. A possible solution to this is to set the lookahead distance based on the velocity of the robot and the curvature of the path. ### _Recomputation of Global Path_ Finally, while tracking the given global path using the said local planner, we keep checking whether there has been a change in the environment or whether the current path intersects any of the obstacles. If any of these conditions holds, we generate the global path again.
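As promised above, here is a minimal sketch of the Pure Pursuit steering law, assuming a kinematic bicycle model. The formula is the standard pure-pursuit geometry, not code taken from the paper, and all names are ours:

```python
import math

def pure_pursuit_steering(pose, lookahead_pt, wheelbase):
    # pose = (x, y, heading). Standard pure-pursuit law: the curvature of
    # the arc through the lookahead point is kappa = 2*sin(alpha)/Ld, and a
    # bicycle model converts curvature into a steering angle.
    x, y, th = pose
    dx, dy = lookahead_pt[0] - x, lookahead_pt[1] - y
    Ld = math.hypot(dx, dy)                 # lookahead distance
    alpha = math.atan2(dy, dx) - th         # bearing to target in body frame
    kappa = 2.0 * math.sin(alpha) / Ld      # curvature of the pursuit arc
    return math.atan(wheelbase * kappa)     # steering angle
```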
\begin{table} \begin{tabular}{l c c} \hline \hline & Minimal Construct & Grid-based A* \\ \hline Case 1 & \(0.72,0.70\) & \(2.86,2.46\) \\ Case 2 & \(0.72,0.38\) & \(3.42,1.72\) \\ Case 3 & \(0.66,0.37\) & \(2.94,1.98\) \\ Case 4 & \(0.59,0.27\) & \(2.83,1.40\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Recomputation time (sec) when the global planner is called two times

Fig. 6: Figure comparing the global path generated by the Minimal Construct algorithm (in red) and the grid-based A* algorithm (in blue)

## III Results

The proposed approach was implemented in MATLAB and can be accessed through open-source code on GitHub. The simulations were performed in two artificially generated maps of different natures, i.e., static and dynamic. In the case of Figure 6, which is a static map, the path generated by the Minimal Construct is significantly shorter than that of the grid-based A*, and the graph explored is also minimal, motivating us to implement it in dynamic environments as well. The simulations in the dynamically evolving map can be seen in Figure 5. It is assumed that the agent knows the changes in the environment. It can be seen that when the map changes, both the grid-based A* and the Minimal Construct recompute the path. The time taken by the grid-based A* to recompute the new path is around 4 times longer than the time taken by the Minimal Construct. A compilation of this recomputation time for the different cases can be found in Table I. This reduction in computation time helps reduce the agent's reaction time and allows optimal utilization of the onboard processing capacity. Also, the total time taken by the agent to reach a target in each case is plotted in Figure 7. Clearly, the Minimal Construct outperforms the grid-based A*.

Fig. 5: In each subfigure, the two leftmost figures show the global path with the grid-based A* algorithm before and after the map changed. Similarly, the two rightmost figures show the global path with the Minimal Construct algorithm before and after the change in the map.

Fig. 7: The figure shows the total time taken to reach the goal from the same start point for all the cases.

## IV Conclusion

It was shown that the path search algorithm based on the Visibility Graph approach offers a significant reduction in computation time for finding the shortest path in polygonal maps. By adopting the philosophy of constructing only the required portion of the Visibility Graph, the Minimal Construct algorithm can efficiently calculate the shortest paths in cluttered and indoor environments. Polygonal planning and modeling provide the advantage of a non-discrete action set and a continuous representation of the world, resulting in smoother and optimal shortest paths with minimal path segments. Preliminary results that leverage this polygonal representation on dynamically evolving maps were also shown.

## V Future Work

The following extensions can be attempted as a part of future work. \(1)\) Incorporating a robust local planner like DWB [6] or TEB-MPC [7][8] for obstacle avoidance, as it can help improve the safety and efficiency of the system. \(2)\) Code-level optimizations in C++ can also help improve the performance of the system and reduce the computational load, which can be particularly important for real-time applications.
\(3)\) Implementing the approach in different real-world scenarios can help evaluate the robustness and adaptability of the algorithm, and testing it with maps generated by state-of-the-art mapping algorithms can help validate its effectiveness in practical settings. \(4)\) Predicting potential collision points using mathematical expressions can also be a valuable addition to the system, especially for dynamically evolving maps [9]. This can help ensure that the path planner can respond quickly to changes in the environment and avoid collisions with moving obstacles. \(5)\) Lastly, extending the algorithm to 3D polytopes can open up new applications for path planning, such as for UAVs. This can involve additional challenges such as dealing with complex terrain and varying altitude, but can also lead to exciting new possibilities for autonomous flight.
2303.11829
On Shock Profiles in Four-Field Formulations of Dissipative Relativistic Fluid Dynamics
This paper shows that in second-order hyperbolic systems of partial differential equations proposed in the author's earlier paper (J. Math. Phys. 59 (2018)) for modelling the relativistic dynamics of barotropic fluids in the presence of viscosity and heat conduction, shock waves of arbitrary strength have smooth, monotone dissipation profiles. The results and arguments extend classical considerations of Weyl (Comm. Pure Appl. Math. 2 (1949)) and Gilbarg (Amer. J. Math. 73 (1951)) to the relativistic setting.
Heinrich Freistuhler
2023-03-21T13:14:57Z
http://arxiv.org/abs/2303.11829v1
# On Shock Profiles in Four-Field Formulations of Dissipative Relativistic Fluid Dynamics ###### Abstract This paper shows that in second-order hyperbolic systems of partial differential equations proposed in the author's earlier paper (J. Math. Phys. 59 (2018)) for modelling the relativistic dynamics of barotropic fluids in the presence of viscosity and heat conduction, shock waves of arbitrary strength have smooth, monotone dissipation profiles. The results and arguments extend classical considerations of Weyl (Comm. Pure Appl. Math. **2** (1949)) and Gilbarg (Amer. J. Math. **73** (1951)) to the relativistic setting.

## 1 Introduction

In the theory of relativity, the state of a barotropic fluid can be described by a 4-vector \(\psi^{\alpha},\alpha=0,1,2,3\), which, as a function of the space-time coordinates \(x^{\beta},\beta=0,1,2,3\), is governed by a system of partial differential equations, \[\frac{\partial}{\partial x^{\beta}}\left(T^{\alpha\beta}+\Delta T^{\alpha\beta}\right)=0,\quad\alpha=0,1,2,3, \tag{1.1}\] where, in case viscosity and/or heat conduction are active, \[\Delta T^{\alpha\beta}=-B^{\alpha\beta\gamma\delta}\frac{\partial\psi_{\gamma}}{\partial x^{\delta}}. \tag{1.2}\] The tensors \[T^{\alpha\beta}\text{ and }B^{\alpha\beta\gamma\delta},\quad\alpha,\beta,\gamma,\delta=0,1,2,3, \tag{1.3}\] are given functions of the four fields, i.e., the components of \(\psi^{\alpha}\). In the absence of viscosity and heat conduction, the equations of motion reduce to the relativistic Euler equations \[\frac{\partial}{\partial x^{\beta}}T^{\alpha\beta}=0,\quad\alpha=0,1,2,3. \tag{1.4}\] The present paper focusses on shock waves, whose ideal version is given by discontinuous solutions to the latter, (1.4), of the (prototypical) form \[\psi_{\alpha}(x)=\begin{cases}\psi_{\alpha}^{-},&x^{\beta}\xi_{\beta}<0,\\ \psi_{\alpha}^{+},&x^{\beta}\xi_{\beta}>0,\end{cases} \tag{1.5}\] and asks whether they can be properly represented in the dissipative setting. A standard way to achieve such representation is a 'dissipation profile', i.e., a regular solution of (1.1) that depends also only on \(x^{\beta}\xi_{\beta}\) and connects the two states forming the shock, in other words, a solution \(\hat{\psi}\) of the ODE \[\xi_{\beta}\xi_{\delta}B^{\alpha\beta\gamma\delta}(\hat{\psi})\hat{\psi}_{\gamma}^{\prime}=\xi_{\beta}T^{\alpha\beta}(\hat{\psi})-q^{\alpha},\quad q^{\alpha}:=\xi_{\beta}T^{\alpha\beta}(\psi^{\pm}), \tag{1.6}\] on \(\mathbb{R}\) which is heteroclinic to them, \[\hat{\psi}_{\alpha}(-\infty)=\psi_{\alpha}^{-},\quad\hat{\psi}_{\alpha}(+\infty)=\psi_{\alpha}^{+}. \tag{1.7}\] Concretely, the state variables of a barotropic fluid are given by \[\psi^{\alpha}=\frac{U^{\alpha}}{\theta},\] where \(U^{\alpha}\) and \(\theta\) are the 4-velocity and the temperature, \[\theta=\left(-\psi_{\alpha}\psi^{\alpha}\right)^{-1/2},\] the fluid is specified by prescribing its pressure as a function of the temperature, \[p=\tilde{p}(\theta),\] and the ideal part of the energy-momentum tensor is given by \[T^{\alpha\beta}=\frac{\partial(\tilde{p}(\theta)\psi^{\beta})}{\partial\psi_{\alpha}}=\theta^{3}\frac{d\tilde{p}(\theta)}{d\theta}\psi^{\alpha}\psi^{\beta}+\tilde{p}(\theta)g^{\alpha\beta}. \tag{1.8}\] We assume strict causality in the sense that \[\left(\frac{\partial^{2}(\tilde{p}(\theta)\psi^{\beta})}{\partial\psi_{\alpha}\partial\psi_{\gamma}}T_{\beta}\right)_{\alpha,\gamma=0,1,2,3}\text{ is negative definite for all future non-spacelike directions }T_{\beta}.
\tag{1.9}\] As regards the dissipative part, we specify, following [8, 9, 10], \(-\Delta T^{\alpha\beta}\) as1 Footnote 1: We use the metric \(g^{\alpha\beta}=(-+++)\) and the projector \(\Pi^{\alpha\beta}=g^{\alpha\beta}+U^{\alpha}U^{\beta}\). \[\begin{array}{rl}-\Delta T^{\alpha\beta}_{\Box}=&\eta\Pi^{\alpha\gamma}\Pi^{\beta\delta}\left[\frac{\partial U_{\gamma}}{\partial x^{\delta}}+\frac{\partial U_{\delta}}{\partial x^{\gamma}}-\frac{2}{3}g_{\gamma\delta}\frac{\partial U^{\epsilon}}{\partial x^{\epsilon}}\right]+\tilde{\zeta}\Pi^{\alpha\beta}\frac{\partial U^{\gamma}}{\partial x^{\gamma}}\\ &+\sigma\left[U^{\alpha}U^{\beta}\frac{\partial U^{\gamma}}{\partial x^{\gamma}}-\left(\Pi^{\alpha\gamma}U^{\beta}+\Pi^{\beta\gamma}U^{\alpha}\right)U^{\delta}\frac{\partial U_{\gamma}}{\partial x^{\delta}}\right]\\ &\qquad\qquad+\chi\bigg[\left(U^{\alpha}\frac{\partial\theta}{\partial x_{\beta}}+U^{\beta}\frac{\partial\theta}{\partial x_{\alpha}}\right)-g^{\alpha\beta}U^{\gamma}\frac{\partial\theta}{\partial x^{\gamma}}\bigg],\end{array} \tag{1.10}\] where \[\sigma=((4/3)\eta+\zeta)/(1-c_{s}^{2})-c_{s}^{2}\chi\theta\quad\text{and}\quad\tilde{\zeta}=\zeta+c_{s}^{2}\sigma-c_{s}^{2}(1-c_{s}^{2})\chi\theta \tag{1.11}\] with \(\eta,\zeta,\chi\) the coefficients of shear viscosity, bulk viscosity, and thermal conductivity, and \(0<c_{s}<1\) the speed of sound. The following are our main results. **Theorem 1**.: _Consider a barotropic fluid with viscosity and without heat conduction. Assume that the acoustic mode is genuinely nonlinear. Then any Lax shock has a dissipation profile with respect to \(\Delta T_{\Box}\)._ **Theorem 2**.: _Consider a barotropic fluid with viscosity and with heat conduction. Assume that the acoustic mode is genuinely nonlinear. Then any Lax shock has a dissipation profile with respect to \(\Delta T_{\Box}\)._ Sections 2 and 3 are devoted to the proofs of Theorems 1 and 2, respectively. In Section 4, we contrast these findings with recently established properties of a different formulation that was proposed by Bemfica, Disconzi, and Noronha in [2].

## 2 Shock profiles in viscous barotropic fluids without heat conduction

To demonstrate Theorem 1, we write the pressure also as a function \[p=\hat{p}(\rho)\] of the energy \[\rho=\theta\tilde{p}^{\prime}(\theta)-\tilde{p}(\theta),\] to which the sound speed is of course connected as \(c_{s}^{2}=\hat{p}^{\prime}(\rho)\), and recall from [4, 3, 15] that genuine nonlinearity of the acoustic mode is characterized by the condition \[(\rho+\hat{p}(\rho))\hat{p}^{\prime\prime}(\rho)+2(1-\hat{p}^{\prime}(\rho))\hat{p}^{\prime}(\rho)>0. \tag{2.1}\] Consider an ideal shock wave (1.5) and assume w.l.o.g. that the spatiotemporal direction of propagation is \((0,1,0,0)\), i.e., \(\xi_{\beta}=\delta_{\beta 1}\), and \(\psi_{2}=\psi_{3}=0\) (as can always be achieved by a Lorentz transformation). The profile ODE system, \[-(\Delta T_{\Box}^{\alpha 1})^{\prime}=T^{\alpha 1}-q^{\alpha}, \tag{2.2}\] then has two active equations, \(\alpha=0,1\). We first characterize the rest points in terms of their dependence on the free constant \(q^{\alpha}\).
**Lemma 1**.: _For every \(q^{1}>0\) there exists a unique \(Q(q^{1})>0\) such that the following holds: The algebraic system_ \[T^{\alpha 1}(\cdot)=q^{\alpha},\quad\alpha=0,1,\] _has more than one solution if and only if_ \[q_{1}^{2}<q_{0}^{2}<q_{1}^{2}+Q(q^{1}), \tag{2.3}\] _in which case it has precisely two solutions._ Proof.: To see this, note first that the equation \(T^{11}(.)=q^{1}\) is equivalent to \[p<q^{1}\quad\text{and}\quad u_{1}^{2}=\frac{q^{1}-p}{\rho+p}.\] Under this condition, the equation \(T^{01}(.)=q^{0}\) is equivalent to the combination of \[(\rho+p)^{2}\frac{q^{1}-p}{\rho+p}\left(1+\frac{q^{1}-p}{\rho+p}\right)=q_{0}^{2} \tag{2.4}\] and \[u^{1}q^{0}>0. \tag{2.5}\] We write (2.4) as \[g(\rho)\equiv-\rho\hat{p}(\rho)+q^{1}(-\hat{p}(\rho)+\rho)=q_{0}^{2}-q_{1}^{2} \tag{2.6}\] with \(g\) defined on the interval \[I\equiv[0,\bar{\rho}]\quad\mbox{with }\hat{p}(\bar{\rho})=q^{1}.\] Note now that any stationary point of \(g\) is a nondegenerate maximum. This follows as assuming \(0=g^{\prime}(\rho)\) at some point \(\rho\in I\) implies \[q^{1}=\frac{\hat{p}(\rho)+\rho\hat{p}^{\prime}(\rho)}{1-\hat{p}^{\prime}(\rho)}\] and thus, using (2.1), \[g^{\prime\prime}(\rho)=-(\rho+q^{1})\hat{p}^{\prime\prime}(\rho)-2\hat{p}^{\prime}(\rho)=-\frac{\rho+\hat{p}(\rho)}{1-\hat{p}^{\prime}(\rho)}\hat{p}^{\prime\prime}(\rho)-2\hat{p}^{\prime}(\rho)<0. \tag{2.7}\] As \(g^{\prime}(0)>0\), this means that along \(I\), \(g\) increases from \(g(0)=0\) to \(Q\equiv\max_{I}g>0\) and then decays to \(g(\bar{\rho})=-q_{1}^{2}<0\), the monotonicity being strict in both parts. Equation (2.6) thus has more than one, namely two, positive solutions if and only if (2.3) holds. Returning to our shock wave (1.5), we note that it must correspond to certain parameter values \(q^{0},q^{1}\) with the properties recorded in Lemma 1; regarding (2.5), we fix signs2 as Footnote 2: The opposite case \(u^{1},q^{0}<0\) differs only by a transformation \(x^{1}\mapsto-x^{1}\). \[u^{1},q^{0}>0.\] Since \[-(\Delta T^{\alpha 1}_{\Box})^{\prime}=\sigma(u^{\alpha})^{\prime},\] we can rewrite the traveling wave system (2.2) equivalently as \[\sigma u_{\alpha}(u^{\alpha})^{\prime} = u_{\alpha}(T^{\alpha 1}-q^{\alpha}) \tag{2.8}\] \[\sigma v_{\alpha}(u^{\alpha})^{\prime} = v_{\alpha}(T^{\alpha 1}-q^{\alpha}) \tag{2.9}\] with \((v^{0},v^{1})=(u^{1},u^{0})\) orthogonal to \((u^{0},u^{1})\). As \(u_{\alpha}(u^{\alpha})^{\prime}=0\), equation (2.8) is an algebraic constraint, \[q^{0}u^{0}-(\rho+q^{1})u^{1}=0, \tag{2.10}\] or \[u_{1}=U(\rho):=\left(\left(\frac{\rho+q^{1}}{q^{0}}\right)^{2}-1\right)^{-1/2}. \tag{2.11}\] By virtue of (2.10), \[u^{0}v_{\alpha}(u^{\alpha})^{\prime}=u^{0}(v^{0}u^{\prime}_{0}+v^{1}u^{\prime}_{1})=u^{\prime}_{1}, \tag{2.12}\] and \[u^{0}v_{\alpha}(T^{\alpha 1}-q^{\alpha})=u^{0}v_{\alpha}((\rho+p)u^{\alpha}u^{1}+pg^{\alpha 1}-q^{\alpha})=u^{0}(-v_{0}q^{0}+v_{1}(p-q^{1})), \tag{2.13}\] equation (2.9) reduces to \[\sigma u^{\prime}_{1}=(\rho+p)u^{2}_{1}+(p-q^{1})\] or, using (2.10) again, \[\sigma U^{\prime}(\rho)\rho^{\prime}=R(\rho):=(\rho+p)\left(\left(\frac{\rho+q^{1}}{q^{0}}\right)^{2}-1\right)^{-1}+(p-q^{1})=(\rho+p)\left(\frac{\rho+q^{1}}{q^{0}}\right)^{2}-(\rho+q^{1}).\] As \(R>0\) between its two zeros, the heteroclinic solution connects them, and \(\rho\) increases in the direction in which the fluid moves. The latter correctly fits the fact that Lax shocks are compressive.
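As a small numerical illustration of Lemma 1, the two rest states can be located by solving (2.6) on \(I\). The following sketch assumes the pure-radiation equation of state \(p=\rho/3\) (cf. Remark (ii) below) and illustrative values of \(q^{1}\) and \(q_{0}^{2}-q_{1}^{2}\) inside the window (2.3); it is not the paper's code:

```python
import numpy as np
from scipy.optimize import brentq

p = lambda rho: rho / 3.0   # pure radiation, p(rho) = rho/3
q1 = 1.0                    # q^1 > 0 (illustrative)
rhs = 0.05                  # q_0^2 - q_1^2, chosen inside (0, Q(q^1))

g = lambda rho: -rho * p(rho) + q1 * (rho - p(rho))   # left side of (2.6)
rho_bar = 3.0 * q1          # right end of I, since p(rho_bar) = q^1

# g increases from g(0) = 0 to its maximum Q and then decreases to -q1**2,
# so for 0 < rhs < Q the two rest states bracket the maximizer of g.
grid = np.linspace(0.0, rho_bar, 10001)
i_max = int(np.argmax(g(grid)))
h = lambda rho: g(rho) - rhs
rho_lo = brentq(h, 0.0, grid[i_max])        # smaller rest state
rho_hi = brentq(h, grid[i_max], rho_bar)    # larger rest state
print(rho_lo, rho_hi)
```

Here \(Q(q^{1})=q_{1}^{2}/3\) for pure radiation, so the choice above indeed lies in the admissible window.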
_Remarks._ (i) The argument, notably inequality (2.7), reveals the geometric meaning of the genuine nonlinearity condition (2.1) for the Rankine-Hugoniot relations. (ii) For the special case of pure radiation, \(p=\rho/3\), the result updates considerations in [8] to the dissipation tensor \(\Delta T_{\Box}\) as derived in [10], which (cf. a remark towards the end of Sec. 1 in [10]) is not exactly identical with the version originally proposed in [8].

## 3 Shock profiles in viscous barotropic fluids with heat conduction

To include both viscosity and heat conduction, we have to use the full dissipation tensor (1.10). In this case, we can work directly with (1.6); in other words, we express (2.2) via (1.2) as \[B^{\alpha 1\gamma 1}(\hat{\psi})\hat{\psi}^{\prime}_{\gamma}=T^{\alpha 1}(\hat{\psi})-q^{\alpha},\quad q^{\alpha}:=T^{\alpha 1}(\psi^{\pm}) \tag{3.1}\] and consider this system on its natural domain of definition, \[\Psi=\{(\psi^{0},\psi^{1})\in\mathbb{R}^{2}:\psi^{0}>|\psi^{1}|\},\] with \(q^{\alpha}\) corresponding to a fixed Lax shock \(\psi^{-}\to\psi^{+}\) as in Sec. 2, the characteristic speeds \(\lambda_{1,2}\) being both positive at \(\psi^{-}\) and having different signs at \(\psi^{+}\). Since \[T^{\alpha\beta}=\theta^{3}p^{\prime}(\theta)\psi^{\alpha}\psi^{\beta}+p(\theta)g^{\alpha\beta}=\frac{\partial L^{\beta}(\psi)}{\partial\psi_{\alpha}}\] with \[L^{\beta}(\psi)\equiv p\left((-\psi^{\gamma}\psi_{\gamma})^{-1/2}\right)\psi^{\beta},\] and since, due to sharp causality, \(B^{\alpha 1\gamma 1}\) is positive definite, \(L\) with \[L(\psi)=L^{1}(\psi)-q^{\gamma}\psi_{\gamma}\] is a strict Liapunov function for (3.1). The Jacobian of the vector field \(F^{\alpha}=T^{\alpha 1}-q^{\alpha}\) is the Hessian of \(L\), \[\frac{\partial F^{\alpha}}{\partial\psi_{\beta}}=\frac{\partial^{2}L^{1}(\psi)}{\partial\psi_{\alpha}\partial\psi_{\beta}}=\frac{\partial^{2}L(\psi)}{\partial\psi_{\alpha}\partial\psi_{\beta}}=:H^{\alpha\beta}(\psi),\] and the eigenvalues of \(H^{\alpha\beta}(\psi)\) relative to the positive definite matrix \[\frac{\partial T^{\alpha 0}}{\partial\psi_{\beta}}=\frac{\partial^{2}L^{0}(\psi)}{\partial\psi_{\alpha}\partial\psi_{\beta}}>0\] are the characteristic speeds \(\lambda_{1,2}\) at the fluid state \(\psi\). If we choose \(q^{\alpha}\) and signs as in Sec. 2, these speeds are both positive at the left-hand state \(\psi^{-}\), and of different signs at the right-hand state \(\psi^{+}\) of the shock. This implies that \[H^{\alpha\beta}(\psi^{-})>0\quad\mbox{and}\quad\det H^{\alpha\beta}(\psi^{+})<0;\] the exactly two critical points of the Liapunov function \(L\) thus are a strict local minimum at \(\psi^{-}\) and a hyperbolic saddle at \(\psi^{+}\). For a shock of sufficiently small amplitude, i.e., for a parameter value \(q=(q^{0},q^{1})\) with \(q^{1}>0\) while \(q_{0}^{2}-q_{1}^{2}>0\) is small enough, the shock profile exists, i.e., system (3.1) possesses a heteroclinic orbit with \(\alpha\)-limit \(\psi^{-}\) and \(\omega\)-limit \(\psi^{+}\). Consequently, for this value of \(q\), \[c^{+}\equiv L(\psi^{+})>L(\psi^{-})\equiv c^{-},\] and the \(c^{+}\) level line of \(L\) contains a closed curve (with a corner at \(\psi^{+}\)) whose interior \[\Omega\subset L^{-1}((-\infty,c^{+}))\cap W^{u}(\psi^{-})\] contains \(\psi^{-}\).
Now, on the one hand, as \[\frac{\partial L}{\partial\psi_{1}}(\psi^{0},\psi^{1})=p(\theta)+\theta^{3}p^{\prime}(\theta)\psi_{1}^{2}-q^{1}>0\] for sufficiently small \[\theta^{-2}=-\psi^{\gamma}\psi_{\gamma}<\psi_{0}^{2},\] \(L\) increases strictly on the line segment \[S\equiv\{(\psi^{0},\psi^{1})\in\Psi:\psi^{0}=\sigma\}\] if \(\sigma>0\) is chosen small enough (in dependence on \(q^{1}\)). On the other hand, \(L\) tends to \(\pm\infty\) near \(\partial\Psi\), \[\lim_{\psi^{1}\searrow-\psi^{0}}L(\psi^{0},\psi^{1})=-\infty,\quad\lim_{\psi^{1}\nearrow+\psi^{0}}L(\psi^{0},\psi^{1})=+\infty.\] Therefore the situation, including the shock profile, is robust against perturbations of \(q\) within the range given by (2.3). This means that every Lax shock has a profile.
In [2], Bemfica, Disconzi, and Noronha have proposed for the dynamics of the pure radiation fluid, \(p(\theta)=\theta^{4}/3\), another four-field PDE formulation that is first-order equivalent to Eckart's (with \(\zeta=\chi=0\)), namely \[\partial_{\beta}(T^{\alpha\beta}+\Delta T_{BDN}^{\alpha\beta})=0 \tag{4.3}\] by setting \[-\Delta T_{BDN}^{\alpha\beta}=B_{BDN}^{\alpha\beta\gamma\delta} \frac{\partial\psi_{\gamma}}{\partial x^{\delta}}, \tag{4.4}\] with \[B^{\alpha\beta\gamma\delta}_{BDN}=\eta B^{\alpha\beta\gamma\delta}_{E}-\mu B^{ \alpha\beta\gamma\delta}_{1}-\nu B^{\alpha\beta\gamma\delta}_{2} \tag{4.5}\] where the classical Eckart viscosity tensor \[B^{\alpha\beta\gamma\delta}_{E}=\Pi^{\alpha\gamma}\Pi^{\beta\delta}+\Pi^{ \alpha\delta}\Pi^{\beta\gamma}-\frac{2}{3}\Pi^{\alpha\beta}\Pi^{\gamma\delta} \tag{4.6}\] is augmented via \[B^{\alpha\beta\gamma\delta}_{1}=(3U^{\alpha}U^{\beta}+\Pi^{\alpha\beta})(3U^{ \gamma}U^{\delta}+\Pi^{\gamma\delta}),\quad B^{\alpha\beta\gamma\delta}_{2}=( U^{\alpha}\Pi^{\beta}{}_{\epsilon}+U^{\beta}\Pi^{\alpha}{}_{\epsilon})(U^{ \gamma}\Pi^{\delta\epsilon}+U^{\delta}\Pi^{\gamma\epsilon}). \tag{4.7}\] It was shown in [2] that this formulation, which we will here briefly refer to as 'the BDN model', is indeed causal if and only if, relative to the classical coefficient \(\eta\) of viscosity, the coefficients \(\mu\) and \(\nu\) of the "regulators" \(B^{\alpha\beta\gamma\delta}_{1},B^{\alpha\beta\gamma\delta}_{2}\) satisfy \[\mu\geq\frac{4}{3}\eta\quad\mbox{and}\quad\nu\leq\left(\frac{1}{3\eta}-\frac{ 1}{9\mu}\right)^{-1}. \tag{4.8}\] On the other hand the following theorem was proven in [6]: **Theorem 4**.: _If the dissipation coefficients \(\eta,\mu,\nu>0\) satisfy the strict causality condition_ \[\mu\geq\frac{4}{3}\eta\quad\mbox{and}\quad\nu<\left(\frac{1}{3\eta}-\frac{1}{9 \mu}\right)^{-1}, \tag{4.9}\] _then the BDN model always possesses Lax shocks that do not admit any dissipation profile._ This contrasts sharply with our Theorems 1 and 2 above. Theorem 4 would not preclude the possibility that for some _sharply causal_ choice of \(\mu\) and \(\nu\), i.e., in the case \[\mu\geq\frac{4}{3}\eta\quad\mbox{and}\quad\nu=\left(\frac{1}{3\eta}-\frac{1}{9 \mu}\right)^{-1} \tag{4.10}\] (cf. [6]), all Lax shocks do have profiles again. However, Pellhammer has shown [14]: **Theorem 5**.: _Whatever values \(\eta,\mu,\nu>0\) with (4.10) are assumed, the BDN model always possesses a range of Lax shocks that have either no profile at all or an oscillatory profile, i.e., a profile whose orbit infinitely spirals around one of the endstates._ Compared with non-relativistic gas dynamics [11], this property seems exotic. It would be interesting to know whether oscillatory shock profiles in the BDN model are dynamically stable. For positive stability results on shock profiles in hyperbolically regularized systems of conservation laws see [1]. **Conclusion.** Our formulation (1.1), (1.10) of relativistic Navier-Stokes is causal, takes a solvable hyperbolic form, is physically justified by its leading-order equivalence with Eckart and Landau-Lifshitz, and, like classical Navier-Stokes, appears phenomenologically correct notably in the sense that it captures all shock waves consistently.
2304.13041
Fractional Schrödinger equation and time dependent potentials
We investigate the solutions for a time-dependent potential by considering two scenarios for the fractional Schr\"odinger equation. The first scenario analyzes the influence of the time-dependent potential in the absence of the kinetic term. We obtain analytical and numerical solutions for this case by considering the Caputo fractional time derivative, which extends Rabi's model. In the second scenario, we incorporate the kinetic term in the Schr\"odinger equation and consider fractional spatial derivatives. For this case, we analyze the spreading of the Gaussian wave packet under the action of the time and spatial fractional differential operators.
EC Gabrick, E Sayari, ASM de Castro, J Trobia, AM Batista, EK Lenzi
2023-04-25T16:30:19Z
http://arxiv.org/abs/2304.13041v1
# Fractional Schrodinger Equation and Time Dependent Potentials ###### Abstract We investigate the solutions for a time-dependent potential by considering two scenarios for the fractional Schrodinger equation. The first scenario analyzes the influence of the time-dependent potential in the absence of the kinetic term. We obtain analytical and numerical solutions for this case by considering the Caputo fractional time derivative, which extends Rabi's model. In the second scenario, we incorporate the kinetic term in the Schrodinger equation and consider fractional spatial derivatives. For this case, we analyze the spreading of the Gaussian wave packet under the action of the time and spatial fractional differential operators. Keywords: anomalous spreading, fractional dynamics, fractional quantum mechanics ## I Introduction The meaning of the operator \(d^{\nu}y/dx^{\nu}\) with \(\nu\) integer is well known and has a profound physical background [2]. The challenge is to understand what this operator means when \(\nu\) is any number (positive or negative, real or complex) [3] or even a function [4]. This problem can be dated to a letter of L'Hopital to Leibniz in 1695, in which he asked what the operator \(d^{\nu}y/dx^{\nu}\) means when \(\nu=1/2\) [3]. Since then, many researchers have dedicated themselves to this problem, for example, Euler, Lagrange, Laplace, Fourier, and others [2], giving rise to fractional calculus [5]. Nowadays, fractional calculus has become an efficient mathematical tool for analyzing different properties of systems by extending differential operators to non-integer indexes and, in particular, connecting them with experimental results [6; 7; 8; 9]. In this manner, it is possible to investigate many situations with a simple extension that may incorporate memory effects, long-range correlations, and other effects in complex systems [10]. For instance, it has been applied in complex viscoelastic media [11; 12], electrical impedance spectroscopy [13; 14; 15], wave propagation in porous media [16; 17], microflows of viscoelastic fluids [18], and gas transport in heterogeneous media [19; 20; 21]. It has also been used in other branches of physics to extend several partial differential equations, bringing new possibilities for applications in different scenarios [2]. One of them is quantum mechanics, which has been extended by incorporating spatial and time fractional differential operators [22; 23]. In this context, we have the pioneering works of N. Laskin [24; 25; 26], which lead to a fractional Schrodinger equation; these have been followed by other extensions incorporating fractional differential operators in time and space [3], as well as non-local terms [27; 28] and constraints among the different spatial coordinates (comb model) [29; 30; 31]. These extensions of the Schrodinger equation have also been analyzed by considering different choices of potential, such as delta potentials [32], constant or linear potentials [33], and some time-dependent potentials [34]. It is worth mentioning that, from the analytical and numerical point of view, it is a challenge to obtain solutions when fractional time derivatives are considered. Our goal in this work is to investigate the implications of considering time-dependent potentials in the following fractional Schrodinger equation [35] \[i^{\alpha}\hbar_{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi(\vec{r},t)=\widehat{H}(t)\psi(\vec{r},t)\;, \tag{1}\] where the fractional differential operator is the Caputo fractional time derivative, defined as follows [3]: \[\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi(\vec{r},t)=\frac{1}{\Gamma\left(1-\alpha\right)}\int_{0}^{t}dt^{\prime}\frac{1}{(t-t^{\prime})^{\alpha}}\frac{\partial}{\partial t^{\prime}}\psi(\vec{r},t^{\prime}), \tag{2}\] for \(0<\alpha<1\). We employ analytical and numerical approaches to analyze Eq. (1). For the latter, we consider the finite difference method [66; 67; 68]. It should be mentioned, as discussed in Ref. [35], that we can also extend the Schrodinger equation as follows: \[i\hbar_{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi(\vec{r},t)=\widehat{H}(t)\psi(\vec{r},t). \tag{3}\] Equations (1) and (3) are two possible extensions of the Schrodinger equation. However, when performing a Wick rotation, the imaginary unit is raised to the same power as the time coordinate for Eq. (1). Another difference between the two equations involves the temporal behavior of the solution, which, in the first case, is more suitable than in the second, where the solution decays or grows with time instead of exhibiting sinusoidal behavior. For these reasons, pointed out in Ref. [35], we consider Eq. (1) in our developments. It is also interesting to mention the similar appearance of the Schrodinger and diffusion equations. This similarity is a consequence of the stochastic processes behind these equations, which can be evidenced by Feynman's path integral formulation [36], which transforms into Wiener's path integral, i.e., the integral over Brownian motion paths. This connection has also motivated different extensions, which include Levy distributions [26] and the comb model [37; 38], among others. In addition, these extensions of the Schrodinger equation have been considered in problems related to optics [39], free-particle solutions [40], optical solitons [41], and others [42; 43; 44; 45; 46]. By using Eq. (1), we consider a two-level system with a time-dependent potential, restricted to a one-dimensional wave function \(\psi(x,t)\) without any loss of generality; \(\hbar_{\alpha}\) is an arbitrary constant used to replace the Planck constant (see Ref. [35] for more details). As mentioned before, the difference between the definitions given by Eq. (1) and Eq. (3) is in the imaginary unit. Both equations violate the probability conservation law [47]. However, the probability related to Eq. (1) may increase and reach a constant value \(1/\alpha^{2}\), as discussed in Ref. [48], while the probability associated with Eq. (3) decays to zero [47]. It is worth mentioning that two-level systems are very interesting because of the simplicity and richness of their results [49]; they have been used to study spin-1/2-like systems [50], magnetic resonance [51], quantum computation [52], unitary evolution [53], and others [54]. In some cases, two-level systems are analytically soluble, mostly when the Hamiltonian is unperturbed. However, perturbed Hamiltonians are particularly interesting, mainly in the presence of an electromagnetic field [55]. In situations like that, i.e., with a time-dependent Hamiltonian, exact solutions are rare; one famous example is the Rabi problem [56].
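Since Eq. (1) is treated numerically with finite differences, it may help to recall how the Caputo derivative (2) is typically discretized. The following is a minimal sketch of the standard L1 scheme (our own plain-Python illustration, not the authors' code):

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    # L1 approximation of the Caputo derivative of order 0 < alpha < 1
    # at the last grid point t_n, from samples f_vals = [f_0, ..., f_n].
    f_vals = np.asarray(f_vals)
    n = len(f_vals) - 1
    k = np.arange(n)
    b = (k + 1.0)**(1 - alpha) - k**(1 - alpha)     # L1 weights
    diffs = f_vals[n - k] - f_vals[n - k - 1]       # backward differences
    return dt**(-alpha) / gamma(2 - alpha) * np.sum(b * diffs)
```

For the linear test function \(f(t)=t\), this scheme reproduces the exact Caputo derivative \(t^{1-\alpha}/\Gamma(2-\alpha)\), which makes it a convenient sanity check.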
Our goal in this work is to investigate the implications of considering time-dependent potentials in the following fractional Schrodinger equation [35] \[i^{\alpha}\hbar_{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi(\vec{r},t)=\widehat{H}(t)\psi(\vec{r},t)\;, \tag{1}\] where the fractional differential operator is the Caputo fractional time derivative, defined as follows [3]: \[\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi(\vec{r},t)=\frac{1}{\Gamma\left(1-\alpha\right)}\int_{0}^{t}dt^{\prime}\frac{1}{(t-t^{\prime})^{\alpha}}\frac{\partial}{\partial t^{\prime}}\psi(\vec{r},t^{\prime}), \tag{2}\] for \(0<\alpha<1\). We employ analytical and numerical approaches to analyze Eq. (1). For the latter, we consider the finite difference method [66; 67; 68]. It should be mentioned, as discussed in Ref. [35], that we can also extend the Schrodinger equation as follows: \[i\hbar_{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi(\vec{r},t)=\widehat{H}(t)\psi(\vec{r},t). \tag{3}\] Equations (1) and (3) are two possible extensions of the Schrodinger equation. However, when performing a Wick rotation, the imaginary unit is raised to the same power as the time coordinate only in Eq. (1). Another difference between the two equations involves the temporal behavior of the solutions: the first case is more suitable, since the solutions of the second one decrease or grow with time instead of exhibiting a sinusoidal behavior. For the reasons pointed out in Ref. [35], we consider Eq. (1) in our developments. It is also interesting to mention the formal similarity between the Schrodinger and diffusion equations. This similarity is a consequence of the stochastic processes behind these equations, which can be evidenced by Feynman's path integral formulation [36] being transformed into Wiener's path integral, i.e., the integral over the paths of Brownian motions. It has also motivated different extensions based on other aspects, which include Levy distributions [26] and the comb model [37; 38], among others. In addition, these extensions of the Schrodinger equation have been considered in problems related to optics [39], free-particle solutions [40], optical solitons [41], and others [42; 43; 44; 45; 46]. By using Eq. (1), we consider a two-level system with a time-dependent potential, restricted to a one-dimensional wave function \(\psi(x,t)\) without any loss of generality, where \(\hbar_{\alpha}\) is an arbitrary time constant used to replace the Planck constant (see Ref. [35] for more details). As mentioned before, the difference between the definitions given by Eq. (1) and Eq. (3) is in the imaginary unit. Both equations violate the probability conservation law [47]. However, the probability related to Eq. (1) may increase and reach a constant value \(1/\alpha^{2}\), as discussed in Ref. [48], while the probability associated with Eq. (3) decays to zero [47]. It is worth mentioning that two-level systems are very interesting because of the simplicity and richness of their results [49]; they have been used to study spin-\(1/2\)-like systems [50], magnetic resonance [51], quantum computation [52], unitary evolution [53], and others [54]. In some cases, two-level systems are analytically solvable, mostly when the Hamiltonian is unperturbed. However, perturbed Hamiltonians are particularly interesting, mainly in the presence of an electromagnetic field [55]. In situations like that, i.e., with a time-dependent Hamiltonian, exact solutions are rare; one famous example is the Rabi problem [56].
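To make the memory structure of Eq. (2) concrete, the short sketch below evaluates the Caputo derivative on a uniform grid with the standard L1 quadrature; the function name and the test function are our own illustrative choices, not part of the paper's numerical scheme.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    """Caputo derivative of order 0 < alpha < 1 on a uniform grid,
    via the L1 quadrature: a weighted sum of first differences."""
    n = len(f_vals)
    out = np.zeros(n)                      # out[0] left at 0 (undefined at t = 0)
    c = dt**(-alpha) / gamma(2.0 - alpha)
    for j in range(1, n):
        k = np.arange(j)                   # k = 0 .. j-1
        w = (j - k)**(1 - alpha) - (j - k - 1)**(1 - alpha)
        out[j] = c * np.sum(w * np.diff(f_vals[:j + 1]))
    return out

# sanity check: for f(t) = t, the Caputo derivative is t^(1-alpha)/Gamma(2-alpha)
t = np.linspace(0.0, 2.0, 401)
alpha = 0.5
num = caputo_l1(t, t[1] - t[0], alpha)
ana = t**(1 - alpha) / gamma(2 - alpha)
print(np.max(np.abs(num[1:] - ana[1:])))   # small discretization error
```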
Inspired by the Rabi problem and by electromagnetic field perturbations, we consider two distinct cases for the Hamiltonian in Eq. (1). The first one considers \[\widehat{H}(t)=\left(\begin{array}{cc}E_{1}&\gamma e^{i\omega t}\\ \gamma e^{-i\omega t}&E_{2}\end{array}\right), \tag{4}\] which corresponds to a two-level system, where \(E_{1}\) and \(E_{2}\) are the eigenvalues and \(\gamma\) is the amplitude of the external field with frequency \(\omega\). In the second case, we consider the Hamiltonian given by \[\widehat{H}(t)=\left(\begin{array}{cc}\widehat{p}^{\,\mu}/(2m)&\gamma e^{i\omega t}\\ \gamma e^{-i\omega t}&\widehat{p}^{\,\mu}/(2m)\end{array}\right), \tag{5}\] which incorporates a kinetic term and, consequently, a spatial dependence in our problem. Note that the kinetic terms have the power \(\mu\), which can be related to a spatial fractional derivative, i.e., \(\mathcal{F}^{-1}\{|p|^{\mu}\widetilde{\psi}(p,t)\}\equiv-\partial_{|x|}^{\mu}\psi(x,t)\), where \(\mathcal{F}\{\psi(x,t);k\}=\widetilde{\psi}(k,t)\) and \(\mathcal{F}^{-1}\{\widetilde{\psi}(k,t);x\}=\psi(x,t)\) denote the Fourier transform and its inverse, respectively. This definition corresponds to the Riesz derivative [57; 58]. Aiming to understand the influence of the fractional order in the Schrodinger equation, we first develop the standard quantum-mechanical cases in Sec. II and then the fractional ones in Sec. III. We obtain analytical and numerical solutions for these Hamiltonians and analyze the spreading behavior of the wave package under different conditions. Finally, we present our discussions and conclusions in Sec. IV.

## II Schrodinger equation

The standard Schrodinger equation is a specific case of Eq. (1) or Eq. (3) with \(\alpha=1\). To understand the effects of \(\alpha\neq 1\) on the quantum dynamics, we first analyze the results obtained for the standard case. In this sense, let us start our analysis by reviewing the results obtained for the standard Schrodinger equation, i.e., \[i\hbar\frac{\partial}{\partial t}\psi(\vec{r},t)=\widehat{H}\psi(\vec{r},t)\;, \tag{6}\] where \(\widehat{H}\) is the Hamiltonian operator, \(\psi(\vec{r},t)\) is the wave function, \(i\) is the imaginary unit, and \(\hbar\) is the Planck constant [49], which, for simplicity, we set to \(\hbar=1\). Equation (6) is analyzed first by considering the Hamiltonian given by Eq. (4), which corresponds to a two-level system, as previously discussed. Equation (4) has been applied in several situations, such as a two-level system interacting with a light field [48]. Afterward, we incorporate kinetic terms in Eq. (4) by performing the changes \(E_{1}\rightarrow\widehat{p}^{\,2}/\left(2m\right)\) and \(E_{2}\rightarrow\widehat{p}^{\,2}/\left(2m\right)\), which implies \[\widehat{H}=\left(\begin{array}{cc}\widehat{p}^{\,2}/\left(2m\right)&\gamma e^{i\omega t}\\ \gamma e^{-i\omega t}&\widehat{p}^{\,2}/\left(2m\right)\end{array}\right)\;. \tag{7}\] Equation (7) is equivalent to considering the particular case \(\mu=2\) in Eq. (5), i.e., it considers the kinetic terms with an integer index. After analyzing the standard Schrodinger equation that emerges from these cases, we consider their fractional extensions and analyze the implications for the spreading of the wave package, particularly in the case \(\mu\neq 2\).
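As a baseline for the fractional results below, here is a minimal sketch (our own, with the parameter values quoted for Fig. 1(b)) that integrates the standard two-level problem defined by Eqs. (4) and (6):

```python
import numpy as np
from scipy.integrate import solve_ivp

E1, E2, gam, omega = 1.0, 2.0, 1.0, 1.0   # values used in Fig. 1(b)

def rhs(t, psi):
    """i d(psi)/dt = H(t) psi with the two-level H(t) of Eq. (4)."""
    p1, p2 = psi
    return [-1j * (E1 * p1 + gam * np.exp(1j * omega * t) * p2),
            -1j * (E2 * p2 + gam * np.exp(-1j * omega * t) * p1)]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0 + 0j, 0.0 + 0j],
                t_eval=np.linspace(0.0, 20.0, 2001), rtol=1e-9, atol=1e-9)
p1, p2 = np.abs(sol.y[0])**2, np.abs(sol.y[1])**2
print(p1[-1] + p2[-1])   # |psi1|^2 + |psi2|^2 stays 1 in the standard case
```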
Equations (4) and (7) allow us to write the wave function in the form \[\psi=\left(\begin{array}{c}\psi_{1}\\ \psi_{2}\end{array}\right)\;, \tag{8}\] where \(\psi_{1}\) and \(\psi_{2}\) are obtained by solving the Schrodinger equation for each case. Let us first consider the case corresponding to the Hamiltonian defined by Eq. (4), with solutions \(\psi_{k}=\psi_{k}(t)\). For the initial condition, we analyze the situation in which only one state is populated initially, i.e., \(\psi_{1}(0)=1\) and \(\psi_{2}(0)=0\). The problem concerns obtaining the transition probability between the states after the external field is applied. We find these probabilities by solving Eq. (6), i.e., \[i\frac{\partial}{\partial t}\psi_{1}(t)=E_{1}\psi_{1}(t)+\gamma e^{i\omega t}\psi_{2}(t), \tag{9}\] and \[i\frac{\partial}{\partial t}\psi_{2}(t)=E_{2}\psi_{2}(t)+\gamma e^{-i\omega t}\psi_{1}(t), \tag{10}\] in which \(|\psi_{1}(t)|^{2}+|\psi_{2}(t)|^{2}=1\) is always verified. This case admits an analytical solution (see, for example, Ref. [49]); in particular, when \(\omega=0\), it is given by \[\psi_{1}(t)=\frac{1}{2}\left(1+\frac{E_{1}-E_{2}}{\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}}\right)e^{-i\gamma_{+}t}+\frac{1}{2}\left(1-\frac{E_{1}-E_{2}}{\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}}\right)e^{-i\gamma_{-}t}, \tag{11}\] and \[\psi_{2}(t)=\frac{\gamma}{\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}}\left(e^{-i\gamma_{+}t}-e^{-i\gamma_{-}t}\right), \tag{12}\] where \(\gamma_{\pm}=\left(E_{1}+E_{2}\pm\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}\right)/2\). Figure 1(a) illustrates the result obtained by considering a constant external field, i.e., \(\omega=0\). It is possible to verify that the system oscillates between the two levels. It is interesting to note in Fig. 1(a) that, for the parameters used, \(\psi_{1}\) is predominant over \(\psi_{2}\). On the other hand, when we consider an oscillatory external field, \(\omega\neq 0\), the system oscillates between the two states as shown in Fig. 1(b). The result for \(\omega\neq 0\) is obtained numerically by solving Eqs. (9) and (10). When \(0<\omega<1\), the amplitude of \(|\psi_{2}|^{2}\) tends to \(1\), reaching this value at \(\omega=1\). On the other hand, for \(\omega>1\) the amplitude of \(|\psi_{2}|^{2}\) decays asymptotically to zero, while \(|\psi_{1}|^{2}\) oscillates around \(1\). A similar analysis can be performed for the second case, i.e., for the Hamiltonian defined in terms of Eq. (7). The equations for the wave functions \(\psi_{1(2)}(x,t)\) read \[i\frac{\partial}{\partial t}\psi_{1}(x,t)=-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\psi_{1}(x,t)+\gamma e^{i\omega t}\psi_{2}(x,t), \tag{13}\] and \[i\frac{\partial}{\partial t}\psi_{2}(x,t)=-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\psi_{2}(x,t)+\gamma e^{-i\omega t}\psi_{1}(x,t), \tag{14}\]

Figure 1: Probability of finding the system in the \(\psi_{1}(t)\) state (red line) and in the \(\psi_{2}(t)\) state (blue line). Panel (a) is for \(\omega=0\) and panel (b) for \(\omega=1\). We consider \(\gamma=1\), \(E_{1}=1\), and \(E_{2}=2\).

where, for simplicity, we assume \(m=1\). The solution for this case can be found by applying the Fourier transform (\(\widetilde{\psi}_{1,2}(k,t)=\mathcal{F}\{\psi_{1,2}(x,t);k\}\) and \(\psi_{1,2}(x,t)=\mathcal{F}^{-1}\{\widetilde{\psi}_{1,2}(k,t);x\}\), as defined before) in Eqs. (13) and (14), yielding
\[i\frac{\partial}{\partial t}\widetilde{\psi}_{1}(k,t)=\frac{1}{2}k^{2}\widetilde{\psi}_{1}(k,t)+\gamma e^{i\omega t}\widetilde{\psi}_{2}(k,t), \tag{15}\] and \[i\frac{\partial}{\partial t}\widetilde{\psi}_{2}(k,t)=\frac{1}{2}k^{2}\widetilde{\psi}_{2}(k,t)+\gamma e^{-i\omega t}\widetilde{\psi}_{1}(k,t)\;. \tag{16}\] By performing some calculations, it is possible to show that the solution \(\widetilde{\psi}_{2}(k,t)\) is related to the solution \(\widetilde{\psi}_{1}(k,t)\) as follows: \[\widetilde{\psi}_{2}(k,t)=-i\gamma\int_{0}^{t}dt^{\prime}e^{-\frac{1}{2}ik^{2}(t-t^{\prime})}e^{-i\omega t^{\prime}}\widetilde{\psi}_{1}(k,t^{\prime})\;, \tag{17}\] for which we assume the initial condition \(\widetilde{\psi}_{2}(k,0)=0\). Furthermore, this relation implies that \[i\frac{\partial}{\partial t}\widetilde{\psi}_{1}(k,t)=\frac{1}{2}k^{2}\widetilde{\psi}_{1}(k,t)-i\gamma^{2}\int_{0}^{t}dt^{\prime}e^{-\frac{1}{2}ik^{2}(t-t^{\prime})}e^{i\omega(t-t^{\prime})}\widetilde{\psi}_{1}(k,t^{\prime})\;. \tag{18}\] Note that the last term present in Eq. (18) is a nonlocal term whose kernel has a nonsingular dependence on the variable \(t\). It is worth mentioning that nonsingular kernels have been successfully applied in many situations, such as the ones presented in Refs. [59; 60; 61; 62; 63; 64]. Equation (18) can be solved by using the Laplace transform, yielding \[\widetilde{\psi}_{1}(k,t)=e^{-\frac{1}{2}i\left(k^{2}-\omega\right)t}\left[\cos\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right)-\frac{i\omega}{\sqrt{\omega^{2}+4\gamma^{2}}}\sin\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right)\right]\widetilde{\varphi}_{1}(k)\;, \tag{19}\] where \(\widetilde{\psi}_{1}(k,0)=\widetilde{\varphi}_{1}(k)\) is the initial condition for \(\psi_{1}(x,t)\). Applying the inverse Fourier transform, we obtain \[\psi_{1}(x,t)=\Xi_{1}(t)\int_{-\infty}^{\infty}dx^{\prime}\mathcal{G}(x-x^{\prime},t)\varphi_{1}(x^{\prime}), \tag{20}\] where \[\Xi_{1}(t)=e^{\frac{i}{2}\omega t}\left[\cos\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right)-\frac{i\omega}{\sqrt{\omega^{2}+4\gamma^{2}}}\sin\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right)\right], \tag{21}\] and \[\psi_{2}(x,t)=\Xi_{2}(t)\int_{-\infty}^{\infty}dx^{\prime}\mathcal{G}(x-x^{\prime},t)\varphi_{1}(x^{\prime})\;. \tag{22}\] The function \(\Xi_{2}(t)\) is written as follows: \[\Xi_{2}(t)=-\frac{2i\gamma}{\sqrt{\omega^{2}+4\gamma^{2}}}e^{-\frac{i}{2}\omega t}\sin\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right), \tag{23}\] and \(\mathcal{G}(x,t)\) is the quantum free-particle propagator, i.e., \(\mathcal{G}(x,t)=e^{-\frac{x^{2}}{2it}}/\sqrt{2\pi it}\). In addition to the analytical results, given by Eqs. (20) and (22), we also obtain the numerical solutions of Eqs. (13) and (14). For the numerical approach, we consider the finite difference method [65]. We consider a grid defined by \([0,X]\times[0,T]\), with boundary conditions \(\psi_{1,2}(0,t)=\psi_{1,2}(X,t)=0\). The time is discretized by \(t_{j}=j\Delta t\), where \(j=1,2,...,N_{t}\), with time step \(\Delta t=T/N_{t}\), and each space coordinate is given by \(x_{i}=i\Delta x\), where \(i=1,2,...,N_{x}\), with space step \(\Delta x=X/N_{x}\). To avoid numerical boundary problems, the origin of our space coordinate is at \(X/2\). From these considerations, the discretization of Eqs. (13) and (14) is given by
\[\psi_{1}^{i,j+1}=\psi_{1}^{i,j}+i\xi(\psi_{1}^{i+1,j}-2\psi_{1}^{i,j}+\psi_{1}^{i-1,j})-i\beta(V_{1}^{j}\psi_{2}^{i,j}+V_{1}^{j+1}\psi_{2}^{i,j+1}), \tag{24}\] and \[\psi_{2}^{i,j+1}=\psi_{2}^{i,j}+i\xi(\psi_{2}^{i+1,j}-2\psi_{2}^{i,j}+\psi_{2}^{i-1,j})-i\beta(V_{2}^{j}\psi_{1}^{i,j}+V_{2}^{j+1}\psi_{1}^{i,j+1}), \tag{25}\] where \(\xi\equiv\Delta t/(2\Delta x^{2})\), \(\beta\equiv\gamma\Delta t/2\), \(V_{1}=e^{i\omega t}\) and \(V_{2}=e^{-i\omega t}\). For stability, it is required that \(\xi\leq 1/2\) and that \(\beta\) be of smaller order than \(\xi\) [65]. Considering \(\psi_{1}(x,0)=e^{-\frac{x^{2}}{2\sigma^{2}}}/(2\pi\sigma^{2})^{1/4}\) and \(\psi_{2}(x,0)=0\) as the initial condition, the results for \(|\psi_{1}|^{2}\) and \(|\psi_{2}|^{2}\) are displayed in Figs. 2(a) and (b), respectively. The parameters considered in this simulation are \(\xi=0.0016\), \(\gamma=0.5\), \(\Delta x=0.25\), \(\Delta t=0.0002\), and \(\sigma=0.4\). As observed in the results without kinetic terms, the system starts mostly in \(\psi_{1}\). However, a transition to the \(\psi_{2}\) state occurs due to the external field. This transition persists in the presence of the kinetic terms. Numerically, we observed that \(\int_{-\infty}^{\infty}dx(|\psi_{1}(x,t)|^{2}+|\psi_{2}(x,t)|^{2})=1\). It is worth mentioning that if we decrease \(\Delta x\), the oscillations due to the potential become smoother. The probability of finding the system in both states becomes approximately equal after \(t\geq 10\). For short times, the first state is the most populated, as observed in Fig. 3. This result shows that the package, initially centered at the origin, spreads in space in the first state and starts to transit to the second state in a sinusoidal fashion. The mean square displacement is a measure of the spreading of the system, here represented by the wave package. It is widely applied in diffusion processes to characterize the type of diffusion, usual or anomalous. For the usual diffusion, we have a linear time dependence of the mean square displacement, i.e., \(\langle(\Delta x)^{2}\rangle\sim t\), which is related to Markovian processes. For the anomalous case, we have \(\langle(\Delta x)^{2}\rangle\sim t^{S_{d}}\), where \(S_{d}>1\) and \(S_{d}<1\) correspond to the super-diffusive and sub-diffusive cases [3], respectively. In quantum mechanics, we can also use this quantity to understand the spreading of the probability density, i.e., \(|\psi_{1,2}|^{2}\), in time. The normal case corresponds to the free particle for the standard Schrodinger equation, where \(\langle(\Delta x)^{2}\rangle\sim t^{S}\) with \(S=2\). The anomalous cases are those with different behaviors of the mean square displacement. Note that these results are in agreement with the analytical results obtained for Eqs. (13) and (14), which yield Gaussian distributions for both wave functions. Considering the Gaussian package as the initial condition, the mean square displacement for the free particle is shown in Fig. 4(a) by the black points, which follow \(\sim t^{S_{1}}\), with \(S_{1}=2.02\). This result is obtained by taking \(\gamma=0\) in the numerical simulations. The effect of the potential is displayed in Fig. 4(a) by the red points, which follow \(\sim t^{S_{2}}\), with \(S_{2}=2.07\), for \(|\psi_{1}|^{2}\). Due to the external potential, after a certain time, the probability of finding the system transfers from the first level to the second one, as shown in Fig. 4(b).
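For concreteness, a minimal sketch of the scheme in Eqs. (24)-(25) follows (our own implementation; the grid parameters echo the text, \(\omega=2\pi\) is taken from the Fig. 4 caption, and the exact pointwise solution of the implicit \(\psi_{1}\)-\(\psi_{2}\) coupling is our choice):

```python
import numpy as np

# grid and parameters as quoted in the text (xi = 0.0016, gamma = 0.5, ...)
Nx, Nt = 2000, 5000
dx, dt = 0.25, 0.0002
gam, omega, sigma = 0.5, 2 * np.pi, 0.4
xi, beta = dt / (2 * dx**2), gam * dt / 2        # stability: xi <= 1/2

x = (np.arange(Nx) - Nx // 2) * dx               # origin shifted to X/2
psi1 = (np.exp(-x**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)**0.25).astype(complex)
psi2 = np.zeros(Nx, dtype=complex)

def lap(f):
    """Second difference with psi(0, t) = psi(X, t) = 0 boundaries."""
    out = np.zeros_like(f)
    out[1:-1] = f[2:] - 2 * f[1:-1] + f[:-2]
    return out

for j in range(Nt):
    V1o = np.exp(1j * omega * j * dt)            # V1 at step j; V2 = conj(V1)
    V1n = np.exp(1j * omega * (j + 1) * dt)
    a1 = psi1 + 1j * xi * lap(psi1) - 1j * beta * V1o * psi2
    a2 = psi2 + 1j * xi * lap(psi2) - 1j * beta * np.conj(V1o) * psi1
    # the j+1 values in Eqs. (24)-(25) are coupled pointwise; using V1*V2 = 1,
    # the 2x2 system can be solved exactly:
    psi1, psi2 = ((a1 - 1j * beta * V1n * a2) / (1 + beta**2),
                  (a2 - 1j * beta * np.conj(V1n) * a1) / (1 + beta**2))

print(np.sum(np.abs(psi1)**2 + np.abs(psi2)**2) * dx)   # stays close to 1
```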
Figure 4: Mean square displacement for the Gaussian package. Panel (a) is for the \(\psi_{1}\) state, and panel (b) is for the \(\psi_{2}\) state. The red points are for the standard two-level equations, and the black points are for the free particle. The slopes are \(S_{1}=2.02\) and \(S_{2}=2.07\). We consider \(\xi=0.0012\), \(\gamma=0.5\), \(\omega=2\pi\), \(\Delta x=0.2\), \(\Delta t=0.0001\), and \(\sigma=0.4\).

The distribution for \(|\psi_{2}|^{2}\) increases as \(\sim t^{2}\). The slopes found in the numerical simulations are in agreement with our analytical expressions, which indicate \(\sim t^{2}\) for both cases, free particle and two-level system.

## III Fractional Schrodinger equations

Now, we analyze the previous scenarios within the fractional extension in time of the Schrodinger equation. For the first case, i.e., the Hamiltonian given by Eq. (4), we have \[i^{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi_{1}(t)=E_{1}\psi_{1}(t)+\gamma e^{i\omega t}\psi_{2}(t), \tag{26}\] and \[i^{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi_{2}(t)=E_{2}\psi_{2}(t)+\gamma e^{-i\omega t}\psi_{1}(t), \tag{27}\] where \(\hbar_{\alpha}=1\), without loss of generality. The relations represented by Eqs. (26) and (27) extend Rabi's model. For this set of equations, it is possible to obtain an analytical solution for \(\omega=0\), i.e., a static field. To obtain the solutions for this case, we can use the Laplace transform (\(\mathcal{L}\{\psi(t);s\}=\hat{\psi}(s)\) and \(\mathcal{L}^{-1}\{\hat{\psi}(s);t\}=\psi(t)\)) to simplify Eqs. (26) and (27) for the static case, yielding \[\hat{\psi}_{1}(s)=\frac{i^{\alpha}s^{\alpha-1}\left(i^{\alpha}s^{\alpha}-E_{2}\right)}{\left(i^{\alpha}s^{\alpha}-E_{1}\right)\left(i^{\alpha}s^{\alpha}-E_{2}\right)-\gamma^{2}}, \tag{28}\] and \[\hat{\psi}_{2}(s)=\frac{\gamma\,i^{\alpha}s^{\alpha-1}}{\left(i^{\alpha}s^{\alpha}-E_{1}\right)\left(i^{\alpha}s^{\alpha}-E_{2}\right)-\gamma^{2}}, \tag{29}\] for the initial condition \(\psi_{1}(0)=1\) and \(\psi_{2}(0)=0\). After performing the inverse Laplace transform, it is possible to show that \[\psi_{1}(t)=\frac{1}{2}\left(1-\frac{E_{1}-E_{2}}{\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}}\right)E_{\alpha}\left(\gamma_{-}t^{\alpha}/i^{\alpha}\right)+\frac{1}{2}\left(1+\frac{E_{1}-E_{2}}{\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}}\right)E_{\alpha}\left(\gamma_{+}t^{\alpha}/i^{\alpha}\right), \tag{30}\] and \[\psi_{2}(t)=\frac{\gamma}{\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}}\bigg(E_{\alpha}\left(\gamma_{+}t^{\alpha}/i^{\alpha}\right)-E_{\alpha}\left(\gamma_{-}t^{\alpha}/i^{\alpha}\right)\bigg)\;, \tag{31}\] where \(\gamma_{\pm}=\left(E_{1}+E_{2}\pm\sqrt{(E_{1}-E_{2})^{2}+4\gamma^{2}}\right)/2\) and \(E_{\alpha}(x)\) is the Mittag-Leffler function, \[E_{\alpha}(x)=\sum_{n=0}^{\infty}\frac{x^{n}}{\Gamma(1+\alpha n)}\;, \tag{32}\] which corresponds to an extension of the exponential function [3]. The solutions found for \(\psi_{1}(t)\) and \(\psi_{2}(t)\) are determined in terms of the Mittag-Leffler function, implying that the system has an unusual oscillation process, i.e., different from the standard case.
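The solutions (30)-(31) are straightforward to evaluate numerically; below is a small sketch using a truncated series for Eq. (32) (our own helper with an illustrative order \(\alpha\); the truncation is adequate only for the moderate arguments used here):

```python
import numpy as np
from scipy.special import gammaln

def mittag_leffler(z, alpha, nmax=150):
    """Truncated series of Eq. (32); adequate for moderate |z| only."""
    z = np.asarray(z, dtype=complex)
    out = np.zeros_like(z)
    for n in range(nmax):
        out += z**n * np.exp(-gammaln(1.0 + alpha * n))
    return out

E1, E2, gam, alpha = 1.0, 2.0, 1.0, 0.9          # illustrative fractional order
root = np.sqrt((E1 - E2)**2 + 4 * gam**2)
gp, gm = (E1 + E2 + root) / 2, (E1 + E2 - root) / 2

t = np.linspace(0.01, 5.0, 500)
ia = 1j**alpha                                   # i^alpha, principal branch
psi1 = (0.5 * (1 - (E1 - E2) / root) * mittag_leffler(gm * t**alpha / ia, alpha)
        + 0.5 * (1 + (E1 - E2) / root) * mittag_leffler(gp * t**alpha / ia, alpha))
psi2 = (gam / root) * (mittag_leffler(gp * t**alpha / ia, alpha)
                       - mittag_leffler(gm * t**alpha / ia, alpha))
print(np.abs(psi1[-1])**2 + np.abs(psi2[-1])**2)  # differs from 1: no conservation
```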
For the case \(\omega\neq 0\), the solution can also be found, and it is given by \[\psi_{1}(t)=E_{\alpha}\bigg[\left(E_{1}/i^{\alpha}\right)t^{\alpha}\bigg]+\sum_{n=1}^{\infty}\left(\frac{\gamma}{i^{\alpha}}\right)^{2n}\int_{0}^{t}dt_{n}\Lambda(t-t_{n})\int_{0}^{t_{n}}dt_{n-1}\Lambda(t_{n}-t_{n-1})\cdots\int_{0}^{t_{2}}dt_{1}\Lambda(t_{2}-t_{1})E_{\alpha}\bigg[\left(E_{1}/i^{\alpha}\right)t_{1}^{\alpha}\bigg], \tag{33}\] with \[\psi_{2}(t)=\frac{\gamma}{i^{\alpha}}\int_{0}^{t}dt^{\prime}(t-t^{\prime})^{\alpha-1}E_{\alpha,\alpha}\bigg[\left(E_{2}/i^{\alpha}\right)(t-t^{\prime})^{\alpha}\bigg]e^{i\omega t^{\prime}}\psi_{1}(t^{\prime})\;, \tag{34}\] where \[\Lambda(t)=e^{i\omega t}\int_{0}^{t}dt^{\prime}t^{\prime\alpha-1}e^{-i\omega t^{\prime}}E_{\alpha,\alpha}\bigg[\left(E_{1}/i^{\alpha}\right)t^{\prime\alpha}\bigg](t-t^{\prime})^{\alpha-1}E_{\alpha,\alpha}\bigg[\left(E_{2}/i^{\alpha}\right)(t-t^{\prime})^{\alpha}\bigg]\;, \tag{35}\] by considering \(\psi_{1}(0)=1\) and \(\psi_{2}(0)=0\). The solutions for this case are found in terms of the generalized Mittag-Leffler function [3], \[E_{\alpha,\beta}(x)=\sum_{n=0}^{\infty}\frac{x^{n}}{\Gamma(\beta+\alpha n)}\;. \tag{36}\] Figure 5 displays the numerical solution of Eqs. (26) and (27): the static case in Figs. 5(a) and 5(b) and the non-static case in Figs. 5(c) and 5(d). The results are in perfect agreement with the analytical solutions found in Eqs. (30), (31), (33), and (34) (see the Appendix for details of the numerical procedure). A direct consequence of incorporating the fractional time derivative in the Schrodinger equation is the non-conservation of the probability, i.e., \(|\psi_{1}(\infty)|^{2}+|\psi_{2}(\infty)|^{2}=1/\alpha^{2}\). This result agrees with the results presented in Refs. [47; 48].

Figure 5: Probability of finding the system in the \(\psi_{1}(t)\) state (red line), in panels (a) and (c), and in the \(\psi_{2}(t)\) state (blue line), in panels (b) and (d). Panels (a) and (b) are for \(\omega=0\), and panels (c) and (d) are for \(\omega=1\). We consider \(E_{1}=1\), \(E_{2}=2\), and \(\gamma=1\).

Considering the kinetic terms, the time fractional Schrodinger equation can be written in the form \[i^{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi_{1}(x,t)=-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\psi_{1}(x,t)+\gamma e^{i\omega t}\psi_{2}(x,t), \tag{37}\] and \[i^{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi_{2}(x,t)=-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\psi_{2}(x,t)+\gamma e^{-i\omega t}\psi_{1}(x,t). \tag{38}\] These equations can be approximated by the following discretization [66]: \[\psi_{1}^{i,j+1}=\psi_{1}^{i,j}-i^{-\alpha}\xi_{\alpha}(\psi_{1}^{i+1,j}-2\psi_{1}^{i,j}+\psi_{1}^{i-1,j})+i^{-\alpha}\beta_{\alpha}(V_{1}^{j}\psi_{2}^{i,j}+V_{1}^{j+1}\psi_{2}^{i,j+1})-\sum_{k=1}^{j}[(k+1)^{(1-\alpha)}-k^{(1-\alpha)}][\psi_{1}^{i,j+1-k}-\psi_{1}^{i,j-k}], \tag{39}\] and \[\psi_{2}^{i,j+1}=\psi_{2}^{i,j}-i^{-\alpha}\xi_{\alpha}(\psi_{2}^{i+1,j}-2\psi_{2}^{i,j}+\psi_{2}^{i-1,j})+i^{-\alpha}\beta_{\alpha}(V_{2}^{j}\psi_{1}^{i,j}+V_{2}^{j+1}\psi_{1}^{i,j+1})-\sum_{k=1}^{j}[(k+1)^{(1-\alpha)}-k^{(1-\alpha)}][\psi_{2}^{i,j+1-k}-\psi_{2}^{i,j-k}], \tag{41}\] where \(\xi_{\alpha}\equiv\Gamma(2-\alpha)\Delta t^{\alpha}/(2\Delta x^{2})\), \(\beta_{\alpha}\equiv\Gamma(2-\alpha)\gamma\Delta t^{\alpha}/2\), \(V_{1}=e^{i\omega t}\) and \(V_{2}=e^{-i\omega t}\).
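A compact sketch of this update rule is given below (our own implementation: the pointwise elimination of the implicitly coupled \(j+1\) values is our choice, and the grid is kept small because the Caputo memory term requires the full history of both wave functions); the convergence condition quoted next constrains \(\Delta t\) and \(\Delta x\).

```python
import numpy as np
from math import gamma

# small illustrative grid; physical parameters echo the text (alpha = 0.98)
Nx, Nt = 400, 1000
dx, dt = 0.25, 0.0002
alpha, gam, omega, sigma = 0.98, 0.5, 2 * np.pi, 0.4
xia = gamma(2 - alpha) * dt**alpha / (2 * dx**2)
c = 1j**(-alpha) * gamma(2 - alpha) * gam * dt**alpha / 2  # i^(-alpha)*beta_alpha

x = (np.arange(Nx) - Nx // 2) * dx
h1 = np.zeros((Nt + 1, Nx), dtype=complex)       # full histories: the memory
h2 = np.zeros((Nt + 1, Nx), dtype=complex)       # sum needs every past step
h1[0] = np.exp(-x**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)**0.25

def lap(f):
    out = np.zeros_like(f)
    out[1:-1] = f[2:] - 2 * f[1:-1] + f[:-2]
    return out

for j in range(Nt):
    V1o, V1n = np.exp(1j * omega * j * dt), np.exp(1j * omega * (j + 1) * dt)
    k = np.arange(1, j + 1)
    w = ((k + 1)**(1 - alpha) - k**(1 - alpha))[:, None]   # memory weights
    m1 = (w * (h1[j + 1 - k] - h1[j - k])).sum(axis=0)
    m2 = (w * (h2[j + 1 - k] - h2[j - k])).sum(axis=0)
    b1 = h1[j] - 1j**(-alpha) * xia * lap(h1[j]) + c * V1o * h2[j] - m1
    b2 = h2[j] - 1j**(-alpha) * xia * lap(h2[j]) + c * np.conj(V1o) * h1[j] - m2
    h1[j + 1] = (b1 + c * V1n * b2) / (1 - c**2)           # pointwise elimination
    h2[j + 1] = (b2 + c * np.conj(V1n) * b1) / (1 - c**2)  # of the coupling

print(np.sum(np.abs(h1[-1])**2 + np.abs(h2[-1])**2) * dx)  # not conserved
```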
The convergence condition is \(\Delta t^{\alpha}/\Delta x^{2}\leq(1-2^{-\alpha})/\Gamma(2-\alpha)\) [67]. Figures 6(a) and 6(b) show the numerical solution for \(\psi_{1}(x,t)\) and \(\psi_{2}(x,t)\) with \(\alpha=0.98\), considering the initial conditions \(\psi_{1}(x,0)=e^{-x^{2}/(2\sigma^{2})}/(2\pi\sigma^{2})^{1/4}\) and \(\psi_{2}(x,0)=0\), where \(\sigma=0.4\). Note that, even for \(\alpha\) only slightly different from the standard value, the spreading dynamics of the probability densities change significantly from the standard case. If we consider \(\alpha<0.98\), these changes become more pronounced. For example, the results presented in Figs. 6(a) and 6(b) show that the fractional time operator makes the probability spread more slowly when compared to the standard case. Also, the transition between the states occurs with a greater amplitude than in the integer case. Another anomalous behavior is the non-conservation of probability. In this case, the probability decays, and the imaginary part of the effective potential operates as a dissipative term [69]. For a fixed time, a comparison between the spatial probability distributions for \(\alpha=1\) (dotted lines) and \(\alpha=0.98\) (continuous lines) is shown in Fig. 7. For this time, the results show that the spreading of the package decreases the amplitude of \(|\psi_{1}|^{2}\), and the shape of \(|\psi_{2}|^{2}\) becomes wider. The mean square displacement for the Gaussian package is exhibited in Fig. 8 for \(\alpha=0.98\) by the blue points, in panel 8(a) for \(|\psi_{1}|^{2}\) and in panel 8(b) for \(|\psi_{2}|^{2}\). The red points are for the standard case, and the black line is for the free particle. The fractional-time spread is similar to the standard case for short times. However, after this initial time, the blue points follow \(\sim t^{S_{3}}\), with \(S_{3}=1.87\), while the red ones follow \(\sim t^{2.07}\). The behavior of the fractional case in time shows that the package spreads with less intensity than in the standard case; the spread is more centered. The effect of the oscillatory potential is observed in the spread of the second state, as shown in Fig. 8(b) by the blue curve for \(|\psi_{2}|^{2}\). The population of the second state in the fractional scenario differs from the standard case: the fractional operator in time makes the probability of the \(\psi_{2}\) state oscillate like a sinusoidal function, and, as time increases, \(\psi_{2}\) becomes populated more frequently. Another difference in the fractional case is that the probability is not conserved, and the deviations decay to zero; the imaginary part of the effective potential operates like a dissipative term [69]. Now, let us consider the Schrodinger equation with fractional differential operators in space. This extension can be directly related to the works of Laskin [26], which take Levy flights in the Feynman path integral approach into account. Following an analogous scheme [22], it is possible to include the fractional differential operator in space in such a way that the equations become \[i\frac{\partial}{\partial t}\psi_{1}(x,t)=-\frac{1}{2}\frac{\partial^{\mu}}{\partial|x|^{\mu}}\psi_{1}(x,t)+\gamma e^{i\omega t}\psi_{2}(x,t), \tag{42}\] and \[i\frac{\partial}{\partial t}\psi_{2}(x,t)=-\frac{1}{2}\frac{\partial^{\mu}}{\partial|x|^{\mu}}\psi_{2}(x,t)+\gamma e^{-i\omega t}\psi_{1}(x,t). \tag{43}\]
This extension of the set of Schrodinger equations essentially considers \(\partial_{x}^{2}(\cdots)\to\partial_{|x|}^{\mu}(\cdots)\) with \(1<\mu<2\), which corresponds to a Riesz-Weyl fractional operator. By applying the Fourier transform in the previous set of equations and using the property \(\mathcal{F}\left\{\partial_{|x|}^{\mu}\psi_{1,2}(x,t);k\right\}=-|k|^{\mu}\widetilde{\psi}_{1,2}(k,t)\), we have \[i\frac{\partial}{\partial t}\widetilde{\psi}_{1}(k,t)=\frac{1}{2}|k|^{\mu}\widetilde{\psi}_{1}(k,t)+\gamma e^{i\omega t}\widetilde{\psi}_{2}(k,t), \tag{44}\] and \[i\frac{\partial}{\partial t}\widetilde{\psi}_{2}(k,t)=\frac{1}{2}|k|^{\mu}\widetilde{\psi}_{2}(k,t)+\gamma e^{-i\omega t}\widetilde{\psi}_{1}(k,t)\;. \tag{45}\] By performing some calculations, it is possible to show that \[i\frac{\partial}{\partial t}\widetilde{\psi}_{1}(k,t)=\frac{1}{2}|k|^{\mu}\widetilde{\psi}_{1}(k,t)-i\gamma^{2}\int_{0}^{t}dt^{\prime}e^{-\frac{1}{2}i|k|^{\mu}(t-t^{\prime})}e^{i\omega(t-t^{\prime})}\widetilde{\psi}_{1}(k,t^{\prime})\;, \tag{46}\] which can be solved by using the Laplace transform. The wave functions for this case can be obtained and are written as \[\widetilde{\psi}_{2}(k,t)=-i\gamma\int_{0}^{t}dt^{\prime}e^{-\frac{1}{2}i|k|^{\mu}(t-t^{\prime})}e^{-i\omega t^{\prime}}\widetilde{\psi}_{1}(k,t^{\prime})\;, \tag{47}\] and \[\widetilde{\psi}_{1}(k,t)=e^{-\frac{1}{2}i(|k|^{\mu}-\omega)t}\left[\cos\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right)-\frac{i\omega}{\sqrt{\omega^{2}+4\gamma^{2}}}\sin\left(\frac{1}{2}t\sqrt{\omega^{2}+4\gamma^{2}}\right)\right]\widetilde{\varphi}_{1}(k)\;, \tag{48}\] assuming the initial conditions \(\widetilde{\psi}_{1}(k,0)=\widetilde{\varphi}_{1}(k)\) and \(\widetilde{\psi}_{2}(k,0)=0\). The inverse Fourier transform of Eqs. (47) and (48) results in \[\psi_{1}(x,t)=\Xi_{1}(t)\int_{-\infty}^{\infty}dx^{\prime}\mathcal{G}_{\mu}(x-x^{\prime},t)\varphi_{1}(x^{\prime}), \tag{49}\] and \[\psi_{2}(x,t)=\Xi_{2}(t)\int_{-\infty}^{\infty}dx^{\prime}\mathcal{G}_{\mu}(x-x^{\prime},t)\varphi_{1}(x^{\prime}), \tag{50}\] with \[\mathcal{G}_{\mu}(x,t)=\frac{1}{|x|}\mathrm{H}_{2,2}^{1,1}\left[\frac{2}{it}|x|^{\mu}\,\bigg|\,\begin{array}{c}(1,1),(1,\frac{\mu}{2})\\ (1,\mu),(1,\frac{\mu}{2})\end{array}\right], \tag{51}\] which resembles the form of the Levy distribution found in anomalous diffusion processes. In Eq. (51), we have the Fox H function [70], usually represented [3] by \[\mathrm{H}_{p,q}^{m,n}\bigg[z\,\bigg|\,\begin{array}{c}(a_{p},A_{p})\\ (b_{q},B_{q})\end{array}\bigg]=\mathrm{H}_{p,q}^{m,n}\bigg[z\,\bigg|\,\begin{array}{c}(a_{1},A_{1})\cdots(a_{p},A_{p})\\ (b_{1},B_{1})\cdots(b_{q},B_{q})\end{array}\bigg]=\frac{1}{2\pi i}\int_{L}ds\,\chi(s)z^{s}, \tag{52}\] where \[\chi(s)=\frac{\prod_{j=1}^{m}\Gamma\left(b_{j}-B_{j}s\right)\prod_{j=1}^{n}\Gamma\left(1-a_{j}+A_{j}s\right)}{\prod_{j=1}^{q}\Gamma\left(1-b_{j}+B_{j}s\right)\prod_{j=1}^{p}\Gamma\left(a_{j}-A_{j}s\right)}\;, \tag{53}\] which involves Mellin-Barnes integrals [3]. The asymptotic behavior of Eq. (51) in the limit \(|x|\rightarrow\infty\) is given by \(\mathcal{G}_{\mu}(x,t)\sim i\left[t/\left(2|x|^{1+\mu}\right)\right]\), which is different from the usual one characterized by Gaussian behavior. Note that this result for the asymptotic limit can be obtained by using the approach employed in Ref. [71]: it is essentially an integration over the poles of the Mellin-Barnes integral that represents Eq. (51).
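The Fourier-space property used above, \(\mathcal{F}\{\partial_{|x|}^{\mu}\psi\}=-|k|^{\mu}\widetilde{\psi}\), translates directly into a spectral evaluation of the Riesz derivative; a short sketch of our own (with a \(\mu=2\) sanity check) follows:

```python
import numpy as np

def riesz_derivative(psi, dx, mu):
    """Riesz fractional derivative of order mu via the Fourier property
    F{ d^mu psi / d|x|^mu } = -|k|^mu F{psi}."""
    k = 2 * np.pi * np.fft.fftfreq(len(psi), d=dx)   # angular wavenumbers
    return np.fft.ifft(-np.abs(k)**mu * np.fft.fft(psi))

# check against the ordinary second derivative when mu = 2
x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
f = np.exp(-x**2)
d2_spectral = riesz_derivative(f, x[1] - x[0], 2.0)
d2_exact = (4 * x**2 - 2) * np.exp(-x**2)
print(np.max(np.abs(d2_spectral.real - d2_exact)))   # small
```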
This feature is directly connected to the presence of spatial fractional differential operators in Eqs. (45) and (47). In addition to the analytical approach, it is possible to investigate the dynamical behavior from the numerical point of view by using the following discretization \[\psi_{1}^{i,j+1}=\psi_{1}^{i,j}+i\xi_{\mu}\sum_{k=0}^{i-1}[\psi_{1}^{i-k+1,j}-2\psi_{1}^{i-k,j}+\psi_{1}^{i-k-1,j}][(k+1)^{2-\mu}-k^{2-\mu}]-i\beta(V_{1}^{j}\psi_{2}^{i,j}+V_{1}^{j+1}\psi_{2}^{i,j+1}), \tag{54}\] \[\psi_{2}^{i,j+1}=\psi_{2}^{i,j}+i\xi_{\mu}\sum_{k=0}^{i-1}[\psi_{2}^{i-k+1,j}-2\psi_{2}^{i-k,j}+\psi_{2}^{i-k-1,j}][(k+1)^{2-\mu}-k^{2-\mu}]-i\beta(V_{2}^{j}\psi_{1}^{i,j}+V_{2}^{j+1}\psi_{1}^{i,j+1}), \tag{55}\] where \(\xi_{\mu}\equiv\Delta t/[2\Gamma(3-\mu)\Delta x^{\mu}]\), \(\beta\equiv\gamma\Delta t/2\), \(V_{1}=e^{i\omega t}\) and \(V_{2}=e^{-i\omega t}\) [72]. Small changes in the order of the fractional space operator produce significant changes in the spreading dynamics of the probability. This phenomenon is observed in Fig. 9, which exhibits the spread of \(|\psi_{1}|^{2}\) in panel (a) and of \(|\psi_{2}|^{2}\) in panel (b). The Gaussian package taken as the initial condition spreads more widely in the presence of the space-fractional operator than in the standard case. Another notable characteristic is the behavior of \(|\psi_{2}|^{2}\): the probability of finding the system in the \(\psi_{2}\) state is more centered at the origin and is higher than in the previous cases. The \(\psi_{2}\) state assumes a Gaussian-shaped probability and, for long times, replicates the dynamics observed in \(\psi_{1}\). The comparison of the probabilities at \(t=1.5\) is shown in Fig. 10, where the continuous and dotted lines represent the cases \(\mu=1.95\) and \(\mu=2.0\), respectively. The result shows a sharper splitting of the Gaussian package along with its enlargement. The probability of \(\psi_{2}\) is more centered at the origin, indicating that this state takes on a Gaussian behavior over time.

Figure 11: Mean square displacement for the Gaussian package. Panel (a) is for the \(\psi_{1}\) state and panel (b) for the \(\psi_{2}\) state. The green points are for \(\mu=1.95\), the red for the standard case (\(S_{2}=2.07\)), the blue for \(\alpha=0.98\) (\(S_{3}=1.87\)), and the black for the free particle. The slope associated with the green curve is \(S_{4}=2.61\). We consider \(\xi_{\mu}=10^{-4}\), \(\beta=10^{-5}\), \(\gamma=0.5\), \(\sigma=0.4\), \(\omega=2\pi\), \(\Delta x=0.66\), and \(\Delta t=0.000125\).

The mean square displacement for the fractional derivative in space is shown in Fig. 11, by the green points in Fig. 11(a) for \(\psi_{1}\) and by the green line in Fig. 11(b) for \(\psi_{2}\). The red points are for the fractional time derivative (\(\alpha=0.98\)), the blue points for the fractional space derivative (\(\mu=1.95\)), and the black points for the free particle. Here, \(|\psi_{1}|^{2}\) spreads as \(\sim t^{S_{4}}\) with \(S_{4}=2.61\). Compared with the other cases, the fractional space operator makes the probability package spread more widely, i.e., if we consider \(\mu=2\), at a given time the package occupies a certain range of space, while for the same situation with \(\mu=1.95\) the package occupies a larger range. As shown in Fig. 11(b), \(|\psi_{2}|^{2}\) spreads more intensely. The population of the \(\psi_{2}\) state follows the Gaussian shape, as noted in Fig. 9(b).
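The spreading exponents \(S\) quoted throughout follow from log-log fits of the mean square displacement of \(|\psi|^{2}\); small helpers of the kind sketched below (our own names and demo values) suffice:

```python
import numpy as np

def msd(x, prob):
    """Mean square displacement <(x - <x>)^2> of a density sampled on a grid."""
    dx = x[1] - x[0]
    norm = np.sum(prob) * dx
    mean = np.sum(x * prob) * dx / norm
    return np.sum((x - mean)**2 * prob) * dx / norm

def spreading_exponent(t, x, prob_t, skip=0):
    """Slope S of <(dx)^2> ~ t^S from a log-log fit over snapshots prob_t."""
    m = np.array([msd(x, p) for p in prob_t])
    S, _ = np.polyfit(np.log(t[skip:]), np.log(m[skip:]), 1)
    return S

# demo: free minimum-uncertainty Gaussian (hbar = m = 1), whose |psi|^2
# variance grows as sigma^2/2 + t^2/(2 sigma^2), so the fit approaches S = 2
t = np.linspace(0.5, 5.0, 40)
x = np.linspace(-60.0, 60.0, 4001)
sig = 0.4
var = sig**2 / 2 + t**2 / (2 * sig**2)
prob_t = np.array([np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v) for v in var])
print(spreading_exponent(t, x, prob_t, skip=20))
```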
The last possible case to be analyzed is the Schrodinger equation with fractional differential operators in space and time, taking into account a time-dependent potential, i.e., \[i^{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi_{1}(x,t)=-\frac{1}{2}\frac{\partial^{\mu}}{\partial|x|^{\mu}}\psi_{1}(x,t)+\gamma e^{i\omega t}\psi_{2}(x,t), \tag{56}\] and \[i^{\alpha}\frac{\partial^{\alpha}}{\partial t^{\alpha}}\psi_{2}(x,t)=-\frac{1}{2}\frac{\partial^{\mu}}{\partial|x|^{\mu}}\psi_{2}(x,t)+\gamma e^{-i\omega t}\psi_{1}(x,t)\;. \tag{57}\] It is possible to find a solution for these equations, and it is given by \[\psi_{1}(x,t)=\psi_{1}^{(0)}(x,t)+\sum_{n=1}^{\infty}(\gamma/i^{\alpha})^{2n}\int_{-\infty}^{\infty}dx_{n}\int_{0}^{t}dt_{n}\Upsilon(x-x_{n},t-t_{n})\times\int_{-\infty}^{\infty}dx_{n-1}\int_{0}^{t_{n}}dt_{n-1}\Upsilon(x_{n}-x_{n-1},t_{n}-t_{n-1})\cdots\int_{-\infty}^{\infty}dx_{1}\int_{0}^{t_{2}}dt_{1}\Upsilon(x_{2}-x_{1},t_{2}-t_{1})\psi_{1}^{(0)}(x_{1},t_{1}), \tag{58}\] with \[\psi_{2}(x,t)=\frac{\gamma}{i^{\alpha}}\int_{-\infty}^{\infty}dx^{\prime}\int_{0}^{t}dt^{\prime}(t-t^{\prime})^{\alpha-1}\mathcal{G}_{\alpha,\mu}^{(\alpha)}(x-x^{\prime},t-t^{\prime})e^{i\omega t^{\prime}}\psi_{1}(x^{\prime},t^{\prime})\;, \tag{59}\] where \(\psi_{1}^{(0)}(x,t)=\int_{-\infty}^{\infty}dx^{\prime}\varphi(x^{\prime})\mathcal{G}_{\alpha,\mu}^{(1)}(x-x^{\prime},t)\), \[\Upsilon(x,t)=e^{i\omega t}\int_{-\infty}^{\infty}dx^{\prime}\int_{0}^{t}dt^{\prime}t^{\prime\alpha-1}e^{-i\omega t^{\prime}}\mathcal{G}_{\alpha,\mu}^{(\alpha)}(x^{\prime},t^{\prime})\mathcal{G}_{\alpha,\mu}^{(\alpha)}(x-x^{\prime},t-t^{\prime})\;, \tag{60}\] and \[\mathcal{G}_{\alpha,\mu}^{(\beta)}(x,t)=\frac{1}{|x|}\mathrm{H}_{2,3}^{2,1}\left[-\frac{|x|^{\mu}}{t^{\alpha}/(2i^{\alpha})}\,\bigg|\,\begin{array}{c}(1,1),(\beta,\alpha),(1,\frac{\mu}{2})\\ (1,\mu),(1,1),(1,\frac{\mu}{2})\end{array}\right]\;. \tag{61}\] Note that Eq. (61) is essentially the Green function of this case and is, consequently, connected to the relaxation process of the system. It differs from the previous case since it mixes different fractional operators in space and time.
It is possible to write a combination of the previous discretization schemes and find the equations \[\psi_{1}^{i,j+1}=\psi_{1}^{i,j}-\sum_{k=1}^{j}[(k+1)^{(1-\alpha)}-k^{(1-\alpha)}][\psi_{1}^{i,j+1-k}-\psi_{1}^{i,j-k}]+i^{-\alpha}\beta_{\alpha,\mu}(V_{1}^{j}\psi_{2}^{i,j}+V_{1}^{j+1}\psi_{2}^{i,j+1})-i^{-\alpha}\xi_{\alpha,\mu}\sum_{k=0}^{i-1}[\psi_{1}^{i-k+1,j}-2\psi_{1}^{i-k,j}+\psi_{1}^{i-k-1,j}][(k+1)^{2-\mu}-k^{2-\mu}], \tag{62}\] and \[\psi_{2}^{i,j+1}=\psi_{2}^{i,j}-\sum_{k=1}^{j}[(k+1)^{(1-\alpha)}-k^{(1-\alpha)}][\psi_{2}^{i,j+1-k}-\psi_{2}^{i,j-k}]+i^{-\alpha}\beta_{\alpha,\mu}(V_{2}^{j}\psi_{1}^{i,j}+V_{2}^{j+1}\psi_{1}^{i,j+1})-i^{-\alpha}\xi_{\alpha,\mu}\sum_{k=0}^{i-1}[\psi_{2}^{i-k+1,j}-2\psi_{2}^{i-k,j}+\psi_{2}^{i-k-1,j}][(k+1)^{2-\mu}-k^{2-\mu}], \tag{63}\] where \(\xi_{\alpha,\mu}=\Gamma(2-\alpha)\Delta t^{\alpha}/[2\Gamma(3-\mu)\Delta x^{\mu}]\), \(\beta_{\alpha,\mu}=\gamma\Delta t^{\alpha}\Gamma(2-\alpha)/2\), \(V_{1}=e^{i\omega t}\) and \(V_{2}=e^{-i\omega t}\). Considering \(\alpha=0.98\) and \(\mu=1.95\), the results for the probability distribution are shown in Fig. 12(a) for \(|\psi_{1}|^{2}\) and in Fig. 12(b) for \(|\psi_{2}|^{2}\). The results for the combination of both fractional derivatives show a mixture of the two previously discussed behaviors. However, for this set of parameters, the results resemble the fractional time case more than the fractional space one. Figure 13 displays the comparison between the probability distributions for fractional time and space orders (continuous lines) and the standard model. These results make it possible to verify the composition of both fractional operators. To give \(\mu\) or \(\alpha\) a greater influence on the dynamics, it is necessary to decrease one of these values. The spreading of the Gaussian package for \(\alpha<1\) and \(\mu<2\) is more centered than in the standard case, as we see in Fig. 14(a) (orange points). Figure 14(b) displays the spread of the \(\psi_{2}\) state by the orange line. The associated slope is equal to \(S_{5}=1.76\), near \(S_{3}=1.87\), obtained for the case when only the time fractional operator was considered. Therefore, in the presence of both fractional operators, the time derivative supplants the effects of the space derivative. The orange and blue points match in the range displayed in Fig. 14(a) after some time \(t\). However, the distribution for \(|\psi_{2}|^{2}\), corresponding to the orange line in panel 14(b), follows the same shape as in the case of the fractional time derivative.

## IV Conclusion

We analyzed the influence of fractional operators in the Schrodinger equation when an oscillating time-dependent potential is considered to simulate an oscillatory external field applied to the system. We started with a two-level system, which was first analyzed by considering the static case \(\omega=0\) and then the time-dependent case \(\omega\neq 0\). We obtained analytical and numerical solutions for the standard and the fractional cases. In particular, we verified that the solutions exhibit an oscillating behavior for long times. Afterward, we incorporated the kinetic term in the Hamiltonian to allow the spreading of the system. We also considered one state populated as an initial condition while the other remained empty. We also analyzed this scenario from the analytical and numerical points of view for the standard and the fractional cases.
For the fractional cases, we first considered the effect of the fractional time derivatives and afterward analyzed the spatial fractional derivatives, which, unlike the time-fractional case, preserve the probability of the system. A remarkable feature of the time-fractional case is thus the non-conservation of the probability of the system. We analyzed the behavior of the mean square displacement (or deviation) for these cases and compared it with the free-particle case. The results showed that the fractional differential operators lead to a different spreading behavior of the system when compared with the standard case. For the fractional derivative in space, we have a faster spreading of the initial condition. On the other hand, we see a slower spreading of the wave package when fractional derivatives in time are incorporated in the Schrodinger equation. This feature is also present in the diffusion context when fractional differential operators are considered, evidencing that these operators strongly influence the random process connected to these phenomena. The mean square displacement also evidenced this point, as well as the influence on the uncertainty relations, as observed by Laskin [25].

## Acknowledgements

The authors thank the financial support from the Brazilian Federal Agencies (CNPq), the Sao Paulo Research Foundation (FAPESP, Brazil), CAPES, and Fundacao Araucaria. The authors thank the 105 Group Science (www.105groupscience.com). E.K.L. acknowledges the support of the CNPq (Grant No. 301715/2022-0).

## Appendix I

A numerical method to solve initial-value problems based on the Caputo definition is a generalization of the classical Adams-Bashforth-Moulton scheme. This method was proposed by Diethelm, Ford, and Freed [73] and is defined by the following equations: \[y_{h}(t_{n+1})=\sum_{k=0}^{\lceil\alpha\rceil-1}\frac{t_{n+1}^{k}}{k!}y_{0}^{(k)}+\frac{h^{\alpha}}{\Gamma(\alpha+2)}f(t_{n+1},y_{h}^{P}(t_{n+1}))+\frac{h^{\alpha}}{\Gamma(\alpha+2)}\sum_{j=0}^{n}a_{j,n+1}f(t_{j},y_{h}(t_{j})), \tag{64}\] where \[y_{h}^{P}(t_{n+1})=\sum_{k=0}^{\lceil\alpha\rceil-1}\frac{t_{n+1}^{k}}{k!}y_{0}^{(k)}+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{n}b_{j,n+1}f(t_{j},y_{h}(t_{j})). \tag{65}\] The coefficients are defined by \[a_{j,n+1}=\left\{\begin{array}{ll}n^{\alpha+1}-(n-\alpha)(n+1)^{\alpha},&\mbox{if }j=0,\\ (n-j+2)^{\alpha+1}+(n-j)^{\alpha+1}-2(n-j+1)^{\alpha+1},&\mbox{if }1\leq j\leq n,\\ 1,&\mbox{if }j=n+1,\end{array}\right. \tag{66}\] and \[b_{j,n+1}=\frac{h^{\alpha}}{\alpha}((n+1-j)^{\alpha}-(n-j)^{\alpha}), \tag{67}\] where \(j=0,1,...,n\), and the time window \(T\) is discretized as \(t_{n}=nh\), with \(n=0,1,...,N\) and \(T=Nh\).
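A sketch of the predictor-corrector defined by Eqs. (64)-(67) for \(0<\alpha<1\) follows (our own implementation, tested on \(D^{\alpha}y=\lambda y\), whose exact solution is the Mittag-Leffler function of Eq. (32)):

```python
import numpy as np
from math import gamma

def fabm(f, y0, alpha, T, N):
    """Fractional Adams-Bashforth-Moulton for D^alpha y = f(t, y), 0 < alpha < 1,
    following Eqs. (64)-(67) of the Diethelm-Ford-Freed scheme."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.zeros(N + 1, dtype=complex)
    fv = np.zeros(N + 1, dtype=complex)
    y[0], fv[0] = y0, f(0.0, y0)
    for n in range(N):
        j = np.arange(n + 1)
        b = h**alpha / alpha * ((n + 1 - j)**alpha - (n - j)**alpha)   # Eq. (67)
        yp = y0 + np.sum(b * fv[:n + 1]) / gamma(alpha)                # Eq. (65)
        a = ((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
             - 2 * (n - j + 1)**(alpha + 1))                           # Eq. (66)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        y[n + 1] = y0 + h**alpha / gamma(alpha + 2) * (f(t[n + 1], yp)
                                                       + np.sum(a * fv[:n + 1]))
        fv[n + 1] = f(t[n + 1], y[n + 1])
    return t, y

# check on D^alpha y = lam*y, whose solution is E_alpha(lam t^alpha), Eq. (32)
alpha, lam = 0.8, -1.0
t, y = fabm(lambda t, y: lam * y, 1.0, alpha, 5.0, 500)
ref = sum((lam * 5.0**alpha)**k / gamma(1 + alpha * k) for k in range(100))
print(abs(y[-1] - ref))   # small discretization error
```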
2303.17280
Processing System for Coherent Dedispersion of Pulsar Radio Emission
The work describes a system for converting VLBI observation data using the algorithms of coherent dedispersion and compensation of two-bit signal sampling. Coherent dedispersion is important for processing pulsar observations to obtain the best temporal resolution, while correction for signal sampling makes it possible to get rid of a number of parasitic effects that interfere with the analysis of the diffraction pattern of pulsars. A pipeline has been established that uses the developed converter and the ASC Software Correlator, which will allow reprocessing of all archived data of Radioastron pulsar observations and conducting a search for giant pulses, which requires the best temporal resolution.
Girin I. A., Likhachev S. F., Andrianov A. S., Burgin M. S., Popov M. V., Rudnitskiy A. G., Soglasnov V. A., Zuga V. A.
2023-03-30T10:36:14Z
http://arxiv.org/abs/2303.17280v1
# Processing System for Coherent Dedispersion of Pulsar Radio Emission

###### Abstract

The work describes a system for converting VLBI observation data using the algorithms of coherent dedispersion and compensation of two-bit signal sampling. Coherent dedispersion is important for processing pulsar observations to obtain the best temporal resolution, while correction for signal sampling makes it possible to get rid of a number of parasitic effects that interfere with the analysis of the diffraction pattern of pulsars. A pipeline has been established that uses the developed converter and the ASC Software Correlator, which will allow reprocessing of all archived data of Radioastron pulsar observations and conducting a search for giant pulses, which requires the best temporal resolution.

## 1 Introduction

The Radioastron project comprised the 10-meter space radio telescope (SRT) that, together with ground VLBI antennas, formed a space-ground interferometer with a maximum baseline projection of up to 380000 km and a record angular resolution of about 8 \(\mu\)as (Baan et al., 2022). The SRT was launched on the 18th of July 2011 and operated successfully until January 2019. The strategy of the Radioastron mission was to archive all the original raw baseband data. Such an approach provides the ability to re-correlate the original data if new scientific problems arise or improved methods of data reduction and interpretation are developed. At the end of the mission operation, the total volume of the raw data was approximately 3500 TB. More details about the archive and observations database can be found in Shatskaya et al. (2020a, 2020b). Pulsar observations were an important part of the Radioastron scientific program (Kardashev et al., 2013, 2017). They were conducted at 324 MHz (P-band) or 1668 MHz (L-band) and in some cases at both frequencies simultaneously. The single intermediate frequency (IF) bandwidth of the Radioastron was 16 MHz. The P-band receiver supported one sub-band, while 1668 MHz had two 16 MHz sub-bands. In total, the data of 25 pulsars were accumulated during the Radioastron operation, comprising 98 observations with a total duration of 250 hours. Usually, at least two large ground-based radio telescopes participated in the space-ground VLBI sessions, such as GBT, Arecibo, or the WSRT aperture synthesis system. As compared to single-dish observations of pulsars, which are usually performed with a temporal resolution, \(\Delta t\), of the order of \(1\,\mu\)s, interferometric observations are conducted with smaller values of \(\Delta t\). In particular, for the Radioastron pulsar observations \(\Delta t=62\,\)ns. The high temporal resolution of the recorded signal, combined with the high sensitivity of the ground-based telescopes participating in the observations and their ability to simultaneously measure the flux density in two polarization channels, permits, in principle, the study of phenomena that involve rapid variations of intensity and/or polarization. An example of such a phenomenon is the longitudinal dependence of the polarization of giant pulses of the Crab pulsar. An analysis of that dependence was carried out by Main et al. (2021), who detected a difference between the locations of the emitting regions where pulses and interpulses originate.
In studying the physics of pulsars, to take full advantage of the high temporal resolution of Radioastron observations it is necessary to "dedisperse" the received signal, that is, to correct it for the smearing caused by the frequency dependence of the propagation speed of radio waves in the intervening interstellar plasma. The standard Radioastron data processing includes the procedure of so-called "incoherent" dedispersion, but precise measurements of rapid intrapulse variations in data strongly influenced by the interstellar plasma dispersion may require the application of the more accurate and much more computer-intensive "coherent" dedispersion (see Hankins and Rickett (1975), van Straten and Bailes (2011)). The high temporal resolution of the interferometric data is achieved partly through the low number of bits in each individual readout of the observed signal. The ground- and space-based observations in the Radioastron project were performed using two- and one-bit digitizing, respectively. The precision of observations with a low number of digitizing levels depends critically on the proper choice of the quantization thresholds. The standard Radioastron data processing routines are based on the assumption that the thresholds are set to values close to optimal, as described by Thompson et al. (2017). The assumption usually holds for objects with slowly varying flux density, where the necessary adjustments are performed by an automatic gain control (AGC) system. However, in observing pulsars the AGC is not used, as it cannot function properly when the received signal is rapidly variable. Consequently, the digitizing may be performed with quantization thresholds that deviate significantly from the optimal level, and reduction of the data using the standard approach introduces additional error in the final results. To minimize that error, the algorithms for processing the digitized signal should be generalized to the case of arbitrary values of quantization thresholds. This problem was considered by Jenet and Anderson (1998). An example of how using a standard procedure for processing pulsar observations leads to erroneous conclusions was presented by Popov et al. (2023). In that paper it was shown that the dynamic spectra of the interstellar scintillations of PSR B1237+25 obtained using the standard data processing algorithms exhibit frequency shifts that depend on the longitude and, if real, could be interpreted as a signature of the "interstellar interferometer" effect. However, the shift almost disappears when coherent dedispersion and a proper correction for the actual values of the quantization thresholds are applied. In this paper we describe the software for processing pulsar observations performed by the Radioastron. The newly developed preprocessor of the raw observational data and the modified ASC software correlator allow one to perform the coherent dedispersion and correctly account for digitization with non-optimal quantization thresholds. The paper is organized as follows: in Section 2 we briefly describe the methods of dedispersion, and the numerical method used for the correction for digitizing is outlined in Section 3. Technical details of the implementation are described in Section 4. We present the results of the application of the described methods to Radioastron observations of PSR B1237+25 in Section 5 and conclude in Section 6.
## 2 Correction of distortions caused by the dispersion

Due to the dispersion of radio waves, emission propagating through the ionized interstellar plasma arrives at the observer with a delay depending on the signal frequency. A short quasi-monochromatic impulse emitted at the low-frequency edge of the observing band will be received later than a synchronously emitted impulse at the high-frequency edge. The relative delay, \(\tau_{\rm d}\), is given by \[\tau_{\rm d}=D\cdot\left(\frac{1}{f_{0}^{2}}-\frac{1}{(f_{0}+B)^{2}}\right), \tag{1}\] where \(f_{0}\) and \(B\) are the lowest observing frequency and the bandwidth, respectively. The coefficient \(D\) (s\(\cdot\)MHz\({}^{2}\)) is related to the dispersion measure \(DM\) (pc\(\cdot\)cm\({}^{-3}\)) by \[DM=2.410\times 10^{-4}\cdot D, \tag{2}\] and the dispersion measure is defined as the integral of the electron density along the line of sight to the pulsar. As a result of the relative delay, the variability in integrated flux on time scales shorter than \(\tau_{\rm d}\) is temporally smeared. There exist two methods to reduce the effect of smearing: post-detection, or incoherent, dedispersion and pre-detection, or phase-coherent, dedispersion. Incoherent dedispersion is performed by dividing the frequency band into many smaller bins. The signal received in each bin is shifted in time to compensate for the difference in arrival time, then the shifted signals are summed. Such a method compensates for the delay between the bins, but the dispersion within individual bins remains uncompensated. Coherent dedispersion is free from such a drawback. The method is based on the fact that the effect of dispersion on the signal received from a pulsar can be modeled as a linear filtering operation, and the original signal can be recovered from the received signal by performing the inverse filtering. The effect is best described in the frequency domain. The dispersion causes the phase shift, \[\phi(f)=\frac{2\pi Df^{2}}{f_{0}^{2}\cdot(f_{0}+f)}, \tag{3}\] of a Fourier component of the original signal corresponding to frequency \(f\). Consequently, the Fourier transform of the dedispersed signal may be computed by shifting the phase of each Fourier component of the observed signal by \(-\phi(f)\). In the observations of pulsars, this procedure is complicated by the fact that the function sought to be measured is not the Fourier spectrum of the signal in the strict mathematical sense, but the so-called dynamic spectrum. Computation of the dynamic spectrum consists in dividing the observing session into many non-intersecting time intervals, usually of equal duration and covering the whole session, and performing the Fourier analysis of the signal on each interval separately. Because of the dispersion, the interval at which the signal is received depends not only on the moment of emission but also on the frequency. For this reason, coherent dedispersion cannot be performed independently on each separate interval. An approach to overcome this difficulty was proposed by Hankins and Rickett (1975), who presented an elaborate description of the coherent dedispersion technique that we follow in our study.

## 3 Correction for 2-bit Sampling

Usually, ground telescopes record the signal using 2-bit (four-level) digitizing. The four levels prescribed for such digitizing are -3, -1, +1, and +3. The transition threshold between the levels 1 and 3 is supposed to be close to the value of the original analog signal RMS (\(\sigma\)).
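As a quick illustration of Eqs. (1)-(3), the snippet below evaluates the band-edge smearing time and the dedispersion phase; the band-edge frequencies and the DM value (of the order of that of PSR B1237+25) are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def dispersion_delay(dm, f0_mhz, bw_mhz):
    """Band-edge smearing tau_d from Eqs. (1)-(2): DM in pc cm^-3,
    frequencies in MHz, result in seconds."""
    d = dm / 2.410e-4                          # D in s MHz^2
    return d * (1.0 / f0_mhz**2 - 1.0 / (f0_mhz + bw_mhz)**2)

def chirp_phase(f_hz, f0_hz, d_s_hz2):
    """Phase rotation phi(f) of Eq. (3); frequencies in Hz, D in s Hz^2."""
    return 2.0 * np.pi * d_s_hz2 * f_hz**2 / (f0_hz**2 * (f0_hz + f_hz))

# a 16 MHz sub-band near the two Radioastron observing bands for an
# illustrative DM of 9.3 pc cm^-3 (assumed band-edge frequencies)
for f0 in (316.0, 1660.0):
    print(f"f0 = {f0} MHz: tau_d = {dispersion_delay(9.3, f0, 16.0):.3e} s")
```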
The optimal value for the threshold is \(t=0.9674\sigma\) in the case of 2-bit sampling (Thompson et al., 2017). Besides, it is important to switch off the telescope's automatic gain control (AGC) system in pulsar observations, because a pulsar signal is an example of a non-stationary noise process, and the inertia of the AGC would not allow it to operate properly at the ON-pulse and OFF-pulse stages of the observation. Thus, records of such signals must be corrected for 2-bit digitizing. The problem was considered by Jenet and Anderson (1998). They have demonstrated that digitizing the signal before removing the dispersive effects generates unwanted systematic artifacts in the data; namely, "negative" dips appear around the pulse in the average pulse profile. We follow the method described in Jenet and Anderson (1998). First, it is necessary to estimate the undigitized power level \(\sigma_{0}\) using the fraction \(\Phi\) of samples with values of \(\pm 1\) in the given portion of the signal record: \[\Phi(t)=\frac{2}{\sqrt{2\pi}\sigma}\int\limits_{0}^{t}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx=\mathrm{erf}\left(\frac{t}{\sqrt{2}\sigma}\right) \tag{4}\] With the value of \(\sigma_{0}\) known, it is possible to calculate the corrected values of the signal, \(y1\) and \(y3\), to be used instead of \(\pm 1\) and \(\pm 3\), using Eqs. (40) and (41) from Jenet and Anderson (1998). In practice, we calculated all values using \(\sigma_{0}\) as an independent variable, and then expressed the results in terms of the independent variable \(\Phi\). These results are shown in Fig. 1. To make the corrected values closer to the VLBI standard, the obtained \(y1\) and \(y3\) were multiplied by a factor of 2. Let us denote the fraction of \(\pm 1\) samples calculated from Eq. (4) as \(\Lambda\). For example, \(\Phi=0.67\) corresponds to the optimal threshold \(t=0.9674\sigma\), with \(y1(0.67)=1.085\) and \(y3(0.67)=3.23\). Approximation functions were introduced to calculate \(y1\) and \(y3\) from the measured fraction of samples with digitized values of \(\pm 1\): \[y1(\Lambda)=a1+b1\cdot\Lambda+c1\cdot\Lambda^{2}+d1\cdot\Lambda^{3}+e1\cdot\Lambda^{4} \tag{5}\] \[y3(\Lambda)=a3\cdot\exp\left(-\frac{\Lambda^{b3}}{d3}\right)+c3+e3\cdot\Lambda \tag{6}\] The polynomial approximation is sufficient for \(y1\), while an approximation by a combination of an exponent and a linear term is more suitable for \(y3\). The coefficients used in the approximating functions are the following: \(a1=1.1438(5)\), \(b1=0.169(7)\), \(c1=-0.96(3)\), \(d1=1.62(4)\), \(e1=-1.13(2)\); \(a3=1920(200)\), \(b3=0.228(5)\), \(c3=3.24(4)\), \(d3=0.119(1)\), \(e3=-1.39(3)\). RMS uncertainties of the coefficients are given in brackets.

Figure 1: The corrected values \(y1\) and \(y3\) versus the observed fraction of samples equal to \(\pm 1\).

The difference between the calculated and approximated values cannot be distinguished in the figure due to the coarse scale; in fact, the RMS residuals are 0.0012 for the \(y1\) approximation and 0.011 for \(y3\).

## 4 Implementation of Coherent Dedispersion

### ASC Software Correlator

The VLBI data are processed using a correlator. There are two types of correlators, software and hardware, and each can be of the XF or FX type. Software correlators perform data processing on conventional computers or on computing clusters running operating systems such as Microsoft Windows or Linux. In turn, hardware correlators are implemented using programmable logic integrated circuits (FPGAs).
## 4 Implementation of Coherent Dedispersion ### ASC Software Correlator The VLBI data are processed using a correlator. Correlators are divided into software and hardware implementations, and into two architectures, XF and FX. Software correlators perform data processing on conventional computers or computing clusters running operating systems such as Microsoft Windows or Linux; hardware correlators, in turn, are implemented using programmable logic integrated circuits (FPGAs). Correlators of the XF and FX types differ in the sequence of operations performed, where X stands for multiplication and F for Fourier transform. Accordingly, in XF correlators the signals are first multiplied and then the Fourier transform is performed, while in FX correlators the Fourier transform is performed first and the results are then multiplied. A dedicated software correlator (the ASC correlator) was developed at the Astro Space Center to process the data of the Radioastron mission. It is a software FX correlator that calculates auto and cross spectra for all possible combinations of baselines and polarizations; a detailed description is provided in Likhachev et al. (2017). The ASC software correlator operates on a high-performance computing (HPC) cluster using the message-passing interface (MPI) for parallel calculations. Its total computing power is 1 Tflops. The software correlator supports several modes of data processing: continuous spectrum, spectral lines, and several types of pulsar data processing, including the search and correlation of giant pulses. It supports the widespread nomenclature of VLBI baseband data formats such as MarkIV/A, Mark5B, VDIF, RDF, LBA, etc. The ASC Correlator processed about 95% of the Radioastron data, including the pulsar data. The complexity of pulsar VLBI data processing lies primarily in the need to compensate for the dispersion of the signal. ### Compensation of Dispersion #### 4.2.1 Incoherent Dedispersion The incoherent dedispersion implemented in the ASC software correlator is performed in several steps. The pulsar period is divided into bins, and the correlator calculates the spectra for each bin. The delay of the signal in each bin is then compensated using a polynomial model provided by the TEMPO2 software package (Edwards et al., 2006). Further, each bin is averaged over the observation time. As a result, each bin contains the signal with the inter-bin dispersion removed; the residual smearing decreases linearly with the increasing number of bins. Accordingly, it is necessary to sum only the data that contain fringes of pulses and discard the rest, and it is important to compensate the pulse shape for the dispersive delay (Likhachev et al., 2017). #### 4.2.2 Coherent Dedispersion Up to now, the ASC correlator has not been able to perform data processing using coherent dedispersion. For this purpose, we have developed a data converter that coherently removes the dispersion effects. The converter creates data in a new complex format, which is supported by the updated version of the correlator. The converted data are then processed in the standard mode but with the dedispersion switched off. Such an approach minimizes the changes to the structure of the ASC software correlator. Based on the converter, a pipeline was developed for processing pulsar observations. Its operation consists of the following steps: first, the discrete Fourier transform of the recorded signal is calculated. Next, the result of the Fourier transform is multiplied by the complex frequency response function given by Eq. (3). An inverse Fourier transform then yields the recovered signal. This is equivalent to a convolution of the impulse response function with the recorded voltage signal in the time domain. The discrete Fourier transform implicitly assumes the signal to be periodic; thus, the first \(n\) points, corresponding to the time interval \(\tau_{\rm d}\), are erroneously combined with the data wrapped around from the end of the segment and must be discarded.
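In NumPy terms, this step can be sketched for a single real-sampled chunk as follows. This is a simplified illustration — the actual converter also applies the 2-bit correction, zero-pads to produce complex output samples, and writes the format described below — and the function name and arguments are ours:

```python
import numpy as np

def dedisperse_chunk(x, D, f0, bw):
    """Coherently dedisperse one chunk of real baseband samples.

    x  : voltage samples covering the band [f0, f0 + bw] (MHz),
         sampled at the Nyquist rate of 2*bw MHz
    D  : dispersion constant, s MHz^2
    The first n_d samples (spanning tau_d) are wrapped-around
    garbage and are discarded, as described above.
    """
    X = np.fft.rfft(x)
    f = np.linspace(0.0, bw, len(X))                          # offset from f0, MHz
    phi = 2.0 * np.pi * 1e6 * D * f**2 / (f0**2 * (f0 + f))   # Eq. (3), radians
    y = np.fft.irfft(X * np.exp(-1j * phi), len(x))
    tau_d = D * (1.0 / f0**2 - 1.0 / (f0 + bw)**2)            # Eq. (1), seconds
    n_d = int(np.ceil(tau_d * 2.0 * bw * 1e6))                # samples to discard
    return y[n_d:]
```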
So, in this approach, the first portion of the signal, of duration \(\tau_{\rm d}\), is lost from each processed segment. Fig. 2 illustrates the scheme of sequential sampling of signal sections of duration \(T\) from a continuous recording. Sequential values of the starting time \(t_{i}\) are determined by the relation \(t_{i}=t_{0}+i(T-\tau_{\rm d})\). Clearly, \(T\) must be larger than \(\tau_{\rm d}\), and the efficiency of the data reduction is \(1-\tau_{\rm d}/T\). The data reduction steps used in our processing system are the following: 

* Read a raw baseband data sample of size \(M\). The supported data formats are the same as for the ASC Correlator: RDF (Radioastron Data Format), MarkIV/A, Mark5B, K5A, LBA, and VDIF;
* Split the sample \(M\) into \(P\) pieces, each of size \(N\) (\(P=M/N\));
* Determine the parameter \(\Lambda\) by counting the fraction of samples equal to \(\pm 1\), and calculate the values \(y1\) and \(y3\) for each data sample in each of the \(P\) pieces using Eqs. (5) and (6);
* Replace every sample \(s_{i}\) by the value \(y1\) or \(y3\), thus performing the 2-bit sampling correction;
* Calculate the matrix \(\phi_{i}\) of the phase shifts depending on the frequency \(f_{i}\): \[\phi_{i}=\frac{2\pi Df_{i}^{2}}{f_{0}^{2}\cdot(f_{0}+f_{i})}\] (7)
* Perform a complex \(M\)-point Fourier transform for each data chunk numbered \(k=1,...,P\);
* Multiply the Fourier spectra by the complex correction function \(R_{i}=e^{-i\phi_{i}}\);
* Add \(M\) more zeros to the end of the array to obtain an array of \(2M\) values;
* Perform an inverse FFT to obtain \(M\) complex samples;
* Write the converted data to the output file in half-precision float (2 bytes) format.

Figure 2: The scheme of sequential sampling of recording sections with a duration \(T\) from a continuous record.

The converter allows adjusting the FFT size and the data sample size. At the output, a new data file is generated in a special format, which can be used by the ASC correlator. Bit statistics (the numbers of \(\pm 1\) and \(\pm 3\) samples) are estimated with a floating average over a data piece of size \(N\): the statistics are calculated for each data point over the interval [current position - N/2, current position + N/2]. The 2-bit sampling correction coefficients are then computed as functions of the estimated statistics using Eqs. (5) and (6). The correlated data are written in the UVX format, where cross and auto spectra are represented as floats (4-byte number format). The UVX files can be converted directly to the widespread IDI-FITS format for further analysis. Finally, bandpass correction and noise cleaning can be applied to the correlated data using the corresponding calibration tools of the ASL software package (Likhachev et al., 2020).
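The floating-average bit statistics and the level replacement can be sketched as follows, reusing corrected_levels() from Section 3. The function name is ours, and the exact per-piece windowing of the converter may differ from this simplified per-sample version:

```python
import numpy as np

def two_bit_correct(s, N):
    """Replace decoded 2-bit samples (+/-1, +/-3) by corrected levels.

    s : array of decoded samples with values in {-3, -1, +1, +3}
    N : window size for the floating average of the bit statistics,
        [i - N/2, i + N/2] around each sample i
    """
    ones = (np.abs(s) == 1).astype(float)
    lam = np.convolve(ones, np.ones(N) / N, mode="same")  # local fraction of +/-1
    y1, y3 = corrected_levels(lam)                        # Eqs. (5) and (6)
    out = np.where(np.abs(s) == 1, np.sign(s) * y1, np.sign(s) * y3)
    return 2.0 * out    # factor 2 to approach the VLBI level convention
```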
#### 4.2.3 Data Format The developed converter uses its own special format as an output (No Packet Data, or NPD), which is supported by the ASC software correlator. The first 512 bytes are dedicated to the header, a string that contains information about the observation parameters separated by commas: version, date (DD/MM/YYYY), time (HH:MM:SS.sss), bandwidth (\(1\cdot 10^{-6}\times 1/dt\), the full bandwidth including all sub-bands), number of channels (\(NCH\), polarization channels and sub-band channels), and the number of bits per sample. The header is followed by half-precision float FP16\(\times\)NCH data; a single-time data sample occupies (2\(\times\)NCH) bytes. The order of channels is the same as for the initial input raw baseband data. At the same time, the lower sub-band is converted to the upper one, which in turn requires a shift in the reference frequency for it when correlating the converted data. ## 5 Results We have performed several data processing experiments with the Radioastron observations of the pulsar B1237+25 to test the effectiveness of the developed coherent dedispersion data converter. The observations were conducted at 324 MHz with a 16 MHz IF bandwidth. The Arecibo and Green Bank radio telescopes took part in the session as ground support. The dispersion measure of the pulsar is 9.3 pc\(\cdot\)cm\({}^{-3}\), and the estimated time smearing is 36.35 ms in the band of 316-332 MHz. For this test, we used the following numbers of samples and pieces: \(M=10^{6}\) and \(P=10^{3}\). Fig. 3 (left panel) shows the reconstructed signal, smoothed by 1 ms, for one individual pulse recorded at the Green Bank radio telescope. The lower curve shows the signal after coherent dedispersion but without the 2-bit sampling correction; the upper curve shows the signal with the sampling correction applied. It can be seen that the applied 2-bit correction eliminates the parasitic signal dips around the pulse and improves the SNR by about 20%. Fig. 3 (right panel) gives a similar illustration for a pulse averaged over 10 s in time (7 individual pulses).

Figure 3: Comparison of pulsar signals for B1237+25 restored without correction for 2-bit sampling (lower curves) and with such correction applied (upper curves). The left panel shows a single pulse, while the right panel presents a pulse averaged over 10 seconds (7 pulses). Time resolution is one millisecond.

The distortions caused by two-bit digitization are complex and need to be corrected: besides the dips in the profile, two-bit sampling causes additional distortion in the radio spectra of pulsar pulses. These radio spectra contain important information about the scattering and scintillation parameters of the interstellar plasma inhomogeneities, namely, the so-called diffraction pattern. From an astrophysical point of view, it is of interest to compare the diffraction pattern for different parts of the pulsar averaged profile. The detection of a shift in the diffraction pattern with longitude would indicate that the radio emission at different longitudes originates in spatially different parts of the pulsar's magnetosphere, i.e. it would be possible to measure such a localization. Fig. 4 shows measurements of the frequency shift of the diffraction pattern in the spectrum of the pulsar B1237+25 as a function of the longitude of the averaged profile. These 20-minute observations were carried out with the Arecibo radio telescope. The solid curve with squares corresponds to the measurements obtained without the two-bit digitization correction, and the dotted line with circles corresponds to the measurements for the corrected signal. The notable effect of the diffraction pattern drifting with longitude disappears after the correction. This case is considered in a separate study. The program supports multi-threading, and we have estimated its approximate performance on the VDIF data of the Green Bank telescope for B1237+25, which had two polarization channels (LCP and RCP) and one sub-band with a bandwidth of 16 MHz. The measured time ratio was 1:14; thus, it takes 14 seconds of real time to convert one second of such data. ## 6 Conclusions The developed converter allows not only performing coherent dedispersion but also correcting the digitized signal for two-bit sampling.
This tool, together with the ASC software correlator and the ASL software package, forms a new pipeline for processing the pulsar data of the Radioastron project. The pipeline was tested on the observational data of the pulsar B1237+25 observed by the Radioastron space-ground interferometer. The corrected data were then used in the studies of Popov et al. (2023).

Figure 4: Frequency shift between dynamic spectra obtained at different longitudes of the mean profile of B1237+25. The solid curve passing through the squares corresponds to the analysis of the spectra obtained without the two-bit digitization correction, and the dotted line passing through the circles corresponds to the spectra obtained for the corrected signal.

Processing pulsar data with compensation for both dispersion and 2-bit sampling is important: it removes parasitic effects in the data that would interfere with their subsequent correct interpretation. It was shown that the distortions caused by 2-bit sampling can introduce artifacts leading to a false drift of the pulsar diffraction pattern with longitude, which disappears after the bit-statistics compensation. Further, this pipeline will be used to reprocess the Radioastron pulsar raw baseband data with the maximum available temporal resolution, not smeared by dispersion, and also to process pulsar VLBI data in any of the modern known and supported formats.
2307.05373
Classification of sleep stages from EEG, EOG and EMG signals by SSNet
Classification of sleep stages plays an essential role in diagnosing sleep-related diseases including Sleep Disorder Breathing (SDB) disease. In this study, we propose an end-to-end deep learning architecture, named SSNet, which comprises two deep learning networks based on Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM). Both deep learning networks extract features from the combination of Electrooculogram (EOG), Electroencephalogram (EEG), and Electromyogram (EMG) signals, as each signal has distinct features that help in the classification of sleep stages. The features produced by the two deep learning networks are concatenated and passed to the fully connected layer for the classification. The performance of our proposed model is evaluated using two public datasets, the Sleep-EDF Expanded dataset and the ISRUC-Sleep dataset. The accuracy and Kappa coefficient are 96.36% and 93.40%, respectively, for classifying three classes of sleep stages using the Sleep-EDF Expanded dataset, whereas the accuracy and Kappa coefficient are 96.57% and 83.05%, respectively, for five classes of sleep stages using the same dataset. Our model achieves the best performance in classifying sleep stages when compared with the state-of-the-art techniques.
Haifa Almutairi, Ghulam Mubashar Hassan, Amitava Datta
2023-07-03T01:05:24Z
http://arxiv.org/abs/2307.05373v1
# Classification of sleep stages from EEG, EOG and EMG signals by SSNet ###### Abstract Classification of sleep stages plays an essential role in diagnosing sleep-related diseases including Sleep Disorder Breathing (SDB) disease. In this study, we propose an end-to-end deep learning architecture, named SSNet, which comprises two deep learning networks based on Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM). Both deep learning networks extract features from the combination of Electrooculogram (EOG), Electroencephalogram (EEG), and Electromyogram (EMG) signals, as each signal has distinct features that help in the classification of sleep stages. The features produced by the two deep learning networks are concatenated and passed to the fully connected layer for the classification. The performance of our proposed model is evaluated using two public datasets, the Sleep-EDF Expanded dataset and the ISRUC-Sleep dataset. The accuracy and Kappa coefficient are 96.36% and 93.40%, respectively, for classifying three classes of sleep stages using the Sleep-EDF Expanded dataset, whereas the accuracy and Kappa coefficient are 96.57% and 83.05%, respectively, for five classes of sleep stages using the same dataset. Our model achieves the best performance in classifying sleep stages when compared with the state-of-the-art techniques. **Keywords: Sleep Disorder Breathing, EEG, EMG, EOG, Deep learning, Classification, Convolutional Neural Networks, Long Short Term Memory, Sleep stage.** #### 1 Introduction Sleep is a critical part of human life which helps to maintain good health and quality of life. When a person feels tired after a full night's sleep or fatigued during the day, this can be an indication that the person may be suffering from Sleep Disorders (SD) [1; 2]. Examples of SD diseases include Sleep Disordered Breathing (SDB) [3], Periodic Legs Movement (PLM) [4] and Insomnia [5]. A study by Peppard et al. [6] found that about 30% of the adult population in the United States of America have insomnia. Also, more than 50 million Americans are diagnosed with sleep disorders, and approximately 25 million Americans have SDB [7]. An early-stage diagnosis of SD can protect patients from severe diseases including cardiovascular problems, neurocognitive deficits, diabetes, stroke and recurrent heart attacks [3; 8]. Sleep is categorized into five sleep stages according to the guidelines of the American Academy of Sleep Medicine (AASM) [9]: the Wake (W) stage, the Non-Rapid Eye Movement (NREM) stage, which contains three stages (N1, N2 and N3), and the Rapid Eye Movement (REM) stage. Normally, people move from the W stage to the NREM stage, followed by the REM stage. Each sleep stage's electrical brain activity is recorded by sensors attached to different parts of the body. There are three different types of brain activity: alpha, theta and delta. The W stage exhibits alpha activity, which appears in the occipital region. The N1 stage is shallow sleep, characterized by low alpha activity and the occurrence of theta activity [10]. The actual sleep starts in the N2 stage, where a unique waveform is produced, which is called a _sleep spindle_ [11]. The N3 stage is a deep sleep stage characterized by the occurrence of the delta wave [12]. Lastly, the REM stage is characterized by low-voltage and fast activity in theta waves [13]. Table 1 shows the characteristic frequency of EEG signals for each sleep stage.
The percentages of a normal cycle of sleep stages are: 50-60% of sleep time spent in the (N1, N2) light sleep stages, 15-20% of sleep time spent in the (N3) deep sleep stage, 20-25% of sleep time spent in the REM sleep stage, and 5% or less of the sleep time spent in the W sleep stage [14]. In a sleep laboratory, polysomnography (PSG) [15] is the standard clinical procedure used for classification of sleep stages. A PSG device has multiple sensors to record physiological signals such as Electromyogram (EMG) [16], Electrocardiography (ECG) [17], Electroencephalogram (EEG) [18], and Electrooculogram (EOG) [19] signals. Sleep experts use manual analysis of physiological signals to classify sleep stages. The drawbacks of manual analysis include a time-consuming process, the possibility of human error in diagnosis, and an inconvenient procedure for patients [20]. \begin{table} \begin{tabular}{|l|l|} \hline **Sleep stage** & **Characteristic frequency** \\ \hline W & Alpha (8-12 Hz) \\ \hline N1 & Theta (4-8 Hz) \\ \hline N2 & Spindle (12-15 Hz) \\ \hline N3 & Delta (0.5-4 Hz) \\ \hline REM & Alpha (8-12 Hz) \\ \hline \end{tabular} \end{table} Table 1: Characteristic frequency ranges of EEG signals for each sleep stage Therefore, an automatic procedure for the classification of sleep stages would help in diagnosing SD at hospitals. With improving technologies in the health care system, machine learning models have been developed to evaluate biomedical signals, including EEG, ECG, EMG and EOG signals. For example, studies proposed models for different biomedical problems, such as detection of Parkinson's disease using EEG signals [21], detection of directions of eye movements using EOG signals [22], and detection of atrial fibrillation using ECG signals [23]. A few studies in the literature developed machine learning models for sleep stage classification. Some of these studies suggested extracting features from EEG signals and then classifying them using machine learning. They classified 30-second segments into three sleep stages (W, NREM and REM) and five sleep stages (W, N1, N2, N3 and REM). For instance, Hassan et al. [24] proposed an Empirical Mode Decomposition (EMD) method and a random undersampling boosting (RUSBoost) classifier. The segment classification accuracy was 94.23% in the three sleep stage classification and 83.49% in the five sleep stage classification. Another study by Hassan et al. [25] proposed Complete Ensemble Empirical Mode Decomposition (CEEMD) and a Bootstrap Aggregating (Bagging) classifier. The classification accuracy on all segments for the three sleep stage classification was 94.10%, and for the five sleep stage classification it was 90.96%. Zhu et al. [26] proposed a graph domain method and a Support Vector Machine (SVM) to classify the segments into the three and five sleep stages. The accuracy of their model on the classification of the three sleep stages was 92.60%, and for the five sleep stages it was 88.90%. Sharma et al. [27] proposed a wavelet filter method and an SVM classifier. The segment classification accuracy for the three sleep stages was 93.50%, and for the five sleep stages it was 91.5%. Satapathy et al. [28] proposed a model that used statistical features such as the mean, variance and skewness. They used a random forest classifier to classify 30-second segments of EEG signals into the five sleep stages, and the accuracy was found to be 92.79%. A study by Rahman et al. [29] proposed a model that used a discrete wavelet transform method and an SVM classifier.
The segment classification accuracy for the five sleep stages was found to be 91.70%. Recently, researchers proposed Deep Learning (DL) techniques based on Convolutional Neural Networks (CNN) for sleep stage classification. The CNN architecture has been very successful in classification [30; 31], object recognition [32] and image segmentation [33] problems. Several studies proposed different CNN models to classify 30-second segments into the three and five sleep stages. For instance, Yildirim et al. [34] proposed a CNN model to extract features from EEG and EOG segments without applying any feature engineering methods. They used 10 layers of 1D-CNN and a fully connected layer. Their model achieved an accuracy of 94.24% in the three sleep stage classification and 90.98% in the five sleep stage classification. Nguyen et al. [35] proposed a CNN model for the five sleep stage classification. Their architecture contains three layers of 1D-CNN, in which the first and second 1D-CNN layers are each followed by max-pooling and dropout layers and batch normalization, and the last 1D-CNN layer is followed by max-pooling and two fully connected layers.
Their model achieved an accuracy of 87.67% in the five sleep stage classification. Similarly, Zhu et al. [36] proposed a deep learning model based on 1D-CNN and an attention mechanism. The segment classification accuracy for the five sleep stages was reported to be 82.80%. The existing studies mentioned above share some limitations. Firstly, they involve feature extraction methods, which are complicated, time-consuming and computationally complex processes [37]. Secondly, most of the existing works are based on a single channel of EEG signals, whereas other behaviours, such as the muscle and eye movements recorded by EMG and EOG signals, can also reflect sleep irregularities [38]. We address these limitations by using a combination of EEG, EMG and EOG signals, which provide distinct features that help to improve the results. EOG and EMG signals are valuable additional sources alongside EEG signals in the classification of sleep stages. Muscular activities and eye movements appear in EMG and EOG signals during the sleep stages: EMG signals show that muscular activity is reduced during the NREM sleep stage, whereas muscular activity is lost completely during the REM stage. On the other hand, EOG signals show bilateral eye movements during the REM stage [38]. These features from EMG and EOG signals can distinguish between the NREM and REM sleep stages.
Few studies have focused on the classification of 30-second segments of sleep stages based on a combination of EEG, EMG and EOG signals. For instance, Cui et al. [39] proposed a CNN model to extract features from EEG, EMG and EOG segments without applying any feature engineering methods. They used two layers of 2D-CNN, each followed by max-pooling, with a fully connected layer at the end. Their model achieved an accuracy of 92% on a subject-wise test set for the five sleep stage classification. Phan et al. [40] proposed a Fast Fourier Transform (FFT) method and a 2D-CNN model for the five sleep stage classification. Their model achieved an accuracy of 83.6% on a subject-wise test set. In this paper, we propose an efficient automatic deep learning model for the classification of the three and five sleep stages. Our proposed model is an end-to-end deep learning model, called _SleepStageNet_ (_SSNet_), which classifies 30-second segments of the combination of EEG, EMG and EOG signals. Our proposed architecture contains two deep learning networks. The first deep learning network includes a 1D-CNN network to extract time-invariant features from the raw signals. The second deep learning network includes a Long Short Term Memory (LSTM) network to extract temporal features from a sequence of the raw signal. A fully connected layer classifies the combined features extracted from both deep learning networks. SSNet can be used for the automatic classification of sleep stages at hospitals; it can assist physician experts in analysing PSG signals rather than relying on manual methods. This paper is organized as follows. Section 2 covers the data preparation, describing the two datasets and the data distribution. Section 3 describes the proposed SSNet. Section 4 presents the results, while Section 5 presents the discussion. The conclusion is presented at the end. ## 2 Data Preparation This section describes the two public datasets, the ISRUC-Sleep dataset and the Sleep-EDF Expanded dataset, and the data distribution used in the experiments of this study. ### ISRUC-Sleep Dataset This dataset was collected by the Sleep Medicine Centre of the Hospital of Coimbra University (CHUC) [9]. The total number of PSG recordings is 116, with 11 channels each. Each recording includes the following channels: * Six EEG channels (C3-A2), (C4-A1), (F3-A2), (O1-A2), (O2-A1), and (F4-A1), with the reference electrodes A1 and A2 placed on the two earlobes. * Two EOG channels (LOC-A2) and (ROC-A1), which record the left and right eye movements. * A chin EMG channel (X1), placed between the chin and the lower lip. * One channel of ECG signals (X2). * Two EMG channels (X3 and X4), which record the left and right leg movements. Each recording was sampled at 200 Hz, and the duration of each recording was around 8 hours. Sleep physicians segmented the recordings into 30-second segments and labelled them according to the American Academy of Sleep Medicine (AASM) rules. Each segment was labelled with one of the five sleep stages: W, N1, N2, N3 and REM. The ISRUC-Sleep dataset is divided into three subgroups depending on the health status of the subjects: * **Subgroup 1:** the data are recorded from 100 subjects with sleep disorders. Each recording belongs to one subject. * **Subgroup 2:** the data are recorded from 8 subjects who were under treatment. Two recording sessions are provided per subject. * **Subgroup 3:** the data are recorded from 10 healthy subjects. Each recording belongs to one subject.
### Sleep-EDF Expanded Dataset The Sleep-EDF Expanded dataset is an extended version of the Sleep-EDF dataset [41], published publicly on the PhysioBank website in 2013. The total number of PSG recordings is 197. Each recording contains two channels of EEG signals (Fpz-Cz and Pz-Oz electrode locations), one channel of EOG signals (horizontal), and one channel of chin EMG signals. The 30-second segments of each recording were labelled manually by sleep experts based on the AASM guidelines. Each segment is labelled with one of five sleep stages: W, N1, N2, N3, and REM, and the sampling rate of each recording is 100 Hz. The dataset is divided into two subgroups [42]: * **Sleep Cassette subgroup (SC*):** it contains 153 recordings. Each pair of recordings belongs to one healthy subject, and the duration of the two recordings is around 20 hours. * **Sleep Telemetry subgroup (ST*):** it contains 44 recordings. The duration of each recording is around 9 hours. Each recording belongs to one subject who had mild difficulty in falling asleep. ### Data Distribution We used five channels of EEG, EOG and EMG signals (O1-A2, C3-A2, C4-A1, X1 and LOC-A2) from the ISRUC-Sleep dataset, following the recommendation of Cui et al. [39], who suggested that a CNN can extract features from the combination of EEG, EOG and EMG signals to classify the five sleep stages. Each segment has 6000 sampling points, and each recording is labelled with a final diagnosis such as SDB, Epilepsy [43], Parasomnia [44], or another sleep-related disorder. In this study, we selected the recordings with a final diagnosis of SDB. From the Sleep-EDF Expanded dataset, we randomly selected 73 recordings from the (SC*) subgroup and 42 recordings from the (ST*) subgroup. This restricted selection was due to limited computational resources, including the GPU and the size of the RAM. Each segment has 3000 sampling points. We used all four available channels of EEG, EOG and EMG signals (Fpz-Cz, Pz-Oz, EOG horizontal and chin EMG). Table 2 shows the details of the selected segments of the two datasets. We randomly selected 25,449 out of 55,824 NREM segments of the ISRUC-Sleep dataset and 25,201 out of 77,158 NREM segments of the Sleep-EDF Expanded dataset. The number of segments was reduced to prevent overfitting and to counter class imbalance during the classification stages. It is worth mentioning that we did not use any methods for filtering or noise removal; we only normalised the segments in the two datasets by the Z-score, as presented in Eq. (1) [45]. \[z_{score}=\frac{(S-E_{S})}{\alpha_{S}} \tag{1}\] where \(S\) is the segment, \(E_{S}\) is the mean of the segment and \(\alpha_{S}\) represents the standard deviation of the segment. SSNet was trained and tested on a system having an Intel (R) Core (TM) 3.6 GHz (i7-7700) processor and 8 GB RAM. We used Python 3.7 with the Keras and Scikit-learn libraries. We used the Adam optimizer with a learning rate of 0.002, the cross-entropy loss function, and a batch size of 128. We evaluated the performance of the proposed SSNet on the two datasets, selecting segments randomly for each set: the training set has 70%, the validation set 15%, and the testing set 15% of the data.
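As an illustration of the preprocessing just described, the following sketch applies the Z-score of Eq. (1) per segment and performs the random 70/15/15 split; the dummy stand-in arrays and variable names are ours:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def zscore(segments):
    """Per-segment Z-score normalization (Eq. 1), along the time axis."""
    m = segments.mean(axis=1, keepdims=True)
    s = segments.std(axis=1, keepdims=True)
    return (segments - m) / s

# Dummy stand-ins for the prepared data
# (Sleep-EDFX shape: 3000 samples x 4 channels per 30-second segment).
X = zscore(np.random.randn(1000, 3000, 4))
y = np.random.randint(0, 5, size=1000)

# Random 70/15/15 split into training, validation and test sets
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)
```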
## 3 The proposed SSNet Our proposed architecture consists of two main deep learning networks, as shown in Figure 1. The combination of EEG, EMG, and EOG signals is fed into the first and second deep learning networks. The first deep learning network is based on a CNN model, while the second deep learning network is based on an LSTM model. The CNN model learns filters to extract time-invariant features from the raw signals, while the LSTM model learns long-term dependencies from the input sequences of the previous sleep stage segments. For both deep learning networks, we set the sizes of the CNN and LSTM to be small, to select only the important features from the raw signals. The selected features produced by the first and second deep learning networks are concatenated and fed to a classifier, a fully connected layer, to predict the final results. Our architecture is designed for classifying combinations of 30-second EEG, EMG and EOG segments following the AASM standard. Table 3 lists the parameters of the two deep learning networks and the classifier of SSNet. ### First deep learning network We employ multiple 1D-CNN layers with small filter sizes to extract time-invariant features, such as specific signal patterns, from the raw signals. The first deep learning network consists of five blocks of 1D-CNN, max-pooling and dropout layers. Each 1D-CNN layer contains _Kernels_ filters, which are used to extract features from the raw signal in the form of a feature map [46]. The CNN layer is presented in Eq. (2) [8]: \[CNN=W_{k}^{l}x_{i,j}^{l}+b_{k}^{l} \tag{2}\] where \(W\) is the weight vector, \(b\) is the bias, and \(x\) is the raw signal, while \(l\) is the layer and \((i,j)\) is the location of the feature value in the \(k\)th feature map. The feature maps are produced by convolving the inputs with the filters and applying the ReLU activation, represented in Eq. (3) [8]: \[ReLU=\begin{cases}0,&\text{for }x<0\\ x,&\text{for }x\geq 0\end{cases} \tag{3}\] where \(x\) is the input value. Each 1D-CNN layer is followed by 1D max-pooling to reduce the feature size and the computational cost of the architecture. Then, we add a dropout layer to prevent overfitting during the training. We repeat the same order of the previous layers (1D-CNN, max-pooling and dropout) four times with different parameters, as presented in Table 3. After that, we add a flatten layer to convert the features produced by the last 1D max-pooling layer into a single long feature vector. The total number of features produced by the first deep learning network is 120. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **Number of segments** & **Dataset** & **W** & **N1** & **N2** & **N3** & **R** & **Total** \\ \hline Total segments & SleepEDFX & 25,201 & 10,420 & 52,502 & 14,236 & 21,602 & 123,962 \\ \hline Selected segments & SleepEDFX & 25,201 & 3,207 & 16,748 & 5,246 & 21,602 & 72,000 \\ \hline Total segments & ISRUC-sleep & 19,810 & 11,101 & 27,398 & 17,325 & 11,256 & 86,993 \\ \hline Selected segments & ISRUC-sleep & 19,810 & 4,935 & 12,668 & 7,846 & 11,256 & 56,515 \\ \hline \end{tabular} \end{table} Table 2: Distribution of the selected segments of SleepEDFX and ISRUC-Sleep datasets.
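A minimal Keras sketch of this branch is given below. The feature-map sizes follow Fig. 1 and Table 3; the kernel size of the fourth convolution is not legible in the extracted table, so 3 is assumed there, and the function name is ours:

```python
from tensorflow.keras import layers

def cnn_branch(x):
    """First deep learning network: five 1D-CNN blocks, each followed by
    max-pooling (size 3) and dropout (0.02), then a flatten layer.
    For a 3000-sample input this yields the 120 features quoted above."""
    for filters, kernel in [(64, 5), (32, 3), (20, 2), (16, 3), (10, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=3)(x)
        x = layers.Dropout(0.02)(x)
    return layers.Flatten()(x)
```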
### Second deep learning network We apply two layers of LSTM networks to capture temporal features from previous input sequences, reflecting, for example, sleep scoring rules [47]. For instance, sleep experts label a segment as W stage when alpha activity appears in the occipital region in more than 50% of the segment. In this case, the LSTM network can learn long-term dependencies from the previous sleep stage segments: having seen the W stage, it can keep scoring segments as W stage while it still detects the characteristics of that stage. The LSTM network learns long-term dependencies through three gate layers: the Input gate layer, the Forget gate layer and the Output gate layer. The mathematical representation of the Forget gate layer \(f_{t}\) is presented in Eq. (4); it uses a sigmoid layer to exclude some information from the cell state \(C_{t}\). The Input gate layer \(i_{t}\) decides which new information to store in \(C_{t}\) in two steps. The first step, presented in Eq. (5), is a sigmoid layer determining which values will be updated. The second step, presented in Eq. (6), is a \(tanh\) layer creating the new candidate values \(C_{t}^{-}\) that will be added to \(C_{t}\). These two steps then update the old state \(C_{t-1}\), as presented in Eq. (7). The Output gate layer \(h_{t}\) produces the output in two steps: first, a sigmoid layer \(o_{t}\) is used to filter the information in \(C_{t}\), as presented in Eq. (8); second, a \(tanh\) layer is used to normalize the values in \(C_{t}\) between 1 and -1, and the result is multiplied by \(o_{t}\), as presented in Eq. (9).

Figure 1: The detailed architecture of the proposed SSNet, which consists of two main deep learning networks, shown for the Sleep-EDFX input size. The first deep learning network is based on 1D-CNN layers with 64, 32, 20, 16 and 10 feature maps, respectively. Max-pooling layers of size 3 are added after each 1D-CNN layer. The second deep learning network is based on LSTM layers with sizes 64 and 20, respectively. The classifier at the end concatenates the extracted features and predicts the final outputs using a fully connected layer with softmax.

\[f_{t}=\sigma(W\cdot[h_{t-1},x_{t}]+b) \tag{4}\] \[i_{t}=\sigma(W\cdot[h_{t-1},x_{t}]+b) \tag{5}\] \[C_{t}^{-}=tanh(W\cdot[h_{t-1},x_{t}]+b) \tag{6}\] \[C_{t}=f_{t}*C_{t-1}+i_{t}*C_{t}^{-} \tag{7}\] \[o_{t}=\sigma(W\cdot[h_{t-1},x_{t}]+b) \tag{8}\] \[h_{t}=o_{t}*tanh(C_{t}) \tag{9}\] where \(h_{t-1}\) is the hidden state and \(x_{t}\) is the input feature at time \(t\), while \(W\) is the weight matrix, \(b\) is the bias, and \(\sigma\) is the sigmoid activation function. The first LSTM network is followed by a batch-normalization layer to speed up the training phase. The sizes of the two LSTM networks are 64 and 20, respectively. The total number of features produced by the second deep learning network is 20. \begin{table} \begin{tabular}{l l c c c c} \hline **Deep learning network** & **Layer name** & **Feature map** & **Kernel size** & **Stride** & **Activation** \\ \hline \multirow{16}{*}{**First**} & 1D-CNN & 64 & 5 & Same & ReLU \\ & 1D-Maxpooling & 3 & & & \\ & Dropout & 0.02 & & & \\ & 1D-CNN & 32 & 3 & Same & ReLU \\ & 1D-Maxpooling & 3 & & & \\ & Dropout & 0.02 & & & \\ & 1D-CNN & 20 & 2 & Same & ReLU \\ & 1D-Maxpooling & 3 & & & \\ & Dropout & 0.02 & & & \\ & 1D-CNN & 16 & & Same & ReLU \\ & 1D-Maxpooling & 3 & & & \\ & Dropout & 0.02 & & & \\ & 1D-CNN & 10 & 3 & Same & ReLU \\ & 1D-Maxpooling & 3 & & & \\ & Dropout & 0.02 & & & \\ & Flatten & & & & \\ \hline \multirow{5}{*}{**Second**} & LSTM & 64 & & & \\ & Recurrent dropout & 0.02 & & & \\ & BatchNormalization & & & & \\ & LSTM & 20 & & & \\ & Recurrent dropout & 0.02 & & & \\ \hline \multirow{2}{*}{**Classifier**} & Concatenated & & & & \\ & Fully connected & 3,5 & Softmax & & \\ \hline \end{tabular} \end{table} Table 3: The detailed feature map of all layers of SSNet.
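Continuing the sketch above, the second branch and the assembly of the two branches (anticipating the classifier described next) can be written as follows. Hyperparameters follow Table 3 and the Data Distribution section (Adam with learning rate 0.002, cross-entropy loss); the helper names are ours, and cnn_branch() is reused from the previous sketch:

```python
from tensorflow.keras import layers, models, optimizers

def lstm_branch(x):
    """Second deep learning network: LSTM(64) -> batch normalization ->
    LSTM(20), each LSTM with recurrent dropout 0.02 (Table 3)."""
    x = layers.LSTM(64, return_sequences=True, recurrent_dropout=0.02)(x)
    x = layers.BatchNormalization()(x)
    return layers.LSTM(20, recurrent_dropout=0.02)(x)

def build_ssnet(n_classes=5, input_len=3000, n_channels=4):
    """Both branches share the raw multi-channel input; their features
    (120 + 20 = 140) are concatenated and classified by a fully
    connected softmax layer, as in Table 3."""
    inp = layers.Input(shape=(input_len, n_channels))
    feats = layers.Concatenate()([cnn_branch(inp), lstm_branch(inp)])
    out = layers.Dense(n_classes, activation="softmax")(feats)
    model = models.Model(inp, out)
    model.compile(optimizer=optimizers.Adam(learning_rate=0.002),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then use the batch size quoted in the text, e.g.:
# build_ssnet().fit(X_train, y_train, validation_data=(X_val, y_val),
#                   batch_size=128, epochs=...)
```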
### Classifier We concatenate all the selected features extracted by the first and second deep learning networks, resulting in 140 features. From this step, our model is able to classify using both the time-invariant features extracted by the CNN and the temporal features learned from the previous input sequences by the LSTM networks. We add a fully connected layer with a softmax to predict the final classification results. We train our model to classify the segments into three classes: W, NREM, and REM. Then, we repeat the experiment to classify the segments into five classes: W, N1, N2, N3, and REM. ## 4 Performance Metrics We evaluated the performance of SSNet using machine learning metrics such as Sensitivity (SE) or Recall, Accuracy (ACC), F1 score, Specificity (SP) and the Kappa coefficient. The Kappa coefficient is an appropriate performance metric for assessing classification performance on an imbalanced dataset [48]. These metrics are calculated as: \[\text{SE}=\frac{TP}{TP+FN} \tag{10}\] \[\text{SP}=\frac{TN}{TN+FP} \tag{11}\] \[\text{ACC}=\frac{TN+TP}{N} \tag{12}\] \[\text{Precision}=\frac{TP}{TP+FP} \tag{13}\] \[\text{F1}=\frac{2(SE\times Precision)}{SE+Precision} \tag{14}\] \[\text{Kappa}=\frac{2(TN\times TP-FP\times FN)}{(TN+FN)\times(FN+TP)+(FP+TP)\times(TN+FP)} \tag{15}\] where TP refers to True Positive segments, TN refers to True Negative segments, FP refers to False Positive segments, FN refers to False Negative segments, and N refers to the total number of segments. ## 5 Results We conducted experiments using the Sleep-EDFX dataset for the three sleep stage classification to find the best input sources. We trained our proposed model with a single channel of EEG (Fpz-Cz), a single channel of EEG (Pz-Oz), a single channel of EMG, a single channel of EOG, a combination of the two channels of EEG and the single channel of EOG, and a combination of the two channels of EEG and the single channel of EMG. The detailed results of the performance of the proposed SSNet with single channels of EEG, EMG and EOG signals, and with the combinations EEG+EOG and EEG+EMG, for the three sleep stage classification using the Sleep-EDFX dataset are presented in Table 4. It can be observed that the highest accuracy and Kappa are obtained with the combination of the two channels of EEG (Fpz-Cz and Pz-Oz) and the EMG signal, and with the same two channels of EEG and the EOG signal. The average accuracy and Kappa are 95.46% and 89.70%, respectively, with the combination of the two channels of EEG and EMG signals; similarly, the average accuracy and Kappa are 95.65% and 90.12%, respectively, for the combination of the two channels of EEG and EOG signals. Based on the results presented in Table 4, we selected the two channels of EEG together with the EMG and EOG signals as input sources and trained our model on both datasets, ISRUC-Sleep and Sleep-EDFX. The detailed results of the performance of the proposed SSNet for the three sleep stage classification with the combination of EEG, EMG and EOG signals are presented in Table 5, and the confusion matrix is presented in Figure 2. The average accuracy, sensitivity, specificity, F1 score and Kappa using the ISRUC-Sleep dataset are 94.90%, 92.00%, 96.02%, 91.90% and 90.34%, respectively, while the corresponding values using the Sleep-EDFX dataset are 96.36%, 94.53%, 97.28%, 94.49%, and 93.40%. The Kappa results of the proposed model with the ISRUC-Sleep dataset range from 87.98% to 98.88%, while the Kappa results with the Sleep-EDFX dataset range from 92.08% to 95.15%. The detailed results of the performance of the proposed SSNet for the five sleep stage classification are presented in Table 6, and the confusion matrix is shown in Figure 3.
The average accuracy, sensitivity, specificity, F1 score and Kappa using the ISRUC-Sleep dataset are found to be 93.69%, 79.51%, 96.10%, 79.05%, and 77.31%, respectively. For the Sleep-EDFX dataset, the average accuracy, sensitivity, specificity, F1 score, and Kappa are found to be 96.57%, 82.81%, 97.89%, 84%, and 83.05%, respectively. It can be seen that the lowest Kappa results are obtained for the N1 class, with 48.24% on the ISRUC-Sleep dataset and 58.77% on the Sleep-EDFX dataset, while the Kappa results for the other classes are significantly better, ranging from 73.55% to 95.15%. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline & \multicolumn{5}{|l|}{**ISRUC-Sleep Dataset**} & \multicolumn{5}{|l|}{**Sleep-EDF Expanded Dataset**} \\ \hline **Class** & **ACC** & **SE** & **SP** & **F1** & **Kappa** & **ACC** & **SE** & **SP** & **F1** & **Kappa** \\ \hline W & 95.80 & 91.30 & 98.08 & 93.61 & **92.35** & 97.25 & 94.32 & 98.81 & 95.98 & **95.15** \\ \hline N1 & 91.92 & 47.65 & 96.08 & 50.34 & 48.24 & 96.80 & 55.95 & 98.60 & 59.58 & 58.77 \\ \hline N2 & 89.55 & 74.74 & 93.91 & 76.49 & 73.55 & 94.48 & 90.27 & 95.75 & 88.31 & 86.75 \\ \hline N3 & 96.18 & 89.39 & 97.29 & 86.55 & 85.52 & 97.59 & 77.98 & 99.19 & 83.00 & 82.44 \\ \hline R & 95.01 & 94.47 & 95.15 & 88.26 & 86.87 & 96.73 & 95.52 & 97.09 & 93.05 & 92.21 \\ \hline _Average_ & _93.69_ & _79.51_ & _96.10_ & _79.05_ & _77.31_ & _96.57_ & _82.81_ & _97.89_ & _84.00_ & _83.05_ \\ \hline \end{tabular} Bold represents the best results. ACC = Accuracy, SE = Sensitivity, SP = Specificity, Kappa = Kappa coefficient. \end{table} Table 6: The classification results for each class and average using SSNet for the five sleep stage classification. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Authors** & **Dataset** & **Method** & **Number of segments** & **Signals** & **Accuracy** & **Kappa** \\ \hline Hassan et al. [24] & Sleep-EDF & EEMD+RUSBoost & 15,188 & 1- EEG & 94.23 & 84.70 \\ \hline Hassan et al. [25] & Sleep-EDF & CEEMDAN+ Bagging & 15,188 & 1- EEG & 94.10 & 93.00 \\ \hline Zhu et al. [26] & Sleep-EDF & HVG+SVM & 14,963 & 1- EEG & 92.60 & 87.00 \\ \hline Sharma et al. [27] & Sleep-EDF & Wavelet filter+SVM & 85,900 & 1- EEG & 92.10 & 56.80 \\ \hline Yildirim et al. [34] & Sleep-EDF & 1D-CNN & 15,188 & \begin{tabular}{l} 1-EEG \\ 1-EOG \\ \end{tabular} & 94.20 & - \\ \hline Yildirim et al. [34] & Sleep-EDFX & 1D-CNN & 127,512 & \begin{tabular}{l} 1-EEG \\ 1-EOG \\ \end{tabular} & 94.23 & - \\ \hline Proposed model & Sleep-EDFX & SSNet & 72,000 & \begin{tabular}{l} 2-EEG \\ 1-EOG \\ \end{tabular} & **96.36** & **93.40** \\ \hline \end{tabular} Bold represents the best results. \end{table} Table 7: The performance of proposed model and the state-of-the-art models for three sleep stages classification using Sleep-EDF and Sleep-EDFX datasets.
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Authors** & **Dataset** & **Method** & **Number of segments** & **Signals** & **Accuracy** & **Kappa** \\ \hline Hassan et al. [24] & Sleep-EDF & EEMD+RUSBoost & 15,188 & 1- EEG & 83.49 & **84.05** \\ \hline Hassan et al. [25] & Sleep-EDF & CEEMDAN+ Bagging & 15,188 & 1- EEG & 90.69 & **89.00** \\ \hline Zhu et al. [26] & Sleep-EDF & HVG+SVM & 14,963 & 1- EEG & 88.90 & 83.00 \\ \hline Sharma et al. [27] & Sleep-EDF & Wavelet filter+SVM & 85,900 & 1- EEG & 91.50 & 58.81 \\ \hline Yildirim et al. [34] & Sleep-EDF & 1D-CNN & 15,188 & 1- EEG & 91.22 & - \\ \hline Rahman et al. [29] & Sleep-EDF & DWT+SVM & 15,188 & 1- EOG & 90.20 & - \\ \hline Nguyen et al. [35] & Sleep-EDFX & 1D-CNN & 3,000 & 1- EEG & 87.67 & - \\ \hline Yildirim et al. [34] & Sleep-EDFX & 1D-CNN & 127,512 & \begin{tabular}{l} 1-EEG \\ 1-EOG \\ \end{tabular} & 90.98 & - \\ \hline Satapathy et al. [28] & Sleep-EDFX & Statistic features+ RF & 15,139 & 1-EEG & 92.79 & **88.00** \\ \hline Rahman et al. [29] & Sleep-EDFX & DWT+SVM & 54,587 & 1- EOG & 91.70 & - \\ \hline Zhu et al. [36] & Sleep-EDFX & Attention CNN & 42,269 & 1-EEG & 82.80 & 77.34 \\ \hline Proposed model & Sleep-EDFX & SSNet & 72,000 & \begin{tabular}{l} 2-EEG \\ 1-EOG \\ \end{tabular} & **96.57** & 83.05 \\ \hline \end{tabular} Bold represents the best results. \end{table} Table 8: The performance of our proposed model and the state-of-the-art models for five sleep stages classification using Sleep-EDF and Sleep-EDFX datasets. ## 6 Discussion Many studies propose feature engineering methods and machine learning models for the three and five sleep stage classifications, while a few studies use deep learning models without any feature engineering methods. Table 7 presents a comparison of accuracy and Kappa between our proposed model and the state-of-the-art models using the Sleep-EDF and Sleep-EDFX datasets. Most of the state-of-the-art studies did not provide the other evaluation metrics. The total number of sleep stage segments obtained from the Sleep-EDFX dataset is 72,000. We achieved an accuracy of 96.36%, approximately 3% higher than the existing state-of-the-art result. It can also be observed from the Kappa results that our proposed model performs better (93.40%) than the state-of-the-art models. Hassan et al. [25] achieved a Kappa of 93%, approximately the same as ours, but used a small number of segments (15,188) in their study. Overall, we conclude that the performance of our proposed model for the classification of the three sleep stages is promising and sets a new state-of-the-art result. Table 8 presents the performance of our proposed model and the state-of-the-art models for the five sleep stage classification using the Sleep-EDF and Sleep-EDFX datasets. We obtained an accuracy of 96.57%, approximately 5% higher than the other state-of-the-art models. Comparing the Kappa results, Hassan et al. [24; 25] and Satapathy et al. [28] achieved higher Kappa results of 84%, 89% and 88%, respectively, with comparatively smaller numbers of segments than our study, as presented in Table 8. In addition, these studies require feature engineering methods for extracting features from the PSG signals before classifying them with traditional machine learning models. Such models may overfit if high-dimensional PSG signals are used to train them, as reported in [49; 50]. It is also reported that using feature engineering methods to convert PSG signals to lower-dimensional feature vectors may result in information loss [50]. Therefore, we believe that our proposed deep learning model is appropriate for the five stage classification, as it performs well on a large dataset and is more generalisable. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline **Authors** & **Classes** & **Method** & **Number of segments** & **Signals** & **Accuracy** & **Kappa** \\ \hline Nguyen et al. [35] & 5 & 1D-CNN & 3,000 & 1- EEG & 86.76 & - \\ \hline Rahman et al. [29] & 5 & DWT+SVM & 9,001 & 1- EOG & 86.00 & - \\ \hline Proposed model & 3 & SSNet & 56,515 & \begin{tabular}{l} 2-EEG \\ 2-EMG \\ \end{tabular} & **94.90** & **90.34** \\ \hline Proposed model & 5 & SSNet & 56,515 & \begin{tabular}{l} 2-EEG \\ 2-EMG \\ \end{tabular} & **93.96** & **77.31** \\ \hline \end{tabular} \end{table} Table 9: The performance of proposed model and the state-of-the-art models for three and five sleep stages classification using ISRUC-Sleep dataset. Table 9 presents the performance of our proposed model and the state-of-the-art models for the three and five sleep stage classifications using the ISRUC-Sleep dataset. We used 56,515 segments of sleep stages obtained from the ISRUC-Sleep dataset. We obtained accuracies of 94.90% and 93.96% for the three and five sleep stage classifications, respectively. These results are higher than the state-of-the-art results by approximately 7%. The state-of-the-art studies [29; 35] did not provide Kappa results in their articles for comparison. It can be observed from Tables 5 and 6 that our proposed model can classify combinations of signals with different sampling frequencies: it achieved good performance using the ISRUC-Sleep dataset (200 Hz) as well as the Sleep-EDFX dataset (100 Hz). However, it can also be observed from Table 6 that the N1 class did not perform as well as the other classes. This low performance in identifying the N1 class can be attributed to the common transition characteristics from the W stage to the other sleep stages, which make it harder to distinguish, as reported by Khalighi et al. [9]. Detection of the REM stage is essential for diagnosing sleep disorders including narcolepsy and REM behaviour disorder [51]. Previous research studies suggested that EOG and EMG signals may provide discriminative features to separate the REM stage from the other sleep stages [52]. Therefore, we compare the performance of our model in detecting the REM stage with the state-of-the-art methods, using the evaluation metrics of precision and recall. A few studies [25; 26; 27; 29; 36] provided their confusion matrices, which enabled us to calculate the precision and recall of the REM stage for those studies. \begin{table} \begin{tabular}{|l|l|l|} \hline **Research studies** & **Precision** & **Recall** \\ \hline Hassan et al. [25] & 80.17 & 80.86 \\ \hline Sharma et al. [27] & 46.87 & 36.45 \\ \hline Zhu et al. [26] & 76.21 & 72.85 \\ \hline Zhu et al. [36] & 82 & 84.60 \\ \hline Rahman et al. [29] & - & 84.70 \\ \hline Proposed model & **90.71** & **95.52** \\ \hline \end{tabular} Bold represents the best results. \end{table} Table 10: Comparison of REM detection in terms of precision and recall for five sleep stage classification by the proposed model and the state-of-the-art methods.
Table 10 presents the REM stage precision and recall for five sleep stage classification by the proposed model and the existing state-of-the-art studies. It can be observed that the combination of EEG, EMG and EOG signals improves the detection of the REM stage, with our proposed model achieving 90.71% precision and 95.52% recall, both higher than the state-of-the-art studies. SSNet has several advantages over the existing studies. Firstly, it uses multiple channels of EEG, EMG and EOG signals, which helps to improve the classification results. Secondly, we propose CNN and LSTM networks that work together efficiently to extract features automatically, instead of using complicated feature engineering methods. Thirdly, we thoroughly tested our proposed model on two popular datasets for the classification of three and five sleep stages. The results show that our proposed network performs better on both classification problems.

## 7 Conclusion

In this paper, we introduced a novel deep learning model, called SSNet. Our proposed model classifies 30-second segments of a combination of EEG, EOG and EMG signals into three and five sleep stages. SSNet contains two deep learning networks: the first is composed of CNNs, while the second is composed of LSTM networks. The extracted features of both networks are concatenated and passed to a fully connected layer for classification. The results demonstrated that the combination of EEG, EOG and EMG signals contributed significantly to improving the classification performance of our proposed model. The accuracy and Kappa achieved by SSNet for three sleep stage classification were 96.36% and 91.81%, respectively, while the accuracy and Kappa achieved by SSNet for five sleep stage classification were 96.57% and 87.43%, respectively. The limitation of our work is the low performance of our proposed model in detecting the N1 class.
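As a concrete illustration of the dual-branch design summarised above, here is a minimal PyTorch sketch of an SSNet-style network; the channel counts, kernel sizes and the 3,000-sample input length (30 s at 100 Hz) are our own illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DualBranchSleepNet(nn.Module):
    """SSNet-style sketch: a CNN branch and an LSTM branch extract
    features from the same multi-channel PSG segment; the features are
    concatenated and classified by a fully connected layer."""
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(                     # CNN branch
            nn.Conv1d(in_channels, 32, kernel_size=50, stride=6),
            nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(32, 64, kernel_size=8),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                  # -> (B, 64, 1)
        )
        self.lstm = nn.LSTM(in_channels, 64, batch_first=True)  # LSTM branch
        self.fc = nn.Linear(64 + 64, n_classes)       # concat -> classifier

    def forward(self, x):                             # x: (B, C, T)
        f_cnn = self.cnn(x).squeeze(-1)               # (B, 64)
        _, (h, _) = self.lstm(x.transpose(1, 2))      # run over time axis
        f_lstm = h[-1]                                # (B, 64)
        return self.fc(torch.cat([f_cnn, f_lstm], dim=1))

# 30-second segments of 4 channels (e.g. 2 EEG, 1 EOG, 1 EMG) at 100 Hz.
model = DualBranchSleepNet()
logits = model(torch.randn(8, 4, 3000))
print(logits.shape)  # torch.Size([8, 5])
```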
2305.14407
Large-Scale Formal Proof for the Working Mathematician -- Lessons learnt from the ALEXANDRIA Project
ALEXANDRIA is an ERC-funded project that started in 2017, with the aim of bringing formal verification to mathematics. The past six years have seen great strides in the formalisation of mathematics and also in some relevant technologies, above all machine learning. Six years of intensive formalisation activity seem to show that even the most advanced results, drawing on multiple fields of mathematics, can be formalised using the tools available today.
Lawrence C Paulson
2023-05-23T13:45:34Z
http://arxiv.org/abs/2305.14407v2
Large-Scale Formal Proof for the Working Mathematician -- Lessons learnt from the ALEXANDRIA Project

###### Abstract

ALEXANDRIA is an ERC-funded project that started in 2017, with the aim of bringing formal verification to mathematics. The past six years have seen great strides in the formalisation of mathematics and also in some relevant technologies, above all machine learning. Six years of intensive formalisation activity seem to show that even the most advanced results, drawing on multiple fields of mathematics, can be formalised using the tools available today.

Keywords: Isabelle, formalisation of mathematics, ALEXANDRIA project

## 1 Introduction

In the summer of 2017, the Newton Institute at Cambridge held a programme entitled _Big Proof_ (BPR) "directed at the challenges of bringing proof technology into mainstream mathematical practice". It was held in recognition of the formalisations that had already been done (which were indeed big). The programme webpage1 specifically lists the proofs of the Kepler conjecture [18], the odd order theorem [16] and the four colour theorem [15]. That summer also saw the start of my ERC project, ALEXANDRIA. _Big Proof_ represented an acknowledgement that the formalisation of mathematics could no longer be ignored, but also an assertion that big problems remain to be solved. These included "novel pragmatic foundations" and large-scale "formal mathematical libraries" and "inference engines", and also the "curation" of formalised mathematical knowledge.

Footnote 1: [https://www.newton.ac.uk/event/bpr/](https://www.newton.ac.uk/event/bpr/)

ALEXANDRIA was conceived in part to try to identify those big problems. By hiring professional mathematicians and asking them to formalise advanced mathematics, we would get a direct idea of the obstacles they faced. We would also try to refine our tools, extend our libraries and investigate other technologies. We would have only five years (extended to six due to COVID-19). The need for formalisation had been stressed by Vladimir Voevodsky, a Fields medallist, who pointedly asked "And who would ensure that I did not forget something and did not make a mistake, if even the mistakes in much more simple arguments take years to uncover?" [37]. He advocated a new sort of formalism, homotopy type theory, which was the subject of much excitement. However, the most impressive formalisations by that time had been done in Coq (four colour theorem, odd order theorem), HOL Light (Kepler conjecture and much else) or Isabelle/HOL (part of the Kepler proof, and more). Lean, a newcomer, was attracting a user community. Perhaps our project would shed light on the respective values of the available formalisms: calculus of constructions (Coq, Lean), higher-order logic or homotopy type theory. Voevodsky would never find out, due to his untimely death in September 2017.

Since that date, research into the formalisation of mathematics has plunged ahead. Kevin Buzzard, a number theorist at Imperial College London, followed some of the _Big Proof_ talks online. This resulted in his adoption of Lean for his Xena Project, with the aim of attracting students to formalisation.2 Xena has had a huge impact, but here I'd like to focus on the work done within ALEXANDRIA.

Footnote 2: [https://www.ma.imperial.ac.uk/~buzzard/xena/](https://www.ma.imperial.ac.uk/~buzzard/xena/)

## 2 A Brief Prehistory of the Formalisation of Mathematics

Mathematics is a work of the imagination, and the struggle between intuition and rigour has gone on since classical times.
Euclid's great contribution to Greek geometry was the unification of many separate schools through his system of axioms and postulates. Newton and Leibniz revolutionised mathematics, but the introduction of infinitesimals was problematical. During the 19th century, the "arithmetisation of analysis" carried out by Cauchy and Weierstrass replaced infinitesimals by rigorous \(\epsilon\)-\(\delta\) arguments. (We would not get a consistent theory of infinitesimals until the 1960s, under the banner of non-standard analysis.) Dedekind and Cantor promulgated a radical new understanding of sets and functions that turned out to be inconsistent, until Zermelo came up with his axioms. It is notable that Zermelo set theory (which includes the axiom of choice but lacks Fraenkel's replacement axiom) is approximately equal in logical strength to higher-order logic.

Only axiomatic mathematics can be formalised. The first attempt was by Frege, whose work (contrary to common belief) was not significantly impacted by Russell's paradox. Russell and Whitehead in their _Principia Mathematica_ [39] wrote out the proofs of thousands of mathematical propositions in a detailed axiomatic form. The work of Bourbaki can also be seen as a kind of formalised mathematics. The philosopher Hao Wang wrote on the topic and also coded the first automatic theorem prover [38] for first-order logic, based on what we would now recognise as a tableau calculus. This brings us to NG de Bruijn, who in 1968 created AUTOMATH [4], and to his student's formalisation [23] of Landau's _Foundations of Analysis_ in 1977, and then to the birth of Mizar [17], in which a truly impressive amount of mathematics was formalised in a remarkably readable notation. More recent history -- analysis in HOL Light, the four colour theorem in Coq, etc -- is presumably familiar to readers. But it is appropriate to close this section with a prescient remark by de Bruijn back in 1968:

As to the question what part of mathematics can be written in AUTOMATH, it should first be remarked that we do not possess a workable definition of the word "mathematics". Quite often a mathematician jumps from his mathematical language into a kind of metalanguage, obtains results there, and uses these results in his original context. It seems to be very hard to create a single language in which such things can be done without any restriction. [3, p. 3]

And so we have two great scientific questions:

* **What sort of mathematics can be formalised?**
* **What sort of proofs can be formalised?**

We would investigate these questions -- mostly in the context of Isabelle/HOL -- by formalising as much mathematics as we could, covering as many different topics as possible. I expected to run into obstacles here and there, which would have to be recorded if they could not be overcome.

## 3 Alexandria: Warmup Formalisation Exercises

The ERC proposal called for hiring research mathematicians, who would bring their knowledge of mathematics as it was practised, along with their _inexperience_ of Isabelle/HOL. Their role would be to formalise increasingly advanced mathematical material with the twin objectives of developing formalisation methodologies and identifying deficiencies that might be remedied by extending Isabelle/HOL somehow. The project started in September 2017. We hired Anthony Bordg and Angeliki Koutsoukou-Argyraki. A third postdoc was required to undertake any necessary Isabelle engineering, and Wenda Li was hired.
One of the tasks for the first year was simply to reorganise and consolidate the Isabelle/HOL analysis library, which had mostly been translated from HOL Light. But we were also supposed to conduct pilot studies. The team set to work enthusiastically, and already in the first year they created a number of impressive developments:

* _Irrational rapidly convergent series_, formalising a 2002 proof by J. Hancl [19]
* _Projective geometry_, including Hessenberg's theorem and Desargues's theorem
* The theory of _quantum computing_ (which identified a significant error in one of the main early papers)
* _Quaternions_, _octonions_ and several other small exercises
* Effectively counting _real and complex roots of polynomials_, and the Budan-Fourier theorem [29, 30]
* The first formal proof that _every field contains an algebraically closed extension_ [36]

Koutsoukou-Argyraki wrote up her reactions to Isabelle/HOL from the perspective of a mathematician in her paper "Formalising Mathematics -- in Praxis" [24].

## 4 Advanced Formalisations

As noted above, Kevin Buzzard had taken an interest in formalisation through participation in _Big Proof_, and by 2019 had marshalled large numbers of enthusiastic students to formalise mathematics using Lean. He had also made trenchant criticisms of even the most impressive prior achievements: that most of this work concerned simple objects such as finite groups, or was just 19th-century mathematics. Nobody seemed to be working with sophisticated objects. He expressed astonishment that Grothendieck schemes -- fundamental objects in algebraic geometry and number theory -- had not been formalised in any tool. His criticisms helped focus our attention on the need to tackle difficult, recent and deep mathematics.

Team members proposed their own tasks, but we also contributed to one another's tasks, sometimes with the help of interns or students. We completed three notable projects during this middle period:

* _Irrationality and transcendence criteria for infinite series_ [26], extending the Hancl work mentioned above with material from two more papers: Erdos-Straus [12] and Hancl-Rucki [20].
* _Ordinal partition theory_ [8]: infinite forms of Ramsey's theorem, but for order types rather than cardinals. We formalised relatively recent papers by Erdos-Milner [13] and Larson [28], and as a preliminary, the Nash-Williams partition theorem [35]. These were deep results in the context of Zermelo-Fraenkel set theory, involving highly intricate inductive constructions. One of the papers contained so many errors as to necessitate publishing a second paper [14] with a substantially different proof. This material was difficult even for Erdos!
* _Grothendieck Schemes_ [2]. Buzzard had formalised schemes in Lean [5] (three times), and even claimed that Isabelle was not up to the job due to its simple type system. We took up the challenge, and it was straightforward, following a new approach based on locales to manage the deep hierarchies of definitions.

We were aiming for a special issue devoted to formalisation in the journal _Experimental Mathematics_, and were delighted to see these projects take up three of the six papers ultimately accepted.

## 5 Seriously Deep Formalisation Projects

Inspired by the success of the previous projects -- conducted under the difficult circumstances of COVID-19 lockdown -- team members continued to propose theorems to formalise, and we continued to collaborate in small groups. By now we had the confidence to take on almost anything.
There are too many projects to describe in full, so let's look at some of the highlights.

### Szemeredi's regularity lemma and Roth's theorem on arithmetic progressions

_Szemeredi's regularity lemma_ is a fundamental result in extremal graph theory. It concerns a property called the _edge density_ of two given sets of vertices \(X,Y\subseteq V(G)\), and a further property of \((X,Y)\) being an \(\epsilon\)-regular pair for any given \(\epsilon>0\). The lemma itself states that for a given \(\epsilon>0\) there exists some \(M\) such that every graph has an \(\epsilon\)-regular partition of its vertex set into at most \(M\) parts. Intuitively, \((X,Y)\) is an \(\epsilon\)-regular pair if the density of edges between various subsets \(A\subseteq X\) and \(B\subseteq Y\) is more or less the same for all possible \(A\) and \(B\); an \(\epsilon\)-regular partition enjoys that property for all but an insignificant number of pairs \((X,Y)\) of vertex sets taken from the partition. Intuitively then, the vertices of any graph can be partitioned into at most \(M\) parts such that the edges between the various parts are uniform in this sense.

We used Szemeredi's regularity lemma to prove _Roth's theorem on arithmetic progressions_, which states that every "sufficiently dense" set of natural numbers includes three elements of the form \(k\), \(k+d\), \(k+2d\). We used a variety of source materials and discovered a good many significant infelicities in the definitions and proofs. These included confusion between \(\subset\) and \(\subseteq\) (which are often synonymous in combinatorics) and between a number of variants of the lemma statement. One minor claim was flatly incorrect. To make matters worse, the significance of these issues only became clear in the application of the regularity lemma to Roth's theorem. Much time was wasted, and yet the entire formalisation project [9] took under six months.3 By a remarkable coincidence, a group based in the mathematics department at Cambridge formalised a slightly different version of Szemeredi's regularity lemma, using Lean, around the same time [7].

Footnote 3: An email from Angeliki proposing to prove Szemeredi's regularity lemma is dated 8 July 2021. The formalisation was done by 5 November; Roth, 28 December.

### Additive combinatorics

Let \(A\) and \(B\) be finite subsets of a given abelian group \((G,+)\), and define their _sumset_ as

\[A+B=\{a+b:a\in A,b\in B\}.\]

Write \(nA\) for the \(n\)-fold iterated sumset \(A+\cdots+A\). _Additive combinatorics_ concerns itself with such matters as the relationship between the cardinality of \(A+B\) and other properties of \(A\) and \(B\). Angeliki proposed this field as the natural successor to the formalisation of Szemeredi's regularity lemma because it's fairly recent (many results are less than 50 years old) and significant (providing a route to Szemeredi's theorem, a much stronger version of the Roth result mentioned above). Here's an overview of the results formalised, all within the 7-month period from April to November 2022:

* The _Plunnecke-Ruzsa inequality_ yields an upper bound on the _difference set_ \(mB-nB\).
* _Khovanskii's theorem_: for any finite \(A\subseteq G\), the cardinality of \(nA\) grows like a polynomial for all sufficiently large \(n\).
* The _Balog-Szemeredi-Gowers theorem_ is a deep result bearing on Szemeredi's theorem. The formalisation combines additive combinatorics with extremal graph theory and probability [25].
* _Kneser's theorem_ and the _Cauchy-Davenport theorem_ yield lower bounds for the size of \(A+B\).

These are highly significant results by leading mathematicians. They can all be found in Isabelle's _Archive of Formal Proofs_ (AFP).4

Footnote 4: [https://www.isa-afp.org](https://www.isa-afp.org)

### Other formalisation projects

The members chose a variety of large and small projects with a range of specific objectives:

* _Combinatorial structures_. This is the PhD project of Chelsea Edmonds, who has used Isabelle's locale system to formalise dozens of varieties of block designs, hypergraphs, graphs and the relationships among them [10]. Results proved include Fisher's inequality [11].
* _Number theory_. We have formalised several chapters of _Modular Functions and Dirichlet Series in Number Theory_, a graduate textbook by Tom M. Apostol.
* _Wetzel's problem_ is a fascinating small example, due to Erdos, where the answer to a question concerning complex analysis depends on the truth or falsity of the continuum hypothesis. The formal proof illustrates analysis and axiomatic set theory smoothly combined into a single argument [32].
* _Turan's graph theorem_ states a maximality property of Turan graphs. This was a Master's student project.

This is a partial list, especially as regards contributions from interns, students and other visitors.

### On legibility of formal proofs

A proof is an argument, based on logical reasoning from agreed assumptions, that convinces mathematicians that a claim is true. How then do we understand a computer proof? To follow the analogy strictly, a computer proof convinces computers that a claim is true. But computers, even in this age of clever chatbots, are not sentient. We need to convince mathematicians. Of the early efforts at the formalisation of mathematics, only Mizar aimed for legibility. Even pre-computer formal proofs such as _Principia Mathematica_ are unreadable. Isabelle's proof language (Isar) follows the Mizar tradition, as in the following example:

```
lemma deriv_sum_int:
  "deriv (λx. ∑i=0..n. real_of_int (c i) * x^i) x
     = (if n=0 then 0 else (∑i=0..n-1. of_int ((i+1) * c (Suc i)) * x^i))"
  (is "deriv ?f x = (if n=0 then 0 else ?g)")
proof -
  have "(?f has_real_derivative ?g) (at x)" if "n > 0"
  proof -
    have "(∑i = 0..n. i * x ^ (i - Suc 0) * (c i))
            = (∑i = 1..n. (real (i-1) + 1) * of_int (c i) * x ^ (i-1))"
      using that by (auto simp: sum.atLeast_Suc_atMost intro!: sum.cong)
    also have "... = sum ((λi. (real i + 1) * c (Suc i) * x^i) ∘ (λn. n-1)) {1..Suc (n-1)}"
      using that by simp
    also have "... = ?g"
      by (simp flip: sum.atLeast_atMost_pred_shift [where m=0])
    finally have §: "(∑a = 0..n. a * x ^ (a - Suc 0) * (c a)) = ?g" .
    show ?thesis
      by (rule derivative_eq_intros § | simp)+
  qed
  then show ?thesis
    by (force intro: DERIV_imp_deriv)
qed
```

Only a little training is required to make some sense of this. The lemma claims that the derivative of a certain summation equals a certain other summation. The proof refers to the variables ?f and ?g, which are defined by the pattern provided in the lemma statement: ?f denotes the original summation, and we prove that ?g is its derivative. Within that proof we can see summations being manipulated through changes of variable. Since we can see these details of the reasoning, we have reasons to believe that the proof is indeed correct: we do not simply have to trust the computer. Not all Isabelle proofs can be written in a structured style.
Page-long formulas often arise when trying to verify program code, and sometimes just from expanding mathematical definitions. Then we must use the traditional tactic style: long sequences of proof commands. However, most mathematical proofs that humans can write go into the structured style with ease. We have aimed for maximum legibility in all our work.

## 6 Library Search and Machine Learning Experiments

The focus of this paper is achievements in the formalisation of mathematics, but the ALEXANDRIA proposal also called for investigating supporting technologies. The name of the project refers to the library of Alexandria, and Isabelle's AFP already has nearly 4 million lines of proof text and well over 700 separate entries. How can we take advantage of all this material when developing new proofs? In May 2019, the team acquired a new postdoc: Yiannos Stathopoulos. He came with the perfect background to tackle these objectives. After much labour, he and Angeliki produced the SErAPIS search engine,5 which searches both the pre-installed Isabelle libraries and the AFP, offering a great many search strategies based on anything from simple keywords to abstract mathematical concepts [34]. It is not easy to determine the relevance or significance of a formal text to an abstract concept, but a variety of query types can be combined to explore the libraries.

Footnote 5: [https://behemoth.cl.cam.ac.uk/search/](https://behemoth.cl.cam.ac.uk/search/)

Also mentioned in the proposal was the aim of Intelligent User Support. I had imagined that common patterns of proofs could be identified in the existing libraries and offered up to users, though I had no idea how. To generate structured proofs automatically would require the ability to generate intermediate mathematical assertions. Six years of dramatic advances in machine learning have transformed our prospects. Language models can generate plausible texts given a corpus of existing texts. And as the texts we want would be inserted into Isabelle proofs, we can immediately check their correctness. An enormous amount of work is underway, particularly by a student in our group, Albert Qiaochu Jiang, working alongside Wenda Li and others. It is now clear that language models can generate formal Isabelle proof skeletons [31] and can also be useful for identifying relevant lemmas [21]. We can even envisage _automatic formalisation_ [22, 40]: translating informal proofs into formal languages, by machine. Autoformalisation is easier with a legible proof language like ours, because the formal proof can have the same overall structure as the given natural language proof; a project currently underway is to develop the Isabelle Parallel Corpus, pairing natural language and Isabelle texts.6 The next few years should see solid gains through machine learning.

Footnote 6: [https://behemoth.cl.cam.ac.uk/ipc/](https://behemoth.cl.cam.ac.uk/ipc/)

## 7 Evaluation

At the start of this paper, I listed two scientific questions: what sort of mathematics, and what sort of proofs, can be formalised? And the answer so far is, everything we attempted, and we attempted a great variety of mathematical topics: number theory, combinatorics, analysis, set theory. The main difficulties have been errors and omissions in proofs. A vignette illustrates this point. Chelsea was formalising a probabilistic argument where the authors wrote "these probabilities are clearly independent, and therefore the joint probability is obtained by multiplying them."
The problem is that this multiplication law is the mathematical definition of independent probabilities, which the authors had somehow confused with the real-world concept of unconnected random events. Frequently we have found proofs that are almost right: they need a bit of adjustment, but getting everything to fit takes effort. Effort remains the main obstacle to the use of verification tools by mathematicians. Obvious claims are often tiresome to prove, which is both discouraging and a waste of an expert's time. But we might already advocate an approach of formalising the definitions and the proofs, stating the obvious claims without proofs (using the keyword **sorry**). Even for this idea to be feasible, much more library material is needed, covering at least all the definitions a mathematician might expect to have available.

Another key scientific question is the role of dependent types. People in the type theory world seem to share the conviction that dependent types are necessary to formalise nontrivial mathematics. But in reality it seems to be Lean users who repeatedly fall foul of _intensional equality_: that \(i=j\) does not guarantee that \(T(i)\) is the same type as \(T(j)\). Falling foul of this can be fatal: the first definition of schemes had to be discarded for this reason. Intensional equality is adopted by almost all dependent type theories, including Coq and Agda: without it, type checking becomes undecidable. But with it, type dependence does not respect equality.

The main limitation of simple type theory is that axiomatic type classes are less powerful than they otherwise would be. Isabelle/HOL has type classes for groups, rings, topological spaces among much else, but they are not useful for defining the theories of groups, rings or topological spaces. Rather they allow us, for example, to define the quaternions, prove a dozen or so laws and immediately inherit entire libraries of algebraic and topological properties. Abstract groups, rings, etc., need to be declared with an explicit carrier set (logically, the same thing as a predicate) rather than using the corresponding type class. It's a small price to pay for a working equality relation.

Having said this, one must acknowledge the enormous progress made by the Lean community over roughly the same period, 2017-now. Lean users, inspired by Buzzard, have taken on hugely ambitious tasks. The most striking is probably the Liquid Tensor Experiment [6]: brand-new mathematics, by a Fields medallist (Peter Scholze) who was concerned about its correctness, formalised over about a year and a half. This one accomplishment, more than anything else, demonstrates that formalisation can already offer real value to professional mathematicians.

We have from time to time looked at type issues directly. De Vilhena [36] describes an interesting technique for defining the \(n\)-ary direct product of a finite list of groups, iterating the binary direct product; his trick to avoid type issues involves creating an isomorphism to a suitable type. However, here one could avoid type issues (and handle the infinite case) by defining the direct product of a family in its own right as opposed to piggybacking on the binary product. Anthony Bordg has done a lot of work on the right way to express mathematics without dependent types [1, 2]. Ongoing work, still unpublished, is exploring the potential of the _types-to-sets framework_ [27] to allow a smooth transition between type-based and carrier-set based formalisations.
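To make the intensional-equality point concrete, here is a minimal Lean 4 sketch (our illustration, using a toy length-indexed type rather than any library type): even with a proof that \(i=j\), a value of type `Tuple α i` must be transported explicitly to obtain one of type `Tuple α j`.

```lean
-- A toy length-indexed type: Tuple α n is an n-fold product of α.
def Tuple (α : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => α × Tuple α n

-- `Tuple α i` and `Tuple α j` are distinct types even when h : i = j;
-- the term must be transported along h (here with `▸`) to change type.
example (α : Type) (i j : Nat) (h : i = j) (t : Tuple α i) : Tuple α j :=
  h ▸ t
```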
One can also compare formalisms in terms of their logical strength. Higher-order logic is somewhat weaker than Zermelo set theory, which is much weaker than ZFC, which in turn is much weaker than Tarski-Grothendieck set theory: \[\mathrm{HOL}<\mathrm{Z}\ll\mathrm{ZF}\ll\mathrm{TG}\] The Calculus of Inductive Constructions, which is the formalism of Lean and Coq, is roughly equivalent to TG. The advantage of a weaker formalism is better automation. The power of ZF set theory, when it is required, can be obtained simply by loading the corresponding library from the AFP [32]. It's highly likely that a similar library could be created for Tarski-Grothendieck. And yet, remarkably, everything we have tried to formalise, unless it refers explicitly to ZF, sits comfortably within HOL alone. Since HOL is essentially the formalism of _Principia Mathematica_[39], we can conclude that Whitehead and Russell were right all along. The AFP entries contributed by the project authors are too many to list, but they can be consulted via the on-line author indices: * Anthony Bordg [https://www.isa-afp.org/authors/bordg/](https://www.isa-afp.org/authors/bordg/) * Chelsea Edmonds [https://www.isa-afp.org/authors/edmonds/](https://www.isa-afp.org/authors/edmonds/) * Angeliki Koutsoukou-Argyraki [https://www.isa-afp.org/authors/argyraki/](https://www.isa-afp.org/authors/argyraki/) * Wenda Li [https://www.isa-afp.org/authors/li/](https://www.isa-afp.org/authors/li/) * Lawrence C. Paulson [https://www.isa-afp.org/authors/paulson/](https://www.isa-afp.org/authors/paulson/) ## 8 Conclusions We set out to tackle serious mathematics with a combination of hope and trepidation. We were able to formalise everything we set out to formalise and were never forced to discard a development part way through. As Angeliki has pointed out, "we have formalised results by two Fields medalists (Roth and Gowers), an Abel prize winner (Szemeredi) and of course Erdos too!" We've also seen impressive advances in search and language models to assist users in proof development. Although the effort required to formalise mathematical articles remains high, we can confidently predict that formalisation will be playing a significant role in mathematical research in the next few years. #### Acknowledgements This work was supported by the ERC Advanced Grant ALEXANDRIA (Project GA 742178). Chelsea Edmonds, Angeliki Koutsoukou-Argyraki and Wenda Li provided numerous helpful comments and suggestions. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
2303.07093
Weakly Unsupervised Domain Adaptation for Vestibular Schwannoma Segmentation
Vestibular schwannoma (VS) is a non-cancerous tumor located next to the ear that can cause hearing loss. Most brain MRI images acquired from patients are contrast-enhanced T1 (ceT1), with a growing interest in high-resolution T2 images (hrT2) to replace ceT1, which involves the use of a contrast agent. As hrT2 images are currently scarce, it is less likely to train robust machine learning models to segment VS or other brain structures. In this work, we propose a weakly supervised machine learning approach that learns from only ceT1 scans and adapts to segment two structures from hrT2 scans: the VS and the cochlea from the crossMoDA dataset. Our model 1) generates fake hrT2 scans from ceT1 images and segmentation masks, 2) is trained using the fake hrT2 scans, 3) predicts the augmented real hrT2 scans, and 4) is retrained again using both the fake and real hrT2. The final result of this model has been computed on an unseen testing dataset provided by the 2022 crossMoDA challenge organizers. The mean dice score and average symmetric surface distance (ASSD) are 0.78 and 0.46, respectively. The predicted segmentation masks achieved a dice score of 0.83 and an ASSD of 0.56 on the VS, and a dice score of 0.74 and an ASSD of 0.35 on the cochleas.
Shahad Hardan, Hussain Alasmawi, Xiangjian Hou, Mohammad Yaqub
2023-03-13T13:23:57Z
http://arxiv.org/abs/2303.07093v1
# Weakly Unsupervised Domain Adaptation for Vestibular Schwannoma Segmentation

###### Abstract

Vestibular schwannoma (VS) is a non-cancerous tumor located next to the ear that can cause hearing loss. Most brain MRI images acquired from patients are contrast-enhanced T1 (ceT1), with a growing interest in high-resolution T2 images (hrT2) to replace ceT1, which involves the use of a contrast agent. As hrT2 images are currently scarce, it is less likely to train robust machine learning models to segment VS or other brain structures. In this work, we propose a weakly supervised machine learning approach that learns from only ceT1 scans and adapts to segment two structures from hrT2 scans: the VS and the cochlea from the crossMoDA dataset. Our model 1) generates fake hrT2 scans from ceT1 images and segmentation masks, 2) is trained using the fake hrT2 scans, 3) predicts the augmented real hrT2 scans, and 4) is retrained again using both the fake and real hrT2. The final result of this model has been computed on an unseen testing dataset provided by the 2022 crossMoDA challenge organizers. The mean dice score and average symmetric surface distance (ASSD) are 0.78 and 0.46, respectively. The predicted segmentation masks achieved a dice score of 0.83 and an ASSD of 0.56 on the VS, and a dice score of 0.74 and an ASSD of 0.35 on the cochleas. Among the 2022 crossMoDA challenge participants, our method was ranked the \(8^{th}\).

Keywords: Domain adaptation, Unsupervised segmentation, Weak supervision, Generative adversarial network, Vestibular schwannoma

## 1 Introduction

Deep learning (DL) is becoming more popular due to its practicality and its ability to outperform experts in various applications, such as in the medical field. However, unlike humans, who can adapt and learn from new experiences by building on existing ones, DL models are sensitive to the settings they are trained in and do not implicitly adapt to unseen settings. For instance, changes due to different scanners, image acquisition protocols, and medical centers are among the forms of variability [8]. Domain Adaptation (DA) is a field in machine learning (ML) that deals with the distribution changes between different data. The cross-Modality Domain Adaptation (crossMoDA) challenge [4] introduced the first multi-class benchmark for unsupervised cross-modality DA. The challenge consists of two tasks: the segmentation of two structures in the hrT2 scans, and the classification of hrT2 images with vestibular schwannoma (VS) according to the Koos grade. Our work is focused on the first task, which involves using contrast-enhanced T1 (ceT1) MRI as a source domain and high-resolution T2 (hrT2) MRI as a target domain to segment two objects: the VS and the cochleas. Vestibular schwannoma is a benign tumor in the brain that, if it grows, affects the hearing nerves. As an intervention, open surgery or radiosurgery is performed to cure it. These operations require information about the volume and the exact location of the tumor [4]. Therefore, accurate segmentation of the relevant anatomy helps plan the operation properly and consequently increases the chance of patients' recovery.

## 2 Related Work

Several studies tackled domain adaptation in the medical imaging field. However, they are mostly private, small, and aim for binary image segmentation, unlike the crossMoDA dataset [4]. The challenge started in 2021, providing a total of 242 training images from the two domains (ceT1 and hrT2).
In 2021, the winning model achieved a dice score of 0.857 for the VS and 0.844 for the cochleas [12]. They applied CycleGAN for domain translation, then used the fake hrT2 images to train the nnUNet. After that, they inferred pseudo-labels of the real hrT2 scans, which were used to retrain the model. The second-ranked 2021 model used nnUNet while applying the approaches of pixel alignment and self-training [2]. They generated fake hrT2 images using NiceGAN and achieved a mean dice score of 0.839. Finally, the third model used the Contrastive Unpaired Translation (CUT) method to generate fake hrT2 images, with a 3D nnUNet for segmentation, to attain a mean dice score of 0.829 [1]. Their approach mainly depends on doubling the number of images by generating augmented images with varying tumor intensities.

Aside from the 2021 crossMoDA challenge, several approaches were followed for medical imaging domain adaptation, including weak supervision. In [3], the authors applied a weak supervision methodology based on having partial annotations derived from scribbles on the target domain. They propose a technique that combines structured learning and co-segmentation to segment VS on T2 scans (target domain) from T1 scans (source domain). They achieved a dice score of 0.83 on the target domain.

## 3 Methods

Our work consists of two public frameworks: Contrastive Unpaired Translation (CUT) [9] for transferring ceT1 to hrT2, and nnUNet [6] for segmentation. All the following work is implemented using PyTorch 1.11 and with the same mathematical formulation proposed in the CUT and nnUNet papers.

### Data

The dataset contains 210 ceT1 MRI scans, 210 hrT2 MRI scans for training, and 64 hrT2 MRI scans for validation [11]. This dataset is an addition to the publicly available Vestibular-Schwannoma-SEG dataset, a part of The Cancer Imaging Archive (TCIA), that was manually segmented [4]. The ceT1 and hrT2 scans are unpaired, and the segmentation masks are only provided for the ceT1 scans. The tumor is on one side of the brain; thus, only one of the cochleas would experience the pressure caused by it. Regardless, the segmentation masks include both cochleas, as it was found that segmenting the two increases the performance of the models [4]. Image acquisition happened at two institutes in two locations: London and Tilburg. The testing set is made of hrT2 scans and is not publicly available. Evaluation on the testing set is made by the challenge organizers using the participant's submitted Docker container.

### Pre-processing

The provided 3D scans differ in size and voxel spacing. Thus, we resampled the images to an isotropic resolution of 1 mm\({}^{3}\). During resampling, the interpolation techniques used are a third-order b-spline for the images and nearest neighbor for the labels. Depending on its dimensions, each image was either cropped or padded in the \(xy\)-plane to 256 \(\times\) 256, with a varying number of slices per image. The scans were normalized on a 3D basis, which led to better results during the domain translation phase. Previous techniques included normalization per slice. Since the slices have different ranges of pixel intensities, 2D normalization resulted in inconsistencies that decreased the quality of the generated hrT2 images during the domain translation phase. We note that domain adaptation processes that rely on generative algorithms are sensitive to the pre-processing techniques, as these results are essentially the input to the segmentation task.
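A minimal sketch of this resampling step, assuming SimpleITK and illustrative file names (the authors do not state which library they used):

```python
import SimpleITK as sitk

def resample_isotropic(img, spacing=(1.0, 1.0, 1.0), is_label=False):
    """Resample a 3D volume to isotropic voxel spacing: B-spline
    interpolation for images, nearest neighbour for label masks."""
    in_spacing, in_size = img.GetSpacing(), img.GetSize()
    out_size = [int(round(sz * sp / nsp))
                for sz, sp, nsp in zip(in_size, in_spacing, spacing)]
    return sitk.Resample(
        img, out_size,
        sitk.Transform(),                                   # identity
        sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline,
        img.GetOrigin(), spacing, img.GetDirection(),
        0.0, img.GetPixelID())

# Hypothetical file names, for illustration only.
image = resample_isotropic(sitk.ReadImage("ceT1.nii.gz"))
label = resample_isotropic(sitk.ReadImage("ceT1_seg.nii.gz"), is_label=True)
```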
### Domain Translation

Our domain translation work is mainly based on the CUT [9] framework (based on CycleGAN without the bijection requirement) to transfer ceT1 images to hrT2 images. In the original CUT paper, ResNet was used as the generator network and a multi-layer perceptron as the discriminator network. In our work, we used a StyleGAN2 backbone [7] instead, as it showed more promising results in the literature. Also, the authors trained the CUT for \(N\) epochs with a fixed learning rate, followed by another \(N\) epochs where the learning rate linearly decays to zero. Similarly, we applied this approach during our domain translation phase.

We trained the CUT model in two stages to speed up the training and avoid the discriminator overshadowing the generator, which could make it generate poorly representative hrT2 images. In the first stage, we set a batch size of 32 and a learning rate of 0.001 for the first 50 epochs. Then, for the second 50 epochs, the learning rate linearly decayed to reach 0 at the final epoch. We used the Frechet Inception Distance (FID) [10] as a metric to evaluate the proximity of the generated images to the target domain. In the second stage, we re-initialized the discriminator with random weights. This makes it more challenging for the discriminator to distinguish the views, allowing the generator to learn a better representation of hrT2 images. If the discriminator were not re-initialized, its effect would dominate, and the generator would produce low-quality hrT2 scans. We trained the CUT for 5 epochs with a learning rate of 0.001, followed by another 5 epochs where the learning rate experienced a linear decay to finally reach 0. The method is summarized in Table 1. In addition, Figure 1 shows 2D slices of a scan after passing through stage 1 and clarifies how it gets clearer after stage 2. We used these pseudo hrT2 images as the training dataset of the segmentation network.

| Stage | BS | LR | Generator network baseline | Discriminator network baseline | Epochs | Epochs to decay |
|---|---|---|---|---|---|---|
| 1 | 32 | 0.001 | StyleGAN2 instead of ResNet9 | StyleGAN instead of MLP | 50 | 50 |
| 2 | 1 | 0.0002 | StyleGAN2 instead of ResNet9 | StyleGAN instead of MLP | 5 | 5 |

Table 1: The methodology followed during the domain translation phase. The table shows the settings of the two training stages of the CUT. BS refers to batch size, while LR refers to the learning rate.

Figure 1: Overview of our CUT process. Dividing the training of CUT into two parts can speed up the training and avoid the discriminator being too strong. Keeping the batch size as 1 yields the same result.

### Segmentation

For the segmentation task, we used the nnUNet framework with the 3D full-resolution U-Net configuration. Following the approach of [1], we generated the augmented fake hrT2 images by reducing the tumor signal by 50%, naming them the AT dataset. By that, we had a total of 420 training images.

Figure 2: Overview of the model: Stage 1 includes generating the fake hrT2 images, using them to create the tumor augmented version, and training using the fake hrT2 images and the tumor augmented version. Stage 2 involves applying different augmentations to the real hrT2 images and producing their pseudo-labels from the model at stage 1.
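A minimal sketch of the tumor-signal reduction used to build the AT dataset, assuming NumPy arrays for the image volume and its binary VS label (variable names and the toy tumor region are ours, for illustration only):

```python
import numpy as np

def reduce_tumor_signal(image, vs_mask, factor=0.5):
    """Return a copy of `image` whose voxels inside the VS mask are
    scaled by `factor` (0.5 reduces the tumor signal by 50%)."""
    augmented = image.astype(np.float32).copy()
    augmented[vs_mask.astype(bool)] *= factor
    return augmented

# Example: a fake hrT2 volume and a hypothetical VS label.
fake_hrT2 = np.random.rand(40, 256, 256).astype(np.float32)
vs_mask = np.zeros_like(fake_hrT2, dtype=np.uint8)
vs_mask[18:22, 100:130, 90:120] = 1    # toy tumor region

at_image = reduce_tumor_signal(fake_hrT2, vs_mask)   # the "AT" version
```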
We applied five-fold cross-validation using the default nnUNetTrainerV2, which combines two losses, cross entropy and dice:

\[\mathcal{L}_{dice}=\frac{-2}{C}\sum_{c\in C}\frac{\sum_{i}p_{i}^{c}g_{i}^{c}+\epsilon}{\sum_{i}p_{i}^{c}+\sum_{i}g_{i}^{c}+\epsilon} \tag{1}\]

\[\mathcal{L}_{CE}=\sum_{i}-g_{i}\log(p_{i})-(1-g_{i})\log(1-p_{i}) \tag{2}\]

where \(i\) refers to the voxel, \(p\) is the predicted mask, \(g\) is the ground truth, \(C\) is the number of classes, and \(\epsilon\) is the smoothing parameter. In the default setting, \(\epsilon\) is set to 1. We also used the nnUNet variant that applies the non-smooth dice loss instead of the regular one, having \(\epsilon=0\) in Equation 1. The non-smooth dice loss proved its effectiveness when the structure of interest is small relative to the scan size, such as the cochlea.

We still noticed a gap between the performance on the training and validation datasets. Therefore, to improve the generalizability of the model, we used a nnUNet variant that applies multiple augmentation techniques to the training data. Since [12] showed that generating pseudo-labels of real hrT2 images to train on increases the dice score, we followed a similar approach. Thus, our models were then trained on a total of 630 images.

For all the nnUNet variants used in this work, instance normalization was used. Following [5], we applied squeeze-excitation (SE) normalization, which enhances channel interdependencies. The SE blocks include a global average pooling layer, two fully connected (FC) layers, and activation functions. The aim of the pooling layer is to squeeze the channel information into one value, while the FC layers learn the non-linear dependencies between the channels. The model involved a reduction ratio of 2 and a ReLU activation function. Other than the normalization approach, all settings were kept the same as in the default nnUNet.

The final model relies on augmentations as a weak supervision approach, as shown in Figure 2. We applied eight types of augmentations to the real hrT2 dataset and predicted their pseudo-labels. The augmentation techniques are random rotation of up to 20 degrees, adding noise, scaling and translating, changing the contrast, and flipping on the three axes. As a result, we had 2100 images, the majority of which were real hrT2 images. This approach allows our model to better learn the features specific to the hrT2 modality and makes it less prone to the discrepancy arising from the GAN used. All the nnUNet variants apply deep supervision for the loss functions, which helps with gradient vanishing problems in deep networks. Lastly, we ensembled two models: the augmentations variant with the combined loss, and the augmentations variant with the non-smooth dice loss. When predicting on an unseen case, we post-process the mask by keeping the largest connected component of the VS, because the model tends to segment it on both sides of the brain while it is only located on one side.
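Before turning to the results, here is a minimal PyTorch sketch of the dice loss of Equation 1 (our illustration, not nnU-Net's exact implementation); setting `eps=1.0` gives the default smooth variant and `eps=0.0` the non-smooth one:

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target_onehot, eps=1.0):
    """Eq. (1): -2/C * sum_c (sum_i p*g + eps) / (sum_i p + sum_i g + eps).
    probs and target_onehot have shape (B, C, X, Y, Z)."""
    dims = (0, 2, 3, 4)                       # sum over batch and space
    intersect = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    return -2.0 * ((intersect + eps) / (denom + eps)).mean()

probs = torch.softmax(torch.randn(2, 3, 8, 8, 8), dim=1)
labels = torch.randint(0, 3, (2, 8, 8, 8))
onehot = F.one_hot(labels, 3).permute(0, 4, 1, 2, 3).float()
print(dice_loss(probs, onehot), dice_loss(probs, onehot, eps=0.0))
```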
## 4 Results

As for the CUT model, the first stage achieved an FID of 70.37, while the second stage improved the model, reaching an FID of 51.3. Regarding the nnUNet implementation, Table 2 presents some of the results acquired from the different settings on the validation dataset. The baseline model included the fake hrT2 images with their tumor-augmented version. The mean dice score achieved is 0.73, with low performance on segmenting the cochleas. Then, two models were trained after producing the pseudo-labels of the original hrT2 images, giving almost identical results: a mean dice score of 0.76. We then experimented by replacing the instance normalization in the default nnUNet with squeeze-excitation normalization. However, obtaining a mean dice score of 0.73, we noticed no significant improvement using SE normalization. Thus, we used instance normalization in the rest of the experiments. After that, we experimented with various augmentation techniques on the training data. Table 2 describes the performance of the augmentations-variant models. Tested on the validation dataset, our analysis concludes that the two models with augmentations during training gave the best results. As a consequence, we produced augmented real hrT2 images with their pseudo-labels and ran the final ensemble model described in Section 3.4. The ensemble model achieved a mean dice score of 0.77 on the validation set, with a considerable improvement in the ASSD of the cochleas, reaching 0.37. As we can see from Table 2, this model has a lower variance compared to the other proposed models, which could be due to its learning to focus on the regions that are shared with the augmented pseudo-labels.

Since the testing dataset is not made public, our performance metrics were obtained by the challenge organizers. Our model achieved a mean dice score of 0.78 and an ASSD of 0.46. For the VS, the dice score is 0.83 and the ASSD is 0.56. The cochleas have a dice score of 0.74 and an ASSD of 0.35.

## 5 Qualitative Analysis

The best and worst segmentation results of our final model on the validation set are shown in Figure 3. The final model is an ensemble of two networks: nnUNet with augmentations and non-smooth dice loss, and nnUNet with augmentations and combined loss. It is still possible to discuss a few points even though we do not have the ground truth masks at hand. As observed in Figure 3a, the model missed parts of the cochleas during segmentation and over-segmented the background. On the other hand, in the best-case scenario for the cochlea in Figure 3c, the model over-segments the cochlea region. Based on the two VS cases in Figures 3b and 3d, we can see that the model can segment clear tumors well, while dark tumors are more difficult to segment.

## 6 Discussion

We developed a deep learning algorithm that follows a weak supervision approach to segment two brain structures in the hrT2 modality. The model uses a GAN to generate fake hrT2 images from the ceT1 scans; an nnUNet is then trained on these fake hrT2 images. After that, the fake hrT2 images are augmented and the nnUNet is used to predict the pseudo-labels of the augmented images. Finally, the model is retrained using the real hrT2 images and the latter augmented hrT2 images as the training dataset, and is validated on the validation dataset provided by the challenge organizers. The applied experiments vary in the loss function, datasets, and augmentations. We observed that the implementation of an accurate GAN model plays a huge role in the success of the unsupervised DA model. Given that the model is predicting an unseen modality, the more generalizable it is, the better. Thus, even though the augmentations increase the size of the training data, they offer the model an optimal chance to diversify the features learned from the target domain. We noticed that the model is more consistent in predicting the cochlea, but less accurate, relative to segmenting the VS.
This may be because the cochlea is small in relation to the entire image but has a fixed location across patients' scans. The VS prediction, however, is not consistent, because the tumor intensity, location, and size vary by patient. Moreover, the ASSD metric improved significantly in the final model compared to the improvement noticed in the dice score, especially for the cochleas. We attribute this to the different augmentations applied during training, which allowed a better study of the overall shape of the structures in different orientations.

| Model | Dataset | Epoch | Score ↑ | VS Dice ↑ | VS ASSD ↓ | Cochlea Dice ↑ | Cochlea ASSD ↓ |
|---|---|---|---|---|---|---|---|
| nnUNetTrainerV2 | fake hrT2 + AT | 300 | 0.73±0.08 | 0.79±0.13 | 0.73±0.43 | 0.68±0.07 | 0.63±1.80 |
| nnUNetTrainerV2 | fake hrT2 + AT + pseudo-labels | 800 | 0.76±0.07 | 0.81±0.11 | 1.14±1.71 | 0.71±0.06 | 0.60±1.80 |
| nnUNetTrainerV2 | fake hrT2 + AT + non-smooth dice | 800 | 0.76±0.07 | 0.81±0.11 | 1.28±1.84 | 0.71±0.06 | 0.59±1.80 |
| nnUNetTrainerV2 | fake hrT2 + AT | 750 | 0.73±0.09 | 0.76±0.15 | 1.74±2.50 | 0.69±0.07 | 0.62±1.79 |
| nnUNetTrainerV2 | fake hrT2 + AT | 500 | 0.75±0.07 | 0.80±0.11 | 0.69±0.38 | 0.70±0.06 | 0.62±1.80 |
| nnUNetTrainerV2 | fake hrT2 + AT | 500 | 0.75±0.08 | 0.79±0.13 | 0.71±0.41 | 0.71±0.06 | 0.61±1.80 |
| Ensemble of (nnUNetTrainerV2 aug var + non-smooth dice) & (nnUNetTrainerV2 aug var + combined loss) | fake hrT2 + AT + pseudo-labels | 1000 | **0.77±0.06** | **0.82±0.09** | **0.61±0.27** | **0.72±0.06** | **0.37±0.18** |

Table 2: Results from different nnUNet variants. The best model is indicated in bold and represents an ensemble of two experiments: nnUNet with augmentations and non-smooth dice loss, and nnUNet with augmentations and combined loss. "Fake hrT2" are generated from the CUT, "AT" represents images where the tumor signal is reduced by 50%, "aug var" describes the augmentations variant, and "pseudo-labels" are inferences of the real hrT2 scans.

The ASSD considers the distance between the boundaries of the ground truth and the predicted mask, which makes it more sensitive to outliers in the prediction. Thus, we can conclude that the improvement in ASSD indicates that the final model is more robust than the previous models.
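For reference, a minimal NumPy/SciPy sketch of the ASSD computation on binary masks (our illustration, not the challenge's official evaluation code):

```python
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from the surface voxels of binary mask `a` to the
    surface of binary mask `b`, in physical units."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)      # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)      # boundary voxels of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary masks."""
    d_pg = surface_distances(pred, gt, spacing)
    d_gp = surface_distances(gt, pred, spacing)
    return (d_pg.sum() + d_gp.sum()) / (len(d_pg) + len(d_gp))
```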
## 7 Conclusion

The application of domain adaptation enables ML models to aid in cases similar to the VS disease, where there is a growing interest in using hrT2 images but not enough data. Both the settings of the GAN used and the segmentation network impact the efficiency, especially for small structures such as the cochleas. In some cases, weak supervision approaches may be computationally expensive. Regardless, they enable the model to learn different properties and views of the target domain scans. Thus, they maximize the learning of features and increase generalizability. Since the augmented hrT2 scans are predicted from the initial nnUNet network, improving it would significantly improve the quality of the prediction. Therefore, further work includes higher-performing segmentation models, especially on the cochleas, to be combined with our weak supervision approach.

Figure 3: The best and the worst segmentation according to the dice score for the cochlea (in green) and the VS (in red).
2302.08089
Switch Operators for the Six-Vertex Model
In this paper, we introduce and analyze a new switch operator for the six-vertex model. This operator, derived from the Yang-Baxter equation, allows us to express the partition function with arbitrary boundaries in terms of a base case with domain wall boundary conditions. As an application, we derive explicit formulas for the factorial Schur functions and their generalizations. Our results provide new insights into the relationship between boundary conditions and partition functions in the six-vertex model.
Evelyn Choi, Jadon Geathers, Slava Naprienko
2023-02-16T05:01:07Z
http://arxiv.org/abs/2302.08089v3
# Switch operators for the six-vertex model

###### Abstract

In this paper, we introduce and analyze a new _switch operator_ for the six-vertex model. This operator, derived from the Yang-Baxter equation, allows us to express the partition function with arbitrary boundaries in terms of a base case with domain wall boundary conditions. As an application, we derive explicit formulas for the factorial Schur functions and their generalizations. Our results provide new insights into the relationship between boundary conditions and partition functions in the six-vertex model.

## 1. Introduction

The six-vertex model is a widely studied statistical mechanics model due to its connections to various areas of mathematics, including representation theory, combinatorics, and integrable systems. It was first introduced by Pauling in [12]. In this paper, we view the six-vertex model as a combinatorial system of paths on a lattice model under prescribed boundary conditions. Each combination of paths forms an admissible state of the model, in which we assign vertex weights to the vertices in each state. The partition function, which is a weighted sum over all possible configurations, encodes the statistical properties of the model.

A recurrent problem in the combinatorics of integrable lattice models is to find appropriate Boltzmann weights that lead to meaningful and useful values in the partition function. If these weights satisfy the Yang-Baxter equation (see [1]), then the partition function satisfies functional equations and can be both computed and identified with special functions from the literature. For multiple instances, see [1, 2, 3, 4, 5, 6, 7, 8] and references therein.

One of the key challenges in studying the six-vertex model is understanding the dependence of the partition function on the boundary conditions, which are the states assigned to the vertices on the boundary of the lattice. Functional relations for the partition functions with different boundary conditions exist by the Yang-Baxter equation. In this paper, we introduce a novel method for analyzing this dependence: the _switch operators_. These operators, derived from the Yang-Baxter equation, allow us to express the partition function with arbitrary boundary conditions in terms of a base case with simple domain wall boundary conditions. The main result is the following:

**Theorem**.: _The partition function \(Z_{\alpha,\beta}\) with right boundary \(\alpha\) and top boundary \(\beta\) is expressed in terms of the base case \(Z_{\delta,\delta}\) as follows:_

\[Z_{\alpha,\beta}=\partial_{\alpha}^{\mathrm{H}}\partial_{\beta}^{\mathrm{V}}\left(Z_{\delta,\delta}\right),\]

_where \(\partial_{\alpha}^{\mathrm{H}}\) and \(\partial_{\beta}^{\mathrm{V}}\) are the switch operators._

As an application, we compute the explicit expression for the factorial Schur functions and their generalizations. We also prove that these functions are asymptotically symmetric in column parameters. These results extend and generalize those from Section 7 of [1].

**Acknowledgements.** This paper was created through the 2022 Stanford Undergraduate Research in Mathematics (SURIM) program. We would like to thank everyone involved in organizing SURIM, and in particular we thank Lernik Asserian for directing the program.

## 2. Six-vertex model

In this section, we review the six-vertex model from statistical mechanics and introduce notation. For a treatment from the point of view of statistical mechanics, see [1].
Here, we represent the model as a rectangular lattice with paths traveling from northwest to southeast. That is, paths enter the model from the top or the left and leave the model on the right and bottom. Paths may intersect, but their movement is restricted to the right and downward directions. Thus, there are only six admissible states for a vertex, illustrated in Figure 1. Defining a six-vertex model requires specifying the size of the grid, as well as the boundary conditions (at what positions the paths enter or leave the grid). Given such a model, we have a system of admissible configurations of paths. Each configuration is called a _state_, and each state consists only of the six mentioned vertices. The weight of a given type of vertex at a certain position in the lattice is represented using the following weight functions:

\[a_{1},a_{2},b_{1},b_{2},c_{1},c_{2}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{C}.\]

For each state \(s\), its weight \(w(s)\) is given by computing the product of weights over all vertices in the state. Then the _partition function_ \(Z(\mathfrak{S})\) of a six-vertex model \(\mathfrak{S}\) is the sum of the weights of all admissible states:

\[Z(\mathfrak{S})=\sum_{\text{states}}\;\prod_{\text{vertices}}w(\text{vertex}).\]

More generally, we consider the six-vertex model with row labels \(I=(I_{1},I_{2},\ldots,I_{n})\), column labels \(J=(J_{1},J_{2},\ldots,J_{m})\), and boundary conditions \(\beta=(\beta^{l},\beta^{t},\beta^{r},\beta^{b})\) for the left, top, right, and bottom boundaries, respectively. We denote such a model by

\[\mathfrak{S}(I;J;\beta)=\mathfrak{S}(I_{1},\ldots,I_{n};J_{1},\ldots,J_{m};\beta^{l},\beta^{t},\beta^{r},\beta^{b}).\]

For brevity, we denote the partition function of such a model by

\[Z(I;J;\beta^{l},\beta^{t},\beta^{r},\beta^{b})=Z(\mathfrak{S}(I;J;\beta^{l},\beta^{t},\beta^{r},\beta^{b})).\]

Lastly, we use the notation \([n]=(1,2,\ldots,n)\).

Figure 1. Six admissible states named \(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2}\) following [1]

**Definition 2.1**.: [DWBC] The six-vertex model with _domain wall boundary conditions_ (abbreviated as DWBC) is defined as \(\mathfrak{S}_{n}^{\text{DWBC}}=\mathfrak{S}([n];[n];\emptyset,[n],[n],\emptyset)\). This is an \(n\times n\) lattice with the following boundaries: paths enter from the top edge of each column and exit from the right edge of each row. The partition function for this model is denoted as \(Z_{n}^{\text{DWBC}}=Z(\mathfrak{S}_{n}^{\text{DWBC}})\). An illustration of the DWBC model is shown in Figure 2.

**Example 2.2**.: Let \(n=3\). Then there are seven admissible configurations in \(\mathfrak{S}_{n}^{\text{DWBC}}\). See Figure 3 for the complete list of admissible states with their Boltzmann weights. The partition function \(Z_{3}^{\text{DWBC}}\) then is the sum of all the weights of the configurations. Note that we write the product of weights aligned with the positions where they occur in the model for convenience. With the seven admissible configurations of the \(3\times 3\) lattice and all the defined vertex weights, we can then compute the partition function by summing all seven products.
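As a sanity check on this example, the following brute-force sketch (our illustration, not the authors' code) enumerates the admissible states of the DWBC model by assigning 0/1 occupancies to all internal edges and keeping exactly those assignments in which every vertex conserves paths (left + top = right + bottom, which admits precisely the six vertex types); it then evaluates the partition function for an arbitrary weight function.

```python
from itertools import product
from math import prod

def dwbc_states(n):
    """Admissible states of the n x n six-vertex model with DWBC:
    paths enter at the top of every column and exit at the right of
    every row.  h[i][j] is the horizontal edge to the left of vertex
    (i, j); v[i][j] is the vertical edge above vertex (i, j)."""
    for bits in product((0, 1), repeat=2 * n * (n - 1)):
        it = iter(bits)
        h = [[0] + [next(it) for _ in range(n - 1)] + [1] for _ in range(n)]
        v = ([[1] * n]
             + [[next(it) for _ in range(n)] for _ in range(n - 1)]
             + [[0] * n])
        # Path conservation at every vertex: left + top == right + bottom.
        if all(h[i][j] + v[i][j] == h[i][j + 1] + v[i + 1][j]
               for i in range(n) for j in range(n)):
            yield h, v

def partition_function(n, weight):
    """Z = sum over states of the product of vertex weights, where
    weight(i, j, l, t, r, b) is the weight of the vertex in row i,
    column j with edge occupancies (left, top, right, bottom)."""
    return sum(
        prod(weight(i, j, h[i][j], v[i][j], h[i][j + 1], v[i + 1][j])
             for i in range(n) for j in range(n))
        for h, v in dwbc_states(n))

# With all weights equal to 1, Z simply counts the admissible states:
for n in (1, 2, 3):
    print(n, partition_function(n, lambda i, j, l, t, r, b: 1))
# prints 1, 2, 7 -- seven states for n = 3, as in Example 2.2
```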
**Definition 2.3** (Yang-Baxter).: We say that a six-vertex model is _integrable_ if, given its weight functions \(a_{1},\dots,c_{2}\), there exist new weight functions

\[a_{1}^{\mathrm{H}},a_{2}^{\mathrm{H}},b_{1}^{\mathrm{H}},b_{2}^{\mathrm{H}},c_{1}^{\mathrm{H}},c_{2}^{\mathrm{H}}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{C},\]
\[a_{1}^{\mathrm{V}},a_{2}^{\mathrm{V}},b_{1}^{\mathrm{V}},b_{2}^{\mathrm{V}},c_{1}^{\mathrm{V}},c_{2}^{\mathrm{V}}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{C},\]

such that the _Yang-Baxter equation_ holds, i.e., we have the equality of the partition functions in the corresponding graphical equation. In the graphical equation, \(i_{1},j_{1},k_{1},i_{2},j_{2},k_{2}\in\{0,1\}\) are fixed, and \(i_{3},j_{3},k_{3}\in\{0,1\}\) iterate through all indices. An index of \(1\) indicates the existence of a path, while \(0\) indicates its absence.

Figure 3. Admissible states with Boltzmann weights

Similarly, we define vertically integrable weights using the analogous graphical equation, where \(i_{1},j_{1},k_{1},i_{2},j_{2},k_{2}\in\{0,1\}\) are again fixed, and \(i_{3},j_{3},k_{3}\in\{0,1\}\) iterate through all their possible values. The introduction of cross vertices to facilitate the Yang-Baxter equation, as illustrated in the figures, gives rise to horizontal and vertical cross vertices. We provide the six allowable horizontal cross vertices of the six-vertex model in Figure 4. Similarly, the six allowable vertical cross vertices of the six-vertex model are shown in Figure 5.

Figure 4. Horizontal cross vertices

Figure 5. Vertical cross vertices

We now provide an extension of the Yang-Baxter equation, called the train argument. This argument is the basis of our construction of the switch operators.

**Lemma 2.4** (Train argument).: _Consider a six-vertex model and attach a cross vertex. Then the following relationship between partition functions holds. For example,_

\[a_{1}^{\mathrm{H}}(1,2)Z(1,2;k;\emptyset,\beta^{t},(1,2),\beta^{b})=a_{2}^{\mathrm{H}}(1,2)Z(2,1;k;\emptyset,\beta^{t},(1,2),\beta^{b}).\]

_This argument also holds in the vertical case by instead using the vertical Yang-Baxter equation. Below is an example of the mechanics of the train argument._

Proof.: Since we assume that the weights of the horizontal (vertical) cross vertices do not depend on the column (row), we can repeatedly apply the Yang-Baxter equation until we reach the opposite boundary.

## 3. Switch operators

In this section we define the generalized Demazure operators, which we call the switch operators, and which can be applied to both horizontal and vertical boundaries. We then axiomatically develop the algebra of partition functions under the switch operators.
Let

\[a_{1},a_{2},b_{1},b_{2},c_{1},c_{2}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{C},\]
\[a_{1}^{\mathrm{H}},a_{2}^{\mathrm{H}},b_{1}^{\mathrm{H}},b_{2}^{\mathrm{H}},c_{1}^{\mathrm{H}},c_{2}^{\mathrm{H}}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{C},\]
\[a_{1}^{\mathrm{V}},a_{2}^{\mathrm{V}},b_{1}^{\mathrm{V}},b_{2}^{\mathrm{V}},c_{1}^{\mathrm{V}},c_{2}^{\mathrm{V}}\colon\mathbb{Z}\times\mathbb{Z}\to\mathbb{C},\]

be the integrable weight functions, where \(b_{1}^{\mathrm{H}},b_{2}^{\mathrm{H}},b_{1}^{\mathrm{V}},b_{2}^{\mathrm{V}}\) are non-zero, and where

\[a_{1}^{\mathrm{H}}(i+1,i)a_{1}^{\mathrm{H}}(i,i+1)+b_{1}^{\mathrm{H}}(i+1,i)b_{2}^{\mathrm{H}}(i+1,i)=c_{1}^{\mathrm{H}}(i+1,i)c_{2}^{\mathrm{H}}(i+1,i),\]
\[a_{1}^{\mathrm{V}}(i+1,i)a_{1}^{\mathrm{V}}(i,i+1)+b_{1}^{\mathrm{V}}(i+1,i)b_{2}^{\mathrm{V}}(i+1,i)=c_{1}^{\mathrm{V}}(i+1,i)c_{2}^{\mathrm{V}}(i+1,i).\]

Note that the last condition is equivalent to assuming that the \(R\)-matrix of the cross-vertex is invertible. This is demonstrated in detail in [13].

Let \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\in\mathbb{N}^{d}\) be a strictly decreasing signature of length \(d\). Let \(\delta_{d}=(d,d-1,\ldots,1)\) and let \(n,m\geq d\). We consider the six-vertex model \(\mathfrak{S}^{n,m}(\emptyset,\delta_{d},\alpha,\emptyset)\), that is, a model with \(n\) rows and \(m\) columns. The lattice consists of an empty left boundary, a top boundary prescribed by \(\delta_{d}\), a boundary of \(\alpha\) on the right, and an empty bottom boundary. Let \(Z_{\alpha}\) denote the partition function of this model. The main aim of this section is to give a connection between the partition functions \(Z_{\alpha}\) for different \(\alpha\)'s.

The partition functions \(Z_{\alpha}\) depend on the spectral parameters \(I=(i_{1},\ldots,i_{n})\) and \(J=(j_{1},\ldots,j_{m})\). Let the permutation group \(S_{\infty}\) act on the spectral parameters as follows:

\[\pi I=(i_{\pi(1)},\ldots,i_{\pi(n)}).\]

We also define the action of the simple transposition \(s_{i}\) on the rightmost boundary as follows:

1. if \(i,i+1\in\alpha\), then \(s_{i}\alpha=\alpha\);
2. if \(i,i+1\not\in\alpha\), then \(s_{i}\alpha=\alpha\);
3. if \(i\in\alpha\) but \(i+1\not\in\alpha\), then \(s_{i}\alpha=\alpha^{\prime}\), where \(\alpha^{\prime}\) has the value \(i\) replaced by \(i+1\);
4. if \(i+1\in\alpha\) but \(i\not\in\alpha\), then \(s_{i}\alpha=\alpha^{\prime}\), where \(\alpha^{\prime}\) has the value \(i+1\) replaced by \(i\).

In each case, we use the train argument to derive the relations, involving the horizontal cross vertex weights, that must hold between the corresponding partition functions. In the first case, we have

\[a_{1}^{\mathrm{H}}(i+1,i)Z_{\alpha}(s_{i}\,x;y)=a_{2}^{\mathrm{H}}(i+1,i)Z_{\alpha}(x;y).\]

In the second case,

\[Z_{\alpha}(s_{i}\,x;y)=Z_{\alpha}(x;y).\]

That is, swapping empty rows has no effect on the partition function. The final two cases provide the most significant insights. In the third case, we have

\[a_{1}^{\mathrm{H}}(i+1,i)Z_{\alpha}(s_{i}\,x;y)=b_{2}^{\mathrm{H}}(i+1,i)Z_{s_{i}\alpha}(x;y)+c_{1}^{\mathrm{H}}(i+1,i)Z_{\alpha}(x;y). \tag{3.1}\]

Lastly, the fourth case gives us the following:

\[a_{1}^{\mathrm{H}}(i+1,i)Z_{\alpha}(s_{i}\,x;y)=b_{1}^{\mathrm{H}}(i+1,i)Z_{s_{i}\alpha}(x;y)+c_{2}^{\mathrm{H}}(i+1,i)Z_{\alpha}(x;y). \tag{3.2}\]
If we look at the third case in particular, we notice that we can rewrite the terms and define a Demazure-like operator

\[\partial_{i}^{\mathrm{H}}=\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{1}^{\mathrm{H}}(i+1,i)}{b_{2}^{\mathrm{H}}(i+1,i)},\]

such that

\[Z_{s_{i}\alpha}(x;y)=\partial_{i}^{\mathrm{H}}(Z_{\alpha}(x;y))=\left(\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{1}^{\mathrm{H}}(i+1,i)}{b_{2}^{\mathrm{H}}(i+1,i)}\right)Z_{\alpha}(x;y).\]

If \(\partial_{i}^{\mathrm{H}}\) is defined as above, then the inverse \(\overline{\partial}_{i}^{\mathrm{H}}=(\partial_{i}^{\mathrm{H}})^{-1}\) is defined by the fourth case:

\[\overline{\partial}_{i}^{\mathrm{H}}=\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{2}^{\mathrm{H}}(i+1,i)}{b_{1}^{\mathrm{H}}(i+1,i)}.\]

By [19], we have the following relations between the weights:

**Lemma 3.1** ([20]).: _For all \(i,j\in\mathbb{Z}\), we have_

\[b_{1}^{\mathrm{H}}(i,j)=-b_{1}^{\mathrm{H}}(j,i),\quad b_{2}^{\mathrm{H}}(i,j)=-b_{2}^{\mathrm{H}}(j,i),\quad c_{1}^{\mathrm{H}}(i,j)=c_{2}^{\mathrm{H}}(j,i),\quad c_{2}^{\mathrm{H}}(i,j)=c_{1}^{\mathrm{H}}(j,i).\]

_Moreover, we have_

\[a_{1}^{\mathrm{H}}(i,j)a_{1}^{\mathrm{H}}(j,i)+b_{1}^{\mathrm{H}}(i,j)b_{2}^{\mathrm{H}}(i,j)=c_{1}^{\mathrm{H}}(i,j)c_{2}^{\mathrm{H}}(i,j). \tag{3.3}\]

The property that these two operators are inverses of each other follows from the relations on the weights. By definition,

\[\begin{split}\partial_{i}^{\mathrm{H}}\overline{\partial}_{i}^{\mathrm{H}}&=\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{1}^{\mathrm{H}}(i+1,i)}{b_{2}^{\mathrm{H}}(i+1,i)}\cdot\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{2}^{\mathrm{H}}(i+1,i)}{b_{1}^{\mathrm{H}}(i+1,i)}\\ &=\frac{c_{1}^{\mathrm{H}}(i+1,i)c_{2}^{\mathrm{H}}(i+1,i)-a_{1}^{\mathrm{H}}(i+1,i)a_{1}^{\mathrm{H}}(i,i+1)}{b_{1}^{\mathrm{H}}(i+1,i)b_{2}^{\mathrm{H}}(i+1,i)}\\ &=1.\end{split}\]

Throughout the computation, we use the properties from Lemma 3.1. We use a similar approach to define the vertical switch operator. Let \(s_{j}^{\mathrm{V}}\) be the vertical analog of \(s_{i}^{\mathrm{H}}\), where \(s_{j}^{\mathrm{V}}\) acts by transposing the spectral parameters corresponding to columns \(j\) and \(j+1\). We will use the notation \(s_{j}\) when it is clear that we are acting in the vertical case. We may now define the operators we have derived from our applications of the train argument.

**Definition 3.2**.: The _switch operators_ \(\partial_{i}^{\mathrm{H}},\partial_{j}^{\mathrm{V}}\) and their inverses \(\overline{\partial}_{i}^{\mathrm{H}},\overline{\partial}_{j}^{\mathrm{V}}\) are defined as

\[\partial_{i}^{\mathrm{H}}=\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{1}^{\mathrm{H}}(i+1,i)}{b_{2}^{\mathrm{H}}(i+1,i)},\quad\overline{\partial}_{i}^{\mathrm{H}}=\frac{a_{1}^{\mathrm{H}}(i+1,i)s_{i}^{\mathrm{H}}-c_{2}^{\mathrm{H}}(i+1,i)}{b_{1}^{\mathrm{H}}(i+1,i)},\]
\[\partial_{j}^{\mathrm{V}}=\frac{a_{1}^{\mathrm{V}}(j+1,j)s_{j}^{\mathrm{V}}-c_{1}^{\mathrm{V}}(j+1,j)}{b_{1}^{\mathrm{V}}(j+1,j)},\quad\overline{\partial}_{j}^{\mathrm{V}}=\frac{a_{1}^{\mathrm{V}}(j+1,j)s_{j}^{\mathrm{V}}-c_{2}^{\mathrm{V}}(j+1,j)}{b_{2}^{\mathrm{V}}(j+1,j)}.\]
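The inverse computation above can be mirrored mechanically. The following sympy sketch is our own illustration of the algebra, not of the lattice model itself: the symbols A, Ap, B1, B2, C1, C2 are shorthands we introduce for the cross-vertex weights at \((i+1,i)\) and \((i,i+1)\), and the relation of Lemma 3.1 is imposed at the end.

```python
import sympy

# Shorthands (our own names): A = a1H(i+1,i), Ap = a1H(i,i+1),
# B1 = b1H(i+1,i), B2 = b2H(i+1,i), C1 = c1H(i+1,i), C2 = c2H(i+1,i).
A, Ap, B1, B2, C1, C2 = sympy.symbols("A Ap B1 B2 C1 C2")
f, sf = sympy.symbols("f sf")  # a partition function and its image under s_i

def s(expr):
    """Action of s_i: swapping the two spectral parameters swaps the weight
    arguments; by Lemma 3.1 this sends B1 -> -B1, B2 -> -B2 and C1 <-> C2."""
    return expr.subs({A: Ap, Ap: A, B1: -B1, B2: -B2,
                      C1: C2, C2: C1, f: sf, sf: f}, simultaneous=True)

def d(expr):     # the switch operator of Definition 3.2
    return (A * s(expr) - C1 * expr) / B2

def dbar(expr):  # its claimed inverse
    return (A * s(expr) - C2 * expr) / B1

composed = sympy.expand(d(dbar(f)))
# Impose relation (3.3): A*Ap + B1*B2 = C1*C2.
composed = composed.subs(C1 * C2, A * Ap + B1 * B2)
print(sympy.simplify(composed))  # prints: f
```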
Consider the general partition function \(Z_{\alpha,\beta}(I;J)\). The switch operators act on \(Z_{\alpha,\beta}(I;J)\) by switching the boundaries of adjacent rows and columns:

\[Z_{s_{i}\alpha,\beta}(I;J)=\begin{cases}\partial_{i}^{\mathrm{H}}(Z_{\alpha,\beta}(I;J))&\text{if }i\in\alpha,\ i+1\notin\alpha,\\ \overline{\partial}_{i}^{\mathrm{H}}(Z_{\alpha,\beta}(I;J))&\text{if }i\notin\alpha,\ i+1\in\alpha,\end{cases}\]
\[Z_{\alpha,s_{j}\beta}(I;J)=\begin{cases}\partial_{j}^{\mathrm{V}}(Z_{\alpha,\beta}(I;J))&\text{if }j\in\beta,\ j+1\notin\beta,\\ \overline{\partial}_{j}^{\mathrm{V}}(Z_{\alpha,\beta}(I;J))&\text{if }j\notin\beta,\ j+1\in\beta.\end{cases}\]

Note that we can use a composition of simple reflections \(s_{i}\) to bring the signature \((n,n-1,\ldots,1)\) to any strictly decreasing signature \(\alpha\). For example, in the horizontal direction:

\[(3,2,1)\xrightarrow{s_{3}}(4,2,1)\xrightarrow{s_{4}}(5,2,1)\xrightarrow{s_{2}}(5,3,1)\xrightarrow{s_{1}}(5,3,2).\]

Hence, using the operator \(\partial_{i}^{\mathrm{H}}\), we can express \(Z_{\alpha}\) in terms of \(Z_{\delta}\), where \(\delta=(n,n-1,\ldots,1)\). We call \(Z_{\delta}\) the _base case_. Note that \(Z_{\delta}=Z_{n}^{\text{DWBC}}\) from Definition 2.1. Thus, we can express \(Z_{\alpha}\) as

\[Z_{\alpha}=\partial_{i_{1}}^{\mathrm{H}}\partial_{i_{2}}^{\mathrm{H}}\ldots\partial_{i_{k}}^{\mathrm{H}}(Z_{\delta}),\]

for some \(\partial_{i_{m}}^{\mathrm{H}}\)'s. The vertical case has the same property, but instead prescribes a strictly decreasing signature \(\beta=(\beta_{1},\ldots,\beta_{n})\in\mathbb{N}^{n}\) on the top boundary.

We express this concept with more precise notation. Let \(D=(d,d-1,\cdots,1)\), where \(d\) is the length of the signature \(\alpha\). Then let

\[\partial_{\alpha}^{\mathrm{H}}=\prod_{k=1}^{d}\prod_{\ell=1}^{\alpha_{d-k+1}-D_{k}}\partial_{\alpha_{d-k+1}-\ell}^{\mathrm{H}}.\]

Lastly, let \(M\) denote the number of columns in the lattice, and let

\[\partial_{\beta}^{\mathrm{V}}=\prod_{k=1}^{d}\prod_{\ell=\beta_{k}}^{M-k}\partial_{\ell}^{\mathrm{V}}.\]

From now on, we consider the partition function normalized by the \(a_{1}\) weight, so that the addition of extra empty columns or rows does not impact the partition function.

Before we reach our theorem, we develop some useful notation regarding repeated use of the switch operators. For \(i<j\), denote by \(\partial_{[i,j]}\) the operator sequence \(\partial_{j-1}\partial_{j-2}\ldots\partial_{i}\) and by \(s_{[i,j]}\) the sequence \(s_{j-1}s_{j-2}\ldots s_{i}\). Let \(\partial_{\alpha}^{\mathrm{H}}\) be the expression \(\partial_{[1,\alpha_{n}]}^{\mathrm{H}}\partial_{[2,\alpha_{n-1}]}^{\mathrm{H}}\ldots\partial_{[n,\alpha_{1}]}^{\mathrm{H}}\). Similarly, for a signature \(\beta\) on the top boundary, we denote by \(\partial_{\beta}^{\mathrm{V}}\) the expression \(\partial_{[1,\beta_{n}]}^{\mathrm{V}}\partial_{[2,\beta_{n-1}]}^{\mathrm{V}}\ldots\partial_{[n,\beta_{1}]}^{\mathrm{V}}\).

Figure 6. The effect of the horizontal and vertical switch operators on a \(3\times 3\) lattice model.
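The reduction to the base case is algorithmic. The short sketch below (the helper names are our own) computes, for a strictly decreasing signature \(\alpha\) of length \(d\), the word of simple reflections realizing the passage from \(\delta_{d}=(d,d-1,\ldots,1)\) to \(\alpha\), and checks it against the chain displayed above.

```python
def word_for(alpha):
    """Indices i (in order of application) of the operators taking the base
    signature delta = (d, d-1, ..., 1) to the decreasing signature alpha."""
    d = len(alpha)
    word = []
    for k, target in enumerate(alpha):      # alpha[0] is the largest part
        word.extend(range(d - k, target))   # raise the part d-k up to target
    return word

def apply_word(sig, word):
    """Apply the simple reflections s_i (acting on the set of parts) in order."""
    parts = set(sig)
    for i in word:
        if i in parts and i + 1 not in parts:
            parts.remove(i); parts.add(i + 1)
        elif i + 1 in parts and i not in parts:
            parts.remove(i + 1); parts.add(i)
    return tuple(sorted(parts, reverse=True))

# Reproduces the chain (3,2,1) -> (4,2,1) -> (5,2,1) -> (5,3,1) -> (5,3,2):
assert word_for((5, 3, 2)) == [3, 4, 2, 1]
assert apply_word((3, 2, 1), [3, 4, 2, 1]) == (5, 3, 2)
```

Since every move in this word raises a part \(i\) to \(i+1\) with \(i\in\alpha\) and \(i+1\notin\alpha\), only the operators \(\partial_{i}^{\mathrm{H}}\) (and never their inverses) appear, consistent with \(\partial_{\alpha}^{\mathrm{H}}\) being a product of switch operators.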
**Theorem 3.3**.: _The partition function \(Z_{\alpha,\beta}\) with right boundary \(\alpha\) and top boundary \(\beta\) is expressed in terms of the base case \(Z_{\delta,\delta}\) as follows:_

\[Z_{\alpha,\beta}=\partial_{\alpha}^{\mathrm{H}}\partial_{\beta}^{\mathrm{V}}\left(Z_{\delta,\delta}\right).\]

Proof.: We have

\[\begin{split}\partial_{\alpha}^{\mathrm{H}}(Z_{\delta,\delta})&=\partial_{[1,\alpha_{n}]}\partial_{[2,\alpha_{n-1}]}\dots\partial_{[n,\alpha_{1}]}(Z_{\delta,\delta})\\ &=\partial_{[1,\alpha_{n}]}\partial_{[2,\alpha_{n-1}]}\dots\partial_{[n-1,\alpha_{2}]}(Z_{s_{[n,\alpha_{1}]}\delta,\delta})\\ &=\dots\\ &=Z_{s_{[1,\alpha_{n}]}s_{[2,\alpha_{n-1}]}\dots s_{[n,\alpha_{1}]}\delta,\delta}.\end{split}\]

Since \(s_{[1,\alpha_{n}]}s_{[2,\alpha_{n-1}]}\dots s_{[n,\alpha_{1}]}\delta=\alpha\), we get the result. The vertical case is analogous. Thus we have proved that it is possible to express the partition function of a lattice with arbitrary boundaries in terms of the operators acting on the base case.

**Example 3.4**.: Let \(n=5\). Take \(Z_{\delta,\delta}\) in the \(5\times 5\) case, and take \(Z_{\alpha,\beta}\) to be the partition function with signatures \(\alpha=(5,3,2)\) and \(\beta=(4,2,1)\). We can apply our switch operators first in the vertical case, and then in the horizontal case. (The accompanying \(5\times 5\) lattice illustrations are omitted here.) From the illustrations, we see that the sequence of applications of switch operators that takes us from the base case to \(Z_{\alpha,\beta}\) is

\[Z_{\alpha,\beta}=\partial_{1}^{\mathrm{H}}\partial_{3}^{\mathrm{H}}\partial_{2}^{\mathrm{H}}\partial_{4}^{\mathrm{H}}\partial_{3}^{\mathrm{H}}\partial_{4}^{\mathrm{V}}\partial_{2}^{\mathrm{V}}\partial_{3}^{\mathrm{V}}\partial_{1}^{\mathrm{V}}\partial_{2}^{\mathrm{V}}(Z_{\delta,\delta}),\]

which exactly matches our expectations from Theorem 3.3 based on \(\partial_{\alpha}^{\mathrm{H}}\) and \(\partial_{\beta}^{\mathrm{V}}\).

We now present an application of these results. Let the weight functions be as follows:

\[\begin{array}{lll}a_{1}(i,j)=1-b_{j}x_{i},&b_{1}(i,j)=1+b_{j}y_{i},&c_{1}(i,j)=1-a_{j}b_{j},\\ a_{2}(i,j)=y_{i}+a_{j},&b_{2}(i,j)=x_{i}-a_{j},&c_{2}(i,j)=x_{i}+y_{i}.\end{array} \tag{3.4}\]

These weights are from [13], where their partition functions were shown to generalize various families of the Schur functions. These functions are called _free fermionic Schur functions_. By [13, Corollary 2.14], the partition function with the domain wall boundary conditions is

\[Z_{n}^{\mathrm{DWBC}}(x,y;a,b)=\prod_{i<j}(x_{i}-y_{j})(1-a_{i}b_{j}).\]

Let \(\alpha=(\alpha_{1},\alpha_{2},\dots,\alpha_{n})\), and consider the partition function \(Z_{\alpha,\delta}(x,y;a,b)\), which generalizes various non-supersymmetric Schur functions. Then we have the following result:

**Proposition 3.5**.: _We have the following evaluation of the partition function:_

\[Z_{\alpha,\delta}(x,y;a,b)=\partial_{\alpha}^{\mathrm{V}}\left(\prod_{i=1}^{n}\prod_{j=n+1}^{\alpha_{1}}(1-b_{j}x_{i})\prod_{i<j}(x_{i}-y_{j})(1-a_{i}b_{j})\right).\]

Proof.: This follows from Theorem 3.3 and the explicit value of the partition function with the domain wall boundary conditions. Note that the extra factor comes from the "empty" sites in the six-vertex model.

This result generalizes the relation for the factorial Schur functions as explored in equations (18) and (19) in [1].
In particular, equation (19) can be written in terms of the switch operators as

\[s_{\mu}(z|\sigma_{i}\alpha)=\partial_{i}^{\mathrm{V}}(s_{\lambda}(z|\alpha)).\]

Similarly to Corollary 1 in [1], we prove that the partition function is asymptotically symmetric in the column parameters.

**Corollary 3.6**.: _The free fermionic Schur functions \(Z_{\alpha,\delta}\) are asymptotically symmetric in the variables \(a_{j},b_{j}\)._

Proof.: Indeed, for indices large enough to exceed all parts of \(\alpha\), the switch operators give the simplified relation

\[Z_{\alpha,\beta}(x,y;a,b)=Z_{\alpha,\beta}(x,y;s_{i}a,s_{i}b).\]

Hence, the partition function is asymptotically symmetric in the column parameters.

We note that Theorem 3.3 provides the explicit expression for the partition functions \(Z_{\alpha,\beta}(x,y;a,b)\). While the specializations \(\beta=\delta\) and \(\alpha=\delta\) produce the generalizations of the factorial Schur functions, the meaning of the function \(Z_{\alpha,\beta}(x,y;a,b)\) remains unclear. This function could be seen as a two-sided interpolation between the two kinds of factorial Schur functions (or their generalizations).
2307.03072
Plane-filling curves of small degree over finite fields
A plane curve $C$ in $\mathbb{P}^2$ defined over $\mathbb{F}_q$ is called plane-filling if $C$ contains every $\mathbb{F}_q$-point of $\mathbb{P}^2$. Homma and Kim, building on the work of Tallini, proved that the minimum degree of a smooth plane-filling curve is $q+2$. We study smooth plane-filling curves of degree $q+3$ and higher.
Shamil Asgarli, Dragos Ghioca
2023-07-06T15:40:35Z
http://arxiv.org/abs/2307.03072v1
# Plane-filling curves of small degree over finite fields

###### Abstract.

A plane curve \(C\) in \(\mathbb{P}^{2}\) defined over \(\mathbb{F}_{q}\) is called plane-filling if \(C\) contains every \(\mathbb{F}_{q}\)-point of \(\mathbb{P}^{2}\). Homma and Kim, building on the work of Tallini, proved that the minimum degree of a smooth plane-filling curve is \(q+2\). We study smooth plane-filling curves of degree \(q+3\) and higher.

Key words and phrases: Plane curve, space-filling curve, smooth curve, finite field. 2020 Mathematics Subject Classification: Primary 14G15, 14H50; Secondary 11G20, 14G05.

## 1. Introduction

The study of space-filling curves in \(\mathbb{R}^{2}\) starts with the work of Peano [10] in the 19th century. About 100 years later, Nick Katz [11] studied space-filling curves over finite fields and raised open questions about their existence. One version of Katz's question was the following. Given a smooth algebraic variety \(X\) over a finite field \(\mathbb{F}_{q}\), does there always exist a _smooth_ curve \(C\subset X\) such that \(C(\mathbb{F}_{q})=X(\mathbb{F}_{q})\)? In other words, is it possible to pass through all of the (finitely many) \(\mathbb{F}_{q}\)-points of \(X\) using a smooth curve? Gabber [1] and Poonen [14] independently answered this question in the affirmative.

We will consider the special case when \(X=\mathbb{P}^{2}\). We say that a curve \(C\subset\mathbb{P}^{2}\) is _plane-filling_ if \(C(\mathbb{F}_{q})=\mathbb{P}^{2}(\mathbb{F}_{q})\). Equivalently, \(C\) is plane-filling if \(\#C(\mathbb{F}_{q})=q^{2}+q+1\). In a natural sense, plane-filling curves are extremal. There are other classes of extremal curves with respect to the set of \(\mathbb{F}_{q}\)-points, including blocking curves [1] and tangent-filling curves [1]. From Poonen's work [14], we know that there exist smooth plane-filling curves of degree \(d\) over \(\mathbb{F}_{q}\) whenever \(d\) is sufficiently large with respect to \(q\). It is natural to ask for the minimum degree of a smooth plane-filling curve over \(\mathbb{F}_{q}\). Homma and Kim [13] proved that the minimum degree is \(q+2\). More precisely, by building on the work of Tallini [11], they showed that a plane-filling curve of the form

\[(ax+by+cz)(x^{q}y-xy^{q})+y(y^{q}z-yz^{q})+z(z^{q}x-zx^{q})=0\]

is smooth if and only if the polynomial \(t^{3}-(ct^{2}+bt+a)\in\mathbb{F}_{q}[t]\) has no \(\mathbb{F}_{q}\)-roots. In a sequel paper [10], Homma investigated further properties of plane-filling curves of degree \(q+2\). The automorphism group of these special curves was studied by Duran Cunha [1]. As another direction, Homma and Kim [13] investigated space-filling curves in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\).

In this paper, we investigate the existence of smooth plane-filling curves of degree \(q+3\) and higher. The guiding question for our paper is the following.

**Question 1.1**.: Let \(q\) be a prime power. Does there exist a smooth plane-filling curve of degree \(q+3\) defined over \(\mathbb{F}_{q}\)?

The three binomials \(x^{q}y-xy^{q}\), \(y^{q}z-yz^{q}\), and \(z^{q}x-zx^{q}\) generate the ideal of polynomials defining plane-filling curves; see [13, Proposition 2.1] for a proof of this assertion. Thus, any plane-filling curve of degree \(q+3\) must necessarily be defined by

\[Q_{1}(x,y,z)\cdot(x^{q}y-xy^{q})+Q_{2}(x,y,z)\cdot(y^{q}z-yz^{q})+Q_{3}(x,y,z)\cdot(z^{q}x-zx^{q})=0\]

for some homogeneous quadratic polynomials \(Q_{1},Q_{2},Q_{3}\in\mathbb{F}_{q}[x,y,z]\).
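The fact that the three binomials vanish at every \(\mathbb{F}_{q}\)-point (which is why every such combination is plane-filling) follows from Fermat's little theorem and can be sanity-checked by brute force. A minimal sketch of our own, assuming \(q\) is a small prime so that \(\mathbb{F}_{q}\) arithmetic is plain modular arithmetic:

```python
p = 7  # a small prime playing the role of q (prime fields only)

points = ([(1, y, z) for y in range(p) for z in range(p)]   # [1 : y : z]
          + [(0, 1, z) for z in range(p)]                   # [0 : 1 : z]
          + [(0, 0, 1)])                                    # [0 : 0 : 1]
assert len(points) == p * p + p + 1  # #P^2(F_q) = q^2 + q + 1

def binomial(u, v):
    """u^q * v - u * v^q modulo p; vanishes for u, v in F_p since u^p = u."""
    return (pow(u, p, p) * v - u * pow(v, p, p)) % p

for x, y, z in points:
    assert binomial(x, y) == binomial(y, z) == binomial(z, x) == 0
```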
The difficulty is finding suitable \(Q_{1},Q_{2},Q_{3}\) for which the corresponding curve is smooth. Our first result gives a necessary and sufficient condition for the plane-filling curve \(C_{k}\) to be smooth at all the \(\mathbb{F}_{q}\)-points.

**Theorem 1.2**.: _For each \(k\in\mathbb{F}_{q}\), consider the plane-filling curve \(C_{k}\) defined by_

\[x^{2}(x^{q}y-xy^{q})+y^{2}(y^{q}z-yz^{q})+(z^{2}+kx^{2})(z^{q}x-zx^{q})=0. \tag{1}\]

_Then \(C_{k}\) is smooth at every \(\mathbb{F}_{q}\)-point of \(\mathbb{P}^{2}\) if and only if the polynomial \(x^{7}+kx^{3}-1\) has no zeros in \(\mathbb{F}_{q}\)._

To ensure that the previous theorem is not vacuous, we need to show that there exists some \(k\in\mathbb{F}_{q}\) such that \(x^{7}+kx^{3}-1\) has no zeros in \(\mathbb{F}_{q}\).

**Proposition 1.3**.: _There exists a value \(k\in\mathbb{F}_{q}\) such that \(x^{7}+kx^{3}-1\in\mathbb{F}_{q}[x]\) has no zeros in \(\mathbb{F}_{q}\)._

Proof.: When \(x=0\), there is no \(k\in\mathbb{F}_{q}\) such that \(x^{7}+kx^{3}-1=0\). For each \(x\in\mathbb{F}_{q}^{*}\), there is a _unique_ value of \(k\in\mathbb{F}_{q}\) such that \(x^{7}+kx^{3}-1=0\). Thus, there are at most \(q-1\) values of \(k\in\mathbb{F}_{q}\) such that the polynomial \(x^{7}+kx^{3}-1\) has a zero in \(\mathbb{F}_{q}\).

The next result improves Proposition 1.3.

**Theorem 1.4**.: _There exist at least \(\frac{q}{6}-1-\frac{28}{3}\sqrt{q}\) values of \(k\in\mathbb{F}_{q}\) such that \(x^{7}+kx^{3}-1\in\mathbb{F}_{q}[x]\) has no zeros in \(\mathbb{F}_{q}\)._

Note that Theorem 1.2 and Proposition 1.3 together yield that for each odd \(q\), there exists at least one value \(k\in\mathbb{F}_{q}\) for which the corresponding curve \(C_{k}\) has no singular \(\mathbb{F}_{q}\)-points. Furthermore, we expect that the curves in Theorem 1.2 are smooth if and only if they are smooth at all their \(\mathbb{F}_{q}\)-points. Our main conjecture below restates this prediction.

**Conjecture 1.5**.: _Suppose \(q\) is odd. The plane-filling curve \(C_{k}\) defined by (1) is smooth if and only if the polynomial \(x^{7}+kx^{3}-1\) has no zeros in \(\mathbb{F}_{q}\)._

We have verified Conjecture 1.5 using Macaulay2 [GS] for all odd prime powers \(q<200\). When \(q=2^{m}\) is even, the curve \(C_{k}\) defined by (1) turns out to be singular (for _every_ \(k\in\mathbb{F}_{q}\)). As a replacement, we consider another curve \(D_{k}\) in this case:

\[x^{2}(x^{q}y-xy^{q})+y^{2}(y^{q}z-yz^{q})+(z^{2}+kxy)(z^{q}x-zx^{q})=0. \tag{2}\]

We make a similar conjecture regarding the smoothness of the curves \(D_{k}\).

**Conjecture 1.6**.: _Suppose \(q\) is even. The plane-filling curve \(D_{k}\) defined by (2) is smooth if and only if the polynomial \(x^{7}+kx^{5}+1\) has no zeros in \(\mathbb{F}_{q}\)._

The polynomial \(x^{7}+kx^{5}+1\) featured above is prominent because one can show, similarly to Theorem 1.2, that a plane-filling curve \(D_{k}\) is smooth at all of its \(\mathbb{F}_{q}\)-points (when \(q\) is even) if and only if \(x^{7}+kx^{5}+1\) has no \(\mathbb{F}_{q}\)-roots. We have verified Conjecture 1.6 using Macaulay2 [GS] for \(q=2^{m}\) when \(1\leq m\leq 9\).

We prove the following as partial progress towards Conjecture 1.5.

**Theorem 1.7**.: _Suppose \(q\) is odd.
There exists a suitable choice of \(k\in\mathbb{F}_{q}\) such that the plane-filling curve \(C_{k}\) defined by (1) is smooth at all \(\mathbb{F}_{q^{2}}\)-points._

An argument similar to the one employed in Theorem 1.7 yields an analogous result when \(q\) is even and the curve \(C_{k}\) is replaced by \(D_{k}\). To prove Theorem 1.7, we will prove that any plane-filling curve of degree \(q+3\) which is smooth at its \(\mathbb{F}_{q}\)-points and has no \(\mathbb{F}_{q}\)-linear component must be smooth at each of its \(\mathbb{F}_{q^{2}}\)-points.

We also investigate plane-filling curves of degree \(q+r+1\), where \(r\geq 2\) is arbitrary.

**Theorem 1.8**.: _For each \(k\in\mathbb{F}_{q}\), consider the plane-filling curve \(C_{k,r}\) defined by_

\[x^{r}(x^{q}y-xy^{q})+y^{r}(y^{q}z-yz^{q})+(z^{r}+kx^{r})(z^{q}x-zx^{q})=0.\]

_Then \(C_{k,r}\) is smooth at every \(\mathbb{F}_{q}\)-point of \(\mathbb{P}^{2}\) if and only if the polynomial \(x^{r^{2}+r+1}+kx^{r+1}-1\) has no zeros in \(\mathbb{F}_{q}\)._

### Structure of the paper

In Section 2, we prove Theorem 1.4. We devote Section 3 to Theorem 1.7, and Section 4 to Theorem 1.8.

## 2. Proof of Theorem 1.4

We begin this section by noting that Theorem 1.2 is a special case of Theorem 1.8, which will be proven in Section 4. Theorem 1.2 provides a criterion that tests whether the plane-filling curve \(C_{k}\) defined by (1) is smooth at every \(\mathbb{F}_{q}\)-point. The following technical result will be employed in our proof of Theorem 1.4.

**Lemma 2.1**.: _The polynomial \(x^{3}y^{3}(x+y)(x^{2}+y^{2})+(x^{2}+xy+y^{2})\) is irreducible in \(\overline{\mathbb{F}_{q}}[x,y]\)._

Proof.: The proof employs a technique seen in Eisenstein's criterion. First, suppose \(p=\operatorname{char}(\mathbb{F}_{q})\neq 3\). Assume, to the contrary, that \(f(x,y):=x^{3}y^{3}(x+y)(x^{2}+y^{2})+(x^{2}+xy+y^{2})\) is reducible over the algebraic closure \(\overline{\mathbb{F}_{q}}\). Write \(f(x,y)=g(x,y)\cdot h(x,y)\), and express

\[g(x,y)=g_{m}(x,y)+g_{m+1}(x,y)+\cdots+g_{s}(x,y),\]
\[h(x,y)=h_{n}(x,y)+h_{n+1}(x,y)+\cdots+h_{t}(x,y),\]

where \(g_{i}(x,y)\) and \(h_{j}(x,y)\) are homogeneous of degree \(i\) and \(j\), respectively, for \(m\leq i\leq s\) and \(n\leq j\leq t\). From \(f(x,y)=g(x,y)\cdot h(x,y)\), we see that

\[\begin{cases}g_{m}h_{n}=x^{2}+xy+y^{2}\\ g_{s}h_{t}=x^{3}y^{3}(x+y)(x^{2}+y^{2})\\ \sum_{i+j=k}g_{i}h_{j}=0\text{ for }2<k<9.\end{cases}\]

Since the characteristic \(p\neq 3\), the polynomial \(x^{2}+xy+y^{2}\) factors into distinct linear factors in \(\overline{\mathbb{F}_{q}}[x,y]\). Let \(x+\lambda y\) be one of those linear factors with \(\lambda\in\overline{\mathbb{F}_{q}}\). Then \(x^{2}+xy+y^{2}\) is divisible by \(x+\lambda y\) but not by \((x+\lambda y)^{2}\). Thus, exactly one of \(g_{m}\) or \(h_{n}\) is divisible by \(x+\lambda y\). Without loss of generality, assume \(x+\lambda y\) divides \(g_{m}\), and not \(h_{n}\). Then, using \(\sum_{i+j=k}g_{i}h_{j}=0\) for \(2<k<9\), we inductively see that \(x+\lambda y\) divides \(g_{j}\) for each \(m\leq j\leq s\). In particular, \(x+\lambda y\) divides \(g_{s}h_{t}\). This is a contradiction because \(x+\lambda y\) does not divide \(x^{3}y^{3}(x+y)(x^{2}+y^{2})\). Indeed, \(x^{2}+xy+y^{2}\) and \(x^{3}y^{3}(x+y)(x^{2}+y^{2})\) are relatively prime. When \(p=3\), a similar argument works from the other end of the polynomial: the leading term \(x^{3}y^{3}(x+y)(x^{2}+y^{2})\) is divisible by \(x+y\) but not by \((x+y)^{2}\).
We deduce that \(f(x,y)\) is irreducible over \(\overline{\mathbb{F}_{q}}\) for every prime power \(q\).

Proof of Theorem 1.4.: Our goal is to give a lower bound on the number of \(k\in\mathbb{F}_{q}\) such that the polynomial \(x^{7}+kx^{3}-1\) has no roots in \(\mathbb{F}_{q}\). As \(x\) ranges in \(\mathbb{F}_{q}^{*}\) (note that there is no \(k\in\mathbb{F}_{q}\) for which \(x=0\) would be a root of \(x^{7}+kx^{3}-1\)), the set of "bad" choices of \(k\) is parametrized by \(\frac{1-x^{7}}{x^{3}}\). We will show that there are many choices of \(x\) and \(y\) such that \(\frac{1-x^{7}}{x^{3}}\) and \(\frac{1-y^{7}}{y^{3}}\) give rise to the same value of \(k\). Setting these expressions equal to each other, we obtain:

\[\frac{1-x^{7}}{x^{3}}=\frac{1-y^{7}}{y^{3}}\ \ \Rightarrow\ \ x^{7}y^{3}-y^{3}=y^{7}x^{3}-x^{3}.\]

After rearranging and dividing both sides by \(x-y\), we obtain an affine curve \(\mathcal{C}\subset\mathbb{A}^{2}\) defined by

\[x^{3}y^{3}(x+y)(x^{2}+y^{2})+x^{2}+xy+y^{2}=0,\]

for \(x,y\in\mathbb{F}_{q}^{*}\) _and_ \(x\neq y\). Let \(G\) be a graph whose vertex set is \(\mathbb{F}_{q}^{*}\), and where there is an edge between \(x\) and \(y\) if \((x,y)\) lies on the affine curve \(\mathcal{C}\). We consider undirected edges, so the pairs \((x,y)\) and \((y,x)\) correspond to the same edge.

**Claim 1.** The number of edges of \(G\) is at least \(\frac{q}{2}-6-28\sqrt{q}\).

Let \(\tilde{\mathcal{C}}\subset\mathbb{P}^{2}\) be the projectivization of \(\mathcal{C}\). By Lemma 2.1, the curve \(\tilde{\mathcal{C}}\) is geometrically irreducible. By the Hasse-Weil inequality for geometrically irreducible curves [1, Corollary 2.5], \(\#\tilde{\mathcal{C}}(\mathbb{F}_{q})\geq q+1-56\sqrt{q}\). Since the line at infinity \(z=0\) can contain at most \(5\) distinct \(\mathbb{F}_{q}\)-points of \(\tilde{\mathcal{C}}\), we have \(\#\mathcal{C}(\mathbb{F}_{q})\geq q-4-56\sqrt{q}\); furthermore, we exclude the points for which \(xy=0\), and there is only one such point \([0:0:1]\in\tilde{\mathcal{C}}\). We also need to rule out the points on the diagonal, namely \(x=y\); in this case, \(4x^{9}+3x^{2}=0\), which contributes at most \(7\) additional points with \(x\neq 0\). Thus, the number of \((x,y)\in\mathcal{C}(\mathbb{F}_{q})\) with \(x\neq y\) and \(xy\neq 0\) is at least \(q-12-56\sqrt{q}\). The claim follows since the edges are undirected.

**Claim 2.** Every connected component of \(G\) is a complete graph \(K_{n}\), where \(n\in\{1,2,3,4,5,6\}\).

If \((x,y)\) and \((x,z)\) are both edges of \(G\), then \(\frac{1-x^{7}}{x^{3}}=\frac{1-y^{7}}{y^{3}}\) and \(\frac{1-x^{7}}{x^{3}}=\frac{1-z^{7}}{z^{3}}\). Consequently, \(\frac{1-y^{7}}{y^{3}}=\frac{1-z^{7}}{z^{3}}\) and \((y,z)\) lies on the curve \(\mathcal{C}\), so \((y,z)\) is an edge in \(G\) too. Thus, each connected component of \(G\) is a clique. In addition, from the equation of \(\mathcal{C}\), the degree of each vertex \(x\in G\) is at most \(6\).

For each \(1\leq i\leq 6\), let \(m_{i}\) denote the number of cliques of size \(i\) in \(G\). Counting the number of edges in \(G\) leads to the following equality:

\[\#E(G)=\sum_{i=1}^{6}\frac{i(i-1)}{2}\cdot m_{i}.\]

Each clique of size \(i\) in \(G\) increases the number of "good" values of \(k\) by an additive factor of \(i-1\), because each clique corresponds to one "bad" value of \(k\), i.e., a value \(k\in\mathbb{F}_{q}\) for which the equation \(x^{7}+kx^{3}-1=0\) is solvable for some \(x\in\mathbb{F}_{q}\).
More precisely,

\[\begin{split}\#\{k\in\mathbb{F}_{q}\mid x^{7}+kx^{3}-1\text{ has no zeros in }\mathbb{F}_{q}\}&=q-\sum_{i=1}^{6}m_{i}\\ &=1+(q-1)-\sum_{i=1}^{6}m_{i}\\ &=1+\sum_{i=1}^{6}i\cdot m_{i}-\sum_{i=1}^{6}m_{i}\\ &=1+\sum_{i=1}^{6}(i-1)\cdot m_{i}\\ &\geq 1+\frac{1}{3}\sum_{i=1}^{6}\frac{(i-1)i}{2}\cdot m_{i}\geq 1+\frac{1}{3}\#E(G)\geq 1+\frac{1}{3}\left(\frac{q}{2}-6-28\sqrt{q}\right),\end{split}\]

as desired.

## 3. Smoothness at \(\mathbb{F}_{q^{2}}\)-points

In this section, we show that a plane-filling curve \(C\) of degree \(q+3\) has the following special property: under a mild condition, being smooth at the \(\mathbb{F}_{q}\)-points implies being smooth at the \(\mathbb{F}_{q^{2}}\)-points.

**Proposition 3.1**.: _Suppose \(C\) is a plane-filling curve of degree \(q+3\) such that_

1. _The curve_ \(C\) _is smooth at all the_ \(\mathbb{F}_{q}\)_-points._
2. _The curve_ \(C\) _has no_ \(\mathbb{F}_{q}\)_-linear component._

_Then \(C\) is smooth at each \(\mathbb{F}_{q^{2}}\)-point._

Proof.: Assume, to the contrary, that \(C\) is singular at some \(\mathbb{F}_{q^{2}}\)-point \(Q\). Then \(Q\) is not an \(\mathbb{F}_{q}\)-point by hypothesis (i). Let \(Q^{\sigma}\) denote the Galois conjugate of \(Q\) under the Frobenius automorphism. More explicitly, if \(Q=[x:y:z]\in\mathbb{P}^{2}\), then \(Q^{\sigma}=[x^{q}:y^{q}:z^{q}]\). Note that \(Q^{\sigma}\) is also contained in \(C\) (since \(C\) is defined over \(\mathbb{F}_{q}\)). Moreover, \(Q^{\sigma}\) is also a singular point of \(C\). Consider the line \(L\) joining \(Q\) and \(Q^{\sigma}\), which is an \(\mathbb{F}_{q}\)-line by Galois theory. By hypothesis (ii), the line \(L\) must intersect \(C\) in exactly \(q+3\) points (counted with multiplicity). However, \(L\) already contains \(q+1\) distinct \(\mathbb{F}_{q}\)-points of \(C\) (because \(C\) is plane-filling), and passes through the two singular points \(Q\) and \(Q^{\sigma}\), each contributing intersection multiplicity at least \(2\). Thus, the total intersection multiplicity between \(L\) and \(C\) is at least \((q+1)+2+2=q+5\), a contradiction.

**Remark 3.2**.: We can weaken the hypothesis of Proposition 3.1 by replacing the condition \(\deg(C)=q+3\) with \(\deg(C)\leq q+4\). Indeed, the same proof works verbatim.

Next, we show that the plane-filling curves \(C_{k}\) of degree \(q+3\) considered in equation (1) indeed satisfy condition (ii) when \(q\) is odd.

**Proposition 3.3**.: _The curve \(C_{k}\) defined by (1) has no \(\mathbb{F}_{q}\)-linear components when \(q\) is odd._

Proof.: There are three types of \(\mathbb{F}_{q}\)-lines in \(\mathbb{P}^{2}\).

**Type I.** The line \(L\) is given by \(z=0\). The curve \(C_{k}\) meets the line \(\{z=0\}\) at finitely many points determined by \(x^{2}(x^{q}y-xy^{q})=0\). In particular, \(\{z=0\}\) is not a component of \(C_{k}\).

**Type II.** The line \(L\) is given by \(x=az\) for some \(a\in\mathbb{F}_{q}\). The curve \(C_{k}\) meets the line \(\{x=az\}\) at finitely many points determined by

\[(az)^{2}((az)^{q}y-(az)y^{q})+y^{2}(y^{q}z-yz^{q})+(z^{2}+k(az)^{2})(z^{q}(az)-z(az)^{q})=0.\]

After simplifying and using \(a^{q}=a\), the last term cancels and we obtain

\[a^{3}z^{q+2}y-a^{3}z^{3}y^{q}+y^{q+2}z-y^{3}z^{q}=0.\]

In particular, \(\{x=az\}\) is not a component of \(C_{k}\).

**Type III.** The line \(L\) is given by \(y=ax+bz\) for some \(a,b\in\mathbb{F}_{q}\). If \(a=0\) or \(b=0\), then \(y=bz\) or \(y=ax\), and the analysis is very similar to the previous case. We will assume that \(a\neq 0\) and \(b\neq 0\).
We substitute \(y=ax+bz\) into equation (1) and collect terms to obtain

\[(b+a^{3}-k)x^{q+2}z+(2a^{2}b)x^{q+1}z^{2}+(b^{2}a-1)x^{q}z^{3}+(-b-a^{3}+k)x^{3}z^{q}+(-2ab)x^{2}z^{q+1}+(-ab^{2}+1)xz^{q+2}=0.\]

The coefficient of \(x^{q+1}z^{2}\) is \(2a^{2}b\), which is nonzero since \(q\) is odd (so \(2\neq 0\)), \(a\neq 0\), and \(b\neq 0\). Thus, \(L\) is not a component of \(C_{k}\).

We are now in a position to prove Theorem 1.7 on the existence of \(k\in\mathbb{F}_{q}\) such that the plane-filling curve \(C_{k}\) is smooth at all its \(\mathbb{F}_{q^{2}}\)-points.

Proof of Theorem 1.7.: The result follows immediately by combining Theorem 1.2, Proposition 1.3, Proposition 3.1, and Proposition 3.3.

## 4. Higher degree plane-filling curves

We begin by establishing Theorem 1.8, which provides a necessary and sufficient condition for the plane-filling curve \(C_{k,r}\) to be smooth at all the \(\mathbb{F}_{q}\)-points.

Proof of Theorem 1.8.: We consider the curve \(C_{k,r}\) given by the equation:

\[x^{r}\cdot(x^{q}y-xy^{q})+y^{r}\cdot(y^{q}z-yz^{q})+(z^{r}+kx^{r})\cdot(z^{q}x-zx^{q})=0. \tag{3}\]

We analyze the singular locus of \(C_{k,r}\) and get the equations:

\[rx^{r-1}\cdot(x^{q}y-xy^{q})+x^{r}\cdot(-y^{q})+krx^{r-1}\cdot(z^{q}x-zx^{q})+(z^{r}+kx^{r})\cdot z^{q}=0 \tag{4}\]
\[x^{r}\cdot x^{q}+ry^{r-1}\cdot(y^{q}z-yz^{q})+y^{r}\cdot(-z^{q})=0 \tag{5}\]
\[y^{r}\cdot y^{q}+rz^{r-1}\cdot(z^{q}x-zx^{q})+(z^{r}+kx^{r})\cdot(-x^{q})=0. \tag{6}\]

We next analyze the possibility that we have a singular point when \(xyz=0\). If \(x=0\), then equation (4) yields \(z=0\), which is then employed in (6) to derive \(y=0\), a contradiction. If \(y=0\), then equation (5) yields \(x=0\), and then equation (4) yields \(z=0\), a contradiction. If \(z=0\), then equation (5) yields \(x=0\), and then equation (6) yields \(y=0\), a contradiction. So, the only possible singular points are of the form \([x:1:z]\).

We search for possible singular points \([x:1:z]\in\mathbb{P}^{2}(\mathbb{F}_{q})\). Then equations (4), (5) and (6) read:

\[-x^{r}+z^{r+1}+kx^{r}z=0 \tag{7}\]
\[x^{r+1}-z=0 \tag{8}\]
\[1-z^{r}x-kx^{r+1}=0. \tag{9}\]

Substituting \(z=x^{r+1}\) from equation (8) into equations (7) and (9), we obtain

\[-x^{r}+x^{r^{2}+2r+1}+kx^{2r+1}=0\text{ and }1-x^{r^{2}+r+1}-kx^{r+1}=0,\]

that is, there exists a singular \(\mathbb{F}_{q}\)-rational point on \(C_{k,r}\) if and only if there exists \(x\in\mathbb{F}_{q}^{*}\) such that

\[x^{r^{2}+r+1}+kx^{r+1}-1=0, \tag{10}\]

as desired. We end the proof by mentioning that some care is needed to treat the case when the characteristic \(p\) of the field divides the degree of the curve (i.e., \(p\) divides \(r+1\) in this setting). Indeed, the singular locus of any projective curve \(\{f=0\}\) is defined by \(\{f=\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=\frac{\partial f}{\partial z}=0\}\). When \(p\) divides \(\deg(f)\), it is _not_ enough to consider the points in the locus \(\{\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=\frac{\partial f}{\partial z}=0\}\). Fortunately, in our case, the \(\mathbb{F}_{q}\)-point \([x:1:z]\) is automatically on the curve \(C_{k,r}\) because \(C_{k,r}\) is plane-filling.

It may be natural to make a prediction identical to Conjecture 1.5 for higher-degree curves. However, some care is needed, as the following two examples show. We found these examples using Macaulay2 [GS].

**Example 4.1**.: Let \(r=5\), \(q=11\), and \(k=9\).
The plane-filling curve \(C_{9,5}\) over \(\mathbb{F}_{11}\) is smooth at all the \(\mathbb{F}_{11}\)-points because the polynomial \(x^{31}+9x^{6}-1\) is an irreducible polynomial over \(\mathbb{F}_{11}\). However, \(C_{9,5}\) is singular at two Galois-conjugate \(\mathbb{F}_{11^{2}}\)-points.

In the previous example, the curve \(C_{9,5}\) is irreducible over \(\mathbb{F}_{11}\). Thus, \(C_{9,5}\) satisfies the two conditions of Proposition 3.1, and yet it is singular at two \(\mathbb{F}_{11^{2}}\)-points. Since \(\deg(C_{9,5})=q+6\), we see that Remark 3.2 is close to being sharp.

**Example 4.2**.: Let \(r=7\), \(q=5\). In this case, the plane-filling curve \(C_{k,7}\) defined over \(\mathbb{F}_{5}\) is singular for each \(k\in\mathbb{F}_{5}\). Indeed, the associated polynomial \(x^{57}+kx^{8}-1\) has an \(\mathbb{F}_{5}\)-root for \(k\in\{0,2,3,4\}\). For these values of \(k\), the curve \(C_{k,7}\) is singular at an \(\mathbb{F}_{5}\)-point. For \(k=1\), the curve \(C_{1,7}\) is singular at four points, namely, two pairs of Galois-conjugate \(\mathbb{F}_{5^{2}}\)-points.

The two examples above illustrate that Conjecture 1.5 needs to be modified for plane-filling curves of degree \(q+r+1\) when \(r\) is arbitrary. We propose two related conjectures on the smoothness of the curve \(C_{k,r}\) from Theorem 1.8. Recall that \(C_{k,r}\subset\mathbb{P}^{2}\) is defined by

\[x^{r}(x^{q}y-xy^{q})+y^{r}(y^{q}z-yz^{q})+(z^{r}+kx^{r})(z^{q}x-zx^{q})=0,\]

where \(r\geq 2\) is a positive integer and \(k\in\mathbb{F}_{q}\).

**Conjecture 4.3**.: _Let \(r\geq 2\). There exists an integer \(m:=m(r)\) with the following property. For all finite fields \(\mathbb{F}_{q}\) with cardinality \(q>m\) and characteristic not dividing \(r\), there exists some \(k\in\mathbb{F}_{q}\) such that the curve \(C_{k,r}\) is smooth._

Using Macaulay2 [GS], we enumerated through values of \(r\) in the range \([2,17]\) and \(q\) in the range \([2,100]\) with \(\gcd(r,q)=1\). We found only the following pairs \((r,q)\) for which \(C_{k,r}\) is singular for _every_ \(k\in\mathbb{F}_{q}\): \((r,q)=(7,5)\), \((13,3)\), \((16,9)\), and \((17,7)\).

**Conjecture 4.4**.: _Let \(r\geq 2\). There exists an integer \(s:=s(r)\) with the following property. For all finite fields \(\mathbb{F}_{q}\) with characteristic not dividing \(r\), and for all \(k\in\mathbb{F}_{q}\), if \(C_{k,r}\) is smooth at all of its \(\mathbb{F}_{q^{s}}\)-points, then \(C_{k,r}\) is smooth._

As a motivation for Conjecture 4.4, we mention the following general fact about pencils of plane curves. The family of plane curves \(C_{k}\) forms a _pencil_ of plane curves, since the parameter \(k\in\mathbb{F}_{q}\) appears linearly in the defining equation. If \(\mathcal{L}\) is a pencil of plane curves in \(\mathbb{P}^{2}\) parametrized by \(\mathbb{A}^{1}\), then the \(\mathbb{F}_{q}\)-members of \(\mathcal{L}\) are defined by \(f(x,y,z)+kg(x,y,z)=0\), where \(k\in\mathbb{F}_{q}\) is arbitrary. We will use \(X_{k}\) to denote this plane curve in the following proposition.

**Proposition 4.5**.: _Let \(\mathcal{L}\) be a pencil of plane curves \(\{X_{k}\}_{k\in\mathbb{F}_{q}}\) of degree \(d\) defined over a finite field \(\mathbb{F}_{q}\). Suppose that for every \(s\geq 1\), there exists some \(k\in\mathbb{F}_{q}\) such that \(X_{k}\) is smooth at all of its \(\mathbb{F}_{q^{s}}\)-points. Then there exists some \(\ell\in\mathbb{F}_{q}\) such that \(X_{\ell}\) is smooth._

Proof.: Assume, to the contrary, that \(X_{k}\) is singular for each \(k\in\mathbb{F}_{q}\).
For each \(k\in\mathbb{F}_{q}\), let \(n_{k}\in\mathbb{N}\) be such that the curve \(X_{k}\) is singular at some \(\mathbb{F}_{q^{n_{k}}}\)-point. Let \(N:=\prod_{k\in\mathbb{F}_{q}}n_{k}\). By construction, \(n_{k}\) divides \(N\) for every \(k\), so each \(X_{k}\) is singular at some \(\mathbb{F}_{q^{N}}\)-point; hence no \(X_{k}\) is smooth at all of its \(\mathbb{F}_{q^{N}}\)-points, contradicting the hypothesis.

Proposition 4.5 asserts that, to find a smooth member of any pencil \(\mathcal{L}\) defined over \(\mathbb{F}_{q}\), it is sufficient to find, for each finite degree, a member which is smooth at all points of that degree. Conjecture 4.4 strengthens the conclusion by predicting that for a pencil of plane-filling curves, one finds a smooth member by only checking smoothness at all points of a _fixed_ finite degree.
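As a concluding illustration, the criterion of Theorem 1.2 can be checked by brute force over a small prime field. The paper's verifications used Macaulay2; the following plain-Python sketch is our own, restricted to prime \(q\) (here \(q=11\)), with the monomial encoding of equation (1) spelled out in the comments.

```python
p = 11  # an odd prime playing the role of q; prime fields only

def curve(k):
    """Monomial dictionary {(i, j, l): coeff mod p} for equation (1)."""
    f = {}
    def add(mono, c):
        f[mono] = (f.get(mono, 0) + c) % p
    add((p + 2, 1, 0), 1); add((3, p, 0), -1)      # x^2 (x^q y - x y^q)
    add((0, p + 2, 1), 1); add((0, 3, p), -1)      # y^2 (y^q z - y z^q)
    add((1, 0, p + 2), 1); add((p, 0, 3), -1)      # z^2 (z^q x - z x^q)
    add((3, 0, p), k);     add((p + 2, 0, 1), -k)  # k x^2 (z^q x - z x^q)
    return f

def partial(f, var):
    """Formal partial derivative with respect to the coordinate number var."""
    g = {}
    for mono, c in f.items():
        if mono[var] % p and c:
            m = list(mono); m[var] -= 1
            g[tuple(m)] = (g.get(tuple(m), 0) + c * mono[var]) % p
    return g

def evaluate(f, pt):
    x, y, z = pt
    return sum(c * pow(x, e1, p) * pow(y, e2, p) * pow(z, e3, p)
               for (e1, e2, e3), c in f.items()) % p

points = ([(1, y, z) for y in range(p) for z in range(p)]
          + [(0, 1, z) for z in range(p)] + [(0, 0, 1)])

for k in range(p):
    f = curve(k)
    grads = [partial(f, v) for v in range(3)]
    # C_k is plane-filling: every F_q-point of P^2 lies on the curve.
    assert all(evaluate(f, pt) == 0 for pt in points)
    singular = any(all(evaluate(g, pt) == 0 for g in grads) for pt in points)
    has_root = any((pow(x, 7, p) + k * pow(x, 3, p) - 1) % p == 0
                   for x in range(p))
    assert singular == has_root  # the criterion of Theorem 1.2
```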
2305.02515
A Study of Static Warning Cascading Tools (Experience Paper)
Static analysis is widely used for software assurance. However, static analysis tools can report an overwhelming number of warnings, many of which are false positives. When static analysis is applied to a new version of a program, a large number of the warnings may be relevant only to the old version. Inspecting these warnings is a waste of time and can prevent developers from finding the new bugs in the new version. In this paper, we report the challenges of cascading warnings generated from two versions of programs. We investigated program differencing tools and extended them to perform warning cascading automatically. Specifically, we used a textual-based diff tool, namely SCALe, an abstract syntax tree (AST) based diff tool, namely GumTree, and a control flow graph (CFG) based diff tool, namely Hydrogen. We report our experience of applying these tools, and we hope our findings can provide developers with an understanding of the pros and cons of each approach. In our evaluation, we used 96 pairs of benchmark programs for which we know ground-truth bugs and fixes, as well as 12 pairs of real-world open-source projects. Our tools and data are available at https://github.com/WarningCas/WarningCascading_Data.
Xiuyuan Guo, Ashwin Kallingal Joshy, Benjamin Steenhoek, Wei Le, Lori Flynn
2023-05-04T02:57:48Z
http://arxiv.org/abs/2305.02515v1
# A Study of Static Warning Cascading Tools (Experience Paper)

###### Abstract.

Static analysis is widely used for software assurance. However, static analysis tools can report an overwhelming number of warnings, many of which are false positives. When static analysis is applied to a new version of a program, a large number of the warnings may be relevant only to the old version. Inspecting these warnings is a waste of time and can prevent developers from finding the new bugs in the new version. In this paper, we report the challenges of _cascading warnings_ generated from two versions of programs. We investigated program differencing tools and extended them to perform warning cascading automatically. Specifically, we used a textual-based diff tool, namely _SCALe_, an abstract syntax tree (AST) based diff tool, namely _GumTree_, and a control flow graph (CFG) based diff tool, namely _Hydrogen_. We report our experience of applying these tools, and we hope our findings can provide developers with an understanding of the pros and cons of each approach. In our evaluation, we used 96 pairs of benchmark programs for which we know ground-truth bugs and fixes, as well as 12 pairs of real-world open-source projects. Our tools and data are available at [https://github.com/WarningCas/WarningCascading_Data](https://github.com/WarningCas/WarningCascading_Data).

## 1. Introduction

In an agile software development setting, there is a need to deliver reliable new software releases in a rapid fashion. The big challenge is how to analyze and report only the software quality issues related to the new version, as the issues of the old versions were addressed previously when those versions shipped. In particular, static analysis tools, an important software assurance technique, often generate an overwhelming number of warnings for each version of software [1; 2]. It is unclear which warnings are related only to the old code and have been reviewed in the previous versions, which warnings report new issues in the updated release, and which warnings are about the issues of fixing the old warnings. Consequently, tremendous time and manual effort can be wasted rather than spent on the right problems of the current version.

The goal of _static warning cascading_ (also called _matching_ or _aligning_ static warnings) is to help developers classify warnings into several categories: (1) the cascaded warnings report the same issue in the old and new versions (so we do not need to handle them again); (2) the warnings in the old version are fixed in the new version (we can inspect them together to confirm whether the fix is indeed correct); (3) the warnings are changed from the old version, but the old and new warnings are related (we should inspect them together to understand the problem); and (4) the warnings only report issues in the new version (we should inspect them in the new version of the software). For most static analysis tools, a warning is reported as a line in a source code file. Thus, the warning cascading problem can be reduced to mapping a source code line from the old version to the new version and classifying the mapping into one of the above categories.
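As a toy illustration of this reduction (our own sketch, not the implementation of any of the tools studied in this paper; the function and variable names are hypothetical), one can map warning lines across versions with Python's difflib and bucket each warning into "same" versus "changed or fixed":

```python
import difflib

def cascade(old_lines, new_lines, old_warning_lines):
    """Map 1-based warning line numbers from the old version to the new
    version; 'same' means the warning line survives verbatim."""
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines, autojunk=False)
    verdicts = {}
    for w in old_warning_lines:
        verdicts[w] = ("changed or fixed", None)
        for tag, i1, i2, j1, j2 in sm.get_opcodes():
            if tag == "equal" and i1 <= w - 1 < i2:
                verdicts[w] = ("same", w + (j1 - i1))  # cascaded line number
    return verdicts

old = ["int i;", "i++;", "fprintf(fp, fmt);", "return;"]
new = ["int i;", "i++;", "checked_fprintf(fp, fmt);", "return;"]
print(cascade(old, new, [3]))  # {3: ('changed or fixed', None)}
```

The example previews a failure mode discussed in Section 2: a purely textual matcher stops cascading the warning once fprintf is renamed to checked_fprintf, even though the warning's semantics are unchanged.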
There exists a spectrum of program differencing tools [3; 4; 5] that can match source code lines. Some representative categories include _textual based diff_, _syntax based diff_ and _control flow based diff_. Textual-based diff is typically performed on two source code files using the _longest common subsequence_ algorithm, like the one implemented in the _Unix diff_ tool. Syntax-based diff tools like GumTree [4] operate on _abstract syntax trees (ASTs)_: they compare the ASTs of two versions of a source code file and determine whether AST nodes in the two versions should be matched. The control-flow-based diff uses a representation called the _multiple version interprocedural control flow graph (MVICFG)_ [3]. The MVICFG is a union of _Interprocedural Control Flow Graphs (ICFGs)_ for a set of program versions. The common nodes and edges across versions are represented only once, and each edge is marked with the versions it belongs to.

In this paper, we conducted a study of the three representative program differencing techniques for cascading static warnings. Our goal is to evaluate the pros and cons of each tool and report which tools are the most _useful_ and _successful_ for cascading warnings. By useful and successful, we mean that when a bug in a program is fixed in the new version, the tool is able to report that the warning for the bug does not cascade to the fixed version; and when a bug in a program still exists after changes are added, the tool is able to report that the warning in the buggy version cascades to the new version as the same or a related warning. Specifically, we used an existing textual-based warning cascading tool _SCALe_ 1 developed and used at CERT 2; we also designed and implemented warning cascading routines on top of the other two open-source tools, GumTree and Hydrogen. We applied the three tools to static analysis warnings generated for two program versions. To compare the different tools' behavior when applying warning cascading, we first conducted studies on the 96 pairs of benchmark programs where the ground truth is known. We then collected 12 pairs of real-world open-source projects and investigated the use of the three tools in practice.

Footnote 1: [https://github.com/cmu-sei/SCALe/tree/scaife-scale](https://github.com/cmu-sei/SCALe/tree/scaife-scale)

Footnote 2: [https://www.sei.cmu.edu/about/divisions/cert/](https://www.sei.cmu.edu/about/divisions/cert/)

Our results show that Hydrogen has a slight advantage over the other two tools on the ground-truth benchmarks. When used for real-world programs such as find, grep, make and coreutils, Hydrogen is more successful at cascading the same "bugs" across versions than the two other tools, while SCALe shows more advantages in cascading the cases where the warnings in the first version are fixed in the second version. We sampled a set of our results and reported an analysis of these examples (see §4 for details). Our experience and findings can provide developers knowledge on warning cascading as well as on the more general problem of program differencing. In summary, this paper made the following contributions: 1. We reported practical challenges of cascading warnings across program versions and proposed what should be considered a successful warning cascading (§2); 2. We used and extended three types of program differencing tools to perform warning cascading (§3); 3.
We designed and performed comprehensive empirical studies to compare the three types of approaches and to discover whether, when, and why each type of tool works best for warning alignment (§4); and 4. We open source our tools and datasets at [https://github.com/WarningCas/WarningCascading_Data](https://github.com/WarningCas/WarningCascading_Data).

## 2. Motivation and Challenges

Warning cascading is challenging because when programs are updated in the new versions, the function and variable names may change, and the line numbers of the same statements are also likely to change. Directly performing string matching on the output of static analysis tools cannot work because of the change of context in the newer version. In this section, we provide some examples to explain the challenges of warning cascading, and we also define more precisely what we mean by a useful and successful cascading.

### 2.1. Challenging examples

find.71f10368 has a bug of "crashing in some locales for find -printf "%AX"", and a newer version of find added a fix for this bug and also included many additional new changes. If we run a static analysis tool, we will get 1600+ warnings for each version. Without a proper warning cascading tool, we cannot easily find which warnings from the previous version are changed in the new version, nor determine whether the fix is successful and whether more new issues were introduced by the fix and the other newly added code.

Warning cascading is challenging in several aspects. First, there are many identical warnings between the two versions; however, the same warnings across versions may be reported at different locations of the same files due to new code being added or old code being deleted, and there can be changes of variable/function names, e.g., via refactoring, which do not affect the warning semantics. Second, there is often dead code in the project; e.g., the corebench 3 projects have gnulib-tests folders that do not affect program behaviors. But static analysis scans all the code and outputs the warnings, so developers have to filter out those warnings irrelevant to the newly developed code. Such dead code can be project specific and hard to exclude, and thus increases the overhead of warning cascading.

Footnote 3: [https://www.comp.nus.edu.sg/release/corebench/](https://www.comp.nus.edu.sg/release/corebench/)

Here, we further show some real-world examples discovered in our study. In the first case, when many new lines are added before the target line, the text diff tools cannot match the warnings. See Figure 1. In the second case, a line added in the new version (green at line 5 in Figure 2) is the same as the target line (blue at line 7 in Figure 2). The diff tools can be confused and mistakenly match the newly added line 5 with i++ in the old version instead of with line 7. In the third case, there are non-semantic changes, e.g., changing a function name between the two versions or adding a new comment to the target line. As an example, in Figure 3, fprintf is changed to checked_fprintf, and the text diff tools cannot match them. In Figure 4, a statement at line 7 did not change at all in the second version, but an extra comment was added. These cases can challenge the warning cascading tools.

### 2.2. What is a useful and successful warning cascading?

In the problem of warning cascading, the tool takes static warnings generated from one version and determines if there is a _match_ for the warnings in another version. We consider a warning to be successfully cascaded in the following two cases.
First, the cascading tool reports the two warnings as _same "bugs"_ if both versions contain the same "bugs" located at the target line (we say the "bugs" in two versions are the same if the sequences of root-cause statements along the paths are semantically equivalent). In this case, the warnings have been reviewed in the old version, and developers do not need to further investigate them. Here, a "bug" is not confirmed but is the output warning from static analysis tools; "bugs" can be false positives, and we can match them if the newly added changes do not affect the semantics of the warnings.

Figure 1. Challenge: many lines (lines 9–17) are added before the target line (line 25)

Figure 3. Challenge: new name is applied during refactoring

A variant of the first case is that the root-cause statements of the "bug" are not exactly the same but have some changes; for a successful cascading, the tool should report the two warnings as _relevant "bugs"_, so that developers can inspect the two warnings together for diagnosis.

In the second case, one version contains the "bug" at the target line and the other version added a fix for the "bug", so there is no longer a warning reported for this "bug" in the second version. Here, a successful cascading should report a _"bug" fix_. This case includes a special situation where the buggy code is deleted in the second version. The warning cascading in this case is useful especially when the second version aims to fix the bugs in the first version. Warning cascading is able to help determine whether the issues in the first version are likely addressed in the new versions, and what new issues are added in the new version.

If the cascading tools fail to match the same "bugs" (in the first case) or match any "bug" in one version with irrelevant "bugs" in another version (in both the first and second cases), we consider such cascading unsuccessful. In our evaluation, we used benchmarks with known ground-truth bugs and fixes to evaluate such metrics. For the real-world benchmarks where there is no ground truth, we performed manual inspection to determine if the warning cascading is successful (details in §4).

## 3. Three techniques of cascading warnings

In this paper, we used three different types of program differencing tools, namely _textual based diff_, _AST based diff_ and _CFG based diff_, for warning cascading. Specifically, _SCALe_ is a tool developed by CERT that uses Unix diff for cascading warnings. _GumTree_ is a syntactic differencing tool based on abstract syntax trees (ASTs). _Hydrogen_ compares programs based on control flow graphs integrated in an MVICFG. We extended GumTree and Hydrogen for warning cascading. We compare the output of these tools to understand the pros and cons of these techniques.
We hope our findings can help developers better select warning cascading tools and more efficiently improve their code quality in continuous integration. In the following, we provide some technical details of the three tools.

Figure 4. Challenge: add a comment

### Textual-based diff tool: SCALe

SCALe [5] takes warnings reported by multiple static analysis tools and first groups the warnings of the same issue (reported by different static analysis tools) into one warning. It then applies the textual _diff_ tool (footnote 4) to map lines between versions and assess whether the source code line associated with a warning has undergone modification in the updated version.

Footnote 4: [https://www.gnu.org/software/diffutils](https://www.gnu.org/software/diffutils)

To apply SCALe, we deployed a docker-based virtual machine (footnote 5). For each program analyzed, the static analysis tools were run, and the output was formatted in a way that SCALe could process and understand. These output files were uploaded into SCALe via its web-based interface. The warnings of the second version of the program are added in the same way. To compare the different versions of the program, we leveraged the browser interface's built-in functions and employed the _diff_ tool for cascading the differences. This was executed internally by the browser's backend, which allowed for the comparison of the program versions and performed warning cascading on them. After that, the output is available on the GUI page, and the information contains the _verdict_ value of each warning.

Footnote 5: [https://github.com/cmu-sei/SCALe/tree/scaife-scale](https://github.com/cmu-sei/SCALe/tree/scaife-scale)

If the warning's verdict value is true, there is no adjudication on the previous version for the line corresponding to the warning, and there is a matched warning in the first version. If the value is false, there is a change relative to the previous version, and the warning in the second version is not matched. We review any warnings present in the first version but not matched as the _same "bug"_ in the second version and label them as _"bug" fix_ cascading. We implemented a script to automate the process of generating warnings from multiple static analysis tools, uploading the results to SCALe, and cascading warnings between two versions of a program.
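To make the textual approach concrete, below is a minimal sketch of diff-based line mapping in Python. It is our own illustration of the idea behind SCALe's cascading, not SCALe's actual implementation or API; the warning tuple layout `(file, line, cwe)` is an assumption.

```python
# A minimal sketch of textual (diff-based) warning cascading, in the spirit of
# SCALe. The warning layout (file, line, cwe) is an illustrative assumption.
import difflib

def map_lines(old_src: str, new_src: str) -> dict:
    """Map 1-based line numbers of textually unchanged lines, old -> new."""
    matcher = difflib.SequenceMatcher(None, old_src.splitlines(),
                                      new_src.splitlines())
    mapping = {}
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":  # only identical blocks keep a line correspondence
            for off in range(i2 - i1):
                mapping[i1 + off + 1] = j1 + off + 1
    return mapping

def cascade_textual(warnings_v1, old_src, new_src):
    """Split v1 warnings into cascaded (line survived) and unmatched ones."""
    mapping = map_lines(old_src, new_src)
    cascaded, unmatched = [], []
    for file, line, cwe in warnings_v1:
        if line in mapping:
            cascaded.append(((file, line, cwe), mapping[line]))
        else:
            unmatched.append((file, line, cwe))  # candidate "bug" fix
    return cascaded, unmatched
```

As the motivating examples suggest, any edit near the target line (added lines, renamed identifiers) breaks this purely textual mapping even when the warning semantics are unchanged, which is exactly the weakness analyzed in §4.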
### AST-based diff tool: GumTree

GumTree [4] is a syntactic differencing tool that operates on the abstract syntax tree (AST). Unlike SCALe, which uses the diff tool to compare file versions at the text line level, GumTree parses each file version into an AST representation and directly matches the nodes of the two AST versions. By utilizing the AST representation, GumTree is able to bypass the influence of minor changes that may surround warnings, such as changes in spacing or refactoring of variable names, which are unlikely to affect the warnings. When the warnings in the two versions correspond to aligned nodes identified by GumTree, we consider the warning cascaded to the new version.

To perform syntactic warning cascading, we built a custom client interface for the code release of GumTree (footnote 6). GumTree's objective is to compute an _edit script_, a sequence of edit actions made to a source file, which is short and close to the developer's intent. It follows a two-step process. Step 1 computes mappings between similar nodes in the two ASTs; the main contribution of GumTree is to maximize the number of mapped nodes. Step 2 deduces the edit script from the AST mapping using the algorithm of Chawathe et al. [6]. We only used the AST mapping and did not compute the edit script, since our application only needs to match the AST nodes between two versions of a program.

Footnote 6: [https://github.com/GumTreeDiff/gumtree](https://github.com/GumTreeDiff/gumtree)

Given two versions of a source code file \(P_{1}\) and \(P_{2}\), the GumTree parser produces their respective ASTs \(T_{1}\) and \(T_{2}\). Then, GumTree computes the mapping \(M_{T}\) between the similar nodes in \(T_{1}\) and \(T_{2}\). Finally, our client checks the mapping \(M_{T}\) to determine the set of warnings to cascade. Algorithm 1 defines our cascading algorithm built on top of GumTree. Since all warnings are placed on concrete lines in the code, we traverse only the leaf nodes of the AST. A warning is cascaded between nodes \(t_{1}\in T_{1}\) and \(t_{2}\in T_{2}\) if and only if there is at least one warning on the same line as both \(t_{1}\) and \(t_{2}\) (line 3), \(t_{1}\) is mapped to \(t_{2}\) by GumTree (line 4), and the warnings attached to \(t_{1}\) and \(t_{2}\) have the same CWE (Common Weakness Enumeration) condition (line 5).

Similar to SCALe's implementation of diff cascading, our implementation uses the results of running GumTree to cascade the warnings without modifying the GumTree AST parser and differencing algorithm. Our implementation preserves the stable and efficient implementation and adds only the small overhead necessary for cascading warnings.

**Algorithm 1** Cascade Warnings with GumTree
```
Input:  ASTs of the two versions T1, T2; node mapping M_T = {(t1, t2)}
Output: M_W = {(w1, w2)} matched from T1 to T2
1: M_W <- {}
2: for w1 in W1 do
3:   for t1 in leaves(T1) such that t1.line = w1.line do
4:     if (t1, t2) in M_T then
5:       M_W <- M_W union {w2 in W2 | t2.line = w2.line and w1.condition = w2.condition}
6: return M_W
```
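The following Python sketch restates Algorithm 1 under simplified data structures; the `Node` and `Warn` classes are stand-ins for GumTree's real AST node and warning objects, which carry more information.

```python
# A sketch of Algorithm 1 with simplified stand-ins for GumTree's structures.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    id: int
    line: int          # source line the leaf node sits on

@dataclass(frozen=True)
class Warn:
    line: int
    condition: str     # CWE condition attached to the warning

def cascade_gumtree(w1_set, w2_set, leaves_t1, mapping):
    """mapping is GumTree's M_T restricted to a dict: T1 leaf -> T2 node."""
    matched = []
    for w1 in w1_set:
        for t1 in (n for n in leaves_t1 if n.line == w1.line):   # line 3
            t2 = mapping.get(t1)
            if t2 is None:                                        # line 4
                continue
            matched += [(w1, w2) for w2 in w2_set                 # line 5
                        if w2.line == t2.line
                        and w2.condition == w1.condition]
    return matched
```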
### CFG-based diff tool: MVICFG

Hydrogen [3] is a tool that differences programs with regard to their control flow. It first generates the ICFG based on the IR produced by LLVM [7]. It then combines the ICFGs of the versions into one via a graph union. The nodes and edges are shared across multiple versions and are marked with the versions they belong to. In the end, it builds a program representation for multiple versions of ICFGs, called the MVICFG, which shows the different control flows and paths between two program versions.

To perform warning cascading, we developed an extension to Hydrogen's original algorithm. This extended algorithm utilizes various graph traversals to detect the cascading of warnings, shown in Algorithms 2 and 3. Algorithm 2 takes as input the two program versions (\(V_{1}\) and \(V_{2}\)) and their respective collections of static warnings (\(SW_{1}\) and \(SW_{2}\)). The algorithm outputs the matched warnings \(W_{m}\) (cascaded "bugs") and the unmatched warnings \(W_{u}\) (cascaded fixes). We generate the MVICFG of the two versions of the program at line 3 of Algorithm 2. Then we embed the warnings from both \(SW_{1}\) and \(SW_{2}\) into the MVICFG at line 4, based on their locations in the code, including the file path, file name, function name, and line number. After embedding, each warning contains metadata that specifies the version from which it originates, the type and message associated with the warning, and its node in the MVICFG.

At lines 5 and 6, we iterate through all the warnings from \(SW_{1}\). For each warning, we obtain its corresponding MVICFG node based on the metadata provided by the warning, and we then check whether node \(n\) is a common node shared across the two versions [8]. If so, the warnings at these locations have the possibility of being cascaded. If the node is a common node and it also contains a warning from the second version, we add it to \(W_{m}\). Otherwise, if the node is not a common node, we pass it to the _CheckBetween_ function (discussed later) to further identify whether it can be cascaded.

**Algorithm 2** Cascade warnings
```
Input:  Program versions V1, V2; respective warning sets SW1, SW2
Output: Matched warnings W_m, exclusive warnings W_u
1: Initialize W_m, W_u, MVICFG
2: Function CascadeWarning(V1, V2, SW1, SW2)
3:   MVICFG <- GenMVICFG(V1, V2)
4:   EmbedInMVICFG(SW1, SW2)
5:   while SW1 != {} do
6:     Remove a warning w from SW1
7:     n <- GetNodeFromWarningData(MVICFG, w)
8:     if n.IsSharedNode and n.HasWarning_2 then
9:       Add w to W_m
```

Since a line may be marked as modified merely because of changes surrounding it, the Unix diff tool will report the line as changed. The MVICFG uses the Unix diff tool, and thus such changed lines will be represented as two different nodes. But we can compensate for this weakness of Unix diff by further checking whether the (buggy) paths leading to this line in the two versions are actually the same. If so, we can categorize the warnings as matched; see Algorithm 3, _CheckBetween_.

In Algorithm 3, at lines 2 and 3, we traverse the MVICFG to get the _divergent/convergent_ nodes nearest to \(n\). A _divergent_ node of \(n\) on the MVICFG is defined as the nearest _matched_ node (matched across the two versions) found by traversal of the predecessor edges from \(n\). A _convergent_ node of \(n\) is the nearest matched node found by traversal of the successor edges of \(n\). We provide an example to further clarify the two definitions. Figure 5 shows a snippet of an MVICFG. Node \(n_{1}\) is a matched node shared between the two versions. From this node, there are two edges, called _version branches_ in the MVICFG. Nodes \(n\) and \(n_{2}\) on the left version branch belong to version 1. Node \(n\) on the right version branch belongs to version 2. Here, \(n_{1}\) is the divergent node for \(n\) and \(n_{2}\), and \(n_{3}\) is the convergent node for \(n\) and \(n_{2}\). The two nodes \(n\) along the two version branches have the same statements. We use Algorithm 3 to mark those two as matched by leveraging the divergent and convergent nodes.

Specifically, at line 2 in Algorithm 3, FirstDivNodeInMVICFG performs a breadth-first search backwards from \(n\) and returns the divergent node with the shortest path to \(n\). Similarly, at line 3, FirstConvNodeInMVICFG performs a breadth-first search forwards from \(n\) and returns the convergent node with the shortest path from \(n\). Because \(n\) is a modified node, there will be at least one divergent node among its ancestors and at least one convergent node among its successors. At line 10, we extract the statement as a string and trim whitespace characters. Then, at line 11, StmtInMVICFG searches all nodes between \(DivN\) and \(ConvN\) for a node in \(V_{2}\) whose text exactly matches \(Stmt\) after trimming whitespace. If such a node exists and it contains a warning for the second version, we consider it matched with \(n\) and categorize the warning as cascaded into \(W_{m}\); otherwise, we add it to \(W_{u}\). A sketch of this search is given below.
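The sketch restates the divergent/convergent search at the heart of _CheckBetween_; the node interface (`preds`, `succs`, `is_shared`, `version`, `stmt`) and the `nodes_between` helper are assumptions made for illustration and do not mirror Hydrogen's actual data structures.

```python
# A sketch of the CheckBetween traversal (Algorithm 3). The MVICFG node
# interface used here (preds, succs, is_shared, version, stmt) is assumed.
from collections import deque

def nearest_matched(n, neighbors):
    """BFS from n over neighbors (preds or succs) to the closest shared node."""
    seen, queue = {n}, deque([n])
    while queue:
        for nxt in neighbors(queue.popleft()):
            if nxt in seen:
                continue
            if nxt.is_shared:          # matched across the two versions
                return nxt
            seen.add(nxt)
            queue.append(nxt)
    return None

def check_between(n, nodes_between):
    """Find a V2 node between DivN and ConvN whose trimmed statement matches."""
    div = nearest_matched(n, lambda x: x.preds)    # divergent node (line 2)
    conv = nearest_matched(n, lambda x: x.succs)   # convergent node (line 3)
    stmt = n.stmt.strip()                          # line 10
    for cand in nodes_between(div, conv):          # line 11
        if cand.version == 2 and cand.stmt.strip() == stmt:
            return cand   # caller still checks that cand carries a V2 warning
    return None
```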
## 4. Evaluation

In the evaluation, we investigate: 1. Which approaches perform the best for static warning cascading? 2. When and why does each approach not perform well?

Figure 5. Divergent and Convergent Nodes on the MVICFG

### Experimental setup

**Experiments.** We designed two experiments, namely a _ground-truth setting_ and a _real-world setting_. In the ground-truth setting, we collected a set of buggy programs where we know the location of the bugs in each program. Each buggy program has two variants: a _buggy-buggy variant_, consisting of the original buggy program and a version in which a refactoring irrelevant to the bug is introduced, and a _buggy-fix variant_, consisting of the original buggy program and a version in which the bug is fixed. We selected the buggy programs such that at least one of our static analysis tools correctly reports warnings for the known bugs. That way, we can count and analyze how these warnings are cascaded in the variants and compare the cascading results with the ground truth. In the real-world setting, we collected a set of programs consisting of real-world bugs and their fixed versions from open-source projects. We then observed the cascading of static warnings from the first version to the second version. This setting helps us understand the usefulness and challenges of cascading approaches in real-world application settings.

**Software subject selection.** To fulfill the two experiment settings, we used C programs from two benchmarks: SARD [9] and CoREBench [10]. From the SARD dataset, we used ABM and Toyota for the ground-truth setting. The two benchmarks consist of 96 pairs of synthetic programs, where static analysis tools are able to report warnings for the buggy version. CoREBench consists of a total of 12 pairs of real-world projects, including make, find, grep, and coreutils. These are open-source programs with a long contribution history of over 33k commits. Each program is documented with real-world bug reports and their corresponding fix-introducing commits. The programs represent a wide variety of project sizes, ranging from 9.4k LOC (grep) to 83.1k LOC (coreutils).

**Static analysis tools.** To generate the static warnings for cascading, we used five different tools: GCC [11], Clang [12], Cppcheck [13], Rosecheckers [14], and CodeSonar [15]. These tools are currently supported by SCALe and frequently used at CERT [16] for scanning vulnerabilities. We first used SCALe to aggregate the warnings generated from the different static analysis tools into one warning. We then cascaded the warnings using the three tools.

**Metrics and confirmation of the results.** For the benchmarks where we know the ground-truth bugs, we used the warnings reported at the buggy lines in the first version as subjects and determined the success of warning cascading based on the criteria given in Section 2.2. For the real-world programs, we sampled 12% of the total warnings from one version and manually inspected whether the warnings are cascaded successfully. For each pair of programs, we report (1) the warnings of the two versions that are matched as the same "bugs", and (2) the warnings in the first version that are removed in the second version.
We then compared the results from the three tools and performed manual inspection to evaluate whether these two types of matches are produced correctly by the three tools. For example, in case (1), a mistake is reported if the two warnings are not the same but are paired incorrectly by a tool, or if the two warnings are supposed to be matched but a tool fails to do so; in case (2), the warning is supposed to be removed in the second version but is matched with some random warning incorrectly. All the manual inspection was done by two code reviewers. Each code reviewer first inspected the cascaded warnings independently and then compared and discussed the results with the other code reviewer, so that we report confident results.

**Running experiments.** We ran all of our experiments on a RedHat 20.4 Linux distribution on a virtual machine with 32 GB of memory and 32 cores available. We implemented our tools using LLVM-8.0, Python 3, and Bash scripts.

### Results for RQ1

#### 4.2.1 Results for the ground-truth setting

For Table 1, we say a tool made a successful cascading if each warning of the buggy line in the first version is aligned with the warning of the same bug in the second version. For Table 2, we say a tool made a successful cascading if each warning of the buggy line (first version) did not find any match in the fixed version (second version). Here, the bug is fixed, and there is no warning for the same bug in the second version; thus the warning in the first version should not match any other random warnings in the second version.

As shown in Table 1, out of the 57 buggy-buggy program pairs, Hydrogen and GumTree cascaded 56 pairs successfully, followed by SCALe with 54. For the buggy-fixed pairs, as shown in Table 2, out of the total of 39 pairs of programs, Hydrogen was able to correctly cascade all 39 of them, followed by GumTree with 33 and SCALe with 8. The results based on the known ground truths show that Hydrogen outperformed the other baselines by successfully cascading 95 out of the total of 96 pairs, followed by GumTree with 89 successful pairs and SCALe with 62 successful pairs. SCALe performed very poorly at cascading warnings for buggy-fixed pairs.

#### 4.2.2 Results for the real-world setting

In this section, we compare the output generated by the three tools by cascading the warnings from the real-world benchmarks, and we display the results using Venn diagrams. We ran the static analysis tools to generate the warnings for 12 pairs of programs. After obtaining the warnings, we performed a preprocessing step before providing the warnings to the cascading tools, which included: 1) removing all the irrelevant warnings that have no effect on the execution of the programs, e.g., files from testing folders and obsolete library code, and 2) aggregating warnings from the different static analysis tools and removing all the duplicate warnings reported by different tools. After the preprocessing step, the warnings (of the first version) were reduced from 19305 to 2113. These are the warnings we used for cascading. A sketch of this preprocessing is shown below.
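The following is a minimal sketch of this preprocessing step; the warning tuple layout, the folder list, and the de-duplication key are illustrative assumptions, not the exact filters we used.

```python
# A minimal sketch of the preprocessing: filter irrelevant warnings and
# de-duplicate reports of the same issue across tools. The tuple layout and
# folder list are illustrative assumptions.
def preprocess(warnings, irrelevant_dirs=("gnulib-tests",)):
    kept, seen = [], set()
    for tool, path, line, cwe, msg in warnings:
        if any(d in path for d in irrelevant_dirs):
            continue                  # e.g., test folders, obsolete library code
        key = (path, line, cwe)       # same issue reported by several tools
        if key not in seen:
            seen.add(key)
            kept.append((tool, path, line, cwe, msg))
    return kept
```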
In Figures 6 and 7, the blue, red, and green circles represent the GumTree, Hydrogen, and SCALe results, respectively. Figure 6 is a Venn diagram showing how many warnings are cascaded as the same "bugs" between the two versions. Figure 7 shows how many warnings are cascaded as a _"bug" fix_ between the two versions. The numbers located in the intersections of circles represent the shared cascading results across multiple tools. For example, 1101 in the center of Figure 6 represents the warnings that have the same cascading across all three tools. 1254 at the intersection of the blue and red circles represents the warnings that have the same cascading between Hydrogen and GumTree, including the ones shared by all three tools. A number located in a circle alone (not in an intersection area) represents the total number of warnings cascaded by that tool, including the ones shared with other tools. For example, 1301 in the center of the red circle in Figure 6 is the total number of warnings cascaded by Hydrogen. The Venn diagrams show that the three tools share a large part of their warning cascading results, which gives confidence that these cascading results are correct.

### Results for RQ2

To answer RQ2, we further analyzed and grouped the warning cascading results from the three tools into the following categories: 1. GumTree failed to cascade; 2. SCALe failed to cascade; 3. SCALe and GumTree failed to cascade; 4. Hydrogen failed to cascade.

#### 4.3.1 GumTree failed to cascade

GumTree can fail for two reasons. The first reason is that GumTree cannot process macros in the programs. In the presence of macros, the ASTs are sometimes parsed incorrectly and do not match the source code. This prevents GumTree from matching the warnings correctly. The second reason is that GumTree uses a heuristic algorithm to map the AST nodes based on their syntax, regardless of their semantic meaning. This approach can lead to incorrect mappings of AST nodes across versions, causing warning cascading to fail.

Figure 8 shows an example where only GumTree, among the three tools, made a wrong cascading. In this example, a section of code is surrounded by conditional compilation using the macro _AMIGA. The presence of such a macro caused the AST to be parsed incorrectly. In the AST diff, this region of code is considered deleted in the second version, which fails to match the rest of the AST nodes in the two versions. SCALe and Hydrogen can both handle such a case and made a correct cascading.

GumTree also makes mistakes because of its syntactic diff algorithm. In Figure 9, we show the diffs of two functions: wrong_001 (abbreviated) and wrong_014. GumTree's AST diff algorithm incorrectly matched the first version of wrong_001 to the second version of wrong_014 (an irrelevant function), instead of the second version of wrong_001. The blue arrows show the nodes that GumTree considers matched between the two versions. The nodes a, fptr, and arr in function wrong_001 were mapped to identical nodes in the irrelevant function wrong_014, but they should have been mapped to version 2 of wrong_001. This incorrect mapping caused the warning on line 8 to be incorrectly cascaded to line 26 (version 2 of wrong_014) instead of line 10 (version 2 of wrong_001).

#### 4.3.2 SCALe failed to cascade

SCALe can generate two types of errors when performing warning cascading. First, two snippets of code may match textually, but a change in referenced elements, e.g., a change in a called function, can cause different execution behaviors; SCALe then incorrectly matches the warnings. Second, some textual differences have no impact on the program behaviors related to the warnings, but SCALe falsely reports that the warnings cannot match. In the following, we provide two examples to demonstrate the weaknesses of SCALe.

Figure 10 shows the diff between versions 401d8194 (shown in red) and 54d55bba (green) of kwset.c in Grep. The static analysis tool reports a warning at line 22 (shown in blue) for both versions: _-1 is coerced from int to unsigned long_.
SCALe, however, reports lines 7–22 as modified instead of just lines 7–18. Because the line containing the warning (line 22) is misclassified as modified, SCALe fails to cascade this warning. However, this warning should have been matched because (1) line 22 was not modified between the versions, and (2) adding the keyword "register" at lines 13–19 should not change the semantics related to this type of bug.

Table 3. Successful cascading of the same "bug" in real-world program pairs

| Benchmark | Total | Hydrogen | SCALe | GumTree |
| --- | --- | --- | --- | --- |
| find | 33 | 33 | 31 | 32 |
| grep | 22 | 22 | 2 | 20 |
| make | 25 | 25 | 25 | 20 |
| coreutils | 52 | 52 | 41 | 27 |
| Total | 132 | 132 | 99 | 99 |

Table 4. Successful cascading of "bug" fixes in real-world program pairs

| Benchmark | Total | Hydrogen | SCALe | GumTree |
| --- | --- | --- | --- | --- |
| find | 21 | 20 | 24 | 1 |
| grep | 24 | 24 | 21 | 20 |
| make | 10 | 10 | 10 | 10 |
| coreutils | 65 | 31 | 57 | 60 |
| Total | 120 | 85 | 112 | 91 |

Figure 8. GumTree failed due to macro

Figure 11 shows an example where only SCALe, among the three tools, made a wrong cascading. Similar to the example shown in Figure 10, this program has a newly added line below the target line that we aim to cascade. Due to this change, the Unix diff reported that the target line had changed in the new version, which caused SCALe to fail to match warnings that the AST- and CFG-based diff tools cascade successfully.

#### 4.3.3 SCALe and GumTree failed to cascade

Figure 12 illustrates a scenario in which both GumTree and SCALe made incorrect warning cascadings. In the case of GumTree, the failure is due to a large macro that surrounds the warning location in the second version. This macro makes it difficult for GumTree to parse the code into an AST, which results in a failure to match the warnings. In the case of SCALe, the failure is caused by a difference in the text located directly above the warning statement. This difference affects the results of the Unix diff tool but does not change the semantics of the code. Hydrogen, on the other hand, is not affected by such changes because 1) it successfully parses the code within the macro and builds it into the MVICFG, and 2) it uses control flow graphs to perform the diff. It can confirm that this change does not affect the program's control flow and semantics, and thus marks the blue statement as a matched warning in the MVICFG (see Algorithm 2).

#### 4.3.4 Hydrogen failed to cascade

The main reason Hydrogen fails in some cases is that it uses LLVM to compile the program, and code that cannot be handled by LLVM is excluded from the MVICFG: for example, if statements without brackets, statements spanning multiple lines (these account for 46.8% of the undetected cascadings for Hydrogen), internal functions used for libraries, and some conditional compilation code not covered by the build. Hydrogen also uses a textual diff tool to build the MVICFG and thus sometimes has disadvantages similar to SCALe's.

Figure 9. GumTree failed to cascade Toyota warnings because the AST diff algorithm cannot align the ASTs of the two versions correctly

Figure 10. SCALe failed to cascade grep warnings

Figure 11. SCALe fails to cascade

Figure 12. GumTree and SCALe fail to cascade
In Section 2.1, Figure 3 shows a snippet of code where the function fprintf is refactored to checked_fprintf. In this example, the text of the related statement has changed between the two versions, but the functionality remains the same; thus the warnings should be cascaded as the same "bug". Hydrogen fails to cascade this warning correctly because the target line has been marked as modified (an unmatched node in the MVICFG). If the textual statement had not changed, Hydrogen could possibly make a correct detection using Algorithm 3. However, since the function names in the statements have changed, Hydrogen is not able to handle it. GumTree is able to make a correct cascading by leveraging the AST structures.

## 5. Threats to Validity

To address the external threats to validity, we used 12 pairs of real-world C projects as well as 96 pairs of benchmark programs where we know the ground truth. We applied a set of static analysis tools often used by CERT to make sure we generated all types of practical static warnings. The benchmarks also had varying numbers of commits between versions to ensure heterogeneous diffs. To address the internal threats to validity, we first inspected the output of the three tools to make sure the implementations of the warning cascading algorithms are correct. We inspected 100% of the cascaded warnings from the ground-truth benchmarks and 12% for the real-world programs, with two code reviewers confirming our findings.

## 6. Related Work

There have been many works that focus on matching and prioritizing warnings or faults between multiple versions of a program [17; 18; 19; 20; 21; 22; 23; 24; 25]. [17; 18; 19; 24] use GNU diff, ASTs, and Verification Modulo Versions to provide matching, while [21; 22] use source control revisions to prioritize static warnings. To the best of our knowledge, there has been no study of tools for warning cascading.

Spacco et al. [20] developed two methods to match warnings in the static analysis tool FindBugs at the line granularity level. The first approach, 'pairing', matches warnings based on their source code location. First, it identifies exact matches of package, class, and method name. Then, for those warnings that do not match exactly, the approach uses progressively 'fuzzier' criteria. The second approach is called 'warning signatures'. This approach transforms each warning into a string format that includes information about the warning, and then matches string-formatted warnings with the same MD5 hash. Both the 'pairing' and 'warning signature' methods perform best when using a single static analyzer and use textual diff to identify places to do the cascading.

Logozzo et al. [18] present a solution to the common problem of verifying software with multiple versions. To tackle this challenge, the authors introduce a novel verification framework called Verification Modulo Versions (VMV), which is specifically designed to enhance the efficiency and effectiveness of software verification for multi-version systems. While this work does verification and relies on their own framework, our work tackles cascading warnings generated by off-the-shelf static analyzers and compares the usefulness and efficiency of different cascading methods independently of the specific analyzer.

Palix et al. [19] build an AST based on code changes to improve the tracking of changes, similar to GumTree [4]. However, their work focuses on tracking changes between multiple versions, while our work focuses on studying how different change-tracking methodologies affect warning cascading.
Finally, there are many methods to track changes [4; 26; 27; 28; 29; 30], which can be roughly separated into textual, syntactic, and semantic methods. None of them directly deals with the problem of matching warnings between versions, but some of them are used in other works about matching warnings. Here, we discuss the state of the art in each category. Syntactic methods such as GumTree [4] work at the granularity of ASTs, which reflect the source code structure and hence can be more precise than textual diff. ASTs help avoid common pitfalls of textual-based diff like missed refactoring-based changes and spacing issues. Yang et al. [26] developed a syntactic comparison method for dynamic programming languages like Scheme. Similar to GumTree, Fluri et al. [28] also proposed an AST-based approach using a tree-differencing algorithm to detect source code changes. We chose GumTree because it is recent, open source, and has been widely used in other works. Huang et al. [31] present an approach called ClDiff to link code differences with the aim of simplifying code review. While these works aim to improve the tracking of changes between versions, our work focuses on studying the impact of using different tracking methods for cascading warnings.

## 7. Conclusions and Future Work

Cascading static warnings is a practical but challenging problem. This paper applied three tools to explore their pros and cons in addressing this problem. We found that SCALe, the textual diff based tool, fails when there are textual changes that do not change the semantics related to the bugs. It also fails when referenced calls or global variables have changed outside the current functions. GumTree has the weakness of not being able to handle macros, and its AST tree matching algorithm faces some failures due to its heuristic nature. Hydrogen relies on LLVM and cannot process all the code in the repositories due to the requirement of building the project. It uses a textual diff tool to build the MVICFG and sometimes has disadvantages similar to SCALe's. In the future, we plan to integrate more static analysis tools like CodeSonar. Such tools produce paths as static warnings, and we envision that CFG-based diffs can have greater advantages.
2308.12922
Electron trapping in graphene quantum dots with magnetic flux
It is known that the appearance of Klein tunneling in graphene makes it hard to keep or localize electrons in a graphene-based quantum dot (GQD). However, a magnetic field can be used to temporarily confine an electron that is traveling into a GQD. The electronic states investigated here are resonances with a finite trapping time, also referred to as quasi-bound states. By subjecting the GQD to a magnetic flux, we study the scattering phenomenon and the Aharonov-Bohm effect on the lifetime of quasi-bound states existing in a GQD. We demonstrate that the trapping time increases with the magnetic flux, sustaining the trapped states for a long time even after the flux is turned off. Furthermore, we discover that the probability density within the GQD is also clearly improved. We demonstrate that the trapping time of an electron inside a GQD can be successfully extended by adjusting the magnetic flux parameters.
Mohammed El Azar, Ahmed Bouhlal, Abdulaziz D. Alhaidari, Ahmed Jellal
2023-08-24T16:54:41Z
http://arxiv.org/abs/2308.12922v1
# Electron trapping in graphene quantum dots with magnetic flux

###### Abstract

It is known that the appearance of Klein tunneling in graphene makes it hard to keep or localize electrons in a graphene-based quantum dot (GQD). However, a magnetic field can be used to temporarily confine an electron that is traveling into a GQD. The electronic states investigated here are resonances with a finite trapping time, also referred to as quasi-bound states. By subjecting the GQD to a magnetic flux, we study the scattering phenomenon and the Aharonov-Bohm effect on the lifetime of quasi-bound states existing in a GQD. We demonstrate that the trapping time increases with the magnetic flux, sustaining the trapped states for a long time even after the flux is turned off. Furthermore, we discover that the probability density within the GQD is also clearly improved. We demonstrate that the trapping time of an electron inside a GQD can be successfully extended by adjusting the magnetic flux parameters.

## I Introduction

An arrangement of carbon atoms in a hexagonal shape, which forms covalent chemical bonds, makes up the two-dimensional material known as graphene. It is one of the most remarkable materials due to its unique physical (mechanical, electrical, and thermal) properties and the fact that its charged particles behave as massless Dirac fermions at low energies [1; 2]. As a result, electrons with typical nonrelativistic energy may now be used to investigate relativistic effects. Since the creation of graphene, and thanks to its extraordinary electrical properties, many studies have been carried out into how it interacts with external fields [3; 4]. Surprisingly, graphene exhibits a variety of special properties showing that it can provide the ideal framework for the study of fundamental physics, such as the quantum Hall effect [5; 6; 7], the Aharonov-Bohm effect [8; 9; 10], Landau quantization [11; 12], the Hofstadter butterfly spectrum [13], etc.

Another exciting area of condensed matter physics is the prospect of localizing electrons in a given region of space despite the Klein tunneling phenomenon associated with massless spinors [14; 15]. Klein tunneling is a relativistic phenomenon involving full transmission of electrons over a potential barrier, whatever the height of the barrier, which complicates the use of finite-dimensional circular quantum dots to trap electrons. However, in the case of GQDs subjected to specific favorable circumstances brought about by applying external electrostatic potentials, electron trapping has been documented for brief periods of time [16; 17; 18; 19]. Nonetheless, quasi-bound states are the electronic states of interest here. Such a state is often characterized by a finite lifetime (trapping time), in contrast to true bound states that live forever, as in the case of an atom. For example, the brief confinement of the electron in the GQD is called quasi-localization, which disappears due to the Klein tunneling effect that causes electrons to leave the GQD. In addition, it has been demonstrated that a mass term [20], twisted light [21], magnetic fields [22; 23; 24], or polarized light [25] can be utilized to create quasi-bound states in GQDs and prolong their duration (trapping time). Based on these studies, and in particular [20; 26], we investigate the effect of a magnetic flux on the electron scattering phenomenon in a GQD placed in a uniform magnetic field and on how it affects the trapping time (lifetime) of quasi-bound states existing in a GQD.
This can be achieved by analyzing the scattering efficiency \(Q\), the probability density \(\rho\), and the trapping time \(\tau\) of electrons in GQDs. For that, we first determine the solutions of the Dirac equation and use the continuity condition at the interface to obtain the associated scattering efficiency outside and inside the GQD. Subsequently, we derive the trapping time from the imaginary part of the complex energy of the trapped electrons. Our numerical results show that an increase in the magnetic flux affects different physical quantities: (1) the scattering efficiency \(Q\) reaches non-vanishing and significant values at zero magnetic field, (2) quasi-bound states start to be generated at low values of the GQD radius, (3) the probability of finding the electron inside the GQD is significantly improved, and (4) the trapping time of the quasi-bound states is significantly extended.

The structure of this paper is as follows. The exact solutions of the Dirac equation are derived in Sec. II for an electron passing through a magnetic GQD placed in a magnetic flux. The analysis of electron scattering in the present system is briefly presented in Sec. III, where we also specify the metrics needed to describe the scattering process. In Sec. IV, we present our numerical results based on the theory introduced in the previous sections. This numerical analysis highlights the main conclusions and provides clear justifications of our findings. Finally, we summarize our main results in the last section.

## II Theoretical model

Let us consider a graphene quantum dot (GQD) subjected to a magnetic flux, which is made up of two regions as depicted in Fig. 1. We suggest the following single-valley one-electron Hamiltonian to characterize the system:

\[H=v_{F}\vec{\sigma}\cdot(\vec{p}-e\vec{A}) \tag{1}\]

where \(v_{F}=10^{6}\) ms\({}^{-1}\) is the Fermi velocity, \((-e)\) is the electron charge, and \(\vec{\sigma}=(\sigma_{x},\sigma_{y})\) are the Pauli matrices. The vector potential \(\vec{A}=\vec{A_{1}}+\vec{A_{2}}\) is chosen to be a linear combination of the symmetric gauge and the magnetic flux written in the polar coordinates \((r,\varphi)\) [27]

\[\vec{A_{1}}=\frac{Br}{2}\vec{\varphi},\quad\vec{A_{2}}=\frac{\phi_{AB}}{2\pi r}\vec{\varphi} \tag{2}\]

where \(\vec{\varphi}\) is the azimuthal unit vector. At this point, it is worthwhile emphasizing that outside the GQD, \(\vec{A}\) does not have to vanish but could be a nonphysical gauge field of the form \(\vec{A}=\vec{\nabla}F\), where \(F\) is some scalar space-time function. Because of the cylindrical symmetry, we can write the Hamiltonian in polar coordinates, knowing that \(\sigma_{r}=\cos\varphi\sigma_{x}+\sin\varphi\sigma_{y}\) and \(\sigma_{\varphi}=-\sin\varphi\sigma_{x}+\cos\varphi\sigma_{y}\). This yields

\[H=\begin{pmatrix}0&-i\hbar v_{F}e^{-i\varphi}\left[\partial_{r}-\frac{i}{r}\partial_{\varphi}-\frac{eBr}{2\hbar}-\frac{e\phi_{AB}}{2\pi\hbar r}\right]\\ -i\hbar v_{F}e^{i\varphi}\left[\partial_{r}+\frac{i}{r}\partial_{\varphi}+\frac{eBr}{2\hbar}+\frac{e\phi_{AB}}{2\pi\hbar r}\right]&0\end{pmatrix} \tag{3}\]

Since the Hamiltonian (3) commutes with the total angular momentum \(J_{z}=-i\hbar\partial_{\varphi}+\frac{\hbar}{2}\sigma_{z}\), we look for the eigenspinors that form a common basis of \(H\) and \(J_{z}\). They are

\[\psi(r,\varphi)=e^{im\varphi}\binom{\chi^{A}(r)}{ie^{i\varphi}\chi^{B}(r)} \tag{4}\]

where the azimuthal quantum number \(m=0,\pm 1,\pm 2,\cdots\).
These will play a crucial role in analyzing the scattering phenomenon associated with the system. To get the solutions of the problem, we solve the energy eigenvalue equation in each region. Indeed, for the region outside the GQD, where \(B=0\) and \(\phi_{AB}=0\), electrons scatter off the GQD of radius \(R\) in the absence of a magnetic flux.

Figure 1: (color online) A graphene quantum dot (GQD) of radius \(R\) is confined by a constant magnetic field \(B\) and exposed to a magnetic flux \(\phi_{AB}\) in the \((x,y)\)-plane. The plane wave \(\psi_{k}^{i}\) describes the state of the incident electron. Either an electron with energy \(E\) is transmitted (wave function \(\psi_{q}^{t}\)) or reflected (wave function \(\psi_{k}^{r}\)).

Consider an incident electron beam traveling in the \(x\)-direction under normal incidence with energy \(E=\hbar v_{F}k\), where \(k\) is the wave number. Consequently, a plane wave may be used to describe the incident electron

\[\psi_{k}^{i}(r,\varphi)=\frac{1}{\sqrt{2}}\sum_{m=-\infty}^{\infty}i^{m}e^{im\varphi}\binom{J_{m}(kr)}{ie^{i\varphi}J_{m+1}(kr)} \tag{5}\]

where \(J_{m}(z)\) is the Bessel function of the first kind, and the incident boundary conditions give the upper component as \(\frac{1}{\sqrt{2}}e^{ikx}=\frac{1}{\sqrt{2}}e^{ikr\cos\varphi}\). Moreover, the scattering boundary conditions lead to the following form of the reflected electron wave function, which splits into partial waves [28; 29; 30; 31]

\[\psi_{k}^{r}(r,\varphi)=\frac{1}{\sqrt{2}}\sum_{m=-\infty}^{\infty}a_{m}^{r}i^{m}\binom{H_{m}(kr)e^{im\varphi}}{iH_{m+1}(kr)e^{i(m+1)\varphi}} \tag{6}\]

where \(H_{m}(x)=J_{m}(x)+iY_{m}(x)\) are the Hankel functions of the first kind, i.e., linear combinations of the Bessel functions \(J_{m}\) and the Neumann functions \(Y_{m}\). The coefficients \(a_{m}^{r}\) can be determined by using the asymptotic behavior

\[H_{m}(x)\underset{x\gg 1}{\sim}\sqrt{\frac{2}{\pi x}}e^{i(x-\frac{m\pi}{2}-\frac{\pi}{4})}. \tag{7}\]

As for the region inside the GQD, which includes the magnetic field and the magnetic flux, one can obtain the transmitted solution starting from the Dirac equation \(H\psi_{q}(r,\varphi)=E\psi_{q}(r,\varphi)\), which gives

\[\left(\partial_{r}-\frac{m-\mu}{r}+\frac{r}{2l_{B}^{2}}\right)\chi_{q}^{A}(r)=-q\chi_{q}^{B}(r) \tag{8a}\]
\[\left(\partial_{r}+\frac{m-\mu+1}{r}-\frac{r}{2l_{B}^{2}}\right)\chi_{q}^{B}(r)=q\chi_{q}^{A}(r) \tag{8b}\]

where we have set the magnetic length \(l_{B}=(\hbar/eB)^{1/2}\), \(\mu=\phi_{AB}/\phi_{0}\), and \(\phi_{0}=h/e\). The wave number is linked to the energy by \(E=sv_{F}\hbar q\), where \(s=+1\) stands for positive-energy states (conduction band) and \(s=-1\) for negative-energy states (valence band). By injecting (8a) into (8b), we end up with a second-order differential equation for \(\chi_{q}^{A}(r)\)

\[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}+\frac{m-\mu+1}{l_{B}^{2}}-\frac{r^{2}}{4l_{B}^{4}}-\frac{(m-\mu)^{2}}{r^{2}}+q^{2}\right)\chi_{q}^{A}(r)=0. \tag{9}\]

On the other hand, by injecting (8b) into (8a), we obtain the equation for \(\chi_{q}^{B}(r)\) and conclude that the solution for \(\chi_{q}^{B}(r)\) is simply obtained from \(\chi_{q}^{A}(r)\) by the parameter maps \(m-\mu\longmapsto m-\mu+1\) and \(q^{2}\longmapsto q^{2}-\frac{2}{l_{B}^{2}}\). Therefore, the energy gap for \(\chi_{q}^{A}(r)\) is due to the mass term \(\frac{\mu-m-1}{l_{B}^{2}}\), whereas the energy gap for \(\chi_{q}^{B}(r)\) is due to the mass term \(\frac{\mu-m}{l_{B}^{2}}\).
In the limiting cases \(r\longrightarrow 0\) and \(r\longrightarrow\infty\), (9) can be written, respectively, as

\[\left(\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{(m-\mu)^{2}}{r^{2}}\right)\chi_{q}^{A}(r)=0 \tag{10}\]
\[\left(\rho\partial_{\rho}^{2}+\partial_{\rho}-\rho\right)\chi_{q}^{A}(\rho)=0 \tag{11}\]

showing that the solutions are proportional to \(r^{|m-\mu|}e^{-\rho}\), with \(\rho=\frac{r^{2}}{4l_{B}^{2}}\). These can be used to propose the following ansatz:

\[\chi_{q}^{A\pm}(r)=r^{\pm(m-\mu)}e^{-r^{2}/4l_{B}^{2}}\phi_{q}^{A\pm}(r) \tag{12}\]

as a general solution of (9), such that the "+" sign of the exponent is chosen for \(m-\mu\geq 0\), while the "-" sign is chosen for the opposite case, \(m-\mu<0\). Now, we perform the change of variable \(\eta=r^{2}/2l_{B}^{2}\), which transforms (9) into the Kummer-type differential equations for \(\phi_{q}^{A\pm}(\eta)\)

\[\eta\partial_{\eta}^{2}\phi_{q}^{A+}(\eta)+\left(m-\mu+1-\eta\right)\partial_{\eta}\phi_{q}^{A+}(\eta)+\frac{l_{B}^{2}q^{2}}{2}\phi_{q}^{A+}(\eta)=0 \tag{13a}\]
\[\eta\partial_{\eta}^{2}\phi_{q}^{A-}(\eta)+\left(1-m+\mu-\eta\right)\partial_{\eta}\phi_{q}^{A-}(\eta)+\left(m-\mu+\frac{l_{B}^{2}q^{2}}{2}\right)\phi_{q}^{A-}(\eta)=0. \tag{13b}\]

As a result, their solutions are the confluent hypergeometric functions

\[\phi_{q}^{A+}(\eta)={}_{1}F_{1}\left(-\frac{l_{B}^{2}q^{2}}{2},m-\mu+1,\eta\right) \tag{14a}\]
\[\phi_{q}^{A-}(\eta)={}_{1}F_{1}\left(-m+\mu-\frac{l_{B}^{2}q^{2}}{2},1-m+\mu,\eta\right) \tag{14b}\]

Combining all the above results, we obtain the following solutions to the second-order differential equation (9)

\[\chi_{q}^{A+}(\eta)=\eta^{|m-\mu|/2}e^{-\eta/2}\,{}_{1}F_{1}\left(-\frac{l_{B}^{2}q^{2}}{2},m-\mu+1,\eta\right) \tag{15a}\]
\[\chi_{q}^{A-}(\eta)=\eta^{|m-\mu|/2}e^{-\eta/2}\,{}_{1}F_{1}\left(-m+\mu-\frac{l_{B}^{2}q^{2}}{2},1-m+\mu,\eta\right) \tag{15b}\]

As noted below (9), the second spinor component \(\chi_{q}^{B\pm}(r)\) is obtained from the first, \(\chi_{q}^{A\pm}(r)\), by the parameter maps \(m-\mu\longmapsto m-\mu+1\) and \(q^{2}\longmapsto q^{2}-\frac{2}{l_{B}^{2}}\). However, the parameter maps do not determine the overall normalization, which can be obtained using the differential relations (8a) or (8b). As a result, we get

\[\chi_{q}^{B+}(\eta)=\frac{ql_{B}/\sqrt{2}}{|m-\mu|+1}\eta^{(|m-\mu|+1)/2}e^{-\eta/2}\,{}_{1}F_{1}\left(1-\frac{l_{B}^{2}q^{2}}{2},m-\mu+2,\eta\right) \tag{16a}\]
\[\chi_{q}^{B-}(\eta)=-\frac{|m-\mu|}{ql_{B}/\sqrt{2}}\eta^{(|m-\mu|-1)/2}e^{-\eta/2}\,{}_{1}F_{1}\left(-m+\mu-\frac{l_{B}^{2}q^{2}}{2},-m+\mu,\eta\right) \tag{16b}\]

Finally, the solution inside the GQD can be obtained from the above analysis as

\[\psi_{q}^{t}(r,\varphi)=\sum_{m=-\infty}^{\infty}a_{m}^{t}\begin{pmatrix}\chi_{q}^{A\pm}(r)e^{im\varphi}\\ i\chi_{q}^{B\pm}(r)e^{i(m+1)\varphi}\end{pmatrix} \tag{17}\]

where the coefficients \(a_{m}^{t}\) are to be determined by the boundary conditions at \(r=R\). Next, we will show how these results can be used to study the scattering problem and related matters.
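For readers who wish to evaluate these spinor components numerically, the sketch below computes (15a) and (16a) (the \(m-\mu\geq 0\) branch) with SciPy's confluent hypergeometric function; the parameter values are illustrative, not taken from a specific figure.

```python
# Numerical sketch of the interior radial components (15a) and (16a),
# valid for m - mu >= 0; parameter values below are illustrative.
import numpy as np
from scipy.special import hyp1f1
from scipy.constants import hbar, e

vF = 1e6  # Fermi velocity (m/s)

def chi_plus(r, q, m, mu, B):
    """Return (chi_A^+, chi_B^+) at radius r inside the dot."""
    lB = np.sqrt(hbar / (e * B))                 # magnetic length
    eta = r**2 / (2 * lB**2)
    a = -0.5 * (lB * q)**2
    pre = np.exp(-eta / 2)
    chi_A = eta**((m - mu) / 2) * pre * hyp1f1(a, m - mu + 1, eta)
    chi_B = (q * lB / np.sqrt(2)) / (m - mu + 1) * \
            eta**((m - mu + 1) / 2) * pre * hyp1f1(1 + a, m - mu + 2, eta)
    return chi_A, chi_B

# Example: m = 3 mode, E = 20 meV, B = 2.2 T, mu = 1/2, r = 25 nm
E = 20e-3 * e                                    # energy in joules
q = E / (hbar * vF)                              # E = hbar * vF * q
print(chi_plus(25e-9, q, m=3, mu=0.5, B=2.2))
```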
## III Scattering phenomenon

To study the scattering problem of the system, we need to determine the scattering coefficients \(a_{m}^{r}\) and \(a_{m}^{t}\) using the continuity of the eigenspinors at the boundary \(r=R\)

\[\psi_{k}^{i}(R,\varphi)+\psi_{k}^{r}(R,\varphi)=\psi_{q}^{t}(R,\varphi). \tag{18}\]

After simplification, we end up with two equations for \(a_{m}^{r}\) and \(a_{m}^{t}\)

\[\frac{1}{\sqrt{2}}i^{m}J_{m}(kR)+\frac{1}{\sqrt{2}}i^{m}a_{m}^{r}H_{m}(kR)=a_{m}^{t}\chi_{q}^{A\pm}(R) \tag{19a}\]
\[\frac{1}{\sqrt{2}}i^{m+1}J_{m+1}(kR)+\frac{1}{\sqrt{2}}i^{m+1}a_{m}^{r}H_{m+1}(kR)=ia_{m}^{t}\chi_{q}^{B\pm}(R) \tag{19b}\]

which can be solved to obtain

\[a_{m}^{t\pm}(\mu)=\frac{i^{m}}{\sqrt{2}}\ \frac{J_{m}(kR)H_{m+1}(kR)-J_{m+1}(kR)H_{m}(kR)}{H_{m+1}(kR)\chi_{q}^{A\pm}(R)-H_{m}(kR)\chi_{q}^{B\pm}(R)} \tag{20a}\]
\[a_{m}^{r\pm}(\mu)=\frac{-J_{m}(kR)\chi_{q}^{B\pm}(R)+J_{m+1}(kR)\chi_{q}^{A\pm}(R)}{H_{m}(kR)\chi_{q}^{B\pm}(R)-H_{m+1}(kR)\chi_{q}^{A\pm}(R)} \tag{20b}\]

both of which depend on the magnetic flux through \(\mu=\phi_{AB}/\phi_{0}\). Note that the following identity is used to simplify (20a) for \(a_{m}^{t\pm}(\mu)\)

\[J_{m+1}(z)H_{m}(z)-J_{m}(z)H_{m+1}(z)=\frac{2i}{\pi z}. \tag{21}\]

We close by defining the main quantities used to describe the scattering process. We consider the probability density

\[\rho=\psi^{\dagger}\psi \tag{22}\]

and the current density

\[\vec{j}=\psi^{\dagger}\vec{\sigma}\psi \tag{23}\]

where the function \(\psi\) depends on the region, with \(\psi=\psi_{q}^{t}\) inside the GQD and \(\psi=\psi_{k}^{i}+\psi_{k}^{r}\) outside. The scattering efficiency \(Q\) is obtained by dividing the scattering cross-section \(\sigma\) by the geometric size of the dot [30; 31; 32]

\[Q=\frac{\sigma}{2R}=\frac{4}{kR}\sum_{m=-\infty}^{+\infty}|a_{m}^{r}(\mu)|^{2}. \tag{24}\]

In order to estimate the trapping time (lifetime) of the quasi-bound states inside the GQD, we perform another analysis in terms of the complex incident energy [33]

\[E=E_{r}-iE_{i} \tag{25}\]

where \(E_{r}\) represents the resonance energy and \(E_{i}\) determines the trapping time \(\tau\) of the quasi-bound state; taking into account graphene's linear dispersion law, \(\tau=\frac{\hbar}{E_{i}}\). As a result, the wave number \(k\) also becomes complex

\[k=k_{r}-ik_{i} \tag{26}\]

and the trapping time is defined by

\[\tau=\frac{1}{v_{F}k_{i}}. \tag{27}\]

To perform this analysis, we use the continuity condition to determine the complex energy of the incident electron by finding the complex poles of the transmission and reflection coefficients (20). As the kinetic energy of the incident electron is unaffected by the magnetic field and magnetic flux, we assume \(q=k\) and deal with the following transcendental equation for \(k\) [28]:

\[\frac{\chi_{k}^{A\pm}(R)}{\chi_{k}^{B\pm}(R)}=\frac{H_{m}(kR)}{H_{m+1}(kR)}. \tag{28}\]

The above results will be numerically analyzed to identify the main features of the system. We plot all scattering quantities under various conditions and compare with results in the literature.
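As a numerical companion to Eqs. (20b) and (24), the sketch below evaluates the reflection coefficients and a truncated scattering efficiency. It reuses `chi_plus`, `E`, and `q` from the previous snippet, covers only the \(m-\mu\geq 0\) modes (the branch (15b)-(16b) and negative \(m\) are omitted), and the truncation at `m_max` is our own numerical choice, so the sum is partial rather than the full series in (24).

```python
# Sketch of the reflection coefficients (20b) and a truncated version of the
# scattering efficiency (24). Only modes with m - mu >= 0 are summed here.
import math
from scipy.special import jv, hankel1

def a_r(m, k, R, q, mu, B):
    chi_A, chi_B = chi_plus(R, q, m, mu, B)
    num = -jv(m, k * R) * chi_B + jv(m + 1, k * R) * chi_A
    den = hankel1(m, k * R) * chi_B - hankel1(m + 1, k * R) * chi_A
    return num / den

def efficiency_partial(k, R, q, mu, B, m_max=10):
    m_min = math.ceil(mu)        # keep m - mu >= 0 for chi_plus
    total = sum(abs(a_r(m, k, R, q, mu, B))**2
                for m in range(m_min, m_max + 1))
    return 4.0 / (k * R) * total

# Example: E = 20 meV incident electron (outside the dot E = hbar*vF*k, so
# k equals the q computed above), R = 50 nm, B = 2.2 T, mu = 1/2
k = q
print(efficiency_partial(k, 50e-9, q, mu=0.5, B=2.2))
```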
## IV Results and discussions

We present the main numerical results to describe the electron scattering phenomenon in a practical way. To this end, we make the analysis in terms of scattering modes, where each mode corresponds to an angular momentum number \(m\in\mathbb{Z}\). We will concentrate on the modes involved in the scattering process and neglect the others. In Fig. 2, the incident energy is \(E=20\) meV. In Figs. 2(a,b,c), we plot the scattering efficiency \(Q\) as a function of the magnetic field intensity \(B\) and the radius \(R\) of the GQD for the AB-flux values \(\mu=0,1/2,3/2\), and in Figs. 2(d,e), where \(B\) takes the values 1.2 T and 2.2 T respectively, we plot \(Q\) as a function of the radius \(R\) for the same magnetic flux values \(\mu\).

In Fig. 2a, we can clearly see that in the absence of magnetic flux, the interaction is very weak in the radius range from 0 to 32 nm. This radius range is reduced when we increase the magnetic flux, as shown in Figs. 2(b,c), and above this radius range, wide and narrow bands start to appear. The notable increase in the scattering efficiency \(Q\) is due to the excitation of specific scattering modes, each corresponding to a state of angular momentum number \(m\). In Figs. 2(d,e), we see that as the magnetic flux increases, the most relevant resonance peaks are shifted to smaller values of the GQD radius \(R\). We also see that the scattering efficiency improves with increasing magnetic flux, reaching a maximum value of 8.4 at \(B=1.2\) T and \(\mu=3/2\), as shown in Fig. 2d, and 9.5 at \(B=2.2\) T and \(\mu=3/2\), as shown in Fig. 2e.

In Fig. 3, we plot the scattering efficiency \(Q\) as a function of the incident energy \(E\) and the magnetic field \(B\) for a GQD of radius \(R=50\) nm and three values of the magnetic flux, (a): \(\mu=0\), (b): \(\mu=1/2\), and (c): \(\mu=3/2\). In Fig. 3a, with \(\mu=0\), we observe six somewhat oscillating bands of very large values of \(Q\), which correspond to the angular momentum numbers \(m=0,1,2,3,4,5\). It is clearly seen that there is no interaction below the magnetic field value \(B\approx 1\) T. In Fig. 3b, one again sees the appearance of six bands, but the interaction inside the GQD starts at smaller values of \(B\) compared to the results exhibited in Fig. 3a. Fig. 3c shows the suppression of the band corresponding to the \(m=0\) scattering mode and the appearance of a narrow band corresponding to the \(m=6\) scattering mode. The most important point to note here is that the interaction is significant even in the absence of a magnetic field (\(B=0\) T), as displayed in Fig. 3c.

Figure 2: (color online) (a,b,c): Scattering efficiency \(Q\) plotted versus the radius \(R\) and the magnetic field \(B\) for an incident energy \(E=20\) meV and different values of the magnetic flux (a): \(\mu=0\), (b): \(\mu=\frac{1}{2}\), (c): \(\mu=\frac{3}{2}\). (d,e): Scattering efficiency \(Q\) as a function of \(R\) for \(E=20\) meV, \(\mu=0\) (red), \(\mu=\frac{1}{2}\) (green), \(\mu=\frac{3}{2}\) (blue), and (d): \(B=1.2\) T, (e): \(B=2.2\) T.

Figure 3: (color online) Scattering efficiency \(Q\) as a function of magnetic field \(B\) and incident energy \(E\) for \(R=50\) nm and different values of the magnetic flux (a): \(\mu=0\), (b): \(\mu=\frac{1}{2}\), (c): \(\mu=\frac{3}{2}\).

In Fig. 4, we now fix the radius at \(R=50\) nm and the energy of the incident electron at the values \(E=7,20,30\) meV, and examine the scattering efficiency \(Q\) as a function of the magnetic field \(B\) for the three values of the magnetic flux \(\mu=0,1/2,3/2\), as depicted. For \(E=7\) meV, Fig. 4a shows that at \(B=0\) T, \(Q\) is null in the absence of magnetic flux, but it starts to increase once the flux is applied; specifically, it takes the value 1 at \(\mu=3/2\). We also see that the higher the magnetic flux value, the more the large resonance peaks start to appear at smaller values of \(B\). In Figs. 4(b,c), where the energy of the incident electron \(E\) is increased, the same conclusions hold as in Fig. 4a, except that the minimum efficiency at \(B=0\) T takes significant values, reaching 4.6 at \(E=30\) meV and \(\mu=3/2\), as displayed in Fig. 4c.
Another analysis of the scattering phenomenon is shown in Fig. 5, where we plot the scattering efficiency \(Q\) as a function of the incident energy \(E\) for \(B=2.2\) T and three values of the GQD radius, \(R=40,50,60\) nm, with magnetic flux (a): \(\mu=0\), (b): \(\mu=1/2\), (c): \(\mu=3/2\). In Fig. 5, we first see that \(Q\) is always zero at \(E=0\) meV whatever the value of \(\mu\). Second, we can clearly see that as we increase \(E\), \(Q\) shows an oscillatory behavior with peaks of large amplitude at small values of \(E\). As the energy \(E\) increases further, these oscillations are damped until \(Q\) takes a roughly constant value, i.e., \(Q\approx 5\). A very important remark that can be drawn here is that the maximum value of \(Q\) increases with the magnetic flux, as clearly indicated in Figs. 5(a,b,c).

Figure 4: (color online) Scattering efficiency \(Q\) as a function of magnetic field \(B\) for the AB-flux field \(\mu=0\) (red), \(\frac{1}{2}\) (green), \(\frac{3}{2}\) (blue) and three values of incident energy \(E\): (a): \(E=7\) meV, (b): \(E=20\) meV, and (c): \(E=30\) meV.

Figure 5: (color online) Scattering efficiency \(Q\) as a function of incident energy \(E\) for magnetic field \(B=2.2\) T, radius \(R=40\) (blue), \(50\) (green), \(60\) (red) nm, and three values of the magnetic flux (a): \(\mu=0\), (b): \(\mu=\frac{1}{2}\), (c): \(\mu=\frac{3}{2}\).

Next, we carry out another study of the scattering phenomenon based on the density in real space, for which we examine in Fig. 6 the density in the near field of the GQD for an incident energy \(E=20\) meV and three values of the magnetic flux (a): \(\mu=0\), (b): \(\mu=1/2\), (c): \(\mu=3/2\). We choose the scattering mode \(m=3\), where the scattering exhibits a very sharp resonance peak, and consequently we expect notable electron trapping effects. The values of the magnetic field \(B\) are those corresponding to the peaks indicated by the labels \((1,2,3)\) in Fig. 4b; the geometry of the GQD is indicated by the black circle. In Fig. 6a, with \(B=3.8\) T and zero magnetic flux, we see that most of the electron density is concentrated inside and at the boundary of the GQD, with a high scattering efficiency. Figs. 6(b,c), with (\(B=3.34\) T, \(\mu=1/2\)) and (\(B=2.4\) T, \(\mu=3/2\)) respectively, show that the electron density inside the GQD is enhanced very clearly with the increase in magnetic flux and starts to form a very intense cloud around the center of the GQD, with an extremely high scattering efficiency; consequently, the probability of trapping the electron inside the GQD also becomes high.

We focus on the two scattering modes \(m=0\), where the scattering is non-resonant, and \(m=3\), where the scattering has a very clear resonance peak (a noticeable trapping effect), in order to study the effect of the magnetic flux on the trapping time of the electrons inside the GQD. The numerical solution of the transcendental equation (28) for each resonance allows us to determine the sets of values (\(E_{r}\), \(E_{i}\)) as well as their corresponding magnetic field \(B\), and consequently to deduce the trapping time \(\tau\).
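The complex-root search behind this procedure can be sketched with mpmath, which evaluates the Kummer and Hankel functions at complex arguments; the starting guess `k0`, like the other parameter values, is illustrative, must be placed near the resonance of interest, and convergence is not guaranteed.

```python
# Hedged sketch of solving the transcendental equation (28) for complex k and
# extracting the trapping time (27). All numerical choices are illustrative.
import mpmath as mp

hbar, e, vF = 1.054571817e-34, 1.602176634e-19, 1e6
R, m, mu, B = 50e-9, 3, 0.5, 3.34        # near the m = 3 peak of Fig. 4b

lB = mp.sqrt(hbar / (e * B))
eta = R**2 / (2 * lB**2)

def chi_ratio(k):
    """chi_A^+/chi_B^+ from (15a)/(16a); the common prefactor cancels."""
    a = -(lB * k)**2 / 2
    num = mp.hyp1f1(a, m - mu + 1, eta)
    den = (k * lB / mp.sqrt(2)) / (m - mu + 1) * mp.sqrt(eta) \
          * mp.hyp1f1(1 + a, m - mu + 2, eta)
    return num / den

def f(k):   # Eq. (28): chi_A/chi_B - H_m(kR)/H_{m+1}(kR) = 0
    return chi_ratio(k) - mp.hankel1(m, k * R) / mp.hankel1(m + 1, k * R)

k0 = mp.mpc(3e7, -1e6)                   # guess of the form k = k_r - i k_i
k_star = mp.findroot(f, k0)
tau = 1 / (vF * abs(mp.im(k_star)))      # trapping time, Eq. (27)
print(k_star, tau)
```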
Fig. 7 shows the trapping time \(\tau\) as a function of the magnetic field \(B\) for a GQD radius \(R=50\) nm, the two modes (\(m=0\), blue; \(m=3\), green), and the three values of the magnetic flux (first column: \(\mu=0\); second: \(\mu=1/2\); third: \(\mu=3/2\)). In general, we observe that the trapping time increases with the magnetic field \(B\). In contrast to \(m=3\), we notice that the trapping time in the \(m=0\) mode begins to be visible at lower values of the magnetic field \(B\), which is reinforced by the findings of [26]. The most important observation of interest in our work is that an increase in the magnetic flux leads to a notable increase in the trapping time of the electrons inside the GQD. For example, at \(B=4.5\) T, \(\tau\) takes the following values: (\(\tau=0.008\) ns at \(\mu=0\), \(\tau=0.0245\) ns at \(\mu=1/2\), and \(\tau=0.46\) ns at \(\mu=3/2\)) for \(m=0\), and (\(\tau=0.023\) ns at \(\mu=0\), \(\tau=0.10\) ns at \(\mu=1/2\), and \(\tau=6\) ns at \(\mu=3/2\)) for \(m=3\).

Figure 6: (color online) Density in real space corresponding to the scattering mode \(m=3\) for incident energy \(E=20\) meV, radius \(R=50\) nm, and different values of the magnetic flux and magnetic field (a): (\(\mu=0\), \(B=3.8\) T), (b): (\(\mu=0.5\), \(B=3.34\) T), (c): (\(\mu=1.5\), \(B=2.4\) T). The geometry of the GQD is indicated by the black circle.

Figure 7: (color online) The trapping time \(\tau\) as a function of the magnetic field \(B\) for two scattering modes, \(m=0\) (blue) and \(m=3\) (green), and three values of the magnetic flux (a,d): \(\mu=0\), (b,e): \(\mu=\frac{1}{2}\), and (c,f): \(\mu=\frac{3}{2}\).

## V Conclusion

In summary, we carried out a theoretical study of the elastic scattering of electrons in magnetic graphene quantum dots (GQDs) subjected to a magnetic flux. We showed the influence of the magnetic flux on improving the scattering efficiency and, in particular, the trapping time of quasi-bound states that can be induced in GQDs immersed in a homogeneous external magnetic field. To this end, we developed a theoretical model that describes the behavior of Dirac fermions, which allowed us to achieve our main objective. To treat the problem in detail, we first solved the Dirac equation analytically to determine the eigenspinors. Then, we carried out the scattering analysis. In this respect, we defined the physical quantities used in our study to investigate scattering, such as the scattering coefficients, the probability density, the scattering efficiency, and the trapping time. On the basis of the numerical results, we investigated the GQD system for several values of the physical parameters: the incident electron energy, the magnetic field intensity, the GQD radius, the angular momentum, and the magnetic flux. Indeed, we found that with increased magnetic flux, the scattering efficiency takes significant non-zero values at zero magnetic field, and the quasi-bound states also start to become measurable at smaller values of the GQD radius. The magnetic flux can be considered an important parameter to control the scattering and excitation of the quasi-bound states. Second, in terms of the density, we showed that it increases inside the GQD with increasing magnetic flux; as a result, the probability of trapping the electrons inside the GQD becomes very high. Finally, in terms of the complex incident energy, we showed that a significant increase in the electron trapping time inside the GQD can be achieved with an increase in the magnetic flux.
2303.01392
Pricing in Ride-sharing Markets : Effects of network competition and autonomous vehicles
Autonomous vehicles will be an integral part of ride-sharing services in the future. This setting differs from traditional ride-sharing marketplaces because of the absence of the supply side (drivers). However, it has far-reaching consequences because in addition to pricing, players now have to make decisions on how to distribute fleets across network locations and re-balance vehicles in order to serve future demand. In this paper, we explore a duopoly setting in the ride-sharing marketplace where the players have fully autonomous fleets. Each ride-service provider (RSP)'s prices depend on the prices and the supply of the other player. We formulate their decision-making problems using a game-theoretic setup where each player seeks to find the optimal prices and supplies at each node while considering the decisions of the other player. This leads to a scenario where the players' optimization problems are coupled and it is challenging to find the equilibrium. We characterize the types of demand functions (e.g.: linear) for which this game admits an exact potential function and can be solved efficiently. For other types of demand functions, we propose an iterative algorithm to compute the equilibrium. We conclude by providing numerical insights into how different kinds of equilibria would play out in the market when the players are asymmetric. Our numerical evaluations also provide insights into how the regulator needs to consider network effects while deciding regulation in order to avoid unfavorable outcomes.
Diptangshu Sen, Arnob Ghosh
2023-03-02T16:22:31Z
http://arxiv.org/abs/2303.01392v2
# Pricing in Ride-sharing Markets : Effects of network competition and autonomous vehicles

###### Abstract
Autonomous vehicles will be an integral part of ride-sharing services in the future. This setting differs from traditional ride-sharing marketplaces because of the absence of the supply side (drivers). However, it has far-reaching consequences because in addition to pricing, players now have to make decisions on how to distribute fleets across network locations and re-balance vehicles in order to serve future demand. In this paper, we explore a duopoly setting in the ride-sharing marketplace where the players have fully autonomous fleets. Each ride-service provider (RSP)'s prices depend on the prices and the supply of the other player. We formulate their decision-making problems using a game-theoretic setup where each player seeks to find the optimal prices and supplies at each node while considering the decisions of the other player. This leads to a scenario where the players' optimization problems are coupled and it is challenging to find the equilibrium. We characterize the types of demand functions (e.g.: linear) for which this game admits an exact potential function and can be solved efficiently. For other types of demand functions, we propose an iterative algorithm to compute the equilibrium. We conclude by providing numerical insights into how different kinds of equilibria would play out in the market when the players are asymmetric. Our numerical evaluations also provide insights into how the regulator needs to consider network effects while deciding regulation in order to avoid unfavorable outcomes.

## I Introduction

### _Motivation_

When Uber started operations in San Francisco in 2010, a new era of ride-sharing systems was ushered in. Ride-sharing systems are examples of the 'sharing economy', where users log onto a platform (like Uber) to request rides and the platform matches them with potential drivers nearby. Since 2010, the ride-sharing market has taken off rapidly and is expected to be worth \$456bn by the end of 2023 ([1]). The traditional ride-sharing marketplace is two-sided, and the ride-service provider (RSP) can control both supply and demand sides by using the price signal. However, competition pushes passenger fares down and driver wages up, reducing the margin for the RSPs. With significant technological advances over the last few years, autonomous vehicles are the next big step in the ride-sharing marketplace. Lyft has already introduced autonomous fleets in Las Vegas, Miami and more recently in Austin, Texas ([2, 3]). Uber also has similar plans by the end of 2022 ([4]). While such futuristic developments are exciting, they alter the dynamic of the marketplace entirely. The supply side of the marketplace is now non-existent, and platforms are required to maintain their own autonomous fleets. Hence, it is profitable for the RSPs to have such a fleet, as they do not need to incentivize drivers. A fixed supply also means that apart from pricing, there are other important considerations, such as the optimal fleet size and how to dispatch vehicles optimally. Thus, the decision-making process becomes more convoluted. In this paper, our goal is to study the following:

1. How can players with autonomous vehicles make decisions about how to price, dispatch, and re-balance effectively?
2. How will the above decisions be affected when there is competition in the marketplace?
3. What kind of effects would imposing regulations (like parking costs and congestion taxes) have on the RSPs?

The reader may wonder why these are questions worth answering. Although the first two questions have been studied extensively for the traditional ride-sharing marketplace, analysis of the case where autonomous vehicles are involved is in its nascent stage. We have already explained how the case with autonomous vehicles differs from the traditional case. Also, there are examples in recent times where replacing human components of systems with autonomous components can lead to unfavorable consequences ([5]). So it is imperative that we study these hybrid systems exhaustively before deploying them in the real world. Further, we also need to understand how to regulate those marketplaces to increase consumers' welfare. Now, we highlight why some of these problems are challenging. [6] has shown that under restrictive assumptions (linearity) on the demand function, there is an easy way to compute the equilibrium of the problem using potential functions. However, the problem becomes challenging when we relax the linearity assumption (Refer Section III). It is also not clear a priori how these hybrid systems will respond to player asymmetries and regulations. We investigate all these aspects of the problem in this paper. We answer the questions using a _duopoly setting_ where only two platforms are involved in the marketplace. While a duopoly setting might sound restrictive, we observe that many major ride-sharing markets around the world have evolved into duopolies, like Uber-Lyft in the US ([7]) and Uber-Ola in India ([8]). Further, [6] also considers a duopoly setting.

### _Contributions_

We summarize the main contributions of our paper below.

1. We consider a generic demand model (linear or non-linear in prices) to capture the dynamics of a ride-sharing marketplace. To the best of our knowledge, the existing literature considers linear demand functions, which is a restrictive assumption because, in real life, demand functions are estimated from data and are hardly ever linear.
2. Our modeling choice makes the problem more challenging because it no longer admits a potential function (one of the key solution concepts in network games). So we develop an iterative algorithm to compute the equilibrium which applies to any general demand function. Even though we cannot provide formal convergence guarantees for the algorithm at this time, empirically our algorithm converges to the equilibrium quickly.
3. We investigate the properties of the equilibrium in a variety of settings. We show through numerical experiments that when the players in the market are asymmetric and demands are unbalanced (unbalanced demand means that different nodes in the network have significantly different levels of demand), the smaller player may be forced to exit the market, meaning that they will not serve certain origin-destination pairs, leading to 'localized monopolies'.
4. We also show that network effects play significant roles in selecting regulations. This helps us to generate useful insights on how regulators can design price regulations that achieve desired outcomes like increasing passenger welfare and decreasing idle vehicles.

### _Literature Review_

In recent years, the ride-hailing marketplace has been an area of active research. The pricing problem is a consistent theme in the existing literature, and two notable papers which explore this are [9] and [10].
[9] uses a queuing theory framework for matching passengers with rides. In contrast to studying the temporal variation, [10] considers the effect of spatial variation. There has also been other work exploring pricing for ride-sharing platforms using tools like reinforcement learning ([11]). However, this line of work concerns a single platform (which operates in a traditional two-sided marketplace) and does not consider competition. Hence, those analyses cannot be extended to our setting. The network effect on a single platform or market maker has been studied in [12, 13, 14, 15]. In [12, 13] the market maker or platform procures supply across multiple locations and then transports those supplies to meet the demand. Multiple firms compete at each node as in the Cournot competition model. In [14, 15] the platform creates a network by assigning edges between the supply and the demand side, where suppliers/firms can only serve the customers connected through edges. However, in our setup, multiple platforms (RSPs) compete across multiple locations instead of a single platform. Further, the competition model we consider is different from the networked Cournot model considered in the above papers. More recently, researchers have considered how to optimally re-balance (send idle vehicles from one location to serve the demand at other locations) [16] and how to select prices [17] when the RSP has autonomous vehicles. [18] considers the setup where the RSP selects optimal prices and rebalances the vehicles jointly. However, these papers do not consider the effect of competition among multiple RSPs. There has also been some work that considers competition among ride-sharing platforms. [19] explores how competition on the supply side of the marketplace can affect driver wages and passenger welfare. However, as we pointed out earlier, competition with autonomous vehicles is a different scenario. In terms of context, the works closest to ours are [20] and [6]. [20] looks at price competition in a duopoly setting with two platforms that own fully autonomous fleets. However, it assumes that the platforms have identical operation costs, which leads to a symmetric equilibrium (i.e., identical prices for both players for each source-destination pair). The firms may, however, differ significantly in terms of fleet sizes, which can lead to asymmetric equilibria; we consider this in our paper, but it has not been taken into account in [20]. This allows us to get insights into the properties of asymmetric equilibria. As mentioned earlier, [6] also investigates the competition between two ride-sharing platforms with autonomous fleets. However, [6] considers a linear model. Compared to both [6, 20], we consider generic demand functions. We also provide an iterative algorithm for finding the equilibrium under these generic demand functions. For a special non-linear demand model, we observe that such an algorithm indeed converges quickly. Further, unlike the above papers, we also investigate the impact of various forms of regulation, which guides us on how to attain outcomes that will be beneficial to the passengers.

## II Modeling

### _Network_

We consider a simple two-node network represented by the complete directed graph \(G=(\mathcal{N},\mathcal{A})\) (Refer Fig. 1). Clearly, \(\mathcal{N}=\{1,2\}\) and \(\mathcal{A}=\{e_{11},e_{12},e_{21},e_{22}\}\). We also assume that transit times along all arcs are the same.
Note that our analysis and insights go through for larger networks (with \(|\mathcal{N}|>2\)) and different arc transit times. The primary reason for considering a small version of the network is to visualize and interpret the network effects on the decisions more meaningfully.

Fig. 1: Network Structure

### _Players & Interactions Model_

There are two RSPs, \(A\) and \(B\), who operate in the marketplace (i.e., a duopoly setting). Each RSP has a fixed number of vehicles in its fleet; however, the number may differ across the RSPs (asymmetric players). Vehicles are used in one of three ways: i) they serve passengers, ii) they are routed empty (which we denote as _re-balancing_ throughout this paper) to places of high expected demand, and iii) they stand idle at one or multiple locations (equivalent to being 'parked'). We assume that a vehicle can only serve one passenger in a trip. Each RSP earns revenue by serving demand. They also incur costs for operating the fleet (fuel costs/'congestion taxes') or for keeping vehicles idle ('parking costs'). Therefore, the RSP has to decide how to set prices and use vehicles judiciously so as to maximize its profit. Market competition makes the pricing decision more convoluted because players now have to take into consideration the prices set by the other player. We capture the effect of one player's prices on the other using the _demand function_, which we explain subsequently. The players are assumed to be rational and selfish, so we model their interaction as a _simultaneous_, _non-cooperative game_. Note that our model does not consider any temporal aspects, as our focus is on investigating the spatial effect. The characterization of the model for the temporal variation of demand has been left for the future.

Footnote 1: We will use the terms ride-service providers, players, and RSPs interchangeably.

### _Demand Function Modeling_

The demand function outputs the fraction of the market share acquired by an RSP, given its own price and the price of the competitor. Let \(f(p_{A},p_{B})\) represent the demand function for player \(A\), which depends on the prices of both players \(A\) and \(B\), given by \(p_{A}\) and \(p_{B}\) respectively (similarly, player \(B\)'s demand function would be given by \(f(p_{B},p_{A})\)). We assume that all prices are normalized by \(P\) and hence are in the interval \([0,1]\). \(P\) can be thought of as the price of an alternative commuting option, so no price in the network should exceed \(P\); otherwise, passengers will avail of the outside option. In order to qualify as a candidate demand function, it is necessary that \(f(\cdot)\) have the following desirable properties:

1. \(0\leq f(p_{A},p_{B})\leq 1\): It is not possible to capture a market share that is negative or greater than 1.
2. \(f(p_{A},p_{B})+f(p_{B},p_{A})\leq 1\): The total market share captured by both players cannot exceed 1.
3. \(p_{A}=p_{B}\implies f(p_{A},p_{B})=f(p_{B},p_{A})\): If both players set equal prices, they should capture identical market shares. It also reflects an inherent assumption in our model that no passenger has specific preferences for a particular ride-service provider. Also, note \(f(0,0)=\frac{1}{2}\).
4. \(f(p,p)\) is non-increasing in \(p\): This captures the price sensitivity of passengers. Even if both players set the same price \(p\), as \(p\) increases, fewer and fewer people would avail of a ride as it exceeds their willingness-to-pay threshold.
5. \(p_{A}>p_{B}\implies f(p_{A},p_{B})\leq f(p_{B},p_{A})\): If player \(A\) sets a higher price than \(B\), then \(A\) cannot capture a strictly larger market share than \(B\).
6. \(p_{A}>p_{A}^{\prime}\implies f(p_{A},p_{B})\leq f(p_{A}^{\prime},p_{B})\): With player \(B\)'s price fixed, if player \(A\) increases its price, then its market share will shrink. If \(f(\cdot)\) is differentiable with respect to \(p_{A}\), this condition is equivalent to \(\frac{\partial f}{\partial p_{A}}\leq 0\).
7. \(p_{B}<p_{B}^{\prime}\implies f(p_{A},p_{B})\leq f(p_{A},p_{B}^{\prime})\): If the competitor \(B\) increases its price, then player \(A\)'s market share should increase for the same price \(p_{A}\). If \(f(\cdot)\) is differentiable with respect to \(p_{B}\), this is equivalent to \(\frac{\partial f}{\partial p_{B}}\geq 0\).
8. \(f(1,p_{B})=0\): If player \(A\) sets \(p_{A}=1\) (the highest possible price), then its market share goes to _zero_, irrespective of \(p_{B}\). Alternatively, \(p_{A}=1\) represents a scenario where player \(A\) does not compete in the market and \(B\) has a monopoly.
9. \(f(0,1)=1\): If player \(A\) has a monopoly, then it can capture the whole market by setting \(p_{A}=0\). This is again intuitive.

In recent ride-sharing literature, [6] uses a specific piecewise linear form. It can be easily verified that it satisfies all the aforementioned properties. However, linearity is a very strong assumption because, in reality, demand functions are hardly ever linear. Thus, our goal is to analyze _any_ demand model which satisfies all the above properties. As an example of a non-linear model, for which we will evaluate our numerical results, we consider the following bi-linear form:

\[f(p_{A},p_{B})=\frac{1}{2}(1-p_{A})(1+p_{B})\quad\forall\ 0\leq p_{A},p_{B}\leq 1 \tag{1}\]

### _RSP's Optimization Problem_

Each RSP seeks to maximize its profit, expressed as the difference between total revenues earned and operation costs incurred. Revenues are earned from serving demand in the network. Operation costs are classified into two components:

1. _Re-balancing costs_: The RSP has the option to route empty vehicles to other locations where demand is higher. Thus, re-balancing costs may represent fuel costs for the empty cars plying on the network. However, re-balancing vehicles can also cause a nuisance by contributing to higher congestion. To prevent such behavior, the central planner may impose penalties on empty routing vehicles. All such penalties are also included in this cost.
2. _Parking costs_: If demand is low, the RSP will be forced to keep some of its fleet idle. Sometimes, the RSP might also have the incentive to keep vehicles idling deliberately to create an artificial lack of supply in the market and jack up the price. If the RSP chooses to keep vehicles parked at any node, it has to pay parking costs. For example, during certain periods of the day, the central planner may choose to impose high parking fees at specific locations specifically to avoid idling behavior.

Each ride-service provider has to make the following decisions:

1. _How to choose ride fares \(p_{(\cdot)}^{ij}\) for every arc in the network?_ The RSP wants to earn higher revenues, so it may be tempted to set high prices, but there is a trade-off: high prices mean that fewer passengers would be willing to ride, and there is the risk of losing market share to the competitor.
Pricing is also important for matching supply with demand. Since the supply is limited, pricing too low would mean that demand exceeds supply and many passengers who want to avail of a ride do not get matched with a vehicle.

2. _Given a fixed fleet size \(m_{(\cdot)}\), how to allocate supplies \(m_{(\cdot)}^{i}\) optimally across all the network nodes?_ This decision depends on many factors. Intuitively, it is more profitable to allocate larger supplies to nodes with high expected demand. However, it might happen that competition at the other node is low. In case some part of the fleet is not in use and will be kept idle (low demand regime), it may be more favorable for the RSP to place those vehicles at a location that has lower parking costs.

3. _How to choose re-balancing flows of empty vehicles (given by \(r_{(\cdot)}^{ij}\)) throughout the network effectively?_ Again, it is not a priori straightforward to decide optimally. When rides carry passengers from one location to another, vehicles accumulate at the destination node. If demand along the reverse direction is scarce, the RSP has to re-route some of these empty vehicles to meet demand in other locations, because supply is limited.

#### Assumptions

We make the following assumption: we treat vehicles like divisible commodities. This enables us to model them as continuous variables and retain the convexity of the solution space (ideally, they should be modeled as integer variables because the number of passengers or vehicle supply cannot be fractional). It is well known that mixed-integer models are difficult to solve, and solution methods often do not scale well.

#### Notation Key

In this segment, we introduce the notation for our formulation. Unless otherwise specified, this notation applies to the rest of the paper (Refer Table I).

#### Formulation

We now introduce our formulation of player \(A\)'s optimization problem (given by \(\mathcal{F}_{\mathcal{A}}\)). Player \(B\)'s optimization problem is similar, so we omit it here.

\[\mathcal{F}_{\mathcal{A}}:=\max_{p_{A}^{ij},\,r_{A}^{ij},\,m_{A}^{i}}\ \sum_{e_{ij}\in\mathcal{A}}(p_{A}^{ij}-p_{c}^{ij})\,x_{A}^{ij}-\sum_{e_{ij}\in\mathcal{A}}p_{c}^{ij}\,r_{A}^{ij}-\sum_{i\in\mathcal{N}}p_{p}^{i}\Big(m_{A}^{i}-\sum_{j\in\mathcal{N}}\big(x_{A}^{ij}+r_{A}^{ij}\big)\Big)\]

subject to

\[x_{A}^{ij}=D^{ij}\cdot f(p_{A}^{ij},p_{B}^{ij})\quad\forall\ e_{ij}\in\mathcal{A} \tag{2a}\]
\[\sum_{j\in\mathcal{N}}\big(x_{A}^{ij}+r_{A}^{ij}\big)\leq m_{A}^{i}\quad\forall\ i\in\mathcal{N} \tag{2b}\]
\[\sum_{j\in\mathcal{N},\,j\neq i}\big(x_{A}^{ij}+r_{A}^{ij}\big)=\sum_{j\in\mathcal{N},\,j\neq i}\big(x_{A}^{ji}+r_{A}^{ji}\big)\quad\forall\ i\in\mathcal{N} \tag{2c}\]
\[\sum_{i\in\mathcal{N}}m_{A}^{i}=m_{A} \tag{2d}\]
\[0\leq p_{A}^{ij}\leq 1\quad\forall\ e_{ij}\in\mathcal{A} \tag{2e}\]
\[0\leq r_{A}^{ij}\quad\forall\ e_{ij}\in\mathcal{A},\qquad 0\leq m_{A}^{i}\quad\forall\ i\in\mathcal{N} \tag{2f}\]

Here \(p_{c}^{ij}\) denotes the transit cost on arc \(e_{ij}\) and \(p_{p}^{i}\) the parking cost at node \(i\). As described earlier, player \(A\) seeks to maximize its profit (revenue minus re-balancing costs minus parking costs). (2a) is the demand constraint (note that we can omit this constraint by substituting the \(x_{A}^{ij}\)'s throughout; however, this makes the problem non-linear in \(p_{A}^{ij}\)). (2b) is a supply constraint at every node, because the total outflow rate of vehicles from node \(i\in\mathcal{N}\) cannot exceed the supply rate of vehicles at the node. The equality constraint in (2c) represents a flow balance constraint at each node.
This is needed because the total inflow rate must equal the total outflow rate at any given node. Observe that we deliberately omit flows along the self-loops (edges \(e_{ii}\)) because they cancel on both sides of the equation. The next constraint (2d) implies that the total supply across the network must add up to the fleet size of the RSP. Finally, we have the bounds on the prices (2e) and the non-negativity constraints (2f). Since prices are normalized, they cannot exceed 1.

#### Essential Insights

Even before we proceed to finding the equilibrium of the game, several interesting observations can be made about player \(A\)'s optimization problem \(\mathcal{F}_{\mathcal{A}}\).

**Observation 1**: _When player \(B\)'s prices are known and we consider the form of the demand function in Eq. (1), player \(A\)'s optimization problem admits a unique solution (and vice-versa)._

This is easy to verify. When we use the demand function form in Eq. (1), the objective function is concave in \(p_{A}^{ij}\) (quadratic in \(p_{A}^{ij}\) with negative leading coefficient), \(r_{A}^{ij}\) and \(m_{A}^{i}\). Also, the feasible set is convex and compact. Maximizing a concave function over a convex compact set leads us to a unique solution. Note that Observation 1 will be used later to ensure the uniqueness of the equilibrium in Section III-B for any given price vector \(p_{B}\) and the demand function form in Eq. (1). Note that if \(f(\cdot)\) is non-linear in \(p_{A}\), the constraint in (2a) makes the problem non-convex for a given price \(p_{B}\); hence, we cannot guarantee the above observation. However, we can still use any non-linear solver to find \(p_{A}\) for a given price of the other player \(B\).

The next observation tells us that even though the lower bound of the price is 0, under a competitive setup no player would choose a price smaller than \(1/2\).

**Observation 2**: _All prices on the network will be \(\geq\frac{1}{2}\)._

Observe that \(p_{A}^{ij}\geq p_{c}^{ij}\) always, because setting a price lower than \(p_{c}^{ij}\) would lead to negative revenues from serving demand on \(e_{ij}\) and is not favorable to the RSP. \(p_{A}^{ij}=1\) is equivalent to player \(A\) not competing on \(e_{ij}\) at all, so this case is not very interesting. (Also, when \(p_{A}^{ij}=1\), the observation is trivially correct.) So we now look at cases where \(p_{c}^{ij}\leq p_{A}^{ij}<1\). Using the KKT conditions on Problem (2), we can show that

\[p_{A}^{ij}=\frac{1}{2}(1+Q^{ij})\]

where \(Q^{ij}\) is the Lagrange multiplier associated with the non-negativity constraint \(r_{A}^{ij}\geq 0\). Since \(Q^{ij}\geq 0\), this implies \(p_{A}^{ij}\geq\frac{1}{2}\). Equality holds when \(Q^{ij}=0\), that is, whenever \(r_{A}^{ij}>0\) (using complementary slackness). For a detailed analysis of the KKT conditions, please refer to the Appendix.

_Remarks_: Although our main focus in this paper is to study competition in the duopoly setting, note that our model can also be used to simulate monopoly scenarios in the ride-sharing marketplace. This can be done by simply setting the price vector of one of the players equal to \(1\). By property P8 of the demand function, this ensures that the market share of this player goes to _zero_, irrespective of the price point of the other player. This is very convenient because it allows us to compare the properties of the monopoly market with the competitive market.

## III Computing Equilibria

From player \(A\)'s optimization problem in Eq.
(2), it is clear that \(A\) cannot solve for its optimal decisions without considering the prices set by player \(B\). Similarly, player \(B\)'s decisions are also dependent on the prices set by \(A\). Thus, their optimization problems are coupled. In this setting, we define an _equilibrium_ as a combination of decisions \(\big(\{p_{A}^{ij}\},\{r_{A}^{ij}\},\{m_{A}^{i}\},\{p_{B}^{ij}\},\{r_{B}^{ij}\},\{m_{B}^{i}\}\big)\) from which neither player \(A\) nor player \(B\) has an incentive to deviate unilaterally. We assume a complete-information setup where both players have knowledge of the other player's fleet size. The characterization of equilibria for the incomplete-information setup is an interesting future direction and is out of the scope of this paper.

### _Potential functions_

A Nash equilibrium is in general difficult to obtain computationally, even if a game admits an equilibrium. Potential functions are extremely useful tools to compute the equilibria of multi-player games. They involve finding a global potential function \(\Phi(\cdot)\) which tracks the change in payoffs whenever any player unilaterally changes their strategy. It has been established that the local optima of the potential function correspond to the pure Nash equilibria of the underlying game. [6] has already shown that there exists an exact potential game corresponding to a 2-player duopoly setting in the ride-sharing marketplace with fully autonomous vehicles, for a specific choice of the demand function. Our aim is to investigate if and when the potential game approach can be applied more generally.

**Observation 3**: _For the bi-linear demand function defined in Eq. (1), this game does not have an exact potential function._

The proof is by contradiction; details can be found in the Appendix. We have now seen an example where, if the structure considered in [6] is not satisfied, we may not have a potential function. Naturally, this leads to the question: under what conditions on the demand function \(f(\cdot)\) can the game admit a potential function? We take a small step towards answering this question, which leads to the following result.

**Theorem 1**: _When the demand function \(f(p_{A},p_{B})\) is of the form \(f(p_{A},p_{B})=g(p_{A})+h(p_{B})\), the game has an exact potential function if and only if \(h(p_{B})\) is linear in \(p_{B}\) (that is, \(h(p_{B})=Cp_{B}\) for some \(C>0\), \(C\in\mathbb{R}\))._

A detailed proof can be found in the Appendix. Note that when the game admits a potential function, we can optimize the potential function to obtain the equilibrium, as has been done in [6].

_Remark_: It is easy to see that the demand function used in [6] is a special case of the functional form in Theorem 1, where \(g(p_{A})\) is linear in \(p_{A}\). Hence, it must admit an exact potential function.

### _Algorithm for Finding Equilibrium_

In this segment, we propose a heuristic algorithm (Refer Algorithm 1) to compute the equilibrium of the game for general demand functions which do not admit potential functions. In particular, we consider the non-linear demand function of Eq. (1). Unfortunately, we are unable to provide formal convergence guarantees for the algorithm at this time. However, empirically, we find that the algorithm converges quickly to the equilibrium. We introduce some more notation here. Let \(\mathcal{F}_{\mathcal{A}}\) and \(\mathcal{F}_{\mathcal{B}}\) be the optimization problems of players \(A\) and \(B\) respectively; a minimal numerical sketch of the resulting best-response iteration is given below.
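The following sketch illustrates the idea in Python with `cvxpy`, assuming the bilinear demand of Eq. (1) and the two-node network. The demand constraint (2a) is substituted into the objective and rewritten in DCP form so the best response stays a convex solve. The demand matrix, costs, fleet sizes, and the helper name `best_response` are illustrative assumptions, not values or code from the paper.

```python
# Sketch: best-response solve for problem (2) under bilinear demand (1),
# followed by the alternating best-response iteration (Algorithm 1).
import cvxpy as cp
import numpy as np

def best_response(p_other, D, c, park, fleet):
    """One player's optimal prices, given the competitor's prices p_other."""
    p = cp.Variable((2, 2))                    # prices on arcs e_ij
    r = cp.Variable((2, 2), nonneg=True)       # re-balancing flows
    m = cp.Variable(2, nonneg=True)            # supply placed at each node
    k = 0.5 * D * (1.0 + p_other)              # demand (2a): x = k (1 - p)
    x = cp.multiply(k, 1.0 - p)
    # (p - c) x = k(-(p^2) + (1 + c) p - c), a DCP-compliant concave form
    rev = cp.sum(cp.multiply(k, -cp.square(p) + cp.multiply(1.0 + c, p) - c))
    cost = cp.sum(cp.multiply(c, r)) + park @ (m - cp.sum(x + r, axis=1))
    cons = [cp.sum(x + r, axis=1) <= m,              # (2b) node supply
            x[0, 1] + r[0, 1] == x[1, 0] + r[1, 0],  # (2c) flow balance, 2 nodes
            cp.sum(m) == fleet,                      # (2d) fleet size
            p >= 0, p <= 1]                          # (2e) normalized prices
    cp.Problem(cp.Maximize(rev - cost), cons).solve()
    return p.value

# Alternating best responses until prices settle (tolerance eps).
D = np.array([[500.0, 250.0], [150.0, 100.0]])  # hypothetical arc demands D^{ij}
c, park = 0.1 * np.ones((2, 2)), 0.05 * np.ones(2)
pA, pB, eps = np.ones((2, 2)), np.ones((2, 2)), 1e-2
for _ in range(50):
    pB_new = best_response(pA, D, c, park, fleet=500.0)
    pA_new = best_response(pB_new, D, c, park, fleet=500.0)
    converged = max(np.abs(pA_new - pA).max(), np.abs(pB_new - pB).max()) < eps
    pA, pB = pA_new, pB_new
    if converged:
        break
```

Consistent with Observation 2, the prices produced by such a solve should not fall below \(1/2\).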
Now, **Definition 1**: \(\mathcal{BR}(\mathcal{F}_{\mathcal{A}}\mid p_{B})\) _represents the best-response price of player \(A\) to the price \(p_{B}\) set by player \(B\). Similarly, for player \(B\), we denote by \(\mathcal{BR}(\mathcal{F}_{\mathcal{B}}\mid p_{A})\) the best-response price of player \(B\) when player \(A\) sets its price \(p_{A}\)._

Note that finding the best-response price given the other player's price just involves solving a convex optimization problem and hence can be done very efficiently using any standard convex optimization solver.

**The high-level idea**: We can think of \(\mathcal{BR}(\mathcal{F}_{\mathcal{A}}\mid p_{B})\) as a function \(\Theta(p_{B})\) and similarly \(\mathcal{BR}(\mathcal{F}_{\mathcal{B}}\mid p_{A})\) as a function \(\zeta(p_{A})\). Then, from the iteration algorithm, we have \(p_{A}^{(k)}=\Theta(p_{B}^{(k)})\) and \(p_{B}^{(k)}=\zeta(p_{A}^{(k-1)})\), which implies \(p_{A}^{(k)}=\Theta\circ\zeta(p_{A}^{(k-1)})\).

**Why is it difficult to show a convergence guarantee?** Proving convergence of the algorithm is equivalent to showing that the function composition \(\Theta\circ\zeta\) has a fixed point and, by symmetry, that \(\zeta\circ\Theta\) also has a fixed point. However, that requires us to show that \(\zeta(\cdot)\) and \(\Theta(\cdot)\) are continuous mappings, which is difficult in this setting.

**Comments on convergence from empirical evaluations**:

1. Even though we cannot show theoretical convergence, the algorithm was empirically found to converge quickly to the equilibrium, usually within 5 iterations for \(\epsilon=0.01\).
2. For a given price vector of the other player (say \(B\)), the algorithm will always produce a _unique solution_ for player \(A\). This follows directly from Observation 1.
3. The algorithm was empirically found to converge to the same equilibrium, irrespective of the initial points.

_Remark_: For non-linear functions other than Eq. (1), we can still apply this heuristic algorithm. However, since the optimization problem for a player may not be convex (given the price of the other player), we need to rely on a non-linear optimization solver, and we cannot guarantee uniqueness of the equilibrium.

## IV Numerical Experiments

In this section, we explore the properties of the equilibrium in a variety of settings using numerical experiments. We start by describing the simulation setup.

### _Simulation Setup_

We fix the total supply on the network to be \(\mathcal{S}=1000\) for all our experiments. We use a parameter \(m\) to determine the total demand \(\mathcal{D}\) across the network according to the relation \(\mathcal{D}=m\mathcal{S}\). For example, \(m=0.5\) indicates that \(\mathcal{D}=500\), a low-demand regime. \(m=1\) indicates that total demand and supply are exactly matched, while \(m>1\) indicates high-demand regimes. The distribution of demand across the different arcs is controlled by another parameter \(\alpha\) (explained in detail in Section IV-B). We also have a parameter \(\beta\) which determines what fraction of the total supply \(\mathcal{S}\) is owned by which player. Player \(A\) has a fleet of size \(\beta\mathcal{S}\), while player \(B\) has a fleet of size \((1-\beta)\mathcal{S}\). \(\beta=0.5\) indicates that both players are symmetric, while \(\beta<0.5\) indicates that \(B\) is the larger player in the market (and vice-versa for \(\beta>0.5\)).

### _Demand Patterns_

We consider the following demand patterns in our experiments:
1. _Pattern 1_: Here, the demand is equally split between the two nodes. We use the control parameter \(\alpha\) to choose how much of the demand at each node flows towards node 1. \(\alpha\) is varied in the range \([0.5,1]\). \(\alpha=0.5\) represents a perfectly balanced network, while \(\alpha=1\) represents the scenario where all demand is concentrated on arcs ending in node 1. Let \(\mathcal{D}\) represent the total demand across the network. Then the demand distribution can be represented by the matrix \(D\), where \(D_{ij}\) is the demand on arc \(e_{ij}\): \[D=\begin{bmatrix}0.5\alpha\mathcal{D}&0.5(1-\alpha)\mathcal{D}\\ 0.5\alpha\mathcal{D}&0.5(1-\alpha)\mathcal{D}\end{bmatrix}\] The case with \(\alpha\) close to 1 may resemble the busy downtown area (node 1) of a city, which attracts most of the traffic in the network.
2. _Pattern 2_: This demand pattern models scenarios where demand increasingly originates out of a single node as \(\alpha\) increases. At \(\alpha=0.5\), the demand is perfectly balanced, but at \(\alpha=1\), all demand originates out of node 1 and is equally split between arcs \(e_{11}\) and \(e_{12}\). The demand matrix \(D\) is as follows: \[D=\begin{bmatrix}0.5\alpha\mathcal{D}&0.5\alpha\mathcal{D}\\ 0.5(1-\alpha)\mathcal{D}&0.5(1-\alpha)\mathcal{D}\end{bmatrix}\] One possible example of this type of demand pattern, at \(\alpha\) close to 1, could be evening traffic from a busy office area.
3. _Pattern 3_: This demand pattern captures two extreme scenarios. When \(\alpha=0\), demand is localized only along the cross-arcs \(e_{12}\) and \(e_{21}\). When \(\alpha=1\), all demand is split equally between the two self-looping arcs, representing a setting where there is no network effect at all. The demand matrix \(D\) is given by: \[D=\begin{bmatrix}0.5\alpha\mathcal{D}&0.5(1-\alpha)\mathcal{D}\\ 0.5(1-\alpha)\mathcal{D}&0.5\alpha\mathcal{D}\end{bmatrix}\] This could resemble a scenario where each node represents a professional hub. During the daytime, people travel mostly between the two places for work (\(\alpha=0\)), but at night, traffic is localized at the individual nodes (\(\alpha=1\)).

### _Duopoly Setting_

#### Symmetric Players

For this setting, we have \(\beta=0.5\), so each player has a fleet of 500 autonomous vehicles. We study the effects of the demand multiplier \(m\) and the demand distribution parameter \(\alpha\) on player profits.

* _Nature of the equilibrium_: When the players are symmetric, we end up with a _symmetric equilibrium_ where both players choose identical prices, supply patterns and re-balancing flows. This observation is intuitive and in line with the findings of [20] and [6].
* _Effect of \(m\)_: As the multiplier \(m\) increases, player profits are found to increase across all demand patterns. This can be attributed to higher price points and higher utilization of vehicles as \(m\) increases (Refer Fig. 3). Intuitively, for small values of \(m\) (like \(m=0.5\)), the demand is much smaller than the supply, so the players cannot operate at full capacity, which leads to costs in terms of re-balancing or parking. Additionally, the smaller demand forces the players to lower their prices, because higher prices would alienate most of the passengers. However, in the high demand regime (\(m=2\)), players have the flexibility to set high prices and still capture a sizeable portion of the market.
Also, their fleets are operating close to full capacity, which leads to higher revenues and lower incurred costs.
* _Effect of \(\alpha\)_: For demand patterns 1 and 2, as we increase \(\alpha\) gradually from \(\frac{1}{2}\) to 1, the demand across the network becomes **unbalanced**. This lowers player profits, an observation aligned with the findings of [10]. One possible reason for this finding is that as demand gets more unbalanced, the operating inefficiencies for the players increase. For example, consider the case with \(\alpha=1\) for pattern 2. All the demand is restricted to arcs \(e_{11}\) and \(e_{12}\), so many vehicles which serve customers along \(e_{12}\) are forced to re-route empty back to node 1 (Refer Fig. 4). Re-balancing incurs costs and reduces the profits. However, in demand pattern 3, player profits remain unchanged with variation in \(\alpha\). This is because there is no need for re-balancing (the demand in the cross-arcs matches exactly), so the player just has to choose the steady-state supply at each node optimally.

#### Asymmetric Players

For this setting, we look at values of \(\beta<0.5\), so player \(A\) has a smaller fleet than player \(B\). Note that we do not consider \(\beta>0.5\) because it is identical to the \((1-\beta)\) scenario (with the identities of the players reversed).

* _Big player, big profits_: We find that as we decrease \(\beta\) from 0.5 (asymmetry increases), player \(B\)'s profits increase. The increase in profit is much more significant in the higher demand regime (\(m=2\)) than in the low demand regime (\(m=0.5\)). This outcome is expected because, at high demand, owning a larger fleet provides player \(B\) with a large competitive advantage over player \(A\). As asymmetry increases, the market approaches a monopoly for player \(B\). It is well known that monopoly markets are inefficient, so intuitively it appears that the total size of the market served would decrease with a decrease in \(\beta\). However, _surprisingly_, that does not seem to be the case. Player \(B\)'s market share definitely increases when it has a larger fleet at its disposal, but the total market served remains unchanged with changes in \(\beta\).
* _Forced market exits_: Asymmetry creates other adverse effects. In the high demand regime, the player with the smaller fleet might be forced to exit certain markets, because serving out of multiple locations may not be profitable with a small fleet. This gives rise to 'localized monopolies' where the larger player has unilateral control. This leads to high prices and is not favorable for the passengers. Refer to Fig. 5, where we identify a scenario in which this phenomenon is observed.

Fig. 2: Demand Patterns 1 (left) and 3 (right)

Fig. 3: Variations with \(m\) and \(\alpha\) for demand pattern 2. For the plot on the right, when \(\alpha=0.5\), we report only one price because all edges have the same price. When \(\alpha=1\), we report \(p_{11}\) and \(p_{12}\) because the other arcs have zero demand.

Fig. 4: Demand Pattern 2 under \(m=2\). As \(\alpha\) increases from 0.5 to 1, demand gets restricted to \(e_{11}\) and \(e_{12}\). \(r_{21}\) increases rapidly, leading to a decline in profits.

### _Regulations_

In this segment, we explore how different forms of regulation affect the dynamics of the marketplace. We specifically look at _two_ types of price regulation:

1. _Parking costs at nodes_
2. _Penalties on re-balancing vehicles_

Note that we study regulations only in the high demand regime (\(m=2\)). The purpose of regulations is primarily to prevent strategic behavior by the players (like unnecessary re-balancing of empty vehicles) that leads to unfavorable outcomes for passengers or the network as a whole (fewer passengers served, more congestion on the network). This is why regulations would generally not be imposed in the low demand regime (demand \(<\) supply, fleets are already under-utilized, and the incentive to engage in strategic behavior is small), so there is no motivation to study those scenarios.

* _Regulations affect players' profits disparately_: We find that when players are highly asymmetric (small \(\beta\)) and demand is unbalanced, regulations affect the larger player's profits much more than the smaller player's. Since the regulations are in the form of high parking costs or penalties for re-balancing vehicles, they have a negligible effect on the smaller player, which has no re-balancing or parked vehicles (its entire fleet is serving demand). However, regulations do not necessarily lead to passenger welfare, because prices increase and the number of rides completed decreases (Refer Tables III and IV).

Consider the scenario with demand pattern 2 and \(\alpha=1\), so that all demand originates at node 1. The intuition here is that players would try to capture the high demand at node 1 by re-routing empty vehicles from node 2 to node 1 or by keeping a large supply at node 1. So one possible form of regulation could be to increase parking costs at node 1 and penalize empty vehicles on \(e_{21}\). We simulate the scenario with and without regulation, setting \(\beta=0.2\). Re-balancing vehicles on \(e_{ij}\) now incur a cost \(p_{c}^{ij}+v^{ij}\), where \(p_{c}^{ij}\) represents the transit cost on the edge (which applies to all vehicles) while \(v^{ij}\) is a penalty that applies only to re-balancing vehicles on \(e_{ij}\). For now, we set \(p_{c}^{ij}=0.1\) for all \(e_{ij}\in\mathcal{A}\). When there is no regulation, \(v^{ij}=0\ \forall\ e_{ij}\). We report our solutions in the form of tables: variables/parameters which are arc-dependent are reported as \(2\times 2\) matrices, while node-dependent quantities are reported as vectors of size \(2\times 1\).

**Scenario 1 (No regulation)**: Observe that player \(B\) has a large number of vehicles (95) idling at node 1 (Table V). The total supply at node 1 is 495, out of which \(200+111=311\) serve passengers, while 89 are routed to node 2 for re-balancing. All the supply at node 2 is used to serve demand. _The most intuitive regulation here is to impose parking costs at node 1_.

**Scenario 2**: We now increase the parking cost at node 1 to 0.5 (Table VI). As we can see, this has no impact whatsoever on player \(B\) in terms of prices or passengers served. Player \(B\) only adjusts its supply in such a way that the previously idling vehicles are now parked at node 2. This also highlights that _localized regulations may not always produce the desired effect_.

**Scenario 3**: We now impose an additional regulation in the form of parking costs at node 2. This does force player \(B\) to use some of the extra supply to serve demand along \(e_{22}\) (rides served increases from 105 to 116). However, this is not a good outcome, because it creates a scenario where the number of vehicles moving around without any passengers has increased significantly.
The intuition here is that the RSP finds it more favorable to route the extra supply around the network (because there are currently no penalties) than to actually serve demand. The key takeaways from this discussion are as follows:

1. Regulations do not affect all players in the same way. They affect the larger player in the market significantly more than the smaller player.
2. Localized regulations often may not have the desired outcome. This is primarily due to network effects. Since the RSP has control over its whole supply, it can circumvent local price regulations by diverting supplies in the most cost-effective way possible.
3. This also gives us insights into how we can design effective regulations for ride-sharing marketplaces with autonomous vehicles. The regulations need to be coordinated over the entire network to achieve the best possible outcome.

## V Conclusion & Future Scope

In this paper, we use game theory to study the networked competition of two players in the ride-sharing marketplace where the players have fully autonomous fleets. We propose a non-linear demand function that captures the effect of one player's price on the other. Unlike the linear demand functions in the literature, the non-linear form does not admit a potential function, so it is technically more challenging to solve. We propose an iterative algorithm to compute the equilibrium of the game for any general form of the demand function. We use our model to empirically study properties of the equilibrium under a variety of settings, like asymmetric competition and price regulations, and develop insights that can help regulators design informed policies/regulations for these markets that achieve desired outcomes like increased passenger welfare. There are several interesting avenues for future work. One immediate extension is to investigate the equilibrium of this game under incomplete-information settings. It may also be worthwhile to explore whether it is possible to provide formal convergence guarantees or find necessary conditions for the convergence of our iterative algorithm. In our work, we highlight why it is important to coordinate regulations across the network. Designing effective price regulations that take network effects into account could be another interesting direction of work.

## VI Acknowledgement

We thank Dr. Parinaz Naghizadeh at the Ohio State University for her insightful inputs at different stages of this work.
2304.05153
Regression-based Deep-Learning predicts molecular biomarkers from pathology slides
Deep Learning (DL) can predict biomarkers from cancer histopathology. Several clinically approved applications use this technology. Most approaches, however, predict categorical labels, whereas biomarkers are often continuous measurements. We hypothesized that regression-based DL outperforms classification-based DL. Therefore, we developed and evaluated a new self-supervised attention-based weakly supervised regression method that predicts continuous biomarkers directly from images in 11,671 patients across nine cancer types. We tested our method for multiple clinically and biologically relevant biomarkers: homologous repair deficiency (HRD) score, a clinically used pan-cancer biomarker, as well as markers of key biological processes in the tumor microenvironment. Using regression significantly enhances the accuracy of biomarker prediction, while also improving the interpretability of the results over classification. In a large cohort of colorectal cancer patients, regression-based prediction scores provide a higher prognostic value than classification-based scores. Our open-source regression approach offers a promising alternative for continuous biomarker analysis in computational pathology.
Omar S. M. El Nahhas, Chiara M. L. Loeffler, Zunamys I. Carrero, Marko van Treeck, Fiona R. Kolbinger, Katherine J. Hewitt, Hannah S. Muti, Mara Graziani, Qinghe Zeng, Julien Calderaro, Nadina Ortiz-Brüchle, Tanwei Yuan, Michael Hoffmeister, Hermann Brenner, Alexander Brobeil, Jorge S. Reis-Filho, Jakob Nikolas Kather
2023-04-11T11:43:51Z
http://arxiv.org/abs/2304.05153v1
# Regression-based Deep-Learning predicts molecular biomarkers from pathology slides

###### Abstract
Deep Learning (DL) can predict biomarkers from cancer histopathology. Several clinically approved applications use this technology. Most approaches, however, predict categorical labels, whereas biomarkers are often continuous measurements. We hypothesized that regression-based DL outperforms classification-based DL. Therefore, we developed and evaluated a new self-supervised attention-based weakly supervised regression method that predicts continuous biomarkers directly from images in 11,671 patients across nine cancer types. We tested our method for multiple clinically and biologically relevant biomarkers: homologous repair deficiency (HRD) score, a clinically used pan-cancer biomarker, as well as markers of key biological processes in the tumor microenvironment. Using regression significantly enhances the accuracy of biomarker prediction, while also improving the interpretability of the results over classification. In a large cohort of colorectal cancer patients, regression-based prediction scores provide a higher prognostic value than classification-based scores. Our open-source regression approach offers a promising alternative for continuous biomarker analysis in computational pathology.

## Introduction

The collection and pathological examination of tissue specimens is used for accurate diagnosis of patients with malignant tumors, providing information related to histology grade, subtype, stage and other tumor biomarkers. Digital pathology describes the computational analysis of tissue specimen samples in the form of whole slide images (WSI). Numerous studies have shown that alterations in individual genes [1, 2, 3], microsatellite instability [4, 5, 6], and the expression of individual genes [7] or expression patterns of groups of genes [8, 9] can be predicted directly from WSI. This research area has also enabled genetic changes to be correlated with morphological patterns (i.e. genotypic-phenotypic correlations) [10], which facilitates the prediction of patient outcome [11].
Consistent with their clinical application, several of these methods have been approved for clinical use by regulatory agencies [12], to the extent that the prediction of biomarkers from pathological diagnostic workflows based on deep learning (DL) is becoming increasingly relevant, not only in the research setting, but also as a de facto clinical application [2, 12, 13]. The prediction of genotypic-phenotypic correlations, which involves predicting genetic biomarkers from WSIs, is a weakly supervised problem in DL. To accomplish this task, a DL model correlates phenotypic features from WSIs with a single ground truth obtained from molecular genetic sequencing of tumor tissue at the patient level. Nevertheless, as these WSIs are of gigapixel resolution, neural network processing requires breaking them into smaller regions referred to as tiles or patches. These regions may, however, contain less relevant tissues such as connective tissue or fat, which might not contribute to biomarker predictability [14]. To address this issue, attention-based multiple instance learning (attMIL) is the predominant technical approach that is currently used [15, 16, 17, 18]. To implement this strategy, feature vectors are first extracted from pre-processed tiles. These vectors are then aggregated by a multi-layer perceptron with an attention component, allowing for a patient-level prediction of the WSI. Despite the current attMIL approach yielding a high accuracy for biomarker prediction from WSIs [15, 19, 20], almost all published approaches are limited to classification problems with categorical values (e.g. presence or absence of a genetic alteration) [1, 3, 8, 11, 12, 21, 22]. Nonetheless, the ground truth of many biomarkers is available as continuous values, which are then binarized prior to being utilized as ground-truth for DL. This is true for whole-genome duplications, copy number alterations, homologous recombination deficiency (HRD), gene expression values, protein abundance, and many other measurements. Studies that pursue regression analysis of continuous values often opt for dichotomisation or custom thresholds for categorization. For example, prior to modeling, Fu et al. utilized a LASSO approach for the classification of continuous chromosome data into three classes.[10] Schmauch et al. trained a regression model to predict continuous biomarkers and subsequently used percentile thresholds for the evaluation of the models through a categorical representation.[7] However, binarization or dichotomization of these values results in information loss[23], which presumably limits the performance of DL systems predicting these biomarkers from pathology slides. Alternatively, a more suitable approach than classification in histopathological WSI analysis would be regression. Regression[24] is a modeling approach used to investigate the relationship between variables, such as morphological features from a WSI, and continuous numerical values, such as genetic biomarkers. To date, there is a paucity of data exploring this approach. A recent study by Graziani et al. presented a novel approach to predict continuous values from pathological images[25], yet their regression network was not systematically compared against the more-explored classification approach and required more extensive validation. In this study, we systematically compared classification- and regression-based approaches for prediction of continuous biomarkers across multiple cancer types. 
We hypothesized that regression outperforms classification in weakly supervised analyses of pathology hematoxylin-and-eosin (H&E)-stained WSIs for biomarker predictability, model interpretability and prognostic capability. In addition to various tumor entities, our work also explores several clinically relevant biomarkers represented as continuous numerical values. As a result, we developed a new contrastively-clustered attention-based multiple instance learning (CAMIL) regression approach, which combines self-supervised learning (SSL) with attMIL, and systematically compared it with the CAMIL classification approach, and the regression method proposed by Graziani et al.[25] The evaluation and application of regression versus classification on multiple datasets, organs and biomarkers fills a gap in the computational pathology literature. ## Results ### Regression predicts HRD from histology We developed a new regression-based DL approach which combines a feature extractor trained by SSL[26] and an attMIL[14] model (**Fig. 1A-C**), referred to as contrastively-clustered attention-based multiple instance learning (CAMIL) regression. We tested the ability of this approach for prediction of HRD directly from pathology images. We chose HRD because it is a pan-cancer biomarker that is measured as a continuous score, but can be binarized at a clinically validated cutoff. We used The Cancer Genome Atlas (TCGA) cohorts for breast cancer (BRCA), colorectal cancer (CRC), glioblastoma (GBM), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), pancreatic adenocarcinoma (PAAD), and endometrial cancer (UCEC) to train a regression DL model for each cancer type and evaluated their performance by cross-validation (**Fig. 1D**). To mitigate batch effects, which are problematic in the TCGA cohort, we used site-aware cross-validation splits[27]. We found that our CAMIL regression models were able to predict HRD status with AUROCs above 0.70 in 6 out of 7 tested cancer types. The areas under the receiver operating characteristic (AUROC) with 95% confidence intervals (CI) were 0.78 (\(\pm\) 0.02) in BRCA, 0.76 (\(\pm\) 0.12) in CRC, 0.75 (\(\pm\) 0.40) in GBM, 0.72 (\(\pm\) 0.06) in PAAD, 0.72 (\(\pm\) 0.05) in LUAD, 0.57 (\(\pm\) 0.05) in LUSC, and 0.82 (\(\pm\) 0.03) in UCEC (**Fig. 2A, Suppl. Table 1**). We validated the models on CPTAC, a set of external validation cohorts, in which images and HRD status were available for LUSC, LUAD, PAAD, and UCEC. In these cohorts, the models achieved comparable or even higher AUROCs, reaching 0.68 (\(\pm\) 0.04) in PAAD, 0.81 (\(\pm\) 0.03) in LUAD, and 0.96 (\(\pm\) 0.01) in UCEC. The lowest AUROC was 0.62 (\(\pm\) 0.06) in LUSC. Together, these data show that regression-based DL can predict HRD status from pathology images alone. ### Regression outperforms the state-of-the-art classification-based approach We compared the performance of our new DL approach, CAMIL regression, against two state-of-the-art approaches: the Graziani et al. regression method [25] and the CAMIL classification method. In order to compare classification with regression, we chose the AUROC as an evaluation metric. In the site-aware-split test set of the TCGA cohort, CAMIL regression outperformed both of the previous approaches in HRD prediction in all 7 of the tested cancer types (**Fig. 2A, Suppl. Table 1**). In 5 out of 7 cancer types, an ANOVA test showed that the difference in mean AUROCs was statistically significant with p<0.05 (**Suppl. Table 2 and 3**). 
In TCGA-LUSC, all three methods performed equally poorly, reaching AUROCs of 0.57 (\(\pm\) 0.05), 0.57 (\(\pm\) 0.04) and 0.57 (\(\pm\) 0.03) for CAMIL regression, Graziani et al. regression, and CAMIL classification, respectively. In the external validation cohorts, all models reached comparable performance (**Suppl. Table 1 and 2**). There (**Fig. 2B**), a t-test showed that the mean AUROCs of CAMIL regression were not statistically significantly better than the classification model, whereas the Graziani et al. model outperformed the CAMIL classification model in 1 out of 4 external validation cohorts (**Suppl. Table 3**). Next, we compared CAMIL regression to Graziani et al. [25] regression by assessing the coefficient of determination R\({}^{2}\) of the predicted scores compared to the clinically-derived ground-truth scores. In TCGA, the CAMIL regression model reached higher R\({}^{2}\) scores than the Graziani et al. [25] model in all of the 7 selected cohorts (**Suppl. Table 5**). In the CPTAC validation cohort, the CAMIL regression model reached higher R\({}^{2}\) scores than the Graziani et al. [25] model in all 4 of the selected cohorts (**Suppl. Table 5**). To determine the reason for our superior performance over Graziani et al. [25] regression, we conducted an ablation study of the CAMIL regression approach. These results revealed that the inferior performance of the Graziani et al. [25] approach for predicting clinical biomarkers is mainly due to the standard stochastic gradient descent optimizer, compared to the stochastic gradient descent with adaptive moments optimizer in our CAMIL regression approach (**Suppl. Table 7**). Taken together, these data indicate that the CAMIL regression method outperforms the Graziani et al. [25] regression method and the CAMIL classification method. Consequently, the regression method by Graziani et al. [25] is not further compared to CAMIL regression and classification in subsequent experiments. Moreover, we investigated additional aspects of model performance which the AUROC does not capture [28]. We compared CAMIL regression to CAMIL classification by quantifying the absolute distance between the medians of the normalized scores for the positive and negative samples (**Fig. 2C-F**). For example, for detection of HRD status in endometrial cancer, the AUROC on the CPTAC test cohort was 0.98 \(\pm\) 0.02 for CAMIL classification and 0.96 \(\pm\) 0.01 for CAMIL regression. This difference was not statistically significant (p = 0.095). When the distribution of the CAMIL regression model output (**Fig. 2C-F**) was visualized, we found a greater separation of the predicted HRD scores in positive and negative patients compared to the CAMIL classification approach (**Suppl. Table 4**). The absolute distance between the peaks of the score distributions of positive and negative patients was higher for CAMIL regression than for CAMIL classification. We further quantified this in all tumor entities and found that in all 7 of the selected TCGA cohorts, this distance was larger for CAMIL regression, resulting in a greater class separability. In CPTAC, as compared to the classification-based approach, class separability was improved in 2 out of 5 cohorts when using the regression approach. Overall, our CAMIL regression approach improves the separation distance of the groups' medians by 378% for the test set of the TCGA training cohort, and 19% for the external CPTAC test cohort (**Suppl. Table 4**). 
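To make the separability analysis above concrete, the following is a minimal sketch (not the authors' released code; the `scores`/`labels` inputs and the function name are illustrative stand-ins) of computing the AUROC from continuous model outputs and the absolute distance between the medians of the min-max normalized scores of positive and negative patients:

```python
# Minimal sketch of the evaluation above: AUROC over continuous model outputs
# and median-distance separability on min-max normalized scores. Inputs are
# hypothetical stand-ins for per-patient predictions and binarized labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_and_separability(scores: np.ndarray, labels: np.ndarray):
    auroc = roc_auc_score(labels, scores)  # thresholds swept over the raw scores
    # Min-max normalize so classification [0, 1] and regression (-inf, inf)
    # outputs are compared on the same [0, 1] scale.
    norm = (scores - scores.min()) / (scores.max() - scores.min())
    separability = abs(np.median(norm[labels == 1]) - np.median(norm[labels == 0]))
    return auroc, separability

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 0.5, 100), rng.normal(0.0, 0.5, 100)])
labels = np.concatenate([np.ones(100, dtype=int), np.zeros(100, dtype=int)])
print(auroc_and_separability(scores, labels))
```

A larger median distance under this normalization is what drives the separability gains reported above for the TCGA and CPTAC test sets.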
### Regression predicts key biological process biomarkers from histology Having shown that our CAMIL regression method can predict HRD from histology WSIs, we expanded our experiments to additional biomarkers. We investigated biomarkers related to the three key components of solid tumors: tumor cells, stroma, and immune cells. For tumor cells, we aimed to predict proliferation, as measured by an RNA expression signature [29]. For stroma, we aimed to predict stromal fraction (SF), as assessed via DNA methylation analysis [29]. For immune cells, we investigated the tumor infiltrating lymphocytes regional fraction (TIL RF), the leukocyte fraction (LF), and the lymphocyte infiltration signature score (LISS) [29]. We found that our CAMIL regression method was able to predict all of these five biomarkers with high AUROCs across cancer types in the TCGA cohort (**Suppl. Table 9**). For example, in breast cancer, the AUROCs for these five biomarkers were 0.88 (\(\pm\) 0.02) in TIL RF, 0.83 (\(\pm\) 0.05) in proliferation, 0.81 (\(\pm\) 0.03) in leukocyte fraction, 0.80 (\(\pm\) 0.03) in LISS and 0.80 (\(\pm\) 0.03) in stromal fraction. In colorectal cancer, these AUROCs were 0.79 (\(\pm\) 0.07), 0.59 (\(\pm\) 0.12), 0.76 (\(\pm\) 0.06), 0.70 (\(\pm\) 0.01) and 0.77 (\(\pm\) 0.04), respectively. In all other cancer types, mean AUROCs of above 0.70 were reached (**Suppl. Table 9**). These findings show that the regression-based DL model can be trained to predict tumor cell proliferation, stromal fraction and immune-cell-related biomarkers from H&E histopathology. We compared this to the state-of-the-art CAMIL classification approach using the AUROC with 95%CI as evaluation metric. Using site-aware splits, our proposed CAMIL regression approach outperformed CAMIL classification in 8 out of 34 instances, with the remainder of cases having equal performance for the classification and regression approach (**Fig. 3B**). Regression outperformed classification in TCGA-BRCA in two targets, LF (0.80 \(\pm\) 0.02, p<0.0001) and LISS (0.80 \(\pm\) 0.03, p<0.0001). In TCGA-CRC, the performance between regression and classification was equal for all five targets. In TCGA-LIHC, regression outperformed classification in LISS (0.70 \(\pm\) 0.01, p < 0.001). In TCGA-LUAD, regression outperformed classification in proliferation (0.84 \(\pm\) 0.04, p < 0.0001). In TCGA-LUSC, regression outperformed classification in TIL RF (0.88 \(\pm\) 0.04, p < 0.0001). In TCGA-STAD, regression numerically outperformed classification in proliferation (0.87 \(\pm\) 0.07, p = 0.06), although the difference did not reach statistical significance (p > 0.05). In TCGA-UCEC, regression outperformed classification in the two lymphocyte-based targets, TIL RF (0.82 \(\pm\) 0.04, p < 0.0001) and LISS (0.73 \(\pm\) 0.06, p < 0.001). These findings collectively demonstrate that utilizing the CAMIL regression approach leads to an average 4% increase in the AUROCs, as compared to employing the CAMIL classification approach for the same task of predicting key biological process biomarkers from histology. ### Regression improves interpretability of biomarker predictions from histology Next, we investigated the interpretability of the CAMIL classification model compared to the CAMIL regression model. We evaluated the biological plausibility of spatial prediction heatmaps obtained by deploying the regression model and the classification model on tumors in the site-aware split test set of the TCGA cohort. 
We used the models trained to predict the LISS in breast cancer. Although the LISS is only available as a weak label (one score per WSI), a good model should be able to highlight regions which are associated with the LISS, and these regions should contain lymphocytes. Indeed, we saw that both the classification model and the regression model placed their attention on lymphocyte-rich regions (**Fig. 3C-0**). In the evaluated WSIs, however, the LISS regression model yielded a sharper delineation of lymphocyte-rich regions and placed less attention on areas where histologic features are less relevant. Contrastingly, the LISS classification model demonstrates relatively less confidence in areas with a dense lymphocyte population compared to the regression model, as indicated by a lower attention score (**Fig. 3C-1**). The classification model assigns importance to regions without any presumed clinical relevance, as evidenced by the fact that the model highlighted the tissue edge, which lacks high-density lymphocyte regions (**Fig. 3C-2**). We quantified these findings by a blinded interpretability review of 42 attention heatmaps from the classification and regression models by KJH, a pathology resident. Based on the expert review, the CAMIL regression approach produced the most interpretable attention heatmaps in 34 out of 42 cases. In 6 out of 42 cases, the CAMIL classification approach was more interpretable. Similar interpretability between the CAMIL classification and regression approaches was observed in 2 out of 42 cases. Hence, CAMIL regression outperforms CAMIL classification in interpretability in 81% of cases as observed in a blinded review. Taken together, these data demonstrate that the regression approach gives a statistically significantly better AUROC for the investigated biomarkers (p < 0.05; **Suppl. Table 11**), and a markedly improved interpretability, compared to the classification approach. ### Regression-based biomarkers improve survival prediction in colorectal cancer Biological processes of tumor cell proliferation, deposition of stromal components, and infiltration by lymphocytes are biologically relevant during tumorigenesis and progression, and are known to be related to clinical outcome.[30, 31] Thus, prediction of lymphocytic infiltration from H&E pathology slides should be relevant for prognostication. We investigated this in a large cohort of 2,297 patients with colorectal cancer from the Darmkrebs: Chancen der Verhütung durch Screening (DACHS) study, for which H&E WSIs and long-term (10 years) follow-up data were available for overall survival (**Suppl. Table 15**). First, we deployed the CAMIL classification models that were trained on colorectal cancer patients in TCGA, which obtained similar AUROCs in all biomarkers (**Fig. 3B**). We deployed these models on WSIs from patients enrolled in DACHS, obtaining a binarized prediction label for each patient. We then assessed the prognostic impact of this predicted label with univariate and multivariate Cox Proportional Hazard models for overall survival (**Fig. 4A and 4B**), yielding hazard ratios (HR). We found that the classification models reached significant risk-group stratification in 3 out of 5 biomarkers (**Fig. 4A, Suppl. Table 12**): leukocyte fraction (HR=0.74, p < 0.0001), LISS (HR=0.74, p < 0.0001), and stromal fraction (HR=0.77, p < 0.0001). These hazard ratios represent only a modest predictability of survival. In the multivariate survival model (**Fig. 4B, Suppl. 
Table 13**), the classification models show significant prognostic capabilities in only 2 out of 5 biomarkers: leukocyte fraction (HR=0.83, p = 0.0394) and LISS (HR=0.82, p = 0.0265). When we repeated the procedure with continuous scores obtained from the CAMIL regression models, we found that the regression models markedly improved the survival prediction. The regression model reached significant risk-group stratification in 3 out of 5 biomarkers (**Fig. 4A**): leukocyte fraction (HR=0.18, p < 0.01), LISS (HR=0.03, p < 0.0001) and TIL regional fraction (HR=0.21, p < 0.01). This effect was also observed when the scores obtained from the CAMIL regression model were binarized at the median before using them as an input for the univariate Cox Proportional Hazard model (**Suppl. Table 14**), showing consistent risk-group stratification superiority for regression-based biomarkers. For the multivariate survival model (**Fig. 4B, Suppl. Table 13**), the regression models show significant prognostic capabilities in the same 2 biomarkers: leukocyte fraction (HR=0.20, p < 0.01) and LISS (HR=0.14, p < 0.01). Again, the HRs for regression are significantly further away from non-significance (HR=1), with non-overlapping 95%CIs, compared to the classification models. Similar observations were made for the models trained on breast cancer patients from TCGA and deployed on colorectal cancer patients from DACHS, corroborating the improved generalizability of regression on biomarkers across different cancer types (**Fig. 4C and 4D**). Taken together, these data demonstrate that by training models on biologically relevant biomarkers with weakly supervised learning, the resultant regression models are better predictors of survival than their classification counterparts. Therefore, regression models enhance the use of weakly supervised learning to build DL systems of potential clinical utility. ## Discussion Since 2018, the field of digital pathology has rapidly expanded to include the development of tools for predicting molecular biomarkers from routine tumor pathology sections, which has led to the development of clinically approved products. Traditional DL methods have limited the analysis of many biomarkers, including HRD and gene expression signatures, which are continuous values, by categorizing them into discrete classes. Our study provides direct evidence that novel regression networks, such as the CAMIL regression method described in this study, which builds on recent work using attention-based multiple instance learning and self-supervised pre-training of the feature extractor[18, 20, 26], outperform traditional classification networks in predicting these biomarkers. This approach unlocks a key clinical application area for pathology-based biomarker prediction. Our proposed CAMIL regression approach has shown promising results in improving the accuracy and separability of biomarker predictions compared to CAMIL classification. This improvement is particularly noticeable for biomarkers that have a clinically established threshold for categorization, such as HRD. Similar improvements are observed for biomarkers that do not have any clinically relevant cut-off point and would traditionally necessitate dichotomization for analysis, such as immune biomarkers. In addition, our CAMIL regression approach demonstrates better generalization capabilities than the regression approach by Graziani et al.[25], as seen in the external test cohort. 
We identified that the optimizer used in Graziani et al.[25] predominantly caused the regression model to converge to the mean, which explains the observed difference. Furthermore, our study highlights the advantages of regression-based biomarker prediction over classification-based prediction in terms of interpretability. We demonstrated that, for tumor infiltrating lymphocytes, attention heatmaps generated through regression were preferred in 81% of cases for their interpretability compared to those generated through classification. Regression also resulted in an improvement in survival prediction based on immunologic biomarkers, as it allowed for more effective stratification of risk groups for overall survival compared to classification models. The biomarkers were deliberately chosen on the basis of their prognostic capabilities[32, 33, 34, 35], and are better reflected by the tumor morphology analysis through the CAMIL regression approach as compared to the CAMIL classification approach. This study has several limitations. The experiments were limited to a select number of tumors and clinical targets, and not all analyzed clinical targets had an external test set with the same clinical information available. This resulted in meta-external test sets through site-aware splits, and blind deployments on an external cohort. Additionally, none of the hyperparameters of the trained models were optimized. Further research could expand the analysis to a wider variety of cancers and clinical targets, while also exploring potential pitfalls of regression in computational pathology. The approaches described here, however, provide a proof-of-principle for the use of regression-based attMIL systems and their potential impact for the inference of biomarkers and prediction of outcomes from histologic WSIs, expanding the repertoire of applications of DL in precision medicine. ## Materials and Methods ### Ethics statement We examined anonymized patient samples from several academic institutions in this investigation. This analysis has been approved by the ethical boards at DACHS. CPTAC and TCGA did not require formal ethics approval for a retrospective study of anonymised samples. The overall analysis was approved by the Ethics commission of the Medical Faculty of the Technical University Dresden (BO-EK-444102022). ### Image Data and Cohorts A total of 11,671 raw WSIs were scanned by an Aperio ScanSlide scanner and pre-processed in this study. Two types of clinical targets were analyzed to observe the performance of the classification and regression models: 1) continuous variables with a known clinically relevant cut-off for categorization, and 2) continuous variables with unknown clinically relevant cut-offs, thus requiring categorization by splitting at the median. These categories of targets were chosen because the literature describes a loss of information when splitting at the median[23], but not when utilizing clinically relevant cut-offs before training the model. The target with a clinically relevant cut-off is homologous recombination deficiency (HRD) (**Suppl. Table 16**), a clinically relevant biomarker in solid tumor types, such as breast cancer. One way to calculate HRD is by adding up the three subscores, Loss of Heterozygosity (LOH), Telomeric Allelic Imbalance (TAI) and large-scale state transitions (LST), giving a continuous value ranging from 0 to 103 in the training sets. 
A clinically relevant cut-off point of HRD >= 42 was used to binarize the continuous score[36]. The targets without a known clinically relevant cut-off point are biological process biomarkers (**Suppl. Table 17**), which are interesting to analyze due to their prominent role in immunotherapy outcome prediction[29, 37, 38]: Stromal Fraction (SF) with range [0, 0.92] and leukocyte fraction (LF) with range [0, 0.96] as assessed via DNA methylation analysis, lymphocyte infiltration signature score (LISS) with range [-3.49, 4.17] and proliferation (Prolif.) with range [-2.86, 1.59], as measured by RNA expression data, and tumor infiltrating lymphocytes regional fraction (TIL RF) with range [0, 63.65], quantified using a DL-based classification. For TCGA-LIHC, there was no data available for TIL regional fraction, leading to an analysis of 5 targets in 7 cancer types with 5-fold cross-validation, resulting in (35-1)*5 models for each modeling type, of which the AUROC \(\pm\) 95%CI of the 5 folds per target and tumor type was reported. ### Model description The entire image processing pipeline, from whole-slide image (WSI) to patient-level predictions, consisted of three main steps: 1) image pre-processing, 2) feature extraction, 3a) classification-based attMIL and 3b) regression-based attMIL for score aggregation resulting in patient-level predictions (**Fig. 1A and 1B**). All WSIs in the experiments were tessellated into image patches at a resolution of 224 by 224 pixels with an edge length of 256 µm, resulting in a Microns Per Pixel (MPP) value of approximately 1.14. After tessellation, every image patch underwent a rejection filter using the Canny edge detection method[39], removing blurry patches and the white background of the image when two or fewer edges were detected in the patches. The remaining patches were color-normalized in order to reduce the H&E-staining variance across patient cohorts according to the Macenko spectral matching technique[40], with a prior added step of brightness standardization. For pre-processing, our end-to-end WSI pre-processing pipeline was utilized. The target image used to define the color distribution was uploaded to the GitHub repository. Every pre-processed image patch was turned into a 2048-dimensional feature vector through inference of an ImageNet-weighted ResNet50-based self-supervised contrastive clustering model fine-tuned on 32,000 WSIs from different cancer types; RetCCL[26]. The feature extraction resulted in an _(n x 2048)_ feature matrix per patient, where n is the number of _(224 x 224 pixels)_ pre-processed image patches. ### Experimental setup and implementation details For the experiments, 5-fold cross-validation on patient-level with site-aware splits was performed to train the models. Site-aware splits ensure that patients are stratified and grouped by the hospital the WSI originated from, creating a stratified random 80-20 split which forces all patients from the same hospital to exist in either the training and internal validation set, or the internal test set, while retaining ground-truth class distributions. Specifically, in The Cancer Genome Atlas (TCGA), site-specific histological features were shown to be present in the WSIs, causing biased model evaluations when not accounted for accordingly during the training procedure [27]. The basis for the weakly supervised classification and regression was adapted from the attention-based multiple instance learning (attMIL) method by Ilse et al. [41]. 
Our proposed model used Balanced MSE [42] as a loss function to account for the natural class imbalance in clinical settings, as well as the Adam optimizer [43] and an attention component followed by an MLP head [41], and was trained for 25 epochs. The dropout layer was removed, due to a loss of performance for regression in tabular data settings [44]. The attMIL variant in our proposed CAMIL regression differs from Ilse et al. by swapping their feature extractor with a pre-trained ResNet50 with ImageNet weights, fine-tuned on 32,000 histopathology images in a self-supervised manner using contrastive clustering, which was shown to yield significantly better results on WSI image analysis [26]. Moreover, the classification head consisting of a fully-connected (FC) layer and sigmoid operation was swapped with custom heads to allow for classification and regression tasks to be performed. The attention component was not altered. To evaluate the relative performance of classification and regression, first, the CAMIL regression method was compared with 1) the regression method from Graziani et al. and 2) the CAMIL classification method on the continuous HRD score and clinically-relevant binarized HRD score, respectively. Then, CAMIL regression was compared to CAMIL classification on continuous biomarkers related to biological processes with no known clinically-relevant cut-off points, where the median score per target was used for binarization. Moreover, an expert review by a pathology resident was conducted on attention heatmaps produced by CAMIL classification and CAMIL regression to determine which method yielded the most interpretable heatmaps. Finally, the prognostic capabilities of CAMIL regression versus CAMIL classification were evaluated on an external data cohort DACHS-CRC by predicting survival of groups stratified by the models which were trained on the same biological process biomarkers and extracted features. For the HRD scores, the models were trained on TCGA-BRCA, TCGA-CRC, TCGA-GBM, TCGA-LUAD, TCGA-LUSC, TCGA-PAAD, TCGA-UCEC and externally validated on CPTAC-LUAD, CPTAC-LSCC, CPTAC-PDA and CPTAC-UCEC. For the biological process biomarkers, the models were trained on TCGA-BRCA, TCGA-CRC, TCGA-LUAD, TCGA-LUSC, TCGA-LIHC, TCGA-STAD and TCGA-UCEC. Every model that was compared, both regression and classification, consisted of the exact same patients for training, internal validation, internal testing and external testing (**Suppl. Table 16 and 17**). For the regression method from Graziani et al., we introduced the self-supervised component as feature extractor [26] followed by embedding-level attention aggregation, instead of the ImageNet-weighted ResNet18 backbone followed by patch-level attention aggregation in the original study by Graziani et al. [25] As it was shown that the self-supervised backbone increases performance and generalizability compared to an ImageNet-weighted architecture as backbone [26], we added the self-supervised component in order to compare the regression heads in isolation. The commonalities between the models are the learning rate (1.00E-04), weight decay (1.00E-02), patience (12 epochs), the attention component [41] and the fit-one-cycle learning rate scheduling policy [45]. The differences in the models' hyperparameters and optimization strategies (**Suppl. Table 6**) between Graziani et al. and our CAMIL regression model were broken down in an ablation study to find the reason for the performance differences of the regression heads. 
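As a rough illustration of the model just described, here is a minimal PyTorch sketch of an attention-based MIL regressor in the spirit of Ilse et al. [41]: per-tile feature vectors are pooled by learned attention weights and mapped to one continuous score per patient. The layer sizes, the toy inputs, and the plain MSE stand-in for Balanced MSE [42] are illustrative assumptions, not the exact released configuration:

```python
# Sketch of an attMIL regression head (in the spirit of Ilse et al. [41]).
# Layer sizes and the plain MSE stand-in for Balanced MSE [42] are assumptions.
import torch
import torch.nn as nn

class AttMILRegressor(nn.Module):
    def __init__(self, in_dim: int = 2048, hidden: int = 256):
        super().__init__()
        # Attention scorer assigning one weight per tile
        self.attention = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        # Regression head producing one continuous score per patient
        self.head = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, bag: torch.Tensor):
        # bag: (n_tiles, in_dim) feature matrix of one patient
        a = torch.softmax(self.attention(bag), dim=0)   # (n_tiles, 1) attention weights
        pooled = (a * bag).sum(dim=0)                   # (in_dim,) patient embedding
        return self.head(pooled).squeeze(-1), a.squeeze(-1)

model = AttMILRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-2)
features = torch.randn(500, 2048)   # hypothetical RetCCL features of one WSI
target = torch.tensor(42.0)         # hypothetical continuous HRD score
pred, attn = model(features)
loss = nn.functional.mse_loss(pred, target)
loss.backward()
optimizer.step()
```

The learning rate and weight decay mirror the values stated above (1.00E-04 and 1.00E-02); terminating the head with a sigmoid instead of a plain linear output would recover the classification variant.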
### Statistics and endpoints The classification and regression methods were made comparable by utilizing the area under the receiver operating characteristic (AUROC) metric. For the definition of the binarized groups required for the AUROCs, the clinically-relevant cut-off for HRD was used, while for the biological process biomarkers, the continuous targets were split at the median. The prediction scores of the classification model [0-1] and the predictions of the regression models (\(-\infty,\infty\)) were used as continuous scores for all the possible thresholds of the AUROC [46]. By utilizing this approach, it was possible to test which type of model, when provided with the same ground-truth binarized label, had the least overlap between the predicted score distributions for different groups and thus achieved the highest AUROC. However, the AUROC measures only the separation of groups' score distributions, but does not account for the distance between the distributions. Therefore, to determine whether there is an increased distance between distributions, the median and interquartile range (IQR) were calculated for the clinically relevant HRD+ and HRD- groups. However, this calculation was not performed for the biological process biomarkers due to the unclear relevance of distance between the dichotomized groups. To determine statistical significance of the AUROCs, the 95% confidence interval (CI) of the 5 training folds was calculated for each model. In order to identify if the AUROCs of the three compared models (CAMIL classification, regression from Graziani et al., and our proposed CAMIL regression) had a significant difference for the HRD target, the repeated measures ANOVA statistical analysis was performed, which resulted in an F value for each tumor type the three models were trained on. If the difference between the three models was statistically significant, the dependent one-sided t-test for paired samples statistical analysis was performed in order to determine if CAMIL regression outperformed CAMIL classification, resulting in a t-statistic with 95%CI for every model comparison for every analyzed tumor type of the internal test set from TCGA. For the external test set, the repeated measures ANOVA was also performed, after which two dependent one-sided t-tests with Bonferroni correction were performed, resulting in two t-statistics with 97.5%CI for every model comparison of every analyzed tumor type. For the biological process biomarkers' models, a dependent two-sided t-test with 95%CI was performed to test the alternative hypothesis that the 5-fold mean AUROCs of CAMIL classification and CAMIL regression were significantly different from each other. To determine the prognostic capabilities of the biological process biomarkers' models, survival prediction analysis was performed on an external cohort, DACHS. All 5 models trained through site-aware splits were blindly deployed, and the mean of the predicted scores was used for further analysis. The univariate (UV) and multivariate (MV) Cox proportional-hazards (PH) regression analyses were independently performed to determine the Hazard Ratio (HR) of the classification and regression models' predictive biomarker. The continuous scores from the regression models were used for the Cox PH analyses, as well as the binarized continuous scores, to rule out bias in the prognostic capabilities solely through which variant of the continuous score was used. 
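As a loose illustration of the statistics just described, the sketch below pairs a dependent one-sided t-test over the five fold-wise AUROCs with a univariate Cox PH fit; the data and column names are hypothetical, and the multivariate variant simply adds the covariate columns listed next:

```python
# Hypothetical sketch of the statistical tests in this section.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from lifelines import CoxPHFitter

# Dependent one-sided t-test: do the 5 fold-wise AUROCs of CAMIL regression
# exceed those of CAMIL classification for one tumor type?
auroc_reg = np.array([0.81, 0.79, 0.83, 0.80, 0.78])   # illustrative fold AUROCs
auroc_clf = np.array([0.77, 0.76, 0.79, 0.78, 0.75])
t_stat, p_value = ttest_rel(auroc_reg, auroc_clf, alternative="greater")

# Univariate Cox PH: hazard ratio of a model-predicted biomarker score for
# overall survival; the DACHS deployment would feed real scores into such a frame.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "time": rng.exponential(60.0, 200),       # follow-up time in months
    "event": rng.binomial(1, 0.4, 200),       # 1 = death observed
    "pred_score": rng.normal(0.0, 1.0, 200),  # continuous model output
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(t_stat, p_value, cph.hazard_ratios_["pred_score"])
```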
The prognostic capabilities of the classification and regression models were independently analyzed together with three covariates: age (continuous, \(\mathbb{R}^{+}\)), sex (binary, 0: female, 1: male) and tumor stage (ordinal, \(\mathbb{Z}\in[1,4]\)). Thus, one model's scores per target and the three covariates were analyzed for each model independently. Statistical significance of the HR is reached when the 95%CI does not cross HR=1. ### Visualization and explainability To compare the separability of the CAMIL classification and CAMIL regression models' score distributions for HRD at a similar scale, all values for both models were min-max normalized individually to rescale every model's score output to [0,1]. To explain the classification and regression CAMIL models' capability of decision-making using clinically relevant features, the attention component from the attMIL model architecture was utilized. The attention heatmaps were created by loading the attMIL model architectures for classification and regression into a fully convolutional equivalent[47] with their respective weights from the training procedure, which allows for a high-resolution attention heatmap, rather than the 224x224 patches the model was trained on. By running inference on the WSIs of the patient, the attention layer which resulted from the patient-wise prediction was extracted and used as an overlay on the WSI to indicate hot zones which the model used in decision-making. The TCGA-BRCA cohort was chosen for visualization to observe the contrast between equal and superior performance of the regression model compared to the classification model in lymphocyte-based targets. For each target, the classification and regression model were trained, validated and tested on the exact same patients using site-aware splits. The attention heatmaps for the blinded review were generated from the 42 patients with the highest expression of the LISS biomarker from the unseen internal TCGA-BRCA test set through the trained classification and regression models, resulting in 84 heatmaps in total. The models' clinical interpretability was reviewed by a pathologist, choosing the most interpretable attention heatmap for each of the 42 patients. ### Data and Code availability All source codes are available under an open-source license on GitHub. The pre-processing pipeline is found at [https://github.com/KatherLab/end2end-WSI-preprocessing/releases/tag/v1.0.0-preprocessing](https://github.com/KatherLab/end2end-WSI-preprocessing/releases/tag/v1.0.0-preprocessing), the classification pipeline is found at [https://github.com/KatherLab/marugoto/releases/tag/v1.0.0-classification](https://github.com/KatherLab/marugoto/releases/tag/v1.0.0-classification), the regression pipeline is found at [https://github.com/KatherLab/marugoto/releases/tag/v1.0.0-regression](https://github.com/KatherLab/marugoto/releases/tag/v1.0.0-regression), and the classification and attention heatmaps are found at [https://github.com/KatherLab/highres-WSI-heatmaps/releases/tag/v1.0.0-heatmaps](https://github.com/KatherLab/highres-WSI-heatmaps/releases/tag/v1.0.0-heatmaps). The slides for TCGA are available at [https://portal.gdc.cancer.gov/](https://portal.gdc.cancer.gov/). The slides for CPTAC are available at [https://proteomics.cancer.gov/data-portal](https://proteomics.cancer.gov/data-portal). 
The molecular data for TCGA is available at [https://www.cbioportal.org/](https://www.cbioportal.org/) and additional biomarker data is available from Thorsson et al.[28] ## Figures Figure 1: [...] Graziani et al., and CAMIL regression as proposed in this study. C) The performance metrics and their respective confidence intervals (CI) to evaluate the performance of the three separately trained heads of the model, including the coefficient of determination (R\({}^{2}\)) for the regression models, the area under the receiver operating characteristic (AUROC) for all models, analysis of variance (ANOVA) with repeated measures for the homologous recombination deficiency (HRD) and biological process biomarkers, and expert review of attention heatmaps with univariate (UV) and multivariate (MV) Cox proportional-hazard (PH) models for the biological process models. D) The cohorts used for training and external validation represented in the inner and outer circle, respectively. The training cohorts are from The Cancer Genome Atlas (TCGA) programme for all clinical targets, with the external validation cohorts coming from the Clinical Proteomic Tumor Analysis Consortium (CPTAC) effort and the Darmkrebs: Chancen der Verhütung durch Screening (DACHS) study for the HRD target and the biological process biomarkers, respectively. The biological process biomarkers are tumor infiltrating lymphocytes regional fraction (TIL RF), proliferation (Prolif.), leukocyte fraction (LF), lymphocyte infiltration signature score (LISS) and stromal fraction (SF). The considered cancer types in this study are breast cancer (BRCA), colorectal cancer (CRC), glioblastoma (GBM), lung adenocarcinoma (LUAD), lung squamous cell cancer (LUSC), pancreas adenocarcinoma (PAAD), endometrial cancer (UCEC), liver hepatocellular carcinoma (LIHC) and stomach cancer (STAD). Figure 2: Performance overview of classification versus regression approaches predicting the homologous recombination deficiency (HRD) score. Panel A) and B) show boxplots of area under the receiver operating characteristic (AUROC) values from HRD predictions of (I) CAMIL classification, (II) regression by Graziani et al. and (III) CAMIL regression on the internal test set from The Cancer Genome Atlas (TCGA) and the external test set from the Clinical Proteomic Tumor Analysis Consortium (CPTAC) effort, respectively. Cancer types include glioblastoma (GBM), pancreas adenocarcinoma (PAAD), endometrial cancer (UCEC), colorectal cancer (CRC), breast cancer (BRCA), lung adenocarcinoma (LUAD) and lung squamous cell cancer (LUSC). Non-significant AUROC values are shown as transparent violin instances, and statistical tests include an analysis of variance with repeated measures displayed at the bottom and dependent one-sided t-tests, with Bonferroni correction for multiple hypothesis testing in the external test set displayed on top. Panel C) and D) show the proportional distribution plot of the normalized classification scores of the internal test set from the CAMIL classification model trained on TCGA-UCEC, and the external test set CPTAC-UCEC, respectively. Panel E) and F) show the proportional distribution plot of the normalized regression scores of the internal test set from the CAMIL regression model trained on TCGA-UCEC, and the external test set CPTAC-UCEC, respectively. 
In the distribution plots, the ground-truth classes are depicted as a darker shade (HRD+) and lighter shade (HRD-) of the color designated to CAMIL regression and CAMIL classification, respectively. Figure 3: CAMIL classification versus CAMIL regression for the prediction of continuous biological process biomarkers of the tumor microenvironment. A) The scope in which we analyzed the tumor microenvironment (TME) consists of tumor cells, stroma and immune cells. B) Heatmap depicting area under the receiver operating curve (AUROC) deltas between CAMIL regression and CAMIL classification for 5 biological process biomarkers (tumor infiltrating lymphocytes regional fraction (TIL RF), proliferation (Prolif.), leukocyte fraction (LF), lymphocyte infiltration signature score (LISS) and stromal fraction (SF)) on the test sets of breast cancer (BRCA), colorectal cancer (CRC), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell cancer (LUSC), pancreas adenocarcinoma (PAAD), stomach cancer (STAD) and endometrial cancer (UCEC) from The Cancer Genome Atlas (TCGA) program for site-aware split folds. The higher the positive delta, the better the CAMIL regression model performed. Statistical significance is indicated with an asterisk as a result of a dependent one-sided t-test (α=0.05). C) Attention heatmap of a slide from the test set of TCGA-BRCA. Image 0 shows the entire slide, with an area of interest for diagnostics in image 1. Image 2 shows an area presumably containing non-essential diagnostic information. This is repeated for the original slide, the attention heatmap using the classification model, and the attention heatmap using our CAMIL regression model in fold 0 for LISS. The higher the attention score of an area, the more important it is for the model's decision making. Icon source: smart.servier.com Figure 4: **Overview of the externally validated prognostic capabilities of the trained models to predict overall survival.** Panel A) and B) display a univariate (UV) Cox Proportional-Hazard (PH) analysis of the models trained on The Cancer Genome Atlas (TCGA) program, deployed on the external colorectal cancer (CRC) samples from the Darmkrebs: Chancen der Verhütung durch Screening (DACHS) study for the TCGA-CRC and TCGA breast cancer (BRCA) models, respectively. Panel C) and D) display a multivariate (MV) Cox PH analysis of the trained immune cell models, deployed on the external DACHS-CRC cohort for the TCGA-CRC and TCGA-BRCA models, respectively. 
Each model’s output, from CAMIL classification (categorical class predictions) and CAMIL regression (continuous score predictions), is considered independently together with the three covariates tumor stage, age and sex for the MV Cox PH analysis. The observed biological process biomarkers are tumor infiltrating lymphocytes regional fraction (TIL RF), proliferation (Prolif.), leukocyte fraction (LF), lymphocyte infiltration signature score (LISS), and stromal fraction (SF). Stars indicate statistical significance (p < 0.05) for the hazard ratios (HR) and their 95% confidence interval (CI).
2304.13731
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
The immense scale of the recent large language models (LLM) allows many interesting properties, such as, instruction- and chain-of-thought-based fine-tuning, that has significantly improved zero- and few-shot performance in many natural language processing (NLP) tasks. Inspired by such successes, we adopt such an instruction-tuned LLM Flan-T5 as the text encoder for text-to-audio (TTA) generation -- a task where the goal is to generate an audio from its textual description. The prior works on TTA either pre-trained a joint text-audio encoder or used a non-instruction-tuned model, such as, T5. Consequently, our latent diffusion model (LDM)-based approach TANGO outperforms the state-of-the-art AudioLDM on most metrics and stays comparable on the rest on AudioCaps test set, despite training the LDM on a 63 times smaller dataset and keeping the text encoder frozen. This improvement might also be attributed to the adoption of audio pressure level-based sound mixing for training set augmentation, whereas the prior methods take a random mix.
Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, Soujanya Poria
2023-04-24T07:45:28Z
http://arxiv.org/abs/2304.13731v2
# Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model ###### Abstract The immense scale of the recent large language models (LLM) allows many interesting properties, such as, instruction- and chain-of-thought-based fine-tuning, that has significantly improved zero- and few-shot performance in many natural language processing (NLP) tasks. Inspired by such successes, we adopt such an instruction-tuned LLM Flan-T5 as the text encoder for text-to-audio (TTA) generation--a task where the goal is to generate an audio from its textual description. The prior works on TTA either pre-trained a joint text-audio encoder or used a non-instruction-tuned model, such as, T5. Consequently, our latent diffusion model (LDM)-based approach (Tango) outperforms the state-of-the-art AudioLDM on most metrics and stays comparable on the rest on the AudioCaps test set, despite training the LDM on a 63 times smaller dataset and keeping the text encoder frozen. This improvement might also be attributed to the adoption of audio pressure level-based sound mixing for the training set augmentation, whereas the prior methods take a random mix. ## 1 Introduction Following the success of automatic text-to-image (TTI) generation [31; 32; 33], many researchers have also succeeded in text-to-audio (TTA) generation [17; 18; 43] by employing similar techniques as the former. Such models may have strong potential use cases in media production, where creators are always looking for novel sounds that fit their creations. This could be especially useful in prototyping or small-scale projects where producing the exact sound could be infeasible. Beyond this, these techniques also pave the path toward general-purpose multimodal AI that can simultaneously recognize and generate multiple modalities. To this end, the existing works use a large text encoder, such as, RoBERTa[19] and T5[30], to encode the textual description of the audio to be generated. Subsequently, a large transformer decoder or a diffusion model generates the audio prior, which is subsequently decoded by a pre-trained VAE, followed by a vocoder. We instead assume that replacing the text encoder with an instruction-tuned large language model (LLM) would improve text understanding and overall audio generation without any fine-tuning, due to its recently discovered gradient-descent mimicking property [4]. To augment training samples, the existing methods take a randomly generated combination of audio pairs, along with the concatenation of their descriptions. Such a mixture does not account for the overall pressure level of the source audios, potentially leading to a louder audio overwhelming the quieter one. Thus, we employ a pressure level-based mixing method, as suggested by Tokozume et al. [39]. Our model (Tango)1 is inspired by the latent diffusion model (LDM) [33] and AudioLDM [18]. However, instead of using CLAP-based embeddings, we used a large language model (LLM) due to its powerful representational ability and fine-tuning mechanism, which can help learn complex concepts in the textual description. Our experimental results show that using an LLM greatly improves text-to-audio generation and outperforms state-of-the-art models, even when using a significantly smaller dataset. In the image generation literature, the effect of LLMs has been studied before by Saharia et al. [35]. However, they considered T5 as the text encoder, which is not pre-trained on instruction-based datasets. 
Flan-T5 [3] is initialized with a T5 checkpoint and fine-tuned on a dataset of 1.8K NLP tasks phrased in terms of instructions and chain-of-thought reasoning. By leveraging instruction-based tuning, Flan-T5 has achieved state-of-the-art performance on several NLP tasks, matching the performance of LLMs with billions of parameters.

Footnote 1: The acronym Tango stands for Text-to-Audio using iNstruction Guided diffusiOn and was suggested by ChatGPT. The word Tango is often associated with music [42] and dance [41]. According to Wikipedia [41], “Tango is a partner dance and social dance that originated in the 1880s along the Rio de la Plata, the natural border between Argentina and Uruguay.” The image above resembles the Tango dance form and was generated by prompting Dalle-V2 with “A couple dancing tango with musical notes in the background”

In Section 3, we empirically show that Tango outperforms AudioLDM and other baseline approaches on most of the metrics on the AudioCaps test set under both objective and subjective evaluations, despite training the LDM on a \(63\) times smaller dataset. We believe that if Tango is trained on a larger dataset such as AudioSet (as Liu et al. [18] did), it would be able to provide even better results and improve its ability to recognize a wider range of sounds.

The overall contribution of this paper is threefold:

1. We do not use any joint text-audio encoder--such as CLAP--for guidance. Liu et al. [18] claim that CLAP-based audio guidance is necessary during training for better performance. We instead use a frozen instruction-tuned pre-trained LLM Flan-T5 with strong text representation capacity for text guidance in both training and inference.
2. AudioLDM needed to fine-tune the RoBERTa [19] text encoder to pre-train CLAP. We, however, keep the Flan-T5 text encoder frozen during LDM training. Thus, we find that the LDM itself is capable of learning text-to-audio concept mapping and composition from a 63 times smaller training set, as compared to AudioLDM, given an instruction-tuned LLM.
3. To mix audio pairs for data augmentation, inspired by Tokozume et al. [39], we consider the pressure levels of the audio pairs, instead of taking a random combination as prior works like AudioLDM do. This ensures good representations of both source audios in the fused audio.

## 2 Method

Tango, as depicted in Fig. 1, has three major components: i) textual-prompt encoder, ii) latent diffusion model (LDM), and iii) mel-spectrogram/audio VAE. The textual-prompt encoder encodes the input description of the audio. Subsequently, the textual representation is used to construct a latent representation of the audio, or audio prior, from standard Gaussian noise, using reverse diffusion. Thereafter the decoder of the mel-spectrogram VAE constructs a mel-spectrogram from the latent audio representation. This mel-spectrogram is fed to a vocoder to generate the final audio.

### Textual-Prompt Encoder

We use the pre-trained LLM Flan-T5-Large (780M) [3] as the text encoder (\(E_{text}\)) to obtain text encoding \(\tau\in\mathbb{R}^{L\times d_{text}}\), where \(L\) and \(d_{text}\) are the token count and token-embedding size, respectively. Due to the pre-training of Flan-T5 models on a large-scale chain-of-thought- (CoT) and instruction-based dataset, Dai et al. [4] posit that they are able to learn a new task very well from the in-context information by mimicking gradient descent through attention weights. This property is missing in the older large models, such as RoBERTa [19] (used by Liu et al.
[18]) and T5 [30] (used by Kreuk et al. [17]). Considering each input sample a distinct task, it might be reasonable to assume that the gradient-descent mimicking property could be pivotal in learning the mapping between textual and acoustic concepts without fine-tuning the text encoder. The richer pre-training may also allow the encoder to better emphasize the key details with less noise and enriched context. This again may lead to a better transformation of the relevant textual concepts into their acoustic counterparts. Consequently, we keep the text encoder frozen, assuming the subsequent reverse diffusion process (see Section 2.2) would be able to learn the inter-modality mapping well for constructing the audio prior. We also suspect that fine-tuning \(E_{text}\) may degrade its in-context learning ability due to gradients from the audio modality, which is out of distribution with respect to the pre-training dataset. This is in contrast with Liu et al. [18], who fine-tune the pre-trained text encoder as a part of the text-audio joint-representation learning (CLAP) to allow audio prior reconstruction from text. In Section 3, we empirically show that such joint-representation learning may not be necessary for text-to-audio transformation.

Figure 1: Overall architecture of Tango.

### Latent Diffusion Model for Text-Guided Generation

The latent diffusion model (LDM) [33] is adapted from Liu et al. [18], with the goal of constructing the audio prior \(z_{0}\) (see Section 2.5) with the guidance of text encoding \(\tau\). This essentially reduces to approximating the true prior \(q(z_{0}|\tau)\) with parameterized \(p_{\theta}(z_{0}|\tau)\). LDM can achieve the above through forward and reverse diffusion processes. The forward diffusion is a Markov chain of Gaussian distributions with scheduled noise parameters \(0<\beta_{1}<\beta_{2}<\dots<\beta_{N}<1\) to sample noisier versions of \(z_{0}\):

\[q(z_{n}|z_{n-1}) =\mathcal{N}(\sqrt{1-\beta_{n}}z_{n-1},\beta_{n}\mathbf{I}), \tag{1}\]
\[q(z_{n}|z_{0}) =\mathcal{N}(\sqrt{\overline{\alpha}_{n}}z_{0},(1-\overline{\alpha}_{n})\mathbf{I}), \tag{2}\]

where \(N\) is the number of forward diffusion steps, \(\alpha_{n}=1-\beta_{n}\), and \(\overline{\alpha}_{n}=\prod_{i=1}^{n}\alpha_{i}\). Song et al. [38] show that Eq. (2) conveniently follows from Eq. (1) through the reparametrization trick that allows direct sampling of any \(z_{n}\) from \(z_{0}\) via a non-Markovian process:

\[z_{n}=\sqrt{\overline{\alpha}_{n}}z_{0}+\sqrt{1-\overline{\alpha}_{n}}\epsilon, \tag{3}\]

where the noise term \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The final step of the forward process yields \(z_{N}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). The reverse process denoises and reconstructs \(z_{0}\) through text-guided noise estimation (\(\hat{\epsilon}_{\theta}\)) using loss

\[\mathcal{L}_{DM}=\sum_{n=1}^{N}\gamma_{n}\mathbb{E}_{\epsilon_{n}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),z_{0}}||\epsilon_{n}-\hat{\epsilon}_{\theta}^{(n)}(z_{n},\tau)||_{2}^{2}, \tag{4}\]

where \(z_{n}\) is sampled from Eq. (3) using standard normal noise \(\epsilon_{n}\), \(\tau\) is the text encoding (see Section 2.1) for guidance, and \(\gamma_{n}\) is the weight of reverse step \(n\) [6], taken to be a measure of signal-to-noise ratio (SNR) in terms of \(\alpha_{1:N}\).
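For concreteness, the forward process of Eq. (3) and the objective of Eq. (4) can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the released implementation: the `unet` callable, the 4-D latent shape, and the buffers `alpha_bar` and `gamma` holding \(\overline{\alpha}_{1:N}\) and \(\gamma_{1:N}\) are placeholders.

```python
import torch

def q_sample(z0, n, alpha_bar):
    # Eq. (3): z_n = sqrt(abar_n) * z0 + sqrt(1 - abar_n) * eps, with eps ~ N(0, I).
    eps = torch.randn_like(z0)
    abar = alpha_bar[n].view(-1, 1, 1, 1)  # per-example \bar{alpha}_n (4-D latents assumed)
    return abar.sqrt() * z0 + (1.0 - abar).sqrt() * eps, eps

def ldm_loss(unet, z0, tau, alpha_bar, gamma):
    # Single-sample Monte Carlo estimate of Eq. (4): draw a random step n,
    # noise z0 up to z_n, and regress the text-guided noise estimate onto eps.
    n = torch.randint(0, alpha_bar.numel(), (z0.shape[0],), device=z0.device)
    z_n, eps = q_sample(z0, n, alpha_bar)
    eps_hat = unet(z_n, n, tau)            # \hat{eps}_theta^{(n)}(z_n, tau)
    w = gamma[n].view(-1, 1, 1, 1)         # SNR-based step weight gamma_n
    return (w * (eps - eps_hat) ** 2).mean()
```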
The estimated noise is used to reconstruct \(z_{0}\):

\[p_{\theta}(z_{0:N}|\tau) =p(z_{N})\prod_{n=1}^{N}p_{\theta}(z_{n-1}|z_{n},\tau), \tag{5}\]
\[p_{\theta}(z_{n-1}|z_{n},\tau) =\mathcal{N}(\mu_{\theta}^{(n)}(z_{n},\tau),\tilde{\beta}^{(n)}), \tag{6}\]
\[\mu_{\theta}^{(n)}(z_{n},\tau) =\frac{1}{\sqrt{\alpha_{n}}}[z_{n}-\frac{1-\alpha_{n}}{\sqrt{1-\overline{\alpha}_{n}}}\hat{\epsilon}_{\theta}^{(n)}(z_{n},\tau)], \tag{7}\]
\[\tilde{\beta}^{(n)} =\frac{1-\bar{\alpha}_{n-1}}{1-\bar{\alpha}_{n}}\beta_{n}. \tag{8}\]

The noise estimation \(\hat{\epsilon}_{\theta}\) is parameterized with U-Net [34] with a cross-attention component to include the text guidance \(\tau\). In contrast, AudioLDM [18] uses audio as the guidance during training. During inference, they switch back to text guidance, as this is facilitated by the pre-trained joint text-audio embedding (CLAP). We did not find audio-guided training and pre-training CLAP to be necessary, as argued in Section 2.1.

### Augmentation

Many text-to-image [28] and text-to-audio [17] works have shown the efficacy of training with fusion-based augmented samples to improve the cross-modal concept-composition abilities of the diffusion network. Therefore, we synthesize additional text-audio pairs by superimposing existing audio pairs on each other and concatenating their captions. Unlike Liu et al. [18] and Kreuk et al. [17], to mix audio pairs, we do not take a random combination of them. Following Tokozume et al. [39], we instead consider the human auditory perception for fusion. Specifically, the audio pressure level \(G\) is taken into account to ensure that a sample with a high pressure level does not overwhelm a sample with a low pressure level. The weight of an audio sample (\(x_{1}\)) is calculated as a relative pressure level (see Fig. 2 in the appendix for its distribution)

\[p=(1+10^{\frac{G_{1}-G_{2}}{20}})^{-1}, \tag{9}\]

where \(G_{1}\) and \(G_{2}\) are the pressure levels of two audio samples \(x_{1}\) and \(x_{2}\), respectively. This ensures good representation of both audio samples, post mixing. Furthermore, as pointed out by Tokozume et al. [39], the energy of a sound wave is proportional to the square of its amplitude. Thus, we mix \(x_{1}\) and \(x_{2}\) as

\[\text{mix}(x_{1},x_{2})=\frac{px_{1}+(1-p)x_{2}}{\sqrt{p^{2}+(1-p)^{2}}}. \tag{10}\]
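The augmentation above is easy to reproduce; a minimal sketch of Eqs. (9)-(10) follows. The RMS-based dB estimate of the pressure level \(G\) is our assumption here--the text does not fix a particular estimator--and caption concatenation is noted only in a comment.

```python
import numpy as np

def pressure_level_db(x, eps=1e-12):
    # One common estimate of the sound pressure level G in dB (an assumption here).
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + eps)

def mix(x1, x2):
    # Eq. (9): relative pressure level p, so a loud clip cannot drown out a quiet one.
    g1, g2 = pressure_level_db(x1), pressure_level_db(x2)
    p = 1.0 / (1.0 + 10.0 ** ((g1 - g2) / 20.0))
    # Eq. (10): energy-aware superposition of the two waveforms.
    return (p * x1 + (1.0 - p) * x2) / np.sqrt(p ** 2 + (1.0 - p) ** 2)

# The captions of the two source clips are concatenated to caption mix(x1, x2).
```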
### Classifier-Free Guidance

To guide the reverse diffusion process to reconstruct the audio prior \(z_{0}\), we employ classifier-free guidance [7] of the text input \(\tau\). During inference, a guidance scale \(w\) controls the contribution of text guidance to the noise estimation \(\hat{\epsilon}_{\theta}\), with respect to unguided estimation, where empty text is passed:

\[\hat{\epsilon}_{\theta}^{(n)}(z_{n},\tau)=w\epsilon_{\theta}^{(n)}(z_{n},\tau)+(1-w)\epsilon_{\theta}^{(n)}(z_{n}). \tag{11}\]

We also trained a model for which the text guidance was randomly dropped for 10% of the samples during training. We found this model to perform equivalently to a model for which text guidance was always used for all samples.

### Audio VAE and Vocoder

The audio variational auto-encoder (VAE) [13] compresses the mel-spectrogram of an audio sample, \(m\in\mathbb{R}^{T\times F}\), into an audio prior \(z_{0}\in\mathbb{R}^{C\times T/r\times F/r}\), where \(C\), \(T\), \(F\), \(r\) are the number of channels, number of time-slots, number of frequency-slots, and compression level, respectively. The LDM (see Section 2.2) reconstructs the audio prior \(\hat{z}_{0}\) using input-text guidance \(\tau\). The encoder and decoder are composed of ResUNet blocks [15] and are trained by maximizing the evidence lower bound (ELBO) [13] and minimizing adversarial loss [9]. We adopt the checkpoint of the audio VAE provided by Liu et al. [18]. Thus, we use their best reported setting, where \(C\) and \(r\) are set to \(8\) and \(4\), respectively. To turn the mel-spectrogram generated by the audio-VAE decoder into audio, we use the same vocoder, HiFi-GAN [14], as Liu et al. [18].

## 3 Experiments

### Datasets and Training

**Text-to-Audio Generation.** We perform our main text-to-audio generation experiments on the AudioCaps dataset [12]. The dataset contains 45,438 audio clips paired with human-written captions for training. The validation set contains 2,240 instances. The audio clips are ten seconds long and were collected from YouTube videos. The clips were originally crowd-sourced as part of the significantly larger AudioSet dataset [5] for the audio classification task. We train our LDM using only the paired (text, audio) instances from the AudioCaps dataset. We use the AudioCaps test set as our evaluation data. The test set contains five human-written captions for each audio clip. We use one caption for each clip, chosen at random following Liu et al. [18], for consistent evaluation with their work. The randomly chosen caption is used as the text prompt, using which we generate the audio signal from our model.

**Audio VAE and Vocoder.** We use the audio VAE model from Liu et al. [18]. This VAE network was trained on the AudioSet, AudioCaps, Freesound2, and BBC Sound Effect Library3 (SFX) datasets. Longer audio clips in Freesound and BBC SFX were truncated to the first thirty seconds and then segmented into three parts of ten seconds each. All audio clips were resampled at 16 kHz for training the VAE network. We used a compression level of 4 with 8 latent channels for the VAE network.

Footnote 2: [https://freesound.org/](https://freesound.org/)

Footnote 3: [https://sound-effects.bbcrewind.co.uk](https://sound-effects.bbcrewind.co.uk)

We also use the vocoder from Liu et al. [18] for audio waveform generation from the mel-spectrogram generated by the VAE decoder. The vocoder is a HiFi-GAN [14] network trained on the AudioSet dataset. All audio clips were resampled at 16 kHz for training the vocoder network.

**Model, Hyperparameters, and Training Details.** We freeze the Flan-T5-Large text encoder in Tango and only train the parameters of the latent diffusion model. The diffusion model is based on the Stable Diffusion U-Net architecture [33; 34] and has a total of 866M parameters. We use 8 channels and a cross-attention dimension of 1024 in the U-Net model. We use the AdamW optimizer [20] with a learning rate of 3e-5 and a linear learning rate scheduler for training. We train the model for 40 epochs on the AudioCaps dataset and report results for the checkpoint with the best validation loss, which we obtained at epoch 39. We use four A6000 GPUs for training Tango, where it takes a total of 52 hours to train 40 epochs, with validation at the end of every epoch. We use a per-GPU batch size of 3 (2 original + 1 augmented instance) with 4 gradient accumulation steps. The effective batch size for training is 3 (instances) \(\times\) 4 (accumulation) \(\times\) 4 (GPUs) \(=48\).
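The optimization setup above can be summarized schematically as follows. This is a hedged sketch rather than the actual training script: the `ldm` loss callable and the dataloader fields are placeholders, and `LinearLR` merely stands in for "a linear learning rate scheduler".

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LinearLR

def train(ldm, loader, epochs=40, lr=3e-5, accum=4):
    opt = AdamW(ldm.parameters(), lr=lr)
    sched = LinearLR(opt, start_factor=1.0, end_factor=0.0,
                     total_iters=epochs * len(loader) // accum)
    for _ in range(epochs):
        for step, batch in enumerate(loader):
            loss = ldm(batch["latents"], batch["text"]) / accum  # scale for accumulation
            loss.backward()
            if (step + 1) % accum == 0:
                # effective batch = per-GPU batch (3) x accumulation (4) x GPUs (4) = 48
                opt.step(); sched.step(); opt.zero_grad()
```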
### Baseline Models

In our study, we examine three existing models: DiffSound by Yang et al. [43], AudioGen by Kreuk et al. [17], and AudioLDM by Liu et al. [18]. AudioGen and DiffSound use text embeddings for conditional generative training, while AudioLDM employs audio embeddings to avoid potential noise from weak textual descriptions in the paired text-audio data. AudioLDM uses audio embeddings from CLAP and asserts that they are effective in capturing cross-modal information. The models were pre-trained on large datasets, including AudioSet, and fine-tuned on the AudioCaps dataset before evaluation, for enhanced performance. Thus, comparing them to our model Tango would not be entirely fair. Despite being trained on a much smaller dataset, our model Tango outperformed the baselines that were trained on significantly larger datasets. We may largely attribute this to the use of the LLM Flan-T5. Therefore, our model Tango sets itself apart from the three existing models, making it an exciting addition to the current research in this area.

It is important to note that the AudioLDM-L-Full-FT checkpoint from Liu et al. [18] was not available for our study. Therefore, we used the AudioLDM-M-Full-FT checkpoint, which was released by the authors and has \(416\)M parameters. This checkpoint was fine-tuned on both the AudioCaps and MusicCaps datasets. We performed a subjective evaluation using this checkpoint in our study. We attempted to fine-tune the AudioLDM-L-Full checkpoint on the AudioCaps dataset. However, we were unable to reproduce the results reported in Liu et al. [18] due to a lack of information on the hyperparameters used. Our model can be compared directly to AudioLDM-L since it has almost the same number of parameters and was trained solely on the AudioCaps dataset. However, it is worth noting that Liu et al. [18] did not release this checkpoint, which made it impossible for us to conduct a subjective evaluation of its generated samples.

### Evaluation Metrics

**Objective Evaluation.** In this work, we used two commonly used objective metrics: Fréchet Audio Distance (FAD) and KL divergence. FAD [11] is a perceptual metric that is adapted from the Fréchet Inception Distance (FID) for the audio domain. Unlike reference-based metrics, it measures the distance between the generated audio distribution and the real audio distribution without using any reference audio samples. On the other hand, KL divergence [43; 17] is a reference-dependent metric that computes the divergence between the distributions of the original and generated audio samples based on the labels generated by a pre-trained classifier. While FAD is more related to human perception, KL divergence captures the similarities between the original and generated audio signals based on broad concepts present in them. In addition to FAD, we also used the Fréchet Distance (FD) [18] as an objective metric. FD is similar to FAD, but it replaces the VGGish classifier with PANN. The use of different classifiers in FAD and FD allows us to evaluate the performance of the generated audio using different feature representations.

**Subjective Evaluation.** Following Liu et al. [18] and Kreuk et al. [17], we ask six human evaluators to assess two aspects--overall audio quality (OVL) and relevance to the input text (REL)--of 30 randomly selected baseline- and Tango-generated audio samples on a scale from 1 to 100. The evaluators were proficient in the English language and instructed well to make a fair assessment.
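Both FAD and FD reduce to the Fréchet distance between Gaussians fitted to reference and generated embeddings (VGGish embeddings for FAD, PANN embeddings for FD). A sketch of that final computation, assuming the embeddings have already been extracted:

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_emb, gen_emb):
    # Fit Gaussians N(mu, Sigma) to both embedding sets and compute
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
    mu1, mu2 = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    s1 = np.cov(real_emb, rowvar=False)
    s2 = np.cov(gen_emb, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts from numerical noise
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```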
### Results and Analysis

**Main Results.** We report our main comparative study in Table 1. We compare our proposed method Tango with DiffSound [43], AudioGen [17], and various configurations of AudioLDM [18]. AudioLDM obtained its best results with 200 sampling steps from the LDM during inference. For a fair comparison, we also use 200 inference steps in Tango and in our additional AudioLDM experiments. We used a classifier-free guidance scale of 3 for Tango. AudioLDM used a guidance scale among {2, 2.5, 3} in their various experiments.

Tango achieves new state-of-the-art results for objective metrics when trained only on the AudioCaps dataset, with scores of 24.52 FD, 1.37 KL, and 1.59 FAD. This is significantly better than the most direct baseline AudioLDM-L, which also used only the AudioCaps dataset for LDM training. We attribute this to the use of Flan-T5 as the text encoder in Tango. We also note that Tango matches or beats the performance of the AudioLDM-*-FT models, which used significantly (\(\sim\) 63 times) larger datasets for LDM training. The AudioLDM-*-FT models used two phases of LDM training--first on the collection of the four datasets, and then only on AudioCaps. Tango is thus far more sample-efficient as compared to the AudioLDM-*-FT model family. Tango also shows very promising results for subjective evaluation, with an overall audio quality score of 85.94 and a relevance score of 80.36, indicating its significantly better audio generation ability compared to AudioLDM and other baseline text-to-audio generation approaches.

**Training on Larger Datasets.** In this experiment, we followed a two-step process to enhance the performance of Tango. First, we conducted pre-training using a diverse corpus consisting of textual prompts and audio samples sourced from the WavCaps [24], AudioCaps, ESC [26], UrbanSound [36], MusicCaps [1], GTZAN [40], and Musical Instruments4 datasets.

\begin{table} \begin{tabular}{c c c c|c c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Datasets**} & \multirow{2}{*}{**Text**} & \multirow{2}{*}{**\#Params**} & \multicolumn{3}{c|}{**Objective Metrics**} & \multicolumn{2}{c}{**Subjective Metrics**} \\ & & & & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) & OVL \(\uparrow\) & REL \(\uparrow\) \\ \hline Ground truth & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(91.61\) & \(86.78\) \\ \hline DiffSound [43] & AS+AC & ✓ & \(400\)M & \(47.68\) & \(2.52\) & \(7.75\) & \(-\) & \(-\) \\ AudioGen [17] & AS+AC+8 others & ✓ & \(285\)M & \(-\) & \(2.09\) & \(3.13\) & \(-\) & \(-\) \\ AudioLDM-S & AC & ✗ & \(181\)M & \(29.48\) & \(1.97\) & \(2.43\) & \(-\) & \(-\) \\ AudioLDM-L & AC & ✗ & \(739\)M & \(27.12\) & \(1.86\) & \(2.08\) & \(-\) & \(-\) \\ AudioLDM-M-Full-FT\({}^{\dagger}\) & AS+AC+2 others & ✗ & \(416\)M & \(26.12\) & \(\mathbf{1.26}\) & \(2.57\) & \(79.85\) & \(76.84\) \\ AudioLDM-L-Full\({}^{\dagger}\) & AS+AC+2 others & ✗ & \(739\)M & \(32.46\) & \(1.76\) & \(4.18\) & \(78.63\) & \(62.69\) \\ AudioLDM-L-Full-FT & AS+AC+2 others & ✗ & \(739\)M & \(\mathbf{23.31}\) & \(1.59\) & \(1.96\) & \(-\) & \(-\) \\ \hline Tango & AC & ✓ & \(866\)M & \(24.52\) & \(1.37\) & \(\mathbf{1.59}\) & \(\mathbf{85.94}\) & \(\mathbf{80.36}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The comparison between Tango and baseline TTA models. _FT_ indicates the model is fine-tuned on the AudioCaps (AC) dataset. AS and AC stand for the AudioSet and AudioCaps datasets, respectively. We borrowed all the results from [18] except for AudioLDM-L-Full, which was evaluated using the model released by the authors on Huggingface.
Despite the LDM being trained on a much smaller dataset, Tango outperforms AudioLDM and other baseline TTA models as per both objective and subjective metrics. \({}^{\dagger}\) indicates the results are obtained using the checkpoints released by Liu et al. [18].

\begin{table} \begin{tabular}{c c c c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Datasets**} & \multirow{2}{*}{**Dataset Size**} & \multirow{2}{*}{**Text**} & \multirow{2}{*}{**\#Params**} & \multicolumn{2}{c}{**Objective Metrics**} \\ & & & & & FD \(\downarrow\) & KL \(\downarrow\) \\ \hline AudioLDM-M-Full-FT\({}^{\ddagger}\) & AS+AC+2 others & 3.3M & ✗ & \(416\)M & \(26.12\) & \(1.26\) \\ AudioLDM-L-Full\({}^{\ddagger}\) & AS+AC+2 others & 3.3M & ✗ & \(739\)M & \(32.46\) & \(1.76\) \\ AudioLDM-L-Full-FT & AS+AC+2 others & 3.3M & ✗ & \(739\)M & \(23.31\) & \(1.59\) \\ \hline Tango-Full-FT & AS+AC+7 others & 1.2M & ✓ & \(866\)M & \(\mathbf{18.93}\) & \(\mathbf{1.12}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The comparison between Tango and baseline TTA models when trained on the corpus of large datasets. Tango-Full-FT was first pre-trained on a corpus comprising samples from the AudioSet, AudioCaps, Freesound, and BBC datasets, followed by fine-tuning on AudioCaps.

The dataset statistics are reported in Table 3. All audio clips longer than ten seconds were segmented into partitions of successive ten seconds or shorter. We also resampled all audio clips to 16 kHz. The WavCaps dataset consists of ChatGPT-generated captions for the FreeSound5, BBC Sound Effects6 (SFX), and AudioSet strongly labeled subsets. The Urban Sound and ESC50 datasets contain various environmental sounds. The Musical Instruments dataset contains sounds of guitar, drum, violin, and piano instruments. The GTZAN dataset contains sounds of different musical genres--classical, jazz, etc. These four datasets--Urban Sound, ESC50, Musical Instruments, and GTZAN--are audio classification datasets. We use the classification label, e.g., _piano_, and a more natural prompt, _sound of piano_, to create two different training instances for each audio sample in these datasets.

Footnote 5: [https://freesound.org/](https://freesound.org/)

Footnote 6: [https://sound-effects.bbcrewind.co.uk](https://sound-effects.bbcrewind.co.uk)

The initial pre-training stage aimed to capture a broad understanding of audio and text interactions. Next, we fine-tuned the pre-trained model specifically on the AudioCaps dataset. The obtained results, as presented in Table 2, demonstrate a remarkable performance improvement achieved by Tango-Full-FT compared to similar models in the AudioLDM family. These comparable models underwent identical pre-training and fine-tuning approaches, highlighting the effectiveness of our methodology in enhancing the model's overall performance. We conducted pre-training on Tango for a duration of \(200,000\) steps using four A6000 GPUs. To optimize the training process, we set the batch size per GPU to \(2\) and employed \(8\) gradient accumulation steps, which effectively increased the batch size to \(64\). We fine-tuned the model on AudioCaps for \(57\)K steps. To help open-source research in TTA, we released this dataset publicly7.

Footnote 7: [https://huggingface.co/datasets/declare-lab/TangoPromptBank](https://huggingface.co/datasets/declare-lab/TangoPromptBank)
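For the four classification datasets, the two training instances per clip mentioned above follow a simple template; a small sketch (the file name is hypothetical):

```python
def prompts_for_label(label: str):
    # Two textual prompts per classification label, e.g. "piano" -> "sound of piano".
    return [label, f"sound of {label}"]

pairs = [(p, "clip_0001.wav") for p in prompts_for_label("piano")]
# -> [("piano", "clip_0001.wav"), ("sound of piano", "clip_0001.wav")]
```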
**Effect of Different Data Augmentation Strategies.** Table 4 presents a comparison between random and relative pressure-based data augmentation strategies. Notably, the relative pressure-based augmentation strategy yields the most promising results. When evaluating Tango against AudioLDM-L, both utilizing random data augmentation strategies, Tango outperforms AudioLDM-L in two out of three objective metrics. This notable improvement can be attributed to the integration of a powerful large language model (Flan-T5) as a textual prompt encoder within Tango.

**Effect of Inference Steps and Classifier-Free Guidance.** The number of inference steps and the classifier-free guidance scale are of crucial importance for sampling from latent diffusion models [38, 7]. We report the effect of a varying number of steps and a varying guidance scale for audio generation in AudioCaps in Table 5. We found that a guidance scale of 3 provides the best results for Tango. In the left part of Table 5, we fix the guidance scale at 3 and vary the number of steps from 10 to 200. The generated audio quality and the resultant objective metrics consistently become better with more steps. Liu et al. [18] reported that the performance for AudioLDM plateaus at around 100 steps, with 200 steps providing only marginally better performance. However, we notice a substantial improvement in performance when going from 100 to 200 inference steps for Tango, suggesting that there could be further gain in performance with more inference steps.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline Model & Augmentation & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) \\ \hline \multirow{2}{*}{Tango} & Random & \(25.84\) & \(1.38\) & \(2.72\) \\ & Relative Pressure & \(\mathbf{24.52}\) & \(\mathbf{1.37}\) & \(\mathbf{1.59}\) \\ \hline AudioLDM-L & Random & \(27.12\) & \(1.86\) & \(2.08\) \\ \hline \hline \end{tabular} \end{table} Table 4: Effect on the objective evaluation metrics with random vs. relative pressure-guided augmentation. Scores were computed for a guidance scale of 3 and 200 inference steps.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Model** & **AudioSet** & **AudioCaps** & **Freesound** & **BBC** & \begin{tabular}{c} **Urban** \\ **Sound** \\ \end{tabular} & \begin{tabular}{c} **Musical** \\ **Instrument** \\ \end{tabular} & **MusicCaps** & **GTZAN** & **ESC50** & **Total** \\ \hline Tango & \(108K\) & \(45K\) & \(680K\) & \(374K\) & \(17K\) & \(12K\) & \(10K\) & \(6K\) & \(4K\) & \(1.2M\) \\ AudioLDM & \(2.1M\) & \(49K\) & \(680K\) & \(374K\) & - & - & - & - & - & \(3.3M\) \\ \hline \hline \end{tabular} \end{table} Table 3: Statistics of the datasets used in training Tango-Full-FT.

We report the effect of a varying guidance scale with a fixed 100 steps in the right half of Table 5. The first row uses a guidance scale of 1, thus effectively not applying classifier-free guidance at all during inference. Not surprisingly, the performance of this configuration is poor, lagging far behind the classifier-free guided models across all the objective measures. We obtain almost similar results with a guidance scale of 2.5 and better FD and KL with a guidance scale of 5. We obtain the best FAD metric at a guidance scale of 3, and the metric becomes poorer with larger guidance.
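At inference, each reverse step blends the guided and unguided noise estimates according to Eq. (11); a minimal sketch, with `unet` and the empty-text encoding `tau_empty` as placeholder assumptions:

```python
import torch

@torch.no_grad()
def guided_noise(unet, z_n, n, tau, tau_empty, w=3.0):
    # Eq. (11): w * eps(z_n, tau) + (1 - w) * eps(z_n), where the unguided
    # term is obtained by conditioning on an encoded empty string.
    eps_text = unet(z_n, n, tau)
    eps_uncond = unet(z_n, n, tau_empty)
    return w * eps_text + (1.0 - w) * eps_uncond  # w = 3 worked best for Tango
```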
**Temporal Sequence Modelling.** We analyze how Tango and the AudioLDM models perform audio generation when the text prompt contains multiple sequential events. Consider the following examples: _A toy train running as a young boy talks followed by plastic clanking then a child laughing_ contains three separate sequential events, whereas _Rolling thunder with lightning strikes_ contains only one. We segregate the AudioCaps test set using the presence of temporal identifiers--_while, before, after, then, followed_--into two subsets, one with multiple events and the other with a single event. We show the objective evaluation results for audio generation on these subsets in Table 6.

Tango achieves the best FD and FAD scores for both multiple-event and single-event instances. The best KL divergence score is achieved by the AudioLDM-M-Full-FT model. We conjecture that the larger corpus from the four training datasets in AudioLDM could be more helpful in improving the reference-based KL metric, unlike the reference-free FD and FAD metrics.

**Performance against Number of Labels.** Recall that the AudioCaps dataset was curated from the annotations of the audio classification task in the AudioSet dataset. The text prompts in AudioCaps can thus be paired with the discrete class labels of AudioSet. The AudioSet dataset contains a total of 632 audio event classes. For instance, _A woman and a baby are having a conversation_ and its corresponding audio clip has the following three labels: _Speech, Child speech kid speaking, Inside small room._ We group instances having one label, two labels, and multiple (two or more) labels in AudioCaps and evaluate the generated audios across the objective metrics. We report the results of this experiment in Table 7. Tango outperforms the AudioLDM models across all the objective metrics for audio generation from texts with one label or two labels. For texts with multiple labels, AudioLDM achieves a better KL divergence score and Tango achieves better FD and FAD scores. Interestingly, all the models achieve consistently better FD and KL scores with progressively more labels, suggesting that such textual prompts are more effectively processed by the diffusion models.

\begin{table} \begin{tabular}{c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Datasets**} & \multicolumn{3}{c|}{**Multiple Events**} & \multicolumn{3}{c}{**Single Event**} \\ & & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) \\ \hline AudioLDM-L-Full & \multirow{2}{*}{AS+AC+2 others} & \(43.65\) & \(1.90\) & \(3.77\) & \(35.39\) & \(1.66\) & \(5.24\) \\ AudioLDM-M-Full-FT & & \(34.57\) & \(\mathbf{1.32}\) & \(2.45\) & \(29.40\) & \(\mathbf{1.21}\) & \(3.27\) \\ \hline Tango & AC & \(\mathbf{33.36}\) & \(1.45\) & \(\mathbf{1.75}\) & \(\mathbf{28.59}\) & \(1.30\) & \(\mathbf{2.04}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Objective evaluation results for audio generation in the presence of multiple events or a single event in the text prompt in the AudioCaps test set. The multiple-event and single-event subsets collectively constitute the entire AudioCaps test set. It should be noted that FD and FAD are corpus-level non-linear metrics, and hence the FD and FAD scores reported in Table 1 are not averages of the subset scores reported in this table.
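The multiple-event/single-event split above amounts to a keyword test on the captions; a sketch, with the identifier list taken verbatim from the text:

```python
TEMPORAL_IDENTIFIERS = ("while", "before", "after", "then", "followed")

def has_multiple_events(caption: str) -> bool:
    words = caption.lower().split()
    return any(t in words for t in TEMPORAL_IDENTIFIERS)

# The paper's two examples land in the expected subsets:
assert has_multiple_events("A toy train running as a young boy talks "
                           "followed by plastic clanking then a child laughing")
assert not has_multiple_events("Rolling thunder with lightning strikes")
```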
\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**Varying Steps** (Guidance \(=3\))} & \multicolumn{4}{c}{**Varying Guidance** (Steps \(=100\))} \\ & Steps & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) & Guidance & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) \\ \hline \multirow{5}{*}{Tango} & \(10\) & \(45.12\) & \(1.66\) & \(11.38\) & \(1\) & \(35.76\) & \(2.02\) & \(6.22\) \\ & \(20\) & \(31.38\) & \(1.39\) & \(4.52\) & \(2.5\) & \(26.32\) & \(1.39\) & \(1.97\) \\ & \(50\) & \(25.33\) & \(\mathbf{1.27}\) & \(2.13\) & \(3\) & \(26.13\) & \(1.37\) & \(\mathbf{1.87}\) \\ & \(100\) & \(26.13\) & \(1.37\) & \(1.87\) & \(5\) & \(\mathbf{24.28}\) & \(\mathbf{1.28}\) & \(2.32\) \\ & \(200\) & \(\mathbf{24.52}\) & \(1.37\) & \(\mathbf{1.59}\) & \(10\) & \(26.10\) & \(1.31\) & \(3.30\) \\ \hline \hline \end{tabular} \end{table} Table 5: Effect on the objective evaluation metrics with a varying number of inference steps (left, guidance scale fixed at 3) and a varying classifier-free guidance scale (right, 100 inference steps).

**Effect of Augmentation and Distribution of the Relative Pressure Level (\(p\)).** We described our augmentation strategy earlier in Section 2.3. The distribution of the relative pressure level \(p\) in Equation (9) across the training samples is shown in Figure 2, which implies that the relative pressure levels are roughly normally distributed and that many samples have low levels of relative pressure, which might be poorly represented in a random mixing. In contrast, our approach allows for a much more equitable mixing.

Figure 2: Distribution of the relative pressure level (see Equation (9)) across the augmented samples.

**Categorical Modelling.** The class labels in AudioSet can be arranged hierarchically to obtain the following top-level categories: i) Human sounds, ii) Animal sounds, iii) Natural sounds, iv) Sounds of Things, v) Channel, environment, background sounds, vi) Source-ambiguous sounds, and vii) Music.

\begin{table} \begin{tabular}{c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Datasets**} & \multicolumn{3}{c|}{**One Label**} & \multicolumn{3}{c|}{**Two Labels**} & \multicolumn{3}{c}{**Multiple Labels**} \\ & & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) & FD \(\downarrow\) & KL \(\downarrow\) & FAD \(\downarrow\) \\ \hline AudioLDM-L-Full & \multirow{2}{*}{AS+AC+2 others} & \(48.11\) & \(2.07\) & \(4.71\) & \(44.93\) & \(1.90\) & \(4.09\) & \(34.94\) & \(1.68\) & \(4.59\) \\ AudioLDM-M-Full-FT & & \(46.44\) & \(1.85\) & \(3.77\) & \(39.01\) & \(1.29\) & \(3.52\) & \(26.74\) & \(\mathbf{1.10}\) & \(2.62\) \\ \hline Tango & AC & \(\mathbf{40.81}\) & \(\mathbf{1.84}\) & \(\mathbf{1.79}\) & \(\mathbf{35.09}\) & \(\mathbf{1.56}\) & \(\mathbf{2.53}\) & \(\mathbf{26.05}\) & \(1.24\) & \(\mathbf{1.96}\) \\ \hline \hline \end{tabular} \end{table} Table 7: Performance of audio generation in AudioCaps for texts containing one, two, or multiple (two or more) labels. Each text in AudioCaps has its corresponding multi-category labels from AudioSet. We use these labels to segregate the AudioCaps dataset into three subsets.
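The three subsets of Table 7 can be formed by counting each caption's paired AudioSet labels ("multiple" meaning two or more, so it subsumes the two-label subset); a sketch assuming a caption-to-labels mapping:

```python
def subsets_by_label_count(labels_per_caption):
    # labels_per_caption: dict mapping each caption to its AudioSet class labels.
    one = [c for c, ls in labels_per_caption.items() if len(ls) == 1]
    two = [c for c, ls in labels_per_caption.items() if len(ls) == 2]
    multiple = [c for c, ls in labels_per_caption.items() if len(ls) >= 2]
    return one, two, multiple

example = {"A woman and a baby are having a conversation":
           ["Speech", "Child speech kid speaking", "Inside small room"]}
# -> this instance falls in the "multiple" subset (three labels)
```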
We map the class labels in AudioCaps to the seven main categories listed above. The Music category is very rare in AudioCaps, and the rest either appear on their own or in various combinations with others. We select the most frequently occurring category combinations and analyze the performance of the various models within the constituting AudioCaps instances in Table 8. The performance of the two models is fairly balanced across the FD and KL metrics, with Tango being better in some and AudioLDM in others. However, Tango achieves better FAD scores in all but one group, with large improvements in the (human, animal), (natural), (things), and (natural, things) categories.

\begin{table} \begin{tabular}{c c c c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Human} & \multirow{2}{*}{Animal} & \multirow{2}{*}{Natural} & \multirow{2}{*}{Things} & \multirow{2}{*}{CEB} & \multicolumn{2}{c|}{FD \(\downarrow\)} & \multicolumn{2}{c|}{KL \(\downarrow\)} & \multicolumn{2}{c}{FAD \(\downarrow\)} \\ & & & & & AudioLDM & Tango & AudioLDM & Tango & AudioLDM & Tango \\ \hline ✓ & ✗ & ✗ & ✗ & ✗ & \(38.15\) & \(\mathbf{34.06}\) & \(1.01\) & \(\mathbf{0.99}\) & \(2.81\) & \(\mathbf{2.13}\) \\ ✗ & ✓ & ✗ & ✗ & ✗ & \(78.62\) & \(\mathbf{77.78}\) & \(\mathbf{1.82}\) & \(1.92\) & \(\mathbf{4.28}\) & \(4.62\) \\ ✓ & ✓ & ✗ & ✗ & ✗ & \(\mathbf{61.91}\) & \(70.32\) & \(\mathbf{0.89}\) & \(1.29\) & \(6.32\) & \(\mathbf{5.19}\) \\ ✗ & ✗ & ✓ & ✗ & ✗ & \(\mathbf{51.61}\) & \(57.75\) & \(\mathbf{1.89}\) & \(1.96\) & \(6.75\) & \(\mathbf{5.15}\) \\ ✗ & ✗ & ✗ & ✓ & ✗ & \(35.60\) & \(\mathbf{33.13}\) & \(\mathbf{1.35}\) & \(1.43\) & \(5.42\) & \(\mathbf{3.40}\) \\ ✗ & ✗ & ✓ & ✓ & ✗ & \(55.06\) & \(\mathbf{42.00}\) & \(1.46\) & \(\mathbf{1.12}\) & \(6.57\) & \(\mathbf{3.89}\) \\ ✓ & ✗ & ✗ & ✓ & ✗ & \(\mathbf{37.57}\) & \(39.22\) & \(\mathbf{1.11}\) & \(1.34\) & \(3.26\) & \(\mathbf{3.18}\) \\ ✗ & ✗ & ✗ & ✓ & ✓ & \(54.25\) & \(\mathbf{52.77}\) & \(1.43\) & \(\mathbf{1.33}\) & \(11.49\) & \(\mathbf{9.26}\) \\ \hline \hline \end{tabular} \end{table} Table 8: Performance of AudioLDM-M-Full-FT and Tango for the most frequently occurring categories in the AudioCaps dataset. CEB indicates the Channel, environment, and background sounds category.

## 4 Related Works

**Diffusion Models.** Recent years have seen a surge in diffusion models as a leading approach for generating high-quality speech [2; 16; 27; 28; 10; 8]. These models utilize a fixed number of Markov chain steps to transform white noise signals into structured waveforms. Among them, FastDiff has achieved remarkable results in high-quality speech synthesis [8]. By leveraging a stack of time-aware diffusion processes, FastDiff can generate speech samples of exceptional quality at an impressive speed, 58 times faster than real-time on a V100 GPU, making it practical for speech synthesis deployment. It surpasses other existing methods in end-to-end text-to-speech synthesis. Another noteworthy probabilistic model for audio synthesis is DiffWave [16], which is non-autoregressive and generates high-fidelity audio for various waveform generation tasks, including neural vocoding conditioned on mel spectrograms, class-conditional generation, and unconditional generation. DiffWave delivers speech quality that is on par with the powerful WaveNet vocoder [25] while synthesizing audio much faster.
Diffusion models have emerged as a promising approach for speech processing, particularly in speech enhancement [21; 37; 29; 22]. Recent advancements in diffusion probabilistic models have led to the development of a new speech enhancement algorithm that incorporates the characteristics of noisy speech signals into the forward and reverse diffusion processes [23]. This new algorithm is a generalized form of the probabilistic diffusion model, known as the conditional diffusion probabilistic model. During its reverse process, it can adapt to non-Gaussian real noises in the estimated speech signal, making it highly effective in improving speech quality. In addition, Qiu et al. [29] propose SRTNet, a novel method for speech enhancement that incorporates the diffusion model as a module for stochastic refinement. The proposed method comprises a joint network of deterministic and stochastic modules, forming an "enhance-and-refine" paradigm. The paper also includes a theoretical demonstration of the proposed method's feasibility and presents experimental results to support its effectiveness, highlighting its potential in improving speech quality.

**Text-to-Audio Generation.** The field of text-to-audio generation has received limited attention until recently [17; 43]. In Yang et al. [43], a text encoder is used to obtain text features, which are then processed by a non-autoregressive decoder to generate spectrogram tokens. These tokens are fed to a vector-quantized VAE (VQ-VAE) to generate mel spectrograms that are used by a vocoder to generate audio. The non-autoregressive decoder is a probabilistic diffusion model. In addition, Yang et al. [43] introduced a novel data augmentation technique called the mask-based text generation strategy (MBTG), which masks out portions of input text that do not represent any event, such as those indicating temporality. The aim of MBTG is to learn augmented text descriptions from audio during training. Although this approach seems promising, its fundamental limitation is the lack of diversity in the generated data, as it fails to mix different audio samples. Later, Kreuk et al. [17] proposed a correction to this method, mixing audio signals according to random signal-to-noise ratios and concatenating the corresponding textual descriptions. This approach allows for the generation of new (text, audio) pairs and mitigates the limitations of Yang et al. [43]. Unlike Yang et al. [43], the architecture proposed in Kreuk et al. [17] uses a transformer encoder and decoder network to autoregressively generate audio tokens from text input.

Recently, Liu et al. [18] proposed AudioLDM, which adapts the latent diffusion model from text-to-visual to text-to-audio generation. They pre-trained VAE-based encoder-decoder networks to learn a compressed latent representation of audio, which was then used to guide a diffusion model to generate audio tokens from text input. They found that using audio embeddings instead of text embeddings during the backward diffusion process improved conditional audio generation. During inference time, they used text embeddings for text-to-audio generation. Audio and text embeddings were obtained using pre-trained CLAP, which is the audio counterpart of the CLIP embeddings used in the original LDM model.

## 5 Limitations

Tango is not always able to finely control its generations over textual control prompts, as it is trained only on the small AudioCaps dataset.
For example, the generations from Tango for the prompts _Chopping tomatoes on a wooden table_ and _Chopping potatoes on a metal table_ are very similar. _Chopping vegetables on a table_ also produces similar audio samples. Training text-to-audio generation models on larger datasets is thus required for the model to learn the composition of textual concepts and varied text-audio mappings. In the future, we plan to improve Tango by training it on larger datasets and enhancing its compositional and controllable generation ability.

## 6 Conclusion

In this work, we investigate the effectiveness of the instruction-tuned model, Flan-T5, for text-to-audio generation. Specifically, we use the textual embeddings produced by Flan-T5 in the latent diffusion model to generate mel-spectrogram tokens. These tokens are then fed to a pre-trained variational auto-encoder (VAE) to generate mel-spectrograms, which are later used by a pre-trained vocoder to generate audio. Our model achieved superior performance under both objective and subjective evaluations compared to the state-of-the-art text-to-audio model, AudioLDM, despite using 63 times less training data. We primarily attribute this performance improvement to the representational power of Flan-T5, which is due to its instruction-based tuning in the pre-training stage. In the future, we plan to investigate the effectiveness of Flan-T5 in other audio tasks, such as audio super-resolution and inpainting.

## Acknowledgements

We are grateful to Oracle for Research and Huggingface for their generous support of the project Tango.
2306.12256
Stability Analysis of Trajectories on Manifolds with Applications to Observer and Controller Design
This paper examines the local exponential stability (LES) of trajectories for nonlinear systems on Riemannian manifolds. We present necessary and sufficient conditions for LES of a trajectory on a Riemannian manifold by analyzing the complete lift of the system along the given trajectory. These conditions are coordinate-free and reveal fundamental relationships between exponential stability and incremental stability in a local sense. We then apply these results to design tracking controllers and observers for Euler-Lagrangian systems on manifolds; a notable advantage of our design is that it visibly reveals the effect of curvature on system dynamics and hence suggests compensation terms in the controller and observer. Additionally, we revisit some well-known intrinsic observer problems using our proposed method, which largely simplifies the analysis compared to existing results.
Dongjun Wu, Bowen Yi, Anders Rantzer
2023-06-21T13:23:55Z
http://arxiv.org/abs/2306.12256v1
# Stability Analysis of Trajectories on Manifolds with Applications to Observer and Controller Design

###### Abstract

This paper examines the local exponential stability (LES) of trajectories for nonlinear systems on Riemannian manifolds. We present necessary and sufficient conditions for LES of a trajectory on a Riemannian manifold by analyzing the complete lift of the system along the given trajectory. These conditions are coordinate-free and reveal fundamental relationships between exponential stability and incremental stability in a local sense. We then apply these results to design tracking controllers and observers for Euler-Lagrangian systems on manifolds; a notable advantage of our design is that it visibly reveals the effect of curvature on system dynamics and hence suggests compensation terms in the controller and observer. Additionally, we revisit some well-known intrinsic observer problems using our proposed method, which largely simplifies the analysis compared to existing results.

## I Introduction

Many physical systems are naturally modelled on Riemannian manifolds. The most important example is perhaps the class of mechanical systems whose configuration spaces are Riemannian manifolds, rather than Euclidean spaces [1]. Another well-known example appears in quantum systems [2], in which the system state lives on a Lie group [3]. It is well known that local stability of equilibria for systems whose state space is a Riemannian manifold can be analyzed via linearization in local coordinates--similar to the case in Euclidean space--known as the Lyapunov indirect method.

In many practically important control tasks, we are interested in the stability of _a particular solution_ \(X(\cdot)\), a problem which arises widely in observer design, trajectory tracking [4], orbital stabilization [5], and synchronization [6]. In Euclidean space, these tasks, or equivalently the stability of a solution \(X(\cdot)\), may be solved by introducing an error variable and then studying the error dynamics, which is usually a nonlinear time-varying system. In particular, the local exponential stability (LES) of \(X(\cdot)\) for a given nonlinear system can be characterized by the linearization of its error dynamics near the trajectory.

A similar problem arises in contraction and incremental stability analysis [7, 8], in which we are interested in the attraction of any two trajectories to each other, rather than a particular one \(X(\cdot)\). The basic idea is to explore the stability of a linearized dynamics, regarded as a first-order approximation, to obtain the incremental stability of the given system. Indeed, studying the stability of a _particular_ solution via first-order approximation has already been used, which, from the perspective of incremental stability, is known as partial (or virtual) contraction [8, 9]. As discussed above, some excitation conditions of the given trajectory may be needed to continue stability analysis. A successful application may be found in [10] for the stability of the extended Kalman filter (EKF).

For systems evolving on Riemannian manifolds, however, the stability analysis of a solution \(X(\cdot)\) is much more challenging. The difficulty arises from two aspects. On one hand, the "error dynamics" for such a case is more involved--there is, indeed, no generally preferred definition of tracking (or observation, synchronization) errors--and the induced Riemannian distance on manifolds can hardly be used to derive error dynamics directly.
In practice, one has to choose an error vector according to the structure of the manifold; see [4, 11, 12] for examples. On the other hand, the alternative method, via first-order approximation (or partial contraction), is nontrivial to apply to Riemannian manifolds, since it is usually a daunting task to calculate the differential dynamics on Riemannian manifolds, and some complicated calculations of parallel transport are also involved. Overcoming these two major challenges is the main motivation of the paper.

To address this, we provide in this paper an alternative way to study LES of trajectories on Riemannian manifolds: namely, LES will be characterized by the stability of the _complete lift_ of the system along the trajectory, in this way removing the need to obtain error dynamics. The complete lift, or tangent lift, has been used to study various control problems; see for example [13, 14, 15, 16, 11]. Among the listed references, the most relevant works to ours are [15, 16]. In [16] the authors have remarked that the complete lift can be seen as a linearization procedure. However, verifying the stability of the complete lift system is challenging since it is a system living in the tangent bundle, and thus how to effectively use the aforementioned characterization to guide controller and observer design is an open question. We address this question in this paper.

The main contributions of the paper are threefold.

* Establish the relationship between LES of a solution and the stability of the complete lift along this solution on a Riemannian manifold, which can be seen as the Lyapunov indirect method on manifolds. Then show that LES of a solution is equivalent to local contraction near the solution \(X(\cdot)\).
* Propose an alternative approach for the analysis of LES based on the characterization of the complete lift system. This novel approach obviates the explicit calculation of the complete lift and hence facilitates the analysis of local exponential stability and contraction. We demonstrate the efficiency of the proposed methods by revisiting some well-known research problems.
* Two main types of application problems are studied, namely, controller and observer design, especially for mechanical systems on manifolds. These results largely simplify the analysis in some existing works. In particular, the proposed method is quite efficient for analyzing a class of systems called Killing systems.

_Notation._ Throughout this paper we use rather standard notations from Riemannian geometry [17, 18]. Denote by \(M\) the Riemannian manifold of dimension \(n\), \(\langle\cdot,\cdot\rangle\) the metric, \(\nabla\) the Levi-Civita connection, \(R(X,Y)Z\) the Riemannian curvature, and \(\pi:TM\to M\) the natural projection of the tangent bundle. We use \(\nabla\) and \(\operatorname{grad}(\cdot)\) interchangeably to represent the gradient operator. Let \(\operatorname{Hess}(\cdot)\) be the Hessian, \(\exp(\cdot)\) the exponential map, \(P_{x}^{y}:T_{x}M\to T_{y}M\) the parallel transport from \(T_{x}M\) to \(T_{y}M\), \(d(x,y)\) the Riemannian distance between \(x\) and \(y\), and \(B_{c}(x)=\{\xi\in M|d(\xi,x)\leq c\}\) the Riemannian ball. Let \(\phi^{f}(t;t_{0},x_{0})\) be the flow of the equation \(\dot{x}=f(x,t)\); we sometimes write \(\phi(\cdot)\) when clear from the context. The notation \(L_{f}Y\) stands for the Lie derivative of \(Y\) along \(f\).
## II Local Exponential Stability on Riemannian Manifolds

### Theory: LES and Complete Lift

Consider a system

\[\dot{x}=f(x,t) \tag{1}\]

with the system state \(x\) on the Riemannian manifold \(M\), and \(X(\cdot)\) a particular solution, _i.e._, \(\dot{X}(t)=f(X(t),t)\) from the initial condition \(X(t_{0})=X_{0}\in M\). We study the local exponential stability of the solution \(X(t)\). Some definitions are recalled below.

**Definition 1**: _The solution \(X(\cdot)\) of the system (1) is locally exponentially stable (LES) if there exist positive constants \(c,K\) and \(\lambda\), all independent of \(t_{0}\), such that_

\[d(x(t),X(t))\leq Kd(x(t_{0}),X(t_{0}))e^{-\lambda(t-t_{0})},\]

_for all \(t\geq t_{0}\geq 0\) and \(x(t_{0})\) satisfying \(d(x(t_{0}),X(t_{0}))<c\)._

**Remark 1**: _For the case that \(X(t)\) is a trivial solution at an equilibrium, _i.e._, \(X(t)\equiv X_{0},\ \forall t\geq t_{0}\), Definition 1 coincides with the standard definition of LES of an equilibrium. We should also notice the peculiarity of this definition--it may happen that the union of LES solutions forms a dense set. For example, every solution of \(\dot{x}=Ax\) is LES when \(A\) is Hurwitz._

We recall the definition of the complete lift of a vector field; see [19, 20] for more detailed discussions.

**Definition 2** (Complete Lift): _Consider the time-varying vector field \(f(x,t)\). Given a point \(v\in TM\), let \(\sigma(t,s)\) be the integral curve of \(f\) with \(\sigma(s,s)=\pi(v)\). Let \(V(t)\) be the vector field along \(\sigma\) obtained by Lie transport of \(v\) by \(f\). Then \((\sigma,V)\) defines a curve in \(TM\) through \(v\). For every \(t\geq s\), the complete lift of \(f\) into \(TTM\) is defined at \(v\) as the tangent vector to the curve \((\sigma,V)\) at \(t=s\). We denote this vector field by \(\tilde{f}(v,t)\), for \(v\in TM\)._

**Definition 3**: _Given the system (1) and a solution \(X(t)\), define the complete lift of the system (1) along \(X(t)\) as_

\[\dot{v}=\tilde{f}(v,t),\ v(t)\in T_{X(t)}M \tag{2}\]

_where \(\tilde{f}\) is the complete lift of \(f\) as in Definition 2._

The most important property of the complete lift system is _linearity_ at a fixed fibre. We refer the reader to [15] for the coordinate expression of (2). From this definition, one can easily verify that the solution to (2), _i.e._, \(v(t)\), has the property that \(\pi v(t)=X(t)\). Hence we say that (2) defines a dynamical system along the particular solution \(X(t)\).

The following simple characterization is the theoretical basis of this paper. It can be viewed as an analogue of the Lyapunov indirect method on Riemannian manifolds.

**Theorem 1**: _Assume the system (1) is forward complete for \(t\geq 0\). If the solution \(X(t)\) is LES, then the complete lift of the system (1) along \(X(\cdot)\) is exponentially stable. If the solution \(X(\cdot)\) is bounded, the converse is also true._

\((\Longrightarrow)\) Assume that the solution \(X(t)\) is LES. Denote the minimizing normalized (_i.e._ with unit speed) geodesic joining \(X(t_{0})\) to \(x(t_{0})\) as \(\gamma:[0,\dot{s}]\to M\), with \(\gamma(0)=X(t_{0}),\ \gamma(\dot{s})=x(t_{0})\) and \(0\leq\dot{s}=d(X(t_{0}),x(t_{0}))\). Let \(v_{0}\in TM\) with \(\pi(v_{0})=X(t_{0})\) and \(v_{0}=\gamma^{\prime}(0)\), and let \(v(t)\) be the solution to the complete lift system (2).
Then

\[\dot{s}\left|v(t)\right|=d\left(\exp_{X(t)}\left(\dot{s}v(t)\right),X(t)\right), \tag{3}\]

where \(\exp_{x}:TM\to M\) is the exponential map, by choosing \(\dot{s}\) sufficiently small such that \(\exp\) is defined. Using the metric property of \(d\), we have

\[d\left(\exp_{X(t)}\left(\dot{s}v(t)\right),X(t)\right) \leq d\left(\exp_{X(t)}\left(\dot{s}v(t)\right),x(t)\right)+d(x(t),X(t)) \tag{4}\]
\[\leq d\left(\exp_{X(t)}\left(\dot{s}v(t)\right),x(t)\right)+K\dot{s}e^{-\lambda(t-t_{0})}, \tag{5}\]

where the second inequality follows from Definition 1. Fixing \(t\) for the moment and invoking (3) and (5), we get

\[\begin{split}\left|v(t)\right|&\leq\kappa(\dot{s})+Ke^{-\lambda(t-t_{0})}\\ &\kappa(\dot{s}):=\frac{d\left(\exp_{X(t)}\left(\dot{s}v(t)\right),x(t)\right)}{\dot{s}}.\end{split} \tag{6}\]

Note that, more precisely, \(\kappa\) is a function of both \(t\) and \(\dot{s}\), but omitting the \(t\) argument does not affect the following analysis. Now we need to show that the term \(\kappa(\dot{s})\) is of order \(O(\dot{s})\). Since \(x(t_{0})=\gamma(\dot{s})\), this term can be rewritten as

\[\kappa(s)=\frac{d\left(\exp_{X(t)}\left(sv(t)\right),\phi(t;t_{0},\gamma(s))\right)}{s}\]

where we have replaced \(\dot{s}\) by \(s\). To this end, we consider two functions \(\alpha_{1}(s)=\exp_{X(t)}\left(sv(t)\right)\), \(\alpha_{2}(s)=\phi(t;t_{0},\gamma(s))\). Similarly, we have omitted the \(t\) argument, which does not affect the proof. We have \(\alpha_{1}(0)=\alpha_{2}(0)=X(t)\) and \(\alpha_{1}^{\prime}(0)=\alpha_{2}^{\prime}(0)=v(t)\). Thus

\[\kappa(s)=\frac{1}{s}d(\alpha_{1}(s),\alpha_{2}(s))=O(s),\]

where we have used Lemma 3 given in the Appendix. Now letting \(\dot{s}\to 0\) in (6) and noticing that the geodesic has unit speed, we have

\[\left|v(t)\right|\leq K|v(t_{0})|e^{-\lambda(t-t_{0})},\]

for any \(v(t_{0})\in T_{X(t_{0})}M\).

\((\Longleftarrow)\) This is a consequence of Proposition 1 (see Section II-B): if the complete lift along \(X(\cdot)\) is ES, then the proof of Proposition 1 shows that the system is contractive on a bounded set \(B_{c}\), which implies the LES of \(X(\cdot)\).

**Remark 2**: _Theorem 1 provides a characterization for LES of trajectories on manifolds via the complete lift. Unfortunately, the original form of this theoretical result lacks practical utility for applications. The main reason is that the complete lift on manifolds is difficult to obtain, and quite often, its calculation relies on local coordinates, which is in conflict with the purpose (coordinate-free design) of this paper. To circumvent this issue, we propose an alternative approach in Section II-C based on Theorem 1, which will be much more efficient to use. But we must emphasize that Theorem 1 plays the fundamental role for the rest of the paper._

From Theorem 1, we can derive the following interesting corollary, which says that no unbounded LES solution exists for _autonomous_ systems.

**Corollary 1**: _For a time-invariant system \(\dot{x}=f(x)\), an LES solution \(X(t)\) should always be bounded and non-periodic._

The complete lift of \(\dot{x}=f(x)\) is \(\dot{v}=\frac{\partial f}{\partial x}v,\ v\in T_{x}\mathbb{R}^{n}\). Clearly, \(v=\dot{x}\) is a solution to the complete lift system. Then by Theorem 1, \(|\dot{X}(t)|\leq k|\dot{X}(0)|e^{-\lambda t}\), hence \(X(t)\) cannot be periodic. Furthermore, \(|X(t)|\leq|X(0)|+\int_{0}^{t}k|\dot{X}(0)|e^{-\lambda s}ds=|X(0)|+\frac{k|\dot{X}(0)|}{\lambda}(1-e^{-\lambda t})<|X(0)|+\frac{k|\dot{X}(0)|}{\lambda}\).

In [21, Lemma 1], the authors obtain a similar result for autonomous systems, _i.e._, there is a unique attractive equilibrium in an invariant set, in which the system is incrementally exponentially stable.
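In the Euclidean special case the complete lift reduces to the classical variational equation \(\dot{v}=\frac{\partial f}{\partial x}v\) along \(X(t)\), so the criterion of Theorem 1 can be checked numerically. A purely illustrative sketch for the Hurwitz example of Remark 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 2.0], [0.0, -1.0]])      # Hurwitz, so every solution is LES

def lifted(t, xv):
    x, v = xv[:2], xv[2:]
    # Complete lift in R^n: trajectory dynamics together with v' = (df/dx) v = A v.
    return np.concatenate([A @ x, A @ v])

sol = solve_ivp(lifted, (0.0, 10.0), [1.0, -1.0, 0.3, 0.7])
v_norm = np.linalg.norm(sol.y[2:], axis=0)
print(v_norm[0], v_norm[-1])   # |v(t)| decays exponentially, as Theorem 1 predicts
```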
In [21, Lemma 1], the authors obtain a similar result for autonomous systems, _i.e._, there is a unique attractive equilibrium in an invariant set in which the system is incrementally exponentially stable.

### Contraction and LES

Contraction theory has become a powerful tool for the analysis and design of control systems; see [22, 23, 24, 6, 7, 8] and the references therein. In Section II-A, we studied LES of solutions to the system (1). In this subsection, we show the close connection between the proposed result and contraction analysis on manifolds [25, 15]. The reader may refer to [26, 22] for the case of Euclidean space.

We say that the system (1) is contractive on a set \(C\) if there exist positive constants \(K,\lambda\), independent of \(t_{0}\), such that \[d(\phi(t;t_{0},x_{1}),\phi(t;t_{0},x_{2}))\leq Kd(x_{1},x_{2})e^{-\lambda(t-t_{0})}, \tag{7}\] for all \(x_{1},x_{2}\in C,\ t\geq t_{0}\geq 0\). For technical ease, we have slightly modified the definition of contraction by allowing the set \(C\) to be not forward invariant. Based on Theorem 1, we have the following proposition, which can be viewed as a bridge from LES to local contraction.

**Proposition 1**: _A bounded solution \(X(t)\) to the system (1) is LES if and only if there exists a constant \(c\) such that the system (1) is contractive on a bounded set \(B_{c}\) whose interior contains \(X(\cdot)\)._

Assume that \(X(t)\) is LES. Then the complete lift system along \(X(t)\) is exponentially stable (ES) by Theorem 1. By the converse Lyapunov theorem, there exists a \(\mathcal{C}^{1}\) function \(V(t,v)\), quadratic in \(v\), satisfying \[c_{1}|v|^{2}\leq V(t,v)\leq c_{2}|v|^{2},\ \forall v\in T_{X(t)}M \tag{8}\] and \[\dot{V}(t,v)=\frac{\partial V}{\partial t}(t,v)+L_{\tilde{f}}V(t,v)\leq-c_{3}|v|^{2},\ \forall v\in T_{X(t)}M, \tag{9}\] for all \(t\geq t_{0}\geq 0\) and three positive constants \(c_{1},c_{2},c_{3}\). Due to the smoothness of \(V\), we have \[|\dot{V}(t,P_{X(t)}^{x(t)}v)-\dot{V}(t,v)|\leq c_{4}d_{TM}(P_{X(t)}^{x(t)}v,v)=c_{4}d_{M}(x(t),X(t)).\] Thus \[\begin{split}\sup_{\begin{subarray}{c}|w|=1,\\ w\in T_{x(t)}M\end{subarray}}\dot{V}(t,w)&=\sup_{\begin{subarray}{c}|v|=1,\\ v\in T_{X(t)}M\end{subarray}}\dot{V}(t,P_{X(t)}^{x(t)}v)\\ &=\sup_{\begin{subarray}{c}|v|=1,\\ v\in T_{X(t)}M\end{subarray}}\left(\dot{V}(t,v)+\dot{V}(t,P_{X(t)}^{x(t)}v)-\dot{V}(t,v)\right)\\ &\leq-c_{3}+c_{4}d(x(t),X(t))<-c_{5}<0,\end{split}\] for \(c\) small enough such that \(d(x(t),X(t))\) is small enough for all \(t\geq t_{0}\) when \(x(t_{0})\in B_{c}(X(t_{0}))\). Since \(\dot{V}\) is quadratic in \(v\) (due to the linearity of the complete lift system and the fact that \(V(t,v)\) is quadratic in \(v\)), this implies \[\dot{V}(t,v)\leq-c_{5}|v|^{2},\ \forall v\in T_{x(t)}M,\ t\geq t_{0}\] for all \(x(t_{0})\in B_{c}(X(t_{0}))\). Then the system (1) is contractive on \(B_{c}:=\bigcup_{t_{0}\geq 0}B_{c}(X(t_{0}))\), which is bounded since \(X(\cdot)\) is (use Theorem 2 of [15]). The converse is obvious, and hence the proof is completed.

The following corollary is a straightforward consequence.

**Corollary 2**: _Assume that the system (1) has an equilibrium point \(x_{\star}\in M\).
Then \(x_{\star}\) is LES if and only if there exists an open neighborhood of \(x_{\star}\) on which the system is contractive._

In [27], the authors proved a similar result to this corollary for _autonomous_ systems in Euclidean space. The paper [22] focuses on asymptotic stability and asymptotic contraction, also in Euclidean space.

### A More Usable Form

As remarked earlier, Theorem 1 is not suitable for practical applications due to the difficulty of calculating the complete lift system. In this subsection, we propose a more usable version of Theorem 1 (still intrinsic) which makes the analysis of LES a routine task. For reasons that will become clear later, we rename the state \(x\) in the system (1) as \(q\). Fig. 1 illustrates our idea.

In Fig. 1, the solid curve represents a trajectory of the system (1), say \(q:\mathbb{R}_{\geq 0}\to M\), whose velocity vectors are drawn as black arrows, denoted \(\dot{q}\). The dashed curves are flows of the initial curve \(\gamma:s\mapsto\gamma(s)\in M\). The blue arrows emanating from the curve \(q\) are the (transversal) velocities of the dashed curves, denoted \(q^{\prime}\), or in precise language, \(q^{\prime}=\frac{\partial q(s,t)}{\partial s}\) for the parameterized curve \((s,t)\mapsto q(s,t)\). We call \(q^{\prime}\) a variation along \(q(\cdot)\). Two important observations can be made from the figure:

* By construction, \(q^{\prime}\) is the solution to the complete lift of the system (1) along the trajectory \(q(\cdot)\). Thanks to this, the Lie bracket \([\dot{q},q^{\prime}]\) vanishes for all \(t\geq t_{0}\) along \(q(\cdot)\).1

Footnote 1: Recall that \([X,Y]=\frac{d}{dt}\big{|}_{t_{0}}(\phi_{t}^{X})^{\star}Y\); since \(q^{\prime}\) is Lie transported by the flow of \(f\), we have \((\phi_{t}^{f})^{\star}q^{\prime}(t)=q^{\prime}(t_{0})\), thus \([\dot{q},q^{\prime}]=\frac{d}{dt}\big{|}_{t_{0}}q^{\prime}(t_{0})=0\).

* The map \((s,t)\mapsto q(s,t)\) forms a parameterized surface in \(M\). Then, due to the torsion-free property of the Levi-Civita connection, there holds \(\frac{D}{dt}\frac{\partial q}{\partial s}=\frac{D}{ds}\frac{\partial q}{\partial t}\) (see (35) and [17, Lemma 3.4]), which implies that \(\nabla_{\dot{q}}q^{\prime}=\nabla_{q^{\prime}}\dot{q}=\nabla_{q^{\prime}}f\).

Now that \(q^{\prime}\) is the solution to the complete lift system, it is sufficient to analyze the dynamics of \(q^{\prime}\). This may seem naive at first thought, as if the novelty were only notational. The fact is, however, that due to the above two observations we now have access to rich results in Riemannian geometry. In particular, we will see how LES on a Riemannian manifold is affected by curvature, the most important ingredient of a Riemannian manifold.

### Revisiting Some Existing Results

#### Contraction on Riemannian Manifolds [25]

The following result is obtained in [25] (the contraction version):

**Theorem 2** ([25]): _Let \(q(\cdot)\) be a solution to the system (1). If_ \[\langle\nabla_{v}f,v\rangle\leq-k\langle v,v\rangle,\ \forall v\in T_{q(t)}M,\ t\geq 0,\] _for some positive constant \(k\), then the solution \(q(t)\) is LES._

The proof of this theorem now simplifies to a few lines: it suffices to examine \(\langle q^{\prime},q^{\prime}\rangle\).
Indeed, \[\frac{1}{2}\frac{d}{dt}\big{\langle}q^{\prime},q^{\prime}\big{\rangle}=\left\langle\nabla_{\dot{q}}q^{\prime},q^{\prime}\right\rangle=\left\langle\nabla_{q^{\prime}}f,q^{\prime}\right\rangle\leq-k\left\langle q^{\prime},q^{\prime}\right\rangle.\] Thus \(\left\langle q^{\prime},q^{\prime}\right\rangle\) converges exponentially. Notice that we have used the fact that \([q^{\prime},\dot{q}]=0\).

#### Intrinsic Reduced Observer [28]

The following lemma was among the key results in [28]:

**Lemma 1** ([28]): _Let \(M\) be a smooth Riemannian manifold. Let \(P\in M\) be fixed. On the subspace of \(M\) defined by the injectivity radius at \(P\), we consider_ \[\dot{q}=-\frac{1}{2\lambda}\operatorname{grad}d(q,P)^{2},\quad\lambda>0. \tag{10}\] _If the sectional curvature is non-positive, the dynamics is a contraction in the sense of [7], i.e., if \(\delta q\) is a virtual displacement at fixed \(t\), we have_ \[\frac{d}{dt}\left\langle\delta q,\delta q\right\rangle\leq-\frac{2}{\lambda}\left\langle\delta q,\delta q\right\rangle. \tag{11}\] _If the sectional curvature is upper bounded by \(A>0\), then (11) holds for \(d(q,P)<\pi/(4\sqrt{A})\)._

The proof provided in [28] is a bit technical. We now give a simplified proof using the methods developed in this paper and provide a new estimate of the convergence rate.

**Lemma 2**: _Let \(M\) be a smooth Riemannian manifold whose curvature is upper bounded by \(A\geq 0\). Let \(P\in M\) be fixed. Then the dynamics (10) is globally contractive if \(A=0\), and locally contractive otherwise, with contraction rate \(\gamma(q)=\frac{2\sqrt{A}d(q,P)}{\lambda\tan(\sqrt{A}d(q,P))}\),2 i.e.,_

Footnote 2: \(\gamma(P)\) is understood as \(\lim_{d(q,P)\to 0}\gamma(q)=\frac{2}{\lambda}\). Notice that \(\gamma\) is monotone decreasing and strictly positive for \(\sqrt{A}d(q,P)\) in the interval \([0,\frac{\pi}{2})\).

\[\frac{d}{dt}\left\langle\delta q,\delta q\right\rangle\leq-\gamma\left\langle\delta q,\delta q\right\rangle. \tag{12}\]

Let \(F(q)=\frac{1}{2}d(q,P)^{2}\) and estimate \[\frac{d}{dt}\left\langle q^{\prime},q^{\prime}\right\rangle=2\left\langle-\frac{1}{\lambda}\nabla_{q^{\prime}}\nabla F,q^{\prime}\right\rangle=-\frac{2}{\lambda}\operatorname{Hess}F(q^{\prime},q^{\prime})\] where the last equality follows from the definition of the Hessian operator; see (36). The conclusion follows by invoking the comparison theorem for the Hessian of the squared distance (e.g., [29, Theorem 6.6.1]): \[\operatorname{Hess}F\geq\sqrt{A}d(q,P)\cot(\sqrt{A}d(q,P))\,\mathrm{Id}\] for all \(q\in\operatorname{inj}(P)\) if \(A>0\), and for all \(q\in M\) if \(A=0\).

**Remark 3**: _The second part of Lemma 1 [28] seems incorrect: by Rauch comparison (see [18, Theorem 6.4.3]), for a manifold with sectional curvature lower bounded by \(k>0\), there holds \(\operatorname{Hess}F\leq(1-kF)g\), where \(g\) is the Riemannian metric. Therefore, the contraction rate is strictly less than \(\frac{2}{\lambda}\) in any neighborhood of \(P\)._

**Remark 4**: _Since \(\operatorname{Hess}F|_{P}=g\), if the Hessian is continuous at \(P\), then the dynamics (10) is always locally contractive without assumptions on curvature, which also implies that \(P\) is an LES equilibrium._

The above method is not limited to studying contraction of distance; in fact, it can easily be adapted to study \(k\)-contraction [30] (Hausdorff measures such as area and volume) on Riemannian manifolds.
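Before turning to volume contraction, Lemma 2 can be checked numerically on the unit sphere (constant curvature \(A=1\)). The following minimal sketch (our own toy code, with arbitrarily chosen points and step size) integrates the flow (10) by projected Euler steps and compares the observed contraction of two nearby trajectories with the rate \(\gamma\):

```python
# Numerical check of Lemma 2 on the unit sphere S^2 (curvature A = 1):
# the flow q' = -(1/lam) grad F, F = d(q,P)^2 / 2, contracts nearby
# trajectories at rate gamma(q) = 2 sqrt(A) d / (lam tan(sqrt(A) d)).
import numpy as np

def dist(p, q):                          # geodesic distance on S^2
    return np.arccos(np.clip(p @ q, -1.0, 1.0))

def log_map(q, p):                       # Log_q(p); grad F(q) = -Log_q(P)
    th = dist(q, p)
    if th < 1e-12:
        return np.zeros(3)
    return th / np.sin(th) * (p - np.cos(th) * q)

def step(q, P, lam, dt):
    q = q + dt * log_map(q, P) / lam     # qdot = -(1/lam) grad F
    return q / np.linalg.norm(q)         # project back onto the sphere

P = np.array([0.0, 0.0, 1.0])
q1 = np.array([np.sin(0.8), 0.0, np.cos(0.8)])   # d(q1,P) = 0.8 < pi/2
q2 = np.array([np.sin(0.8) * np.cos(0.1), np.sin(0.8) * np.sin(0.1), np.cos(0.8)])
lam, dt = 1.0, 1e-3
d0 = dist(q1, q2)
for _ in range(3000):                    # integrate up to t = 3
    q1, q2 = step(q1, P, lam, dt), step(q2, P, lam, dt)

gamma = 2 * 0.8 / (lam * np.tan(0.8))    # conservative rate at t = 0
print(dist(q1, q2), d0 * np.exp(-gamma / 2 * 3))  # observed <= bound
```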
As an example, let us consider the contraction of volume. Suppose that \(\{q_{1}^{\prime},\cdots,q_{n}^{\prime}\}\) forms a frame at \(q\) and denote by \(\operatorname{vol}(q_{1}^{\prime},\cdots,q_{n}^{\prime})\) the signed volume of the parallelepiped spanned by this frame. We study the change of the volume under the dynamics (10): \[\frac{d}{dt}\operatorname{vol}(q_{1}^{\prime},q_{2}^{\prime},\cdots,q_{n}^{\prime})=-(\operatorname{div}\nabla F/\lambda)\operatorname{vol}(q_{1}^{\prime},q_{2}^{\prime},\cdots,q_{n}^{\prime})=-\frac{\Delta F}{\lambda}\operatorname{vol}(q_{1}^{\prime},q_{2}^{\prime},\cdots,q_{n}^{\prime}) \tag{13}\] where \(\Delta\) is the Laplace-Beltrami operator [29]. Since \(\Delta F=\operatorname{tr}(G^{-1}\operatorname{Hess}F)\), with \(G\) the Riemannian metric, we can conclude that the condition in Lemma 2 implies exponential contraction of volume (on non-positively curved manifolds). Since \(\Delta F\) only involves the trace of \(G^{-1}\operatorname{Hess}F\), the non-positive curvature assumption is more restrictive than necessary. In fact, \(\Delta F=1+H(q,P)d(q,P)\), where \(H(q,P)\) is the mean curvature; thus the same conclusion can be drawn for manifolds with non-positive mean curvature.

**Remark 5**: _From the proof of Lemma 2 we see that the function \(F\) need not be the squared distance. It can be replaced by any function whose Hessian has the required property, as the next example shows._

#### Filtering on \(SO(3)\)

Consider first the attitude control problem \[\dot{R}=Ru \tag{14}\] where \(R\in SO(3)\) and the control input \(u\in\mathfrak{so}(3)\). The control objective is to exponentially stabilize a solution \(R_{*}(t)\in SO(3)\), which verifies \(\dot{R}_{*}(t)=R_{*}(t)\Omega(t)\), where \(\Omega(t)\) is some known signal. The Lie group \(SO(3)\) is a Riemannian manifold with the bi-invariant metric \(\left\langle X,Y\right\rangle=\operatorname{tr}(X^{\top}Y)\). Due to the bi-invariance of the metric, the Levi-Civita connection is simply \(\nabla_{X}Y=\frac{1}{2}[X,Y]\); see (38). Consider the function \[F(R,R_{*})=\frac{1}{2}\|R-R_{*}\|^{2},\] where \(\|\cdot\|\) is the Frobenius norm (\(F\) is not the squared distance). The gradient and Hessian of \(F\) can be calculated as \(\nabla F=\frac{1}{2}R(R_{*}^{\top}R-R^{\top}R_{*})\) and \(\operatorname{Hess}F(RY,RZ)=\frac{1}{4}\operatorname{tr}(Z^{\top}YR_{*}^{\top}R)\), with \(Y,Z\in\mathfrak{so}(3)\). Clearly, \(R_{*}(\cdot)\) is the solution to \[\dot{R}=-k\nabla F(R,R_{*})+R\Omega(t)=-\frac{k}{2}R(R_{*}^{\top}(t)R-R^{\top}R_{*}(t))+R\Omega(t).\] Let us check the LES of \(R_{*}(\cdot)\). For \(T_{R}SO(3)\ni R^{\prime}=RX\) for some \(X\in\mathfrak{so}(3)\), we calculate \[\begin{split}\frac{1}{2}\frac{d}{dt}\left\langle R^{\prime},R^{\prime}\right\rangle&=-k\operatorname{Hess}F(R^{\prime},R^{\prime})+\left\langle\nabla_{R^{\prime}}(R\Omega(t)),R^{\prime}\right\rangle\\ &=-k\operatorname{Hess}F(R^{\prime},R^{\prime})+\frac{1}{2}\left\langle[R^{\prime},R\Omega(t)],R^{\prime}\right\rangle\\ &=-k\operatorname{Hess}F(R^{\prime},R^{\prime})+\frac{1}{2}\operatorname{tr}\{(X^{\top}X-XX^{\top})\Omega\}\\ &=-k\operatorname{Hess}F(R^{\prime},R^{\prime})\end{split}\] since \(X^{\top}X-XX^{\top}\) is symmetric. Note that the Hessian of \(F\) is positive definite at \(R=R_{*}\). Hence the controller \[u=-\frac{k}{2}(R_{*}^{\top}(t)R-R^{\top}R_{*}(t))+\Omega(t)\] renders the trajectory \(R_{*}(\cdot)\) LES, as expected.
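The closed loop just derived is easy to simulate. The following sketch (our own code, with an arbitrary gain and reference signal) integrates both \(R\) and \(R_{*}\) with exponential-map steps so that the states stay on \(SO(3)\):

```python
# Simulation sketch of the SO(3) tracking controller derived above:
# u = -(k/2)(R_*^T R - R^T R_*) + Omega(t) gives Rdot = R u with u skew,
# so we integrate with the matrix exponential to remain on SO(3).
import numpy as np
from scipy.linalg import expm

def hat(w):                               # R^3 -> so(3)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

k, dt, T = 4.0, 1e-3, 5.0
R_star = np.eye(3)
R = expm(hat(np.array([0.4, -0.3, 0.2])))     # perturbed initial attitude
for n in range(int(T / dt)):
    Omega = hat(np.array([np.sin(0.5 * n * dt), 1.0, 0.0]))  # known signal
    u = -0.5 * k * (R_star.T @ R - R.T @ R_star) + Omega     # controller
    R = R @ expm(dt * u)                  # closed loop: Rdot = R u
    R_star = R_star @ expm(dt * Omega)    # reference: Rdot_* = R_* Omega
    if n % 1000 == 0:
        print(np.linalg.norm(R - R_star))  # Frobenius error decays
```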
The extension to the design of a low-pass filter is straightforward: the dynamics \[\dot{\hat{R}}=-\frac{k}{2}\hat{R}(R^{\top}\hat{R}-\hat{R}^{\top}R)+\hat{R}\Omega \tag{15}\] is a locally exponential observer (filter) for \(\dot{R}=R\Omega\). This result has been obtained in [11]; see also [12].

### Killing Systems

#### Low-Pass Filter for Killing Systems

Consider a system defined by a time-varying Killing field [18, Chapter 8] on a Riemannian manifold \((M,g)\): \[\dot{q}=f(t,q) \tag{16}\] i.e., \(L_{f}g=0\); see also (39) in the Appendix. We call such a system a Killing system. When the system (16) is perturbed by some noise, it is tempting to design a low-pass filter to reconstruct the system state from the corrupted data \(q\). For that, we propose the simple filter \[\dot{\hat{q}}=f(t,\hat{q})-k\nabla F(\hat{q},q), \tag{17}\] where \(F(q,p)=\frac{1}{2}d(q,p)^{2}\) and \(k\) is a positive constant. To verify the convergence of this filter, we calculate as before \[\frac{1}{2}\frac{d}{dt}\left\langle\hat{q}^{\prime},\hat{q}^{\prime}\right\rangle=\left\langle\nabla_{\hat{q}^{\prime}}(f-k\nabla F),\hat{q}^{\prime}\right\rangle=\left\langle\nabla_{\hat{q}^{\prime}}f,\hat{q}^{\prime}\right\rangle-k\left\langle\nabla_{\hat{q}^{\prime}}\nabla F,\hat{q}^{\prime}\right\rangle=-k\left\langle\nabla_{\hat{q}^{\prime}}\nabla F,\hat{q}^{\prime}\right\rangle,\] where the last equality follows from the Killing property (39), and \(\hat{q}^{\prime}\) is a variation along \(\hat{q}(\cdot)\). Since \(q(\cdot)\) itself is a solution of (17) and \(\operatorname{Hess}F|_{\hat{q}=q}=g\) (Remark 4), the right-hand side is bounded by \(-c\langle\hat{q}^{\prime},\hat{q}^{\prime}\rangle\) for some \(c>0\) near \(\hat{q}=q\), and the filter state converges locally exponentially to \(q(\cdot)\).
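To illustrate the filter (17), take \(M=S^{2}\) and let \(f\) be the rotation field \(q\mapsto\omega\times q\), which is Killing for the round metric; using \(-\nabla F(\hat{q},q)=\log_{\hat{q}}q\), a minimal sketch (our own toy code and gains) is:

```python
# Sketch of the Killing-system filter (17) on S^2: rotations qdot = w x q
# are generated by a Killing field, and -grad F(qh, q) = Log_qh(q),
# so the filter reads  qh' = w x qh + k Log_qh(q).
import numpy as np

def log_map(q, p):                           # Log_q(p) on the unit sphere
    th = np.arccos(np.clip(q @ p, -1.0, 1.0))
    return np.zeros(3) if th < 1e-12 else th / np.sin(th) * (p - np.cos(th) * q)

w = np.array([0.0, 0.0, 1.0])                # rotation axis: a Killing field
k, dt = 2.0, 1e-3
q = np.array([1.0, 0.0, 0.0])                # true state of (16)
qh = np.array([np.cos(0.5), 0.0, np.sin(0.5)])   # filter state, 0.5 rad off
for _ in range(5000):                        # integrate up to t = 5
    q = q + dt * np.cross(w, q);  q /= np.linalg.norm(q)
    qh = qh + dt * (np.cross(w, qh) + k * log_map(qh, q))
    qh /= np.linalg.norm(qh)
print(np.arccos(np.clip(q @ qh, -1.0, 1.0)))  # filter error, near 0
```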
## 3 Euler-Lagrange Systems

We now turn to second-order systems, whose natural state space is the tangent bundle \(TM\), equipped with the Sasaki metric. For a curve \(w(s)=(c(s),v(s))\) lying in \(TM\), we can calculate its length under the Sasaki metric as: \[\ell(w)=\int\sqrt{\langle w^{\prime}(s),w^{\prime}(s)\rangle_{s}}\,ds=\int\sqrt{\langle c^{\prime}(s),c^{\prime}(s)\rangle+\langle v^{\prime}(s),v^{\prime}(s)\rangle}\,ds\] in which \(v^{\prime}(s)\) is understood as the covariant derivative of \(v(\cdot)\) along \(c(\cdot)\).

**Assumption 1**: _In the sequel we assume that for each pair of points \((q,v)\) and \((p,w)\) in \(TM\), the minimizing geodesic that joins \((q,v)\) to \((p,w)\) always exists._

Now that the EL equation (19) defines a system on \(TM\), it seems that to analyze LES of solutions of the EL equation, one has to consider variations (see Section II-C) of the form \((q^{\prime},v^{\prime})\), with \(v^{\prime}\in TTM\). The next theorem shows that this is not needed.

**Theorem 3**: _Consider a dynamical system on a Riemannian manifold \((M,g)\):_ \[\nabla_{\dot{q}}\dot{q}=f(q,\dot{q}) \tag{21}\] _where \(f\) is smooth. Let \((q(\cdot),\dot{q}(\cdot))\) be a trajectory of the system and \(q^{\prime}\) any variation along \(q(\cdot)\). Then the system (21) is contractive if the following system_ \[\frac{D}{dt}\left[\begin{array}{c}q^{\prime}\\ \frac{Dq^{\prime}}{dt}\end{array}\right]=F\left(\left[\begin{array}{c}q^{\prime}\\ \frac{Dq^{\prime}}{dt}\end{array}\right]\right) \tag{22}\] _is exponentially stable along \(q(\cdot)\)._

**Remark 6**: _Notice that \((q^{\prime},Dq^{\prime}/dt)\in T_{q}M\times T_{q}M\), thus exponential stability can be defined in the obvious way for the system (22) using the Sasaki metric._

Given a point \((q_{1},v_{1})\in TM\), let \(\eta_{1}(t)=(q_{1}(t),\dot{q}_{1}(t))\) be the integral curve of the system (21) passing through it at time \(t=0\). Let \(\eta_{0}(t)=(q(t),\dot{q}(t))\) be another integral curve with initial condition \((q_{0},v_{0})\). By assumption, there exists a minimizing geodesic \(\gamma(s)=(q(s),v(s)),\ s\in[0,1]\) joining \((q_{0},v_{0})\) to \((q_{1},v_{1})\), that is, \(\gamma(0)=(q_{0},v_{0}),\ \gamma(1)=(q_{1},v_{1})\).
Let \(q(s,t)\) be the solution to the system (21) with initial condition \(\gamma(s)\); then the parameterized curve \(s\mapsto(q(s,t),\frac{\partial q(s,t)}{\partial t})\) forms a variation between the curves \(\eta_{0}(\cdot)\) and \(\eta_{1}(\cdot)\). Therefore, the following estimate of the distance between the two points \(\eta_{0}(t)\) and \(\eta_{1}(t)\) is obvious: \[\begin{split}d_{TM}(\eta_{0}(t),\eta_{1}(t))&\leq\int_{0}^{1}\sqrt{\left|\frac{\partial q}{\partial s}(s,t)\right|^{2}+\left|\frac{D}{ds}\frac{\partial q}{\partial t}\right|^{2}}\,ds\\ &=\int_{0}^{1}\sqrt{\left|\frac{\partial q}{\partial s}(s,t)\right|^{2}+\left|\frac{D}{dt}\frac{\partial q}{\partial s}\right|^{2}}\,ds\end{split} \tag{23}\] The conclusion follows immediately after replacing \(\frac{\partial q}{\partial s}\) by \(q^{\prime}\).

As we have remarked earlier, due to Theorem 3, the analysis of LES and contraction does not require variations of the form \((q^{\prime},v^{\prime})\); \(q^{\prime}\) alone is sufficient. This observation is crucial for the rest of this section. With the preceding preparations, we are now in a position to study tracking controllers for the EL system. We focus on the fully-actuated system: \[\nabla_{\dot{q}}\dot{q}=-\operatorname{grad}V(q)+u \tag{24}\] and assume \((q_{\star}(\cdot),\dot{q}_{\star}(\cdot),u_{\star}(\cdot)\equiv 0)\) is a bounded feasible solution to the EL equation, i.e., \(\nabla_{\dot{q}_{\star}}\dot{q}_{\star}=-\nabla V(q_{\star})\) (the case of non-zero \(u_{\star}\) is similar). We propose a controller with structure \(u=u_{P}+u_{D}+u_{R}\) to locally exponentially stabilize \((q_{\star}(\cdot),\dot{q}_{\star}(\cdot))\), where \[\begin{split}u_{P}(q)&=-k_{2}\nabla F(q,q_{\star}),\\ u_{D}(q,\dot{q})&=-k_{1}(\dot{q}-P_{q_{\star}}^{q}\dot{q}_{\star}),\\ u_{R}(q,\dot{q})&=R(\dot{q},\nabla F(q,q_{\star}))\dot{q}.\end{split} \tag{25}\] As before, \(F\) is half of the squared distance function, \(k_{1}\) and \(k_{2}\) are constants to be determined, \(P_{q}^{p}\) is the parallel transport from \(q\) to \(p\), and \(R(\cdot,\cdot)\) is the curvature tensor. Heuristically, this can be seen as a PD controller [35] with a curvature compensation term. By construction, \((q_{\star}(\cdot),\dot{q}_{\star}(\cdot))\) is a solution to the closed-loop system since \(u(q_{\star},\dot{q}_{\star})\equiv 0\). Hence it remains to show the LES of this solution. Thanks to Theorem 3 and Proposition 1, we need only check the exponential stability of the system (22) along \(q_{\star}(\cdot)\). For this we calculate \[\begin{split}\nabla_{q^{\prime}}\nabla_{\dot{q}}\dot{q}&=\nabla_{\dot{q}}\nabla_{q^{\prime}}\dot{q}+R(\dot{q},q^{\prime})\dot{q}\\ &=\nabla_{\dot{q}}\nabla_{\dot{q}}q^{\prime}+R(\dot{q},q^{\prime})\dot{q}\\ &=\frac{D^{2}q^{\prime}}{dt^{2}}+R(\dot{q},q^{\prime})\dot{q}\end{split} \tag{26}\] where we used the basic fact about the curvature tensor: \(\frac{D}{ds}\frac{D}{dt}X-\frac{D}{dt}\frac{D}{ds}X=R(\dot{q},q^{\prime})X\); see e.g., [17, Lemma 4.1].
The following calculations are in order (notice that we calculate along \(q_{\star}(\cdot)\); otherwise they are invalid): \[\begin{split}\nabla_{q^{\prime}}u_{P}&=-k_{2}\nabla_{q^{\prime}}\nabla F=-k_{2}q^{\prime}\\ \nabla_{q^{\prime}}u_{D}&=-k_{1}\nabla_{q^{\prime}}(\dot{q}-P_{q_{\star}}^{q}\dot{q}_{\star})=-k_{1}\nabla_{\dot{q}}q^{\prime}\\ \nabla_{q^{\prime}}u_{R}&=\nabla_{q^{\prime}}R(\dot{q},\nabla F)\dot{q}\\ &=(\nabla_{q^{\prime}}R)(\dot{q},\nabla F)\dot{q}+R(\nabla_{q^{\prime}}\dot{q},\nabla F)\dot{q}\\ &\quad+R(\dot{q},\nabla_{q^{\prime}}\nabla F)\dot{q}+R(\dot{q},\nabla F)\nabla_{q^{\prime}}\dot{q}\\ &=R(\dot{q},\nabla_{q^{\prime}}\nabla F)\dot{q}\\ &=R(\dot{q},q^{\prime})\dot{q}\end{split} \tag{27}\] where we have used the facts that \(\nabla_{q^{\prime}}\nabla F(q,q_{\star})|_{q=q_{\star}(t)}=q^{\prime}\) and \(\nabla F(q_{\star},q_{\star})=0\). The second line of (27) holds because one can take \(s\mapsto q(s,t)\) to be a geodesic. Substituting (26) and (27) into the EL equation, we immediately get \[\frac{D^{2}q^{\prime}}{dt^{2}}=-k_{1}\frac{Dq^{\prime}}{dt}-k_{2}q^{\prime}-\nabla_{q^{\prime}}\nabla V. \tag{28}\]

**Theorem 4**: _Let \((q_{\star}(\cdot),\dot{q}_{\star}(\cdot),u_{\star}\equiv 0)\) be a bounded feasible solution to the fully-actuated Euler-Lagrange system (24). If the Hessian of the potential function \(V\) is bounded along \(q_{\star}(\cdot)\), then the controller (25) renders \((q_{\star}(\cdot),\dot{q}_{\star}(\cdot))\) LES for \(k_{1}>0\) and \(k_{2}>0\) large enough._

If the Hessian of \(V\) is bounded along \(q_{\star}(\cdot)\), then it is obvious that the "linear system" (28) is exponentially stable for \(k_{1}>0\) and \(k_{2}>0\) chosen large enough. The theorem follows by invoking Theorem 3.

**Remark 7**: _Note that the assumption of Theorem 4 holds if \(V\in\mathcal{C}^{2}\), as \(q_{\star}(\cdot)\) is bounded. If \(V\) is (weakly) convex, then the Hessian of \(V\) is positive semi-definite, hence the claim holds for arbitrary positive constants \(k_{1},k_{2}\)._

**Remark 8**: _In equation (28), we have in fact obtained the celebrated Jacobi equation by setting \(u=0\) and \(V=0\):_ \[\frac{D^{2}q^{\prime}}{dt^{2}}=-R(\dot{q},q^{\prime})\dot{q}. \tag{29}\]

Since we work only locally, let us consider a constant curvature manifold, that is, \[\langle R(\dot{q},q^{\prime})\dot{q},q^{\prime}\rangle=K\langle\dot{q},\dot{q}\rangle\langle q^{\prime},q^{\prime}\rangle,\quad\forall\dot{q},q^{\prime}\] for some constant \(K\). Consider \(V(\dot{q},q^{\prime})=|Dq^{\prime}/dt|^{2}+\langle R(\dot{q},q^{\prime})\dot{q},q^{\prime}\rangle\); its time derivative along a geodesic reads \[\begin{split}\dot{V}&=2\langle\frac{D^{2}q^{\prime}}{dt^{2}},\frac{Dq^{\prime}}{dt}\rangle+\langle R(\dot{q},\frac{Dq^{\prime}}{dt})\dot{q},q^{\prime}\rangle+\langle R(\dot{q},q^{\prime})\dot{q},\frac{Dq^{\prime}}{dt}\rangle\\ &=2\langle-R(\dot{q},q^{\prime})\dot{q},\frac{Dq^{\prime}}{dt}\rangle+2\langle R(\dot{q},q^{\prime})\dot{q},\frac{Dq^{\prime}}{dt}\rangle\\ &=0,\end{split}\] where we have used the fact that \(\frac{D\dot{q}}{dt}=0\). Remembering that \(q(\cdot)\) is a geodesic, we may assume \(|\dot{q}|=1\); then it follows that \[V(\dot{q},q^{\prime})=|Dq^{\prime}/dt|^{2}+K|q^{\prime}|^{2}=\text{constant}.\] Therefore, we can draw the following non-rigorous conclusions:

* \(K>0\): along a given geodesic, nearby geodesics oscillate around it (see Fig. 2).
* \(K<0\): along a given geodesic, nearby geodesics tend to diverge.
* \(K=0\): the geodesics neither converge nor diverge.
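These qualitative statements can be checked against the scalar form of the Jacobi equation on a constant curvature manifold, \(j^{\prime\prime}+Kj=0\) for a unit-speed geodesic. A minimal numerical sketch (our own toy code, semi-implicit Euler):

```python
# The scalar Jacobi equation j'' + K j = 0 (constant curvature K,
# unit-speed geodesic) reproduces the three regimes listed above.
import numpy as np

def jacobi(K, T=10.0, dt=1e-3):
    j, dj = 0.0, 1.0                  # initial separation rate
    for _ in range(int(T / dt)):
        j = j + dt * dj               # semi-implicit Euler step
        dj = dj - dt * K * j
    return j

print(jacobi(1.0))    # K > 0: oscillates (bounded, like sin t)
print(jacobi(-1.0))   # K < 0: diverges (like sinh t)
print(jacobi(0.0))    # K = 0: linear growth t
```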
In the above we have studied tracking controller design for fully-actuated EL systems. This problem becomes more involved for under-actuated systems; in that case, one may apply the energy shaping method to obtain matching conditions and then try to solve certain PDEs on the manifold [37]; see also [38] and the references therein.

### Speed Observer for EL Systems

Consider the EL system without input \[\nabla_{\dot{q}}\dot{q}=-\nabla V(q) \tag{30}\] where \(V(q)\) is the potential energy. The objective is to design a speed observer for \(\dot{q}(\cdot)\) knowing \(q(\cdot)\). In [39], Aghannan and Rouchon proposed the following intrinsic speed observer for the system (30) when there is no potential energy in the EL equation: \[\left\{\begin{aligned} \dot{\hat{q}}&=\hat{v}-\alpha\nabla F(\hat{q},q)\\ \nabla_{\dot{\hat{q}}}\hat{v}&=-\beta\nabla F(\hat{q},q)+R(\hat{v},\nabla F)\hat{v},\end{aligned}\right. \tag{31}\] where \(F\) is half of the squared distance as before. The convergence of this observer was analyzed in local coordinates via contraction analysis [39], which was, in our opinion, quite tedious.

**Remark 9**: _Using the notation introduced in Section II-E, we may rewrite (31) as_ \[\left\{\begin{aligned} \dot{\hat{q}}&=\hat{v}+\alpha\log_{\hat{q}}q\\ \nabla_{\dot{\hat{q}}}\hat{v}&=\beta\log_{\hat{q}}q-R(\hat{v},\log_{\hat{q}}q)\hat{v},\end{aligned}\right.\] _obviating the use of the squared distance function._

In this subsection, we provide a much simpler proof using the methods developed in this paper. Note that our model contains a non-vanishing potential energy function; thus it is an extension of the free Lagrangian case in [39]. To cope with the potential energy, we consider a slightly modified version of (31): \[\left\{\begin{aligned} \dot{\hat{q}}&=\hat{v}-\alpha\nabla F(\hat{q},q)\\ \nabla_{\dot{\hat{q}}}\hat{v}&=-\beta\nabla F(\hat{q},q)+R(\hat{v},\nabla F)\hat{v}-P_{q}^{\hat{q}}\nabla V(q).\end{aligned}\right. \tag{32}\] Note that by construction, \((q(\cdot),\dot{q}(\cdot))\) is a solution to the observer. Hence it suffices to study LES of \((q(\cdot),\dot{q}(\cdot))\). Substituting \(\hat{v}=\dot{\hat{q}}+\alpha\nabla F(\hat{q},q)\) into the second line of (32), we get \[\begin{split}\nabla_{\dot{\hat{q}}}(\dot{\hat{q}}+\alpha\nabla F)=&-\beta\nabla F+R(\dot{\hat{q}}+\alpha\nabla F,\nabla F)(\dot{\hat{q}}+\alpha\nabla F)\\ &-P_{q}^{\hat{q}}\nabla V(q)\end{split}\] or \[\begin{split}\nabla_{\dot{\hat{q}}}\dot{\hat{q}}=&-\alpha\nabla_{\dot{\hat{q}}}\nabla F-\beta\nabla F+R(\dot{\hat{q}},\nabla F)(\dot{\hat{q}}+\alpha\nabla F)\\ &-P_{q}^{\hat{q}}\nabla V(q).\end{split}\] Taking the covariant derivative along \(q(\cdot)\) on both sides yields \[\nabla_{q^{\prime}}\nabla_{\dot{\hat{q}}}\dot{\hat{q}}=\frac{D^{2}q^{\prime}}{dt^{2}}+R(\dot{\hat{q}},q^{\prime})\dot{\hat{q}} \tag{33}\] on the left, and \[\begin{split}&-\alpha\nabla_{q^{\prime}}\nabla_{\dot{\hat{q}}}\nabla F-\beta\nabla_{q^{\prime}}\nabla F+\nabla_{q^{\prime}}[R(\dot{\hat{q}},\nabla F)(\dot{\hat{q}}+\alpha\nabla F)]\\ =&-\alpha\nabla_{\dot{\hat{q}}}\nabla_{q^{\prime}}\nabla F-\alpha R(\dot{\hat{q}},q^{\prime})\nabla F-\beta\nabla_{q^{\prime}}\nabla F\\ &+\nabla_{q^{\prime}}[R(\dot{\hat{q}},\nabla F)(\dot{\hat{q}}+\alpha\nabla F)]\\ =&-\alpha\nabla_{\dot{q}}q^{\prime}-\beta q^{\prime}+R(\dot{q},\nabla_{q^{\prime}}\nabla F)\dot{q}\\ =&-\alpha\nabla_{\dot{q}}q^{\prime}-\beta q^{\prime}+R(\dot{q},q^{\prime})\dot{q}\end{split}\] on the right, where we have used the relations \(\nabla F|_{\hat{q}=q}=0\), \(\nabla_{q^{\prime}}\nabla F|_{\hat{q}=q}=q^{\prime}\) and \(\nabla_{q^{\prime}}P_{q}^{\hat{q}}\nabla V(q)=0\) (taking \(s\mapsto\hat{q}(s,t)\) to be a geodesic).
Combining this with (33) yields \[\frac{D^{2}q^{\prime}}{dt^{2}}+\alpha\frac{Dq^{\prime}}{dt}+\beta q^{\prime}=0. \tag{34}\] This, together with Theorem 3, shows the local exponential convergence of the observer.

**Remark 10**: _Notice that in both the tracking controller and the observer design, we have to calculate the geodesic distance. Although there are efficient computation schemes, it is still tempting to avoid computing geodesics. This may be achieved by embedding the system into Euclidean space and using equivalent distance functions in Euclidean space. The example of observer design on \(SO(3)\) in Section II-D used this method._

## 4 Conclusion

In this paper, we have proposed a novel intrinsic approach for analyzing local exponential stability of trajectories and contraction. The advantages of our approach have been justified by applications and by improved analysis of some existing works in the literature. We leave studies of concrete examples, including under-actuated mechanical systems, for future research.

## 5 Acknowledgement

We thank Prof. Antoine Chaillet, who gave important comments and suggestions throughout the writing of the paper.

Figure 2: For \(K>0\), the geodesics oscillate near a given geodesic.

## 6 Appendix

We collect some elementary formulas in Riemannian geometry as a reference for the reader. They can be found in standard texts such as [17, 18]. Let \((M,g)\) be a smooth Riemannian manifold. The Levi-Civita connection on \(M\) is compatible with the metric \(g\): for any three vector fields \(X,Y,Z\in\Gamma(M)\), \(X\left\langle Y,Z\right\rangle=\left\langle\nabla_{X}Y,Z\right\rangle+\left\langle Y,\nabla_{X}Z\right\rangle\). The Levi-Civita connection is torsion-free in the sense that \(\nabla_{X}Y-\nabla_{Y}X=[X,Y]\), where \([X,Y]\) is the Lie bracket. Given a curve \(q:t\mapsto q(t)\) in \(M\) and a vector field \(v(t)\) along \(q(\cdot)\), the covariant derivative of \(v(\cdot)\) along \(q(\cdot)\) is defined as \(\frac{Dv(t)}{dt}:=\nabla_{\dot{q}(t)}v(t)\). Given a 2-surface parameterized by \((s,t)\mapsto q(s,t)\), there holds \[\frac{D}{ds}\frac{\partial q}{\partial t}=\frac{D}{dt}\frac{\partial q}{\partial s}. \tag{35}\] The gradient of a scalar function \(f\) on \(M\) is defined as the unique vector field \(\nabla f\) satisfying \(\langle\nabla f,X\rangle=df(X)\). The Hessian of a scalar function is a symmetric bilinear form on \(TM\) defined as \[\operatorname{Hess}f(X,Y):=\left\langle\nabla_{X}\nabla f,Y\right\rangle,\ \forall X,Y\in\Gamma(M). \tag{36}\] For a parameterized surface \((s,t)\mapsto q(s,t)\) and a vector field \(X\) along the surface, there holds \[\frac{D}{ds}\frac{DX}{dt}-\frac{D}{dt}\frac{DX}{ds}=R\left(\frac{\partial q}{\partial t},\frac{\partial q}{\partial s}\right)X. \tag{37}\] A metric on a Lie group \(G\) is bi-invariant if it is both left-invariant, i.e., \(\left\langle dL_{x}v,dL_{x}w\right\rangle=\left\langle v,w\right\rangle\), and right-invariant. For a bi-invariant metric, the Levi-Civita connection admits the simple formula \[\nabla_{X}Y=\frac{1}{2}[X,Y]. \tag{38}\] A vector field \(X\) on \(M\) is called a Killing field (w.r.t. \(g\)) if \(L_{X}g=0\). Consequently, if \(X\) is Killing and \(Y\) is an arbitrary vector field, there holds \[g(\nabla_{Y}X,Y)=0. \tag{39}\]

**Lemma 3**: _Given \(\gamma_{1},\gamma_{2}\in\mathcal{C}^{1}(\mathbb{R}_{+};M)\), where \(M\) is a Riemannian manifold. If \(\gamma_{1}(0)=\gamma_{2}(0)=x\) and \(\gamma_{1}^{\prime}(0)=\gamma_{2}^{\prime}(0)=v\), then \(d(\gamma_{1}(s),\gamma_{2}(s))=O(s^{2})\) when \(s>0\) is sufficiently small._
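Lemma 3 can also be illustrated numerically. The sketch below (our own toy example on the unit sphere) compares a geodesic with a second curve sharing the same position and velocity at \(s=0\):

```python
# Numerical check of Lemma 3 on S^2: two curves with matching position
# and velocity at s = 0 stay within O(s^2) of each other.
import numpy as np

x = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])        # shared initial velocity
w = np.array([0.0, 1.0, 0.0])        # second-order disagreement

def g1(s):                           # unit-speed geodesic through x
    return np.cos(s) * x + np.sin(s) * v

def g2(s):                           # same 1-jet at s = 0, different curve
    y = x + s * v + s**2 * w
    return y / np.linalg.norm(y)

for s in [0.1, 0.05, 0.025]:
    d = np.arccos(np.clip(g1(s) @ g2(s), -1.0, 1.0))
    print(s, d / s**2)               # the ratio stays bounded as s -> 0
```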
2305.14377
Unsupervised Discovery of Continuous Skills on a Sphere
Recently, methods for learning diverse skills to generate various behaviors without external rewards have been actively studied as a form of unsupervised reinforcement learning. However, most of the existing methods learn a finite number of discrete skills, and thus the variety of behaviors that can be exhibited with the learned skills is limited. In this paper, we propose a novel method for learning potentially an infinite number of different skills, which is named discovery of continuous skills on a sphere (DISCS). In DISCS, skills are learned by maximizing mutual information between skills and states, and each skill corresponds to a continuous value on a sphere. Because the representations of skills in DISCS are continuous, infinitely diverse skills could be learned. We examine existing methods and DISCS in the MuJoCo Ant robot control environments and show that DISCS can learn much more diverse skills than the other methods.
Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka
2023-05-21T06:29:41Z
http://arxiv.org/abs/2305.14377v2
# Unsupervised Discovery of Continuous Skills on a Sphere

###### Abstract

Recently, methods for learning diverse skills to generate various behaviors without external rewards have been actively studied as a form of unsupervised reinforcement learning. However, most of the existing methods learn a finite number of discrete skills, and thus the variety of behaviors that can be exhibited with the learned skills is limited. In this paper, we propose a novel method for learning potentially an infinite number of different skills, which is named _discovery of continuous skills on a sphere_ (DISCS). In DISCS, skills are learned by maximizing mutual information between skills and states, and each skill corresponds to a continuous value on a sphere. Because the representations of skills in DISCS are continuous, infinitely diverse skills could be learned. We examine existing methods and DISCS in the MuJoCo Ant robot control environments and show that DISCS can learn much more diverse skills than the other methods.

While VISR has these advantages, it also has drawbacks. VISR has been tested only in discrete action domains in the original and subsequent research (Liu and Abbeel, 2021), and according to the experimental results of Kim et al. (2021) in continuous action control environments, the diversity of skills learned by VISR was limited. Furthermore, the unsupervised learning process itself (e.g., its sample efficiency) has rarely been analyzed. Due to the computational cost of unsupervised learning, its sample efficiency is important, and so is analysis from that perspective.

In this paper, we propose a new unsupervised RL method, _discovery of continuous skills on a sphere_ (DISCS), which, like VISR, learns continuous skills as weights of reward vectors. We show an overview of our method in Figure 1. We investigate the process of unsupervised learning in existing methods and DISCS in the MuJoCo Ant robot control environment with continuous actions, and show that DISCS can sample-efficiently learn various skills compared to existing methods. We also show that learning skills in VISR is more difficult than in DISCS because of the way VISR generates rewards. Furthermore, we show that an existing discrete skill learning method with many skills cannot be a substitute for DISCS. In addition, we propose _hindsight preference posterior sampling_ (HIPPS) as one of the techniques of DISCS and show that it helps learning in DISCS.

The paper is organized as follows. We introduce the background of DISCS, multi-objective RL, in Section 2 and the details of DISCS in Section 3. In Section 4, related work, including VISR, and the differences between VISR and DISCS are introduced. In Section 5, experimental analysis and comparisons between existing methods and DISCS are shown. In Section 6, concluding remarks are given.

## 2 Background

This section briefly introduces multi-objective RL (MORL), upon which our method is based. The tasks in MORL are modeled as multi-objective Markov decision processes (MOMDPs) (Roijers et al., 2013). An MOMDP is an extension of the well-known Markov decision process (MDP) (Sutton and Barto, 2018).
An MOMDP can be represented by a tuple \((\mathcal{S},\mathcal{A},r,T,s_{0},\mathcal{W},f_{\mathcal{W}})\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the spaces of states and actions, respectively, \(s_{0}\in\mathcal{S}\) is the initial state, \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{m}\) is a reward vector function whose output dimension is \(m\), \(T:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is a function that determines the probability of transition to a state when an action is taken at a state, \(\mathcal{W}\subset\mathbb{R}^{m}\) is the space of preferences, and \(f_{\mathcal{W}}:\mathcal{W}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) is a scalarization function that transforms the total reward into a scalarized total reward according to a preference. We consider the class of MOMDPs with a linear scalarization function. That is, \(f_{\mathcal{W}}(w,V^{\pi}(s,w))=w^{\top}V^{\pi}(s,w)\), where \(V^{\pi}(s,w)=\mathbb{E}_{\pi_{w}}\left[\sum_{t=0}\gamma^{t}r(s_{t},a_{t})\,\middle|\,s_{0}=s,w\right]\), \(w\in\mathcal{W}\), and \(\pi_{w}\) is a policy (a distribution over actions) for \(w\). The goal of MORL is to learn a policy that maximizes the total scalarized reward for each preference. An MOMDP with only one preference corresponds to one MDP. MORL is a framework for improving learning efficiency by learning the optimal policy set for the given preference set, rather than learning policies from scratch in individual MDPs. In this paper, as in previous work (Abels et al., 2019; Yang et al., 2019; Chen et al., 2020), we focus on learning a preference-conditional policy and Q-function, which returns the expected cumulative reward vectors for the policy. Also, we put constraints on \(\mathcal{W}\) to remove redundancy of \(w\) (e.g., a multiplication of rewards leads to the same optimal policy). For example, Yang et al. (2019) regularize the L1 norm of preferences. In our method, we regularize the L2 norm of preferences instead because of the tractability of distributions on \(\mathcal{W}\), and \(\mathcal{W}=\{w\mid||w||_{2}=1,w\in\mathbb{R}^{m}\}\). Note that an MORL agent learns preference-conditional policies, which means that preference \(w\) controls sequential actions (often referred to as a skill). Thus, we refer to \(w\) as not only a preference but also a skill.

## 3 Discovery of Continuous Skills on a Sphere

In this section, we introduce the three main components of DISCS: 1) multi-objective soft actor-critic (MOSAC), 2) reward vector generation, and 3) their effective training via HIPPS. DISCS learns a policy by MOSAC, a simple extension of soft actor-critic (SAC) (Haarnoja et al., 2018), one of the most sample-efficient off-policy RL methods, to MORL. A DISCS agent learns how to generate reward vectors on the basis of mutual information between states and skills on a unit sphere, and the generated reward vectors are used for the learning in MOSAC. DISCS uses HIPPS, which aims to improve the sample efficiency of DISCS by adding data sampled from the distribution (posterior) learned in the reward generation. Pseudo code of DISCS is shown in Section B.

### Multi-Objective Soft Actor-Critic

An MOSAC agent collects data from the environment (rollouts) and preserves them in a replay buffer. Using the data in the replay buffer, the agent iteratively learns preference-conditional policies and Q-functions, as in MORL. The agent maximizes the sum of the policy entropy and the total reward, as in SAC.
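As a concrete illustration of these conventions, the following minimal sketch (our own toy code, not part of the paper's implementation) samples preferences on the unit sphere and scalarizes a reward vector linearly; normalizing draws from \(\mathcal{N}(\mu,\frac{1}{\kappa}I)\) also yields the projected-normal sampler that reappears in Section 3.3:

```python
# Sketch of the preference conventions assumed here: skills w lie on the
# unit sphere (||w||_2 = 1), the prior p(w) is uniform, and reward
# vectors are scalarized linearly as w^T r.  Normalizing N(mu, I/kappa)
# draws gives projected-normal samples; kappa -> 0 recovers the uniform
# prior on the sphere.
import numpy as np

def sample_projected_normal(mu, kappa, n, rng):
    x = mu + rng.standard_normal((n, mu.size)) / np.sqrt(kappa)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
m = 2                                         # preference dimension
w = sample_projected_normal(np.zeros(m), 1e-8, 5, rng)  # ~uniform prior
print(np.linalg.norm(w, axis=1))              # all ones: w on the sphere
r = np.array([0.3, -1.2])                     # an m-dimensional reward vector
print(w @ r)                                  # scalarized rewards w^T r
```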
For simplicity, we introduce an \((m+1)\)-dimensional extended reward vector and preference, whose \(0\)-th dimension is reserved for the entropy of the policy. Let \(\tilde{r}\) denote \((c,r_{1},\dots,r_{m})^{\top}\), where \(c\) is typically \(0\) and \(r_{i}\) (\(1\leq i\leq m\)) is the \(i\)-th element of the original reward vector, and let \(\bar{w}\) denote \((1,w_{1},\dots,w_{m})^{\top}\), where \(w_{i}\) (\(1\leq i\leq m\)) is the \(i\)-th element of the original preference. Let \(h^{\pi}(s^{\prime},a^{\prime},w)\) denote the \((m+1)\)-dimensional vector \((-\alpha\log\pi(a^{\prime}|s^{\prime},w),0,\dots,0)^{\top}\) for the entropy of the policy, where \(\alpha\) is the coefficient of the entropy. The Q-function is updated based on a Bellman operation with reward and entropy vectors, \[\mathcal{T}Q^{\pi}(s,a,w)=\tilde{r}(s,a)+\gamma\mathbb{E}_{s^{\prime}}[V^{\pi}(s^{\prime},w)] \tag{1}\] \[V^{\pi}(s^{\prime},w)=\mathbb{E}_{\pi(a^{\prime}|s^{\prime},w)}[Q^{\pi}(s^{\prime},a^{\prime},w)+h^{\pi}(s^{\prime},a^{\prime},w)]. \tag{2}\] Applying the Bellman operator \(\mathcal{T}\) defined above repeatedly leads to a fixed point because \(\mathcal{T}\) is a contraction mapping (see e.g., (Bertsekas, 2012)). In MOSAC, the policy is updated as follows: \[\arg\min_{\pi^{\prime}\in\Pi}\mathrm{D}_{\mathrm{KL}}\left(\pi^{\prime}(a|s,w)\middle\|\frac{\exp(\frac{\bar{w}^{\top}}{\alpha}Q^{\pi}(s,a,w))}{Z^{\pi}(s,w)}\right) \tag{3}\] For these updates, extensions of two theorems in SAC (Haarnoja et al., 2018) can be derived in the same way as the proofs in SAC, from the fact that one \(w\) corresponds to one MDP.

_Theorem 1_.: For any \(s\in\mathcal{S},a\in\mathcal{A},w\in\mathcal{W}\) and \(\pi\), and \(\pi^{\prime}\) which is updated by (3), \(\bar{w}^{\top}(Q^{\pi^{\prime}}(s,a,w)-Q^{\pi}(s,a,w))\geq 0\), assuming \(|\mathcal{A}|<\infty\). This means that \(\pi\) can be improved by (3).

_Theorem 2_.: Repeated application of the updates of Q-functions and policies converges to a policy \(\pi^{*}\) such that \(\bar{w}^{\top}(Q^{\pi^{*}}(s,a,w)-Q^{\pi}(s,a,w))\geq 0\) for all \(\pi\) and \(s,a,w\), assuming \(|\mathcal{A}|<\infty\).

In this paper, the above policy and Q-function are approximated by neural networks. Let \(Q_{\theta_{Q}}(s,a,w)\) denote a Q-function and \(\pi_{\theta_{\pi}}(a|s,w)\) a policy, whose parameter vectors are \(\theta_{Q}\) and \(\theta_{\pi}\), respectively. In the same way as in SAC, as the target for a Q-function update, we use a Q-function with parameter \(\bar{\theta}\). \(\bar{\theta}\) is an exponential moving average of \(\theta_{Q}\), updated as \(\bar{\theta}\leftarrow\tau\theta_{Q}+(1-\tau)\bar{\theta}\). The policy and Q-function are updated by minimizing the losses \(\mathcal{L}_{\text{actor}}\) and \(\mathcal{L}_{\text{critic}}\), which are, respectively, \[\mathbb{E}\left[\alpha\log\pi_{\theta_{\pi}}(a_{t}|s_{t},w)-\bar{w}^{\top}Q_{\theta_{Q}}(s_{t},a_{t},w)\right],\text{ and} \tag{4}\] \[\mathbb{E}\left[(\bar{w}^{\top}(Q_{\theta_{Q}}(s_{t},a_{t},w)-\bar{\mathcal{T}}Q_{\bar{\theta}}(s_{t},a_{t},w)))^{2}\right], \tag{5}\] where \(\mathbb{E}\) in the above equations denotes the expectation over tuples \((w,s_{t},a_{t},s_{t+1})\) sampled from the replay buffer, and \(\bar{\mathcal{T}}Q_{\bar{\theta}}(s_{t},a_{t},w)\) is \[\tilde{r}(s_{t},a_{t})+\gamma\mathbb{E}_{a\sim\pi_{w}}[Q_{\bar{\theta}}(s_{t+1},a,w)+h^{\pi}(s_{t+1},a,w)]. \tag{6}\]
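The losses (4)-(5) can be written compactly in code. The sketch below is PyTorch-style and assumes hypothetical interfaces of our own choosing (`pi.sample`/`pi.rsample` returning an action and its log-probability, and a vector-valued `Q`); it is a sketch of the computation, not the paper's implementation:

```python
# Minimal sketch of the MOSAC losses: rewards are (m+1)-vectors,
# preferences are extended to w_bar = (1, w), and both losses
# scalarize the vector-valued Q with w_bar.
import torch

def critic_loss(Q, Q_target, pi, batch, alpha, gamma):
    s, a, w, r_tilde, s2 = batch          # r_tilde: (B, m+1) extended reward
    w_bar = torch.cat([torch.ones_like(w[:, :1]), w], dim=1)
    with torch.no_grad():
        a2, logp2 = pi.sample(s2, w)      # a' ~ pi(.|s', w), logp2: (B,)
        h = torch.zeros_like(r_tilde)     # entropy vector h^pi
        h[:, 0] = -alpha * logp2
        target = r_tilde + gamma * (Q_target(s2, a2, w) + h)   # eq. (6)
    err = (w_bar * (Q(s, a, w) - target)).sum(dim=1)           # scalarize
    return (err ** 2).mean()                                   # eq. (5)

def actor_loss(Q, pi, s, w, alpha):
    a, logp = pi.rsample(s, w)            # reparameterized action sample
    w_bar = torch.cat([torch.ones_like(w[:, :1]), w], dim=1)
    q = (w_bar * Q(s, a, w)).sum(dim=1)
    return (alpha * logp - q).mean()      # eq. (4)
```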
### Reward Vector Generation by Mutual Information

In DISCS, the agent learns diverse skills by maximizing \(I(S_{t};W)\), the mutual information between states and preferences. Due to the use of MOSAC, the agent also aims to maximize the policy entropy, \(\mathcal{H}(A_{t}|S_{t},W)\). Intuitively, maximizing \(I(S_{t};W)\) means going, as much as possible, to states characteristic of the preference. \(I(S_{t};W)\) can be expressed as \(\mathbb{E}[\log p(w|s_{t})-\log p(w)]\). We fix \(p(w)\) as the uniform distribution, so \(\log p(w)\) is constant and can be ignored. Hence, our objective is maximizing the expected sum of \(\log p(w|s_{t})-\alpha\log\pi(a_{t}|s_{t},w)\), and this value corresponds to a scalarized reward including the policy entropy. The expected sum of the scalarized rewards, including the expectation over the entire preference space, which is denoted as \(\eta(\pi)\), has the following lower bound: \[\eta(\pi)\geq\eta_{\phi}(\pi), \tag{7}\] where \(\eta(\pi)\) and \(\eta_{\phi}(\pi)\) are \[\mathbb{E}_{w,\pi_{w}}\left[\sum_{t=0}\gamma^{t}\log p(w|s_{t})-\alpha\log\pi(a_{t}|s_{t},w)\right],\text{ and} \tag{8}\] \[\mathbb{E}_{w,\pi_{w}}\left[\sum_{t=0}\gamma^{t}\log q_{\phi}(w|s_{t})-\alpha\log\pi(a_{t}|s_{t},w)\right], \tag{9}\] respectively, and \(\phi\) is a parameter vector. This inequality can be derived from the non-negativity of the KL divergence, \(\mathrm{D}_{\mathrm{KL}}(p(w|s_{t})||q_{\phi}(w|s_{t}))\geq 0\). Hereafter, \(q_{\phi}(w|s)\) is referred to as a discriminator. We aim to improve \(\eta_{\phi}(\pi)\) instead of \(\eta(\pi)\) by updating the policy, the Q-function, and the discriminator. Let us assume \(\phi^{\prime}\) is a parameter vector updated from \(\phi\) and the following inequality holds: \[\eta_{\phi^{\prime}}(\pi)-\eta_{\phi}(\pi)=\mathbb{E}\left[\sum_{t=0}\gamma^{t}\Delta(\log q(w|s_{t}))\right]\geq 0, \tag{10}\] where \(\Delta(\log q(w|s_{t}))=\log q_{\phi^{\prime}}(w|s_{t})-\log q_{\phi}(w|s_{t})\). Under a fixed \(\phi^{\prime}\), i.e., a fixed reward function, Theorem 1 (and Theorem 2) hold. Thus, updating the policy \(\pi\) to \(\pi^{\prime}\) by (3), the Q-values of any \((s,a,w)\) improve and the following inequalities are derived: \[\eta_{\phi^{\prime}}(\pi^{\prime})\geq\eta_{\phi^{\prime}}(\pi)\geq\eta_{\phi}(\pi). \tag{11}\] These mean that the \(\eta\) value can be improved monotonically under condition (10). Therefore, we update \(\phi\) to improve \(\eta_{\phi}(\pi)\). As for our discriminator, we use the von Mises-Fisher distribution (vMF), because our preferences are on a unit sphere (recall Section 2) and vMF is a common probability distribution defined there. vMF has two parameters, \(\mu\) and \(\kappa\), which are the mean direction and concentration parameters, respectively, and we let vMF\((\mu,\kappa)\) denote the vMF distribution with those parameters. More concretely, our discriminator is as follows: \[q_{\phi}(w|s)=C_{m}(\kappa_{\phi_{2}}(s))\exp(\kappa_{\phi_{2}}(s)w^{\top}\mu_{\phi_{1}}(s)), \tag{12}\] where \(\kappa_{\phi_{2}}(s)\) is a scalar value, \(\mu_{\phi_{1}}(s)\) is an \(m\)-dimensional unit vector, \(C_{m}(\kappa)=\frac{\kappa^{m/2-1}}{(2\pi)^{m/2}I_{m/2-1}(\kappa)}\) is a normalization constant, \(\pi\) is the ratio of a circle's circumference to its diameter, and \(I_{m/2-1}(\kappa)\) is the modified Bessel function of the first kind at order \(m/2-1\).
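The discriminator density (12) is simple to evaluate numerically. The sketch below (our own code, using SciPy's modified Bessel function) also previews, by finite differences, the reward-vector decomposition and the derivative identity for \(\log C_{m}(\kappa)\) stated next:

```python
# Sketch of the vMF log-density (12), the reward vector (14), and a
# finite-difference check of the gradient identity (13), using scipy.
import numpy as np
from scipy.special import iv

def log_C(m, kappa):                 # log of the vMF normalizer C_m(kappa)
    return ((m / 2 - 1) * np.log(kappa) - (m / 2) * np.log(2 * np.pi)
            - np.log(iv(m / 2 - 1, kappa)))

def log_vmf(w, mu, kappa):           # log q(w|s) = log C_m(k) + k w.mu
    return log_C(w.size, kappa) + kappa * (w @ mu)

m, kappa = 3, 5.0
mu = np.array([0.0, 0.0, 1.0])       # mean direction mu_phi1(s)
w = np.array([0.6, 0.0, 0.8])        # a unit-norm preference

r_tilde = np.concatenate(([log_C(m, kappa)], kappa * mu))   # eq. (14)
w_bar = np.concatenate(([1.0], w))
print(log_vmf(w, mu, kappa), w_bar @ r_tilde)   # identical values

eps = 1e-6                           # check eq. (13) by finite differences
fd = (log_C(m, kappa + eps) - log_C(m, kappa - eps)) / (2 * eps)
print(fd, -iv(m / 2, kappa) / iv(m / 2 - 1, kappa))         # should agree
```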
Note that \(\log C_{m}(\kappa)\) is partially differentiable with respect to \(\kappa\) as follows: \[\frac{\partial}{\partial\kappa}\log C_{m}(\kappa)=-\frac{I_{m/2}(\kappa)}{I_{m/2-1}(\kappa)} \tag{13}\] The equation above can be derived by using \(\frac{\partial}{\partial\kappa}I_{m/2-1}(\kappa)=\frac{m/2-1}{\kappa}I_{m/2-1}(\kappa)+I_{m/2}(\kappa)\). This means that gradients with respect to \(\phi_{2}\) can be backpropagated via Equation (13). Now that we have defined \(q_{\phi}\) as above, \(\log q_{\phi}(w|s)=\bar{w}^{\top}\tilde{r}_{\phi}\), where \(\tilde{r}_{\phi}\) is \[\kappa_{\phi_{2}}(s)\left(\frac{\log C_{m}(\kappa_{\phi_{2}}(s))}{\kappa_{\phi_{2}}(s)},\mu_{1,\phi_{1}}(s),\ldots,\mu_{m,\phi_{1}}(s)\right)^{\top}, \tag{14}\] where \(\mu_{i,\phi_{1}}\) is the \(i\)-th element of \(\mu_{\phi_{1}}\), and we use \(\tilde{r}_{\phi}\) as the reward vector for MOSAC. We use the following value as the loss of the discriminator, and update the parameter vector \(\phi\) to minimize it: \[\mathcal{L}_{\text{disc}}:=-\mathbb{E}_{(w,s_{t})\sim D}\left[\log q_{\phi}(w|s_{t})\right], \tag{15}\] where \(\mathbb{E}_{(w,s_{t})\sim D}\) denotes the expectation over \((w,s_{t})\) sampled from the replay buffer \(D\). The loss defined above differs from the theoretical analysis around inequality (10) in some respects. The theoretical analysis considered a discounted objective with \(\gamma\), but the implementation does not include the discount. Also, the theoretical analysis considered updating the discriminator for the state distribution defined by the most recent policy, but using only the recent data would reduce the amount of data available. In the implementation, the discriminator is updated by sampling from the entire replay buffer. Data in the replay buffer are collected by previous policies, which are generally different from the latest policy. These differences are examined in more detail in Section 5.4.

### Hindsight Preference Posterior Sampling

The discriminator can be optimized as described in the previous section. However, learning (infinitely) many diverse policies may need much more data than learning a single policy. We therefore also propose a method to generate artificial data to learn policies effectively, named _hindsight preference posterior sampling_ (HIPPS). Since our method is off-policy RL, it can learn from data that were not actually collected by the policy. Therefore, it is possible to make learning more efficient by adding data. HIPPS modifies the data in a hindsight manner, as in HER (Andrychowicz et al., 2017). In HIPPS, in addition to actual data stored in the replay buffer, \((w,s,a,s^{\prime})\), additional data \((w^{\prime},s,a,s^{\prime})\), where \(w^{\prime}\) is a generated preference, are used for learning. However, it may not be a good idea to train with arbitrary generated preferences. DISCS learns the policy and Q-function for each preference, as described in Section 3.1. However, learning to correctly approximate the Q-value for any \((w,s,a,s^{\prime})\) is impractical and difficult, e.g., in terms of computational cost. Therefore, if we choose \(w^{\prime}\) poorly, the critic loss, for example, may be high for \((w^{\prime},s,a,s^{\prime})\). As a result, the parameter update is affected by that loss, and the prediction of the Q-value for the actual data \((w,s,a,s^{\prime})\) may become poor. Thus, if we can choose a more plausible \(w^{\prime}\), our learning becomes more efficient.
Motivated by this, we propose to sample additional preferences \(w^{\prime}\) from the discriminator, i.e., the posterior \(q_{\phi}(w|s)\), and to use them in tuples \((w^{\prime},s,a,s^{\prime})\) for the training of the policy and Q-network. We sample additional preferences from the projected normal distribution (PN) (Mardia, 1975; Wang & Gelfand, 2013) instead of sampling from vMF. In general, sampling from vMF is difficult. To address this difficulty, several sampling methods, including rejection sampling, have been proposed (Ulrich, 1984; Kurz & Hanebeck, 2015). We apply PN to HIPPS for its tractability. PN is the probability distribution of \(Y=\frac{X}{||X||_{2}}\), where \(X\) is a random vector which follows the multivariate normal distribution \(\mathcal{N}(\mu,\Sigma)\); it is denoted as PN\((\mu,\Sigma)\). If \(X\) is sampled from the multivariate normal distribution \(\mathcal{N}(\mu,\frac{1}{\kappa}I)\), where \(I\) is the identity matrix, the distribution of \(Y\) is vMF\((\mu,a\kappa)\) under the condition \(||X||_{2}=a\) (Mardia, 1975). Moreover, PN and vMF converge to the uniform distribution and the delta function as \(\kappa\) approaches \(0\) and \(\infty\), respectively. In addition to the above properties, the similarity of the two distributions has been shown in experiments (Campbell et al., 2019). Thus we sample from PN\((\mu,\frac{1}{\kappa}I)\) instead of vMF. In Section 5, we confirm that the approximation by PN is reasonable enough in terms of experimental results.

Figure 2: Two types of environments, NoWall and U-Wall, in our experiments.

## 4 Related Work

Unsupervised RL methods based on mutual information were already introduced in Section 1. Among them, we discuss the differences between the most related methods, VISR and DIAYN, and DISCS. Other related methods are also briefly reviewed.

**VISR and DIAYN.** VISR and DIAYN have the same objective as DISCS, i.e., maximizing the mutual information between states and skills. While VISR is applied to Q-learning in the original paper (Hansen et al., 2020), VISR is applied to MOSAC in this paper. Apart from this difference, VISR can be seen as a special case of DISCS, where \(\kappa\) is \(1\) in the discriminator and HIPPS is not applied. In this case, \(\log C_{m}(\kappa)\) is constant, so VISR ignores it. By ignoring \(\kappa\) and \(\log C_{m}(\kappa)\), the output of the discriminator in VISR, \(\log q_{\text{VI}}(w|s)\), satisfies the following inequalities because of the L2-norm constraint: \(-1\leq\log q_{\text{VI}}(w|s)=w^{\top}\mu(s)\leq 1\). To learn more fine-grained skills, it is necessary to change the reward more finely according to the differences in the distribution of states induced by each skill, but this is difficult if \(\kappa\) is constant, i.e., if the concentration of the distributions generated by the discriminator is the same in all states. DIAYN can also be seen as a special case of DISCS, where the skill \(z\) is a discrete variable and reward vectors are not used. The discriminator's outputs, \(\log q_{\text{DI}}(z|s_{t})\), are used as its rewards. Also, HIPPS is not applied in DIAYN.

**Reward vectors.** The existing methods for MORL (Roijers et al., 2014; Mossalam et al., 2016; Xu et al., 2020; Cao and Zhan, 2021) and successor features (SF) (Barreto et al., 2017; Borsa et al., 2018; Hunt et al., 2019; Barreto et al., 2019; Zahavy et al., 2021) are related in terms of using reward vectors.
In conventional SF settings, the agent optimizes its policy under the condition that scalar rewards are given. The SF agent approximates the reward by \(w^{\top}\phi\), where \(w\) and \(\phi\) are a weight vector and a reward vector, respectively, and learns the policy that maximizes the total reward.

**Other viewpoints.** Our method is related to hierarchical RL (Barto and Mahadevan, 2003), although our skill is only chosen at the initial state. In addition, our method gradually changes rewards, which corresponds to gradually changing tasks. This is relevant to curriculum learning (Narvekar et al., 2020). Our method is also relevant to intrinsic motivation and curiosity (Schmidhuber, 2006; Bellemare et al., 2016), as the agent itself generates the reward. As for vMF, Kumar and Tsvetkov (2018) applied it to natural language generation tasks. As for the preference-conditional Q-function in our method, the Q-functions in Schaul et al. (2015) and Borsa et al. (2018) are similar to ours, although their studies are not about unsupervised RL.

Figure 4: Heatmaps in NoWall at 3 million timesteps in VISR and SAC, and at 5 million timesteps in the other methods.

Figure 3: Comparisons of learning curves in NoWall. Thin lines are actual data and thick lines are their averages.

## 5 Experiments

In this section, we mainly examine the following questions: 1) Why is it difficult for VISR to learn diverse skills? 2) Can diverse skills be learned efficiently by a discrete skill learning method, DIAYN? 3) How much can DISCS outperform these methods? and 4) How does HIPPS help learning in DISCS? We also examine different update methods for the discriminator in DISCS. We conducted experiments in the MuJoCo Ant robot control environments shown in Figure 2. In these environments, agents cannot get any rewards from the environment. We ran five trials with different random seeds. In the experiments, to evaluate how diverse the learned skills are, we discretize the x-y positions of agents in rollouts and show heatmaps of the positions. The episode length was set to 500 timesteps, and heatmaps were drawn for every 100 episodes, i.e., 0.05 million timesteps of data. In addition, to analyze the progress of diverse skill learning, we measure the number of discretized x-y positions whose visitation counts are positive (we refer to it as the number of occupied cells). Also, we analyze the discrimination loss, the critic loss, and the average of scalarized rewards in the batch data, excluding the policy entropy bonus. In our experiments, all discriminators are trained with an "x-y prior", which means that the inputs of the discriminators are x-y positions instead of states. In general, the state space is large, so without the x-y prior it is difficult to learn skills that are diverse in terms of x-y positions. In fact, in the experiments in DIAYN and DADS (Eysenbach et al., 2018; Sharma et al., 2019), the agents could not learn diverse skills in terms of x-y position without it.

### Difficulties of VISR and DIAYN

We analyze why VISR does not work in the NoWall environment and examine whether a method for learning a large number of discrete skills, e.g., DIAYN, can be a substitute for a method for learning continuous skills, e.g., DISCS. We compare VISR and DIAYN, in which the numbers of discrete skills are \(10\) and \(40\), with DISCS without HIPPS (NoHIPPS), because of the similarities mentioned in Section 4. In addition, we show the performance of SAC, where there is no reward other than the entropy of the policy. The results are shown in Figure 3.
The number of occupied cells of VISR was slightly larger than that of SAC. Although one of the trials of NoHIPPS failed to learn, the other trials show that much more diverse skills were learned than with VISR. Also, the heatmaps (Figure 4) showed that the area covered by VISR was much smaller than that of NoHIPPS. VISR was stable in terms of critic loss, except for the last 1 million timesteps. From this, it appears that the critic learning is fine. We can see that the discrimination loss of VISR decreased to around \(-1\), which means the outputs of the discriminator are near their minimum value (recall Section 4). What this means is that in order to learn more different skills, it is necessary to identify small differences in the rewards (i.e., the output of the discriminator) and learn a policy that reflects those differences. In the same way, the discriminator also needs to reflect the differences in the state distribution defined by the policy for each skill. For those reasons, it is quite difficult to learn diverse skills by VISR. On the other hand, the discrimination loss in NoHIPPS was decreasing and much larger than its minimum value, which was \(-\infty\) in theory (\(-6\log 10\) in our implementation). The number of occupied cells in DIAYN10 was much larger than in VISR, but its sample efficiency was worse than that of NoHIPPS. The number of occupied cells of DIAYN40 remained almost the same throughout the trials. The discrimination loss of DIAYN40 finally began to decrease after around 5 million timesteps, which indicates that DIAYN40 was starting to learn diverse skills. These results show that the learning in DIAYN needs more samples when the number of skills is increased, while that is not the case in DISCS. ### HIPPS We examine whether DISCS learns efficiently with more data via HIPPS in the NoWall environment. We also examine why the posterior is important in HIPPS by comparing it to the case where we sample from the prior rather than the posterior. In addition, we compare DISCS with HIPPS to NoHIPPS with a large batch and show that simply increasing the batch size is not helpful. Note that DISCS with HIPPS uses a larger batch than NoHIPPS owing to its additional preferences. The results are shown in Figure 5. Figure 5: Comparisons of learning curves in NoWall. Thin lines are actual data and thick lines are the averages of them. Batchx4 in the figure means simply quadrupling the batch size without using HIPPS. HIPPS4/HIPPS8 means that HIPPS with 3/7 additional preferences is applied for each tuple (and the total batch size is increased by 4/8 times). Prior4 is a variant of HIPPS4 where the prior is used for the preference sampling instead of the posterior. The numbers of occupied cells of HIPPS4,8 and NoHIPPS were larger than those of the other methods. In particular, HIPPS4,8 showed low critic losses and high numbers of occupied cells in all trials. The critic loss of Batchx4 was huge, which may be due to overtraining on the same data and may be one of the reasons for the failure in learning by Batchx4. In Prior4, the number of occupied cells started to decrease from about 1 million timesteps, and the critic loss started to increase from about 0.5 million timesteps. The discriminator loss of Prior4 decreased, which suggests that the state distribution changed with each preference, that the discriminator was able to correctly discriminate between the state distributions, and that the distribution of the discriminator became peaky.
On the other hand, the average reward of Prior4 decreased, which indicates that preferences with lower probability in terms of the distribution of the discriminator were sampled. These results support the claim made in Section 5.2 that sampling less plausible (in terms of the distribution of the posterior) preferences from the prior increases the loss of critics and that it has negative effects on the learning. ### Comparisons in Environment with Obstacle When dealing with complex problems such as controlling ant robots in unsupervised RL, comparisons have been made mainly in tasks without obstacles. In this work, we investigate how well the robot can bypass obstacles by using U-Wall in Figure 2. We compare the results of the existing methods, DIAYN and VISR, with DISCS. As confirmed in the results in Section 5.1, DIAYN learns more slowly when the number of skills is increased. In this comparison, the number of skills in DIAYN was set to \(10\). The learning curves for NoHIPPS and DIAYN were measured up to 5 million timesteps, while those for the other methods were measured up to 3 million timesteps. The results are shown in Figure 6. Figure 6: Comparisons of learning curves in U-Wall. Thin lines are actual data and thick lines are the averages of them. The number of occupied cells of DISCS increased quickly. In particular, DISCS with HIPPS showed better results than the other methods, while the number of occupied cells of NoHIPPS decreased from around 3 million timesteps and its critic loss increased. For DIAYN and VISR, the critic loss also increased during the skill learning process. As for VISR, the number of occupied cells decreased after 1 million timesteps. These results show that skill learning in U-Wall tends to be unstable and difficult. On the other hand, the critic loss was stable in DISCS with HIPPS. For a more detailed analysis, instead of heatmaps, we show trajectories when \(100\) different generated skills were executed in Figure 7 (heatmaps are shown in Section A). Figure 7: Trajectories in U-Wall at 5 million timesteps in DIAYN and 3 million timesteps in HIPPS8, NoHIPPS, and VISR. Trajectories of VISR at 1 million timesteps are also shown. The execution of skills was deterministic, i.e., the action with the highest probability under the skill-conditional policy was executed. In the DIAYN case, \(10\) different skills were executed \(10\) times each. As for VISR, trajectories at 1 million timesteps (before the number of occupied cells of VISR started to decrease) are also shown. The trajectories of VISR at 1 million timesteps showed diverse behaviors, although their covered area was limited. As for the trajectories of DIAYN, the same skills showed almost the same trajectories and their diversity was limited. Compared with them, the results of DISCS showed that it learned a variety of skills. ### Detailed Analysis of Discriminator Updates We updated \(\phi\) in a way that minimizes (15). This deviates from the theoretical analysis in Section 3.2 in the following aspects. 1) All data in the replay buffer are sampled for the update. 2) It does not consider the discount of the reward by \(\gamma\). We examine these deviations. With respect to the first point, we examine the performance when the data used to update the discriminator are limited to the latest data (Recent).
From the theoretical analysis in Section 3.2, it is ideal to update the discriminator with the latest policy data to increase its value, but on the other hand, the more we limit the data to the most recent, the less data we can use. As the latest data, we sampled from the most recent 0.1 million timesteps of data. In addition, with respect to the second point, we examine a variant of the discriminator update where the rewards in the discrimination loss are discounted by \(\gamma\) (Gamma). Even if the deviation of the first point is ignored and the data are assumed to be the latest, the estimate of \(\eta_{\phi}(\pi)\) is biased because the reward is not discounted by \(\gamma\), as discussed by Thomas (2014). To consider the discount of the reward, we also keep timesteps \(t\) in the replay buffer, and use \(-\gamma^{t}\log q_{\phi}(w|s_{t})\) as the loss for the sampled \((w,s_{t},t)\). The results are shown in Figure 8. Figure 8: Comparisons of learning curves in NoWall. Thin lines are actual data and thick lines are the averages of them. The critic loss was more likely to increase when using only recent data than when using the entire replay buffer. One possible explanation for these results is catastrophic forgetting, where the learned relationships between inputs and outputs of the neural networks are forgotten and cannot be reused, so the output of the discriminator is not stable. The method in Abels et al. (2019), where mainly the latest data but also some older data are used for training, may alleviate the catastrophic forgetting. For simplicity, however, we sampled from the entire replay buffer to train the discriminator. In terms of the number of occupied cells, the performance of Gamma was almost the same as that of NoHIPPS. Although the estimation is biased, in our discriminator updates we ignored the discount by \(\gamma\) in (15), because it was also ignored in the discriminator losses in VISR and DIAYN and the performances of Gamma and NoHIPPS were almost the same. ## 6 Conclusion In this paper, we proposed DISCS, an unsupervised RL method for learning skills, and HIPPS, a method for effective training in DISCS. DISCS is different from most of the existing methods in that it has a clear correspondence with reward, it is a continuous skill learning method, and it uses HIPPS. We conducted experiments in the MuJoCo Ant robot control environment with continuous actions and analyzed the process of unsupervised learning. Through the analysis of the experiments, we showed that the existing method, VISR, has difficulty learning diverse skills due to the low expressive power of the discriminator, and that increasing the expressive power of the discriminator, as in DISCS, is important. In addition, through the analysis of DIAYN, we showed that the learning became slower when the number of skills in DIAYN was increased. This indicates that learning many discrete skills does not substitute for learning continuous skills. Moreover, we examined DISCS with and without HIPPS and showed that HIPPS contributed to efficient and stable learning of skills in DISCS.
2303.16836
Wall-crossing of universal Brill-Noether classes
We give an explicit graph formula, in terms of decorated boundary strata classes, for the wall-crossing of universal Brill-Noether classes. More precisely, fix $n>0$ and $d<g$ , and two stability conditions $\phi^-, \phi^+$ for degree~$d$ compactified universal (over $\overline{\mathcal{M}}_{g,n}$) Jacobians that lie on opposite sides of a stability hyperplane. Our main result is a formula for the difference between the Brill-Noether classes, compared via the pullback along the (rational) identity map $\mathsf{Id} \colon \overline{\mathcal{J}}^d_{g,n} (\phi^+) \dashrightarrow \overline{\mathcal{J}}^d_{g,n} (\phi^-)$. The calculation involves constructing a resolution of the identity map by means of subsequent blow-ups.
Alex Abreu, Nicola Pagani
2023-03-29T16:32:13Z
http://arxiv.org/abs/2303.16836v1
# Wall-crossing of universal Brill-Noether classes ###### Abstract. We give an explicit graph formula, in terms of decorated boundary strata classes, for the wall-crossing of universal Brill-Noether classes. More precisely, fix \(n>0\) and \(d<g\), and two stability conditions \(\phi^{-},\phi^{+}\) for degree \(d\) compactified universal (over \(\overline{\mathcal{M}}_{g,n}\)) Jacobians that lie on opposite sides of a stability hyperplane. Our main result is a formula for the difference between \(\mathsf{w}_{d}(\phi^{+})\) and the pullback of \(\mathsf{w}_{d}(\phi^{-})\) along the (rational) identity map \(\mathsf{Id}\colon\overline{\mathcal{J}}_{g,n}^{d}(\phi^{+})\dashrightarrow \overline{\mathcal{J}}_{g,n}^{d}(\phi^{-})\). The calculation involves constructing a resolution of the identity map by means of subsequent blow-ups. ###### Contents * 1 Introduction * 1.a Related work * 1.b Acknowledgments * 2 Notation and preliminaries * 2.a Posets * 2.b Graphs * 2.c Families of curves and sheaves * 2.d Moduli spaces and graphs * 3 Compactified Jacobians and Universal Brill-Noether Classes * 3.a The universal stability space * 3.b The stability hyperplanes * 3.c Compactified Jacobians, universal and semistable family * 3.d Brill-Noether classes * 4 Normal crossing stratification categories and blowups * 4.a Categories of resolved strata for a normal crossing stratification * 4.b Normal crossing stratifications * 4.c Intersection theory formulas * 4.d Blow-up * 5 Combinatorial aspects of Wall-Crossing * 5.a Extremal sets, vine functions and full forests * 5.b The stratification categories * 5.c The case of "good" hyperplanes * 6 Nonsingular resolution of the identity * 7 Wall-Crossing Formulas * 7.a The case of disjoint blowups * 7.b Wall-crossing in low codimension * 7.c Pullbacks via Abel-Jacobi sections ## 1. Introduction The Brill-Noether theory of line bundles on nonsingular algebraic curves is a classical pillar of nineteenth-century algebraic geometry, which has been rediscovered and reused to prove important contemporary results. Broadly speaking, the theory is about studying the space of line bundles of a fixed degree having a fixed number of linearly independent global sections (see [1] and references therein for a survey of the classical results). For fixed integers \(g,n\) (we will assume, for uniformity of notation, that \(g\geq 2\) and \(n\geq 1\)) and \(d\), there exists a universal Jacobian \(\mathcal{J}^{d}_{g,n}\to\mathcal{M}_{g,n}\), a moduli space that parameterizes isomorphism classes of degree \(d\) line bundles over smooth, \(n\)-pointed curves of genus \(g\). From now on we assume \(d<g\) and define the universal Brill-Noether class \(\mathsf{w}_{d}\) as the fundamental class in \(\mathcal{J}^{d}_{g,n}\) of the locus \(\mathsf{W}_{d}\) of line bundles that admit a nonzero global section. This locus has fiberwise codimension \(g-d\) over \(\mathcal{M}_{g,n}\) and it is empty for \(d<0\). In this paper we study extensions of this class to different compactifications of the universal Jacobian. The moduli space \(\mathcal{M}_{g,n}\) admits a natural, modular and well-studied compactification \(\overline{\mathcal{M}}_{g,n}\) obtained by adding (Deligne-Mumford) _stable_ pointed curves. On the other hand, there are several natural compactifications of \(\mathcal{J}^{d}_{g,n}\) over \(\overline{\mathcal{M}}_{g,n}\). In the words of Oda-Seshadri [1], this should not be seen as a drawback of the theory, but rather a merit.
In [13] Kass-Pagani constructed an affine space of stability conditions \(V^{d}_{g,n}\) with an explicit hyperplane arrangement, with the property that every \(\phi\in V^{d}_{g,n}\) produces a compactification \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\) of the universal Jacobian, with good properties (it is a nonsingular DM stack) when \(\phi\) is not on a hyperplane. This space comes with a natural origin -- a canonical stability -- and so far most of the attention has been devoted to compactified Jacobians corresponding to this particular value (or to its perturbations when the latter belongs to some hyperplanes), see [10], [12]. In this paper we study how the Brill-Noether classes, suitably extended to classes \(\mathsf{w}_{d}(\phi)\) on \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\), vary in \(\phi\). What we mean by this is the following: for different stability conditions \(\phi_{1},\phi_{2}\), the identity on the common open set \(\mathcal{J}^{d}_{g,n}\) of line bundles on smooth curves defines a rational map \[\mathsf{Id}\colon\overline{\mathcal{J}}^{d}_{g,n}(\phi_{1})\dashrightarrow \overline{\mathcal{J}}^{d}_{g,n}(\phi_{2}),\] and we can then compute the difference \(\mathsf{w}_{d}(\phi_{2})-\mathsf{Id}^{*}\mathsf{w}_{d}(\phi_{1})\). By "compute", we mean produce an explicit "graph formula", as in the case of tautological classes on the moduli space of curves \(\overline{\mathcal{M}}_{g,n}\), which can all be expressed as linear combinations of "decorated boundary strata classes" (see [10]). While an established theory of a tautological ring for \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\) is not yet available (a large literature exists for the case of a single curve; the case of the universal moduli space has recently been the subject of important results in [11], [12], [13]), there are several natural classes on each compactified universal Jacobian, and "decorated boundary strata classes", supported on the boundary of \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\), may be defined in complete analogy with the case of \(\overline{\mathcal{M}}_{g,n}\). In fact, an important underlying motivation for our work is to develop a categorical and wall-crossing framework for a theory of tautological classes over compactified universal Jacobians. We now discuss what we mean by "a suitable extension" for the class \(\mathsf{w}_{d}(\phi)\). One possible approach is to take the Zariski closure, but this is very hard to control, and it does not have good formal properties (for example, it does not commute with base change). Another approach is to consider sheaves in \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\) that admit a nonzero global section, but that locus is, in general, not of the expected dimension and not equidimensional. Our extension instead is by means of the Thom-Porteous formula. By virtue of its universal property, there is a tautological (or Poincaré) sheaf \(\mathcal{L}_{\text{tau}}(\phi)\) on the universal curve \(\pi\colon\overline{\mathcal{C}}_{g,n}\to\overline{\mathcal{J}}^{d}_{g,n}(\phi)\). We define the extension as the degeneracy class \[\mathsf{w}_{d}(\phi):=c_{g-d}(-R^{\bullet}\pi_{*}\mathcal{L}_{\text{tau}}(\phi )), \tag{1.1}\] as in [14, Chapter 14]. By the Thom-Porteous formula (see _loc.cit._), the restriction of \(\mathsf{w}_{d}(\phi)\) to \(\mathcal{J}^{d}_{g,n}\) equals the (Poincaré dual of the) original Brill-Noether class \(\mathsf{w}_{d}\). We compare (1.1) with the class of the Zariski closure in Proposition 3.38.
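To orient the reader with a special case (a classical one, which we state only for illustration): when \(d=g-1\) the codimension is \(g-d=1\) and (1.1) becomes \[\mathsf{w}_{g-1}(\phi)=c_{1}(-R^{\bullet}\pi_{*}\mathcal{L}_{\text{tau}}(\phi)),\] a divisor class whose restriction to the open locus \(\mathcal{J}^{g-1}_{g,n}\) is the class of the universal theta divisor \(\{(C,p_{1},\ldots,p_{n},L)\,:\,h^{0}(C,L)>0\}\).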
The class (1.1) is supported on the universal Brill-Noether locus, but in general the latter does not have the expected codimension, hence its fundamental class does not coincide with (1.1) (more details in Proposition 4.18). The class (1.1) is the formal analogue of the \(\lambda_{g-d}\) class on \(\overline{\mathcal{M}}_{g,n}\) (the Hodge bundle \(R^{\bullet}\pi_{*}(\omega_{\pi})\) being replaced by \(-R^{\bullet}\pi_{*}\mathcal{L}\)). Given the important role that the \(\lambda\)-classes have played in the enumerative geometry of curves / intersection theory for moduli of curves, it is legitimate to expect that the same will be true of \(\mathsf{w}_{d}(\phi)\). In this paper we assume that \(\phi^{+}\) and \(\phi^{-}\) are on opposite sides of a stability hyperplane (Definition 5.1), and we give an explicit graph formula for the difference \[\mathsf{w}_{d}(\phi^{+})-\mathsf{Id}^{*}\mathsf{w}_{d}(\phi^{-}).\] In order to achieve this, we first produce a nonsingular resolution of the identity by an explicit sequence of blow-ups of \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\). We use this resolution to give, in Theorem 7.4, an explicit and closed graph formula for the difference \(p^{*}(\mathsf{w}_{d}(\phi^{+}))-p^{*}_{-}(\mathsf{w}_{d}(\phi^{-}))\) in the cohomology of \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\), where \(p\) and \(p_{-}\) denote the morphisms induced by the resolution to \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\) and to \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{-})\) respectively. Finally, we calculate the push-forward of that formula via \(p\) to write a formula (again a graph formula, explicit and closed) for the difference \(\mathsf{w}_{d}(\phi^{+})-\mathsf{Id}^{*}\mathsf{w}_{d}(\phi^{-})\). Our construction of \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) and our formulas are complicated by the fact that, for some of the hyperplanes, the locus where the identity is undefined fails to be irreducible. In those cases, the space \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) is constructed as an explicit sequence of blow-ups along centers that have transversal self-intersection, and this construction plays an important part in our paper. In this introduction we describe the particular case of our construction and formula when the indeterminacy locus is irreducible (this occurs in many cases, and in some sense in most cases as long as \(n>1\)). Then the indeterminacy locus \(\mathcal{J}^{\prime}_{\beta}\subset\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\) generically parameterizes curves with \(2\) nonsingular components of genus, say, \(g_{X}\) and \(g_{Y}\), carrying markings \(S\) and \(S^{c}\), and joined at a certain number of nodes, say \(t\), together with line bundles of some fixed bidegree, say, \((d-d_{Y},d_{Y})\). The locus \(\mathcal{J}^{\prime}_{\beta}\) can be parameterized by a "resolved stratum" \[f_{\beta}\colon\mathcal{J}_{\beta}\to\mathcal{J}^{\prime}_{\beta}\hookrightarrow \overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\] (which we simply call "a stratum" in the main body of the paper), where the \(t\) nodes are parameterized: a general point of \(\mathcal{J}_{\beta}\) is a triple of a \((|S|+t)\)-pointed curve of genus \(g_{X}\), a \((|S^{c}|+t)\)-pointed curve of genus \(g_{Y}\), and a line bundle of bidegree \((d-d_{Y},d_{Y})\). The conormal bundle to \(f_{\beta}\) has rank \(t\) and it splits as a direct sum of line bundles, whose first Chern classes we call \(\Psi_{1},\ldots,\Psi_{t}\) (see Remark 7.30 for more details on how these relate to the "classical" \(\psi\)-classes in \(\overline{\mathcal{M}}_{g,n}\)).
The base change to \(\mathcal{J}_{\beta}\) of the universal family \(\pi\colon\overline{\mathcal{C}}_{g,n}\to\overline{\mathcal{J}}^{d}_{g,n}(\phi)\) consists of two irreducible components, say \(\pi_{X}\colon X\to\mathcal{J}_{\beta}\) and \(\pi_{Y}\colon Y\to\mathcal{J}_{\beta}\), of genus \(g_{X}\) and \(g_{Y}\) respectively, each carrying a tautological sheaf \(L_{X}\) and \(L_{Y}\) (obtained by pulling back \(\mathcal{L}_{\rm tau}\)). Our main result for the difference \(\mathsf{w}_{d}(\phi^{+})-\mathsf{Id}^{*}\mathsf{w}_{d}(\phi^{-})\), in this special case (Corollary 7.33 with \(m=1\)), is an explicit closed graph formula: a sum of push-forwards along \(f_{\beta}\) of monomials in the classes \(\Psi_{1},\ldots,\Psi_{t}\) and the Chern classes of \(-R^{\bullet}\pi_{X*}(L_{X})\) and \(-R^{\bullet}\pi_{Y*}(L_{Y})\). In **Section 5** we discuss the combinatorial aspects that arise from a wall-crossing situation where the stability conditions \(\phi^{\pm}\) are on opposite sides of a given stability hyperplane (Definition 5.1). Our paper is concerned with the case of rank 1 sheaves on nodal curves, and the combinatorics of Section 5 should be the shadow of a theory for higher dimension and rank. The central definition is, for each graph \(G\), divisor \(D\) on \(G\), and choice of stability conditions \(\phi^{\pm}\) on opposite sides of a hyperplane, that of a poset \(\operatorname{Ext}(G,D)\) of "extremal" subsets of the vertices of \(G\). **Section 6** gives the construction of the resolution \(\widetilde{\mathcal{J}}_{g,n}^{d}(\phi^{+},\phi^{-})\). Finally, in **Section 7** we employ intersection theory techniques and calculate the wall-crossing term. At the end of Section 7 we explain how the pullback of the wall-crossing term via an Abel-Jacobi section can be explicitly calculated in terms of decorated boundary strata classes in \(\overline{\mathcal{M}}_{g,n}\) by employing the main result of [10]. Along the way, we produce two results that we believe are of independent interest. The first is Theorem 3.29, where we interpret the universal quasistable family over \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) (also known in the literature as Caporaso's family from [1], see also [12] and [1]) as a fine compactified universal Jacobian \(\overline{\mathcal{J}}_{g,n+1}^{d}(\phi^{\prime})\) with one extra point. Secondly, as part of Proposition 3.38, we describe the collection of stability conditions for \(d<0\) such that \(\mathsf{w}_{d}(\phi)=0\).
One can choose a suitable Abel-Jacobi section \(\sigma\colon\overline{\mathcal{M}}_{g,n}\to\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) and obtain a zero class \(\sigma^{*}\mathsf{w}_{d}(\phi)\). A different formula for the latter as a linear combination of standard tautological classes was given in [10] by means of the GRR formula. This gives tautological relations in \(\overline{\mathcal{M}}_{g,n}\) (see Remark 3.42 for the details). Note that these relations are in degree larger than \(g\) (the degree is \(g-d\) for negative \(d\)), the same range as Pixton's double ramification relations (proven by Clader-Janda in [1]). ### Related work An important motivation that we have not mentioned in the above discussion is its relation to the (possibly twisted) double ramification cycle. For a review of the latter and related literature, we refer the reader to [1] and [15, Section 1.1]. We refer to [10, Section 3.3] for how the double ramification cycle relates to the Brill-Noether classes discussed here. As pointed out in _loc.cit._, the theory on how these classes are extended to the boundary and then pulled back to \(\overline{\mathcal{M}}_{g,n}\) via some Abel-Jacobi section is trivial for nodal curves of compact type (i.e. the moduli space of multidegree zero line bundles is compact) and for curves with 1 node, and the complement of the locus of all such curves is generically parameterized by vine curves. In [1] the authors discuss the theory of a "universal double ramification cycle" as an operational class of degree \(g\) in the Artin stack of families of line bundles on families of nodal curves, which corresponds to "universally intersecting with the closure of the zero section". Our extensions (1.1) could also be described in that language, and in fact the construction of an operational class would avoid a lot of technical difficulties owing to the fact that the classes (1.1) obviously commute with base change. In this paper we do not discuss a modular description of our resolution \(\widetilde{\mathcal{J}}_{g,n}^{d}(\phi^{+},\phi^{-})\). We expect that one such description should be possible following the recent work [14] by Molcho. The same author has also recently proved in [14] that the pull-back of the Brill-Noether classes \(\mathsf{w}_{d}(\phi)\) via all Abel-Jacobi _rational_ sections is tautological in \(\overline{\mathcal{M}}_{g,n}\) (this was conjectured in [13, Section 4.1]). ### Acknowledgments To be added after the refereeing process. ## 2. Notation and preliminaries ### Posets In this paper we will work with many posets (typically, the one underlying some category of stratifications, and some of its subposets). Here we collect the relevant notation. **Definition 2.1**.: Let \(P\) be a finite partially ordered set (or a poset). A subset \(C\) of \(P\) is called a _chain_, if the partial order on \(C\) induced by \(P\) is a total order on \(C\). A poset is _ranked_ if for every element \(a\), all maximal chains having \(a\) as the largest element have the same length (called the _rank_ of \(a\)). The poset \(P\) is called a _forest_, if for every \(a\in P\) the lower set \(\{b\leq a\}\) is a chain. More generally, we say that a subset \(F\subseteq P\) is a forest if \(F\) together with the partial order induced by \(P\) is a forest. If \(a>b\) and there exists no \(c\) such that \(a>c>b\), then we say that \(a\)_covers_\(b\), and write \(a\gtrdot b\).
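For readers who prefer an algorithmic phrasing of Definition 2.1, here is a minimal sketch (our own illustration; a poset is encoded by its ground set together with a comparison function `leq`):

```python
from itertools import combinations

def is_chain(elements, leq):
    """A subset is a chain if the induced order is total."""
    return all(leq(a, b) or leq(b, a) for a, b in combinations(elements, 2))

def is_forest(poset, leq):
    """A poset is a forest if every lower set {b : b <= a} is a chain."""
    return all(is_chain([b for b in poset if leq(b, a)], leq) for a in poset)

def covers(a, b, poset, leq):
    """a covers b if a > b and no c satisfies a > c > b."""
    strictly_between = [c for c in poset
                        if leq(b, c) and leq(c, a) and c not in (a, b)]
    return leq(b, a) and a != b and not strictly_between
```

For example, with `leq = lambda a, b: b % a == 0` (divisibility), the poset `[1, 2, 3, 6]` is not a forest, since the lower set of `6` contains the incomparable pair `2, 3`.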
### Graphs By a _graph_ we mean a finite, connected, undirected multigraph, decorated with a genus function and markings (see for example [15, Section 3.1] and [16, Section 2.1] for a precise definition). If \(G\) is a graph, we write \(V(G)\) for its set of vertices and \(E(G)\) for its set of edges; we write \(g\colon V(G)\to\mathbb{N}\) for the genus function and \(\operatorname{leg}\colon\{1,\dots,n\}\to V(G)\) for the markings function. If \(S\subseteq V(G)\), we write \(G(S)\) for the complete subgraph of \(G\) on the vertices \(S\), and say that \(G(S)\) is the subgraph of \(G\) induced by \(S\). Given \(V_{1},V_{2}\subseteq V(G)\), we write \(E(V_{1},V_{2})\) for the edges that have one endpoint in \(V_{1}\) and another in \(V_{2}\) (if the edge is a loop, we include it if and only if its adjacent vertex is in both \(V_{1}\) and \(V_{2}\)). If \(G\) is a graph and \(E\subseteq E(G)\), we denote by \(G^{E}\) the graph obtained from \(G\) by adding exactly \(1\) vertex, denoted \(v_{e}\), in the "interior" of each edge \(e\in E\). We call each such \(v_{e}\) an _exceptional vertex_ of \(G^{E}\). A graph \(G\) is _stable_ if \[2g(v)-2+|E(\{v\},\{v\}^{c})|+|\operatorname{leg}^{-1}(v)|>0\] for every vertex \(v\in V(G)\). ### Families of curves and sheaves A _nodal curve_\(C\) is a reduced and connected projective scheme of dimension \(1\) over some fixed algebraically closed field, with singularities that are at worst ordinary double points. The (arithmetic) _genus_ of \(C\) is \(p_{a}(C)=h^{1}(C,\mathcal{O}_{C})\). A _subcurve_\(X\) of \(C\) is a connected union of irreducible components of \(C\). Its complement \(X^{c}\) is the union of the other components of \(C\). An _\(n\)-pointed curve_ is a tuple \((C,p_{1},\ldots,p_{n})\) where \(C\) is a nodal curve, and \(p_{1},\ldots,p_{n}\) are pairwise distinct nonsingular points of \(C\). Its _dual graph_\(G(C)\) has the irreducible components of \(C\) as vertices, the nodes of \(C\) as edges, and the geometric genus (resp. the marked points) of each component as the genus (resp. the markings) decoration. A morphism \(f\colon C^{\prime}\to C\) of nodal curves is _a semistable modification_ if it is obtained by contracting some subcurves, not necessarily irreducible, \(E\subset C^{\prime}\) such that \(g(E)=0\) and \(|E\cap E^{c}|=2\). Every subcurve \(E\subset C^{\prime}\) contracted by \(f\) is called _an exceptional curve of \(f\)_. A semistable modification such that every exceptional curve is irreducible is called a _quasistable modification_. A coherent sheaf on a nodal curve \(C\) has _rank_\(1\) if its localization at each generic point of \(C\) has length \(1\). It is _torsion-free_ if it has no embedded components. If the stalk of a torsion-free sheaf \(F\) over \(C\) fails to be locally free at a point \(P\in C\), which must necessarily be a node, we will say that \(F\) is _singular_ at \(P\). If \(F\) is a rank \(1\) torsion-free sheaf on \(C\) we say that \(F\) is _simple_ if its automorphism group is \(\mathbb{G}_{m}\) or, equivalently, if removing from \(C\) the singular points of \(F\) does not disconnect \(C\). A _family of nodal curves_ over a \(\mathbb{C}\)-scheme \(S\) is a proper and flat morphism \(\mathcal{C}\to S\) whose fibers are nodal curves. (Throughout, all families \(\mathcal{C}/S\) will admit a distinguished section in the \(S\)-smooth locus of \(\mathcal{C}\)). A _semistable (resp.
a quasistable) modification_ of the family \(\mathcal{C}/S\) is another family \(\mathcal{C}^{\prime}/S\) with an \(S\)-morphism \(f\colon\mathcal{C}^{\prime}\to\mathcal{C}\) that is a semistable (resp. a quasistable) modification (as defined above) on all geometric points \(s\in S\). If \(T\) is an \(S\)-scheme, a _family of rank \(1\) torsion-free simple sheaves_ parameterized by \(T\) over a family of curves \(\mathcal{C}\to S\) is a coherent sheaf \(F\) of rank \(1\) on \(\mathcal{C}\times_{S}T\), flat over \(T\), whose fibers over the geometric points are torsion-free and simple. If \(F\) is a rank \(1\) torsion-free sheaf on a nodal curve \(C\), the _(total) degree_ of \(F\) is \(\deg_{C}(F):=\chi(F)-1+p_{a}(C)\). If \(X\subseteq C\) is a subcurve, we denote by \(F_{X}\) the maximal torsion-free quotient of \(F\otimes\mathcal{O}_{X}\). The total degree and the degrees of \(F_{X}\) and \(F_{X^{c}}\) are related by the formula \[\deg_{C}(F)=\deg_{X}F+\deg_{X^{c}}F+\delta_{X\cap X^{c}}(F), \tag{2.2}\] where \(\delta_{S}(F)\) is the number of points in \(S\) where the stalk of \(F\) fails to be locally free. A line bundle \(F^{\prime}\) on a semistable modification \(f\colon C^{\prime}\to C\) is called _positively admissible_ (see [1]) if \(\deg_{E}(F^{\prime})\) is either \(0\) or \(1\) on every exceptional subcurve of \(f\). The following results follow from [1, Section 5]. **Proposition 2.3**.: _Let \(\pi\colon\mathcal{C}\to S\) and \(\pi^{\prime}\colon\mathcal{C}^{\prime}\to S\) be families of nodal curves and \(f\colon\mathcal{C}^{\prime}\to\mathcal{C}\) be a semistable modification. Let \(F^{\prime}\) be a positively admissible sheaf on \(\mathcal{C}^{\prime}\) and set \(F=f_{*}(F^{\prime})\)._ 1. _The sheaf_ \(F\) _is a torsion free rank-_\(1\) _sheaf and_ \(R^{1}f_{*}(F^{\prime})=0\)_, in particular_ \(f_{*}(F^{\prime})\) _commutes with base change. Moreover, we have that_ \(R^{\bullet}\pi_{*}(F)=R^{\bullet}\pi_{*}^{\prime}(F^{\prime})\)_._ 2. _The sheaf_ \(f_{*}(F^{\prime})\) _is invertible if and only if_ \(\deg_{E}(F^{\prime})=0\) _on every exceptional subcurve of_ \(f\)_. Moreover, in this case,_ \(F^{\prime}=f^{*}f_{*}(F^{\prime})\)_._ 3. _If_ \(f\) _is a quasistable modification and_ \(\deg_{E}(F^{\prime})=1\) _for every exceptional subcurve, then_ \(\mathcal{C}^{\prime}=\mathbb{P}_{\mathcal{C}}(F^{\vee})\) _and_ \(F^{\prime}\) _is isomorphic to the tautological line bundle_ \(\mathcal{O}_{\mathbb{P}_{\mathcal{C}}(F^{\vee})}(1)\)_._ 4. _More generally, we have that_ \(f\) _factors as_ \(\mathcal{C}^{\prime}\xrightarrow{g}\mathbb{P}_{\mathcal{C}}(F^{\vee})\to\mathcal{C}\)_, and_ \(\mathcal{O}(1)\cong g_{*}(F^{\prime})\) _and_ \(F^{\prime}\cong g^{*}(\mathcal{O}(1))\)_._ In particular, we have the following. **Corollary 2.4**.: _Let \(\mathcal{C}\to S\) be a family of nodal curves. Taking the direct image under the quasistable modification gives a bijection between isomorphism classes of positively admissible line bundles on quasistable modifications of \(\mathcal{C}/S\), and isomorphism classes of families of rank \(1\) torsion free sheaves on \(\mathcal{C}\)._ We now define the multidegree of a sheaf on a nodal curve as the multidegree of the unique positively admissible line bundle as in the above corollary. A degree \(d\)_pseudodivisor_ on a graph \(G\) is a pair \((E,D)\) where \(E\subseteq E(G)\) and \(D\in\operatorname{Div}^{d}(G^{E})\) satisfies \(D(v^{\prime})=1\) for each exceptional vertex \(v^{\prime}\). When \(E\) is empty, we simply write \(D\) in place of the pair \((\varnothing,D)\).
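To fix ideas, here is a minimal sketch (our own encoding, not from the paper) of the subdivided graph \(G^{E}\) and of the degree-\(d\) pseudodivisor condition; a multigraph is encoded as a list of edges between hashable vertices:

```python
def subdivide(edges, E):
    """Form G^E: replace each edge e in E (given by its index in `edges`)
    by a length-2 path through a new exceptional vertex v_e."""
    new_edges, exceptional = [], []
    for i, (u, v) in enumerate(edges):
        if i in E:
            w = ('exc', i)            # the new exceptional vertex v_e
            exceptional.append(w)
            new_edges += [(u, w), (w, v)]
        else:
            new_edges.append((u, v))
    return new_edges, exceptional

def is_pseudodivisor(D, exceptional, d):
    """(E, D) is a degree-d pseudodivisor if D (a dict on V(G^E)) has
    total degree d and D(v') = 1 on every exceptional vertex v'."""
    return sum(D.values()) == d and all(D[w] == 1 for w in exceptional)
```

For instance, `subdivide([(0, 1)], {0})` returns the length-2 path through the exceptional vertex `('exc', 0)`.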
Given a degree-\(d\) rank \(1\) torsion free sheaf \(F\) on a curve \(C\), we define the multidegree \(\deg(F)\) of \(F\) as the pseudodivisor \((E,D)\) on the dual graph \(G(C)\) of \(C\) as follows. The set \(E\) is the set of edges of \(G(C)\) that correspond to nodes of \(C\) where \(F\) is not locally free. The divisor \(D\) on \(G(C)^{E}\) is defined by \(D(v)=\deg_{C_{v}}(F_{C_{v}})\) if \(v\in V(G(C))\subseteq V(G(C)^{E})\) and \(D(v)=1\) for every exceptional vertex \(v\). By Equation (2.2), we have that \((E,D)\) is a degree-\(d\) pseudodivisor. Note also that a rank \(1\) torsion free sheaf on \(C\) is simple if and only if its multidegree \((E,D)\) has the property that \(E\) does not disconnect the graph \(G(C)\). ### Moduli spaces and graphs Here we discuss some general notation on moduli spaces of curves. We refer the reader to [1] for more details on nodal curves and their dual graphs. An \(n\)-pointed curve \((C,p_{1},\dots,p_{n})\) is _stable_ if \(|\operatorname{Aut}(C,p_{i})|<\infty\). We will sometimes abuse notation and write \(C\) for \((C,p_{i})\). For example, we will say that the genus of \((C,p_{i})\) is \(g\) to mean that \(h^{1}(C,\mathcal{O}_{C})=g\), i.e. that the arithmetic genus of the underlying curve \(C\) is \(g\). We will denote by \(\overline{\mathcal{M}}_{g,n}\) the moduli spaces of stable \(n\)-pointed curves of genus \(g\). The moduli space admits a stratification by dual graphs, which we now discuss. #### 2.d.1. Stable graphs We denote by \(G_{g,n}\) the small category of stable, \(n\)-pointed graphs of genus \(g\) (where we have fixed a choice of \(1\) object for each isomorphism class). Morphisms \(G\to G^{\prime}\) are given by an edge contraction followed by an isomorphism. (More details in [1] and [13]). There is a natural functor \(G_{g,n+1}\to G_{g,n}\) that forgets the last point and stabilizes the graph. #### 2.d.2. Stratification of moduli of stable curves For \(G\in G_{g,n}\) there is a gluing morphism \[f_{G}\colon\overline{\mathcal{M}}_{G}:=\prod_{v\in V(G)}\overline{\mathcal{M} }_{g(v),n(v)}\to\Big{[}\prod_{v\in V(G)}\overline{\mathcal{M}}_{g(v),n(v)}/ \operatorname{Aut}(G)\Big{]}\to\overline{\mathcal{M}}^{\prime}_{G}\hookrightarrow \overline{\mathcal{M}}_{g,n}.\] We say that \(G\), or \(\overline{\mathcal{M}}_{G}\), or \(f_{G}\), is a (resolved) stratum of \(\overline{\mathcal{M}}_{g,n}\). We regard \(\overline{\mathcal{M}}_{G}\) as a "resolved stratum" and its image \(\overline{\mathcal{M}}^{\prime}_{G}\) as the corresponding "embedded stratum". The codimension \(1\) strata are the following divisors generically parameterizing curves with \(1\) node: 1. the divisor \(\Delta_{\operatorname{irr}}\), generically parameterizing irreducible curves 2. for \(0\leq i\leq g\) and \(S\subseteq[n]\) (except for \(i=0\) with \(|S|<2\) and \(i=g\) with \(|S|>n-2\)), the divisor \(\Delta_{i,S}=\Delta_{g-i,S^{c}}\) generically parameterizing curves with \(2\) components, one of which has genus \(i\) and carries the marked points in \(S\). On the (resolved) stratum the normal bundle to \(f_{G}\) splits as a direct sum of line bundles \[N_{f_{G}}=\bigoplus_{e\in E(G)}\mathbb{L}_{e}.\] We denote \(\Psi_{e}:=-c_{1}(\mathbb{L}_{e})\). (Recall that, if \(e\) is the edge whose half edges \(h(e),h^{\prime}(e)\) are based at \(v,v^{\prime}\in V(G)\), then the cotangent line bundles to \(h(e)\) and \(h^{\prime}(e)\) are denoted by \(\mathbb{L}_{h(e)}\) and \(\mathbb{L}_{h^{\prime}(e)}\), and their first Chern classes by \(\psi_{h(e)}\) and \(\psi_{h^{\prime}(e)}\).
We then have \(\mathbb{L}_{e}=\mathbb{L}_{h(e)}^{\vee}\boxtimes\mathbb{L}_{h^{\prime}(e)}^{\vee}\) and so \(\Psi_{e}=\psi_{h(e)}+\psi_{h^{\prime}(e)}\), but this will not play a role.) In Section 4 we will define the category of (resolved) strata induced by a normal crossing divisor on a DM stack, and will interpret the category \(G_{g,n}\) as the category of strata of the nonsingular DM-stack \(\overline{\mathcal{M}}_{g,n}\) induced by the normal crossing divisor \(\Delta=\Delta_{\text{irr}}\cup\bigcup_{i,S}\Delta_{i,S}\). ## 3. Compactified Jacobians and Universal Brill-Noether Classes In this chapter we introduce the basic objects of study in this paper, compactified universal Jacobians, and extensions of universal Brill-Noether classes by means of the Thom-Porteous formula. We also recall the results on the stability space of compactified universal Jacobians that we will need later. ### The universal stability space Here we recall the definition and first results on the stability space of a single curve and on the universal stability space \(V^{d}_{g,n}\) from [10]. **Definition 3.1**.: For a fixed graph \(G\), we define the space of polarizations \[V^{d}_{\text{stab}}(G):=\left\{\phi\in\mathbb{R}^{V(G)}:\sum_{v\in V(G)}\phi(v )=d\right\}\subset\mathbb{R}^{V(G)}.\] For \(V\subseteq V(G)\), we write \(\phi(V)\) for \(\sum_{v\in V}\phi(v)\). Every morphism \(f\colon G\to G^{\prime}\) of graphs induces a morphism \(f_{*}\colon V^{d}_{\text{stab}}(G)\to V^{d}_{\text{stab}}(G^{\prime})\) by setting \[f_{*}\phi(v^{\prime})=\sum_{f(v)=v^{\prime}}\phi(v) \tag{3.2}\] and we define the space of universal polarizations as the limit (or inverse limit) \[V^{d}_{g,n}:=\varprojlim_{G\in G_{g,n}}V^{d}_{\text{stab}}(G),\] i.e. as the space of assignments \(\left(\phi(G)\in V^{d}_{\text{stab}}(G)\colon\ G\in G_{g,n}\right)\) that are compatible with all graph morphisms. We now present a simple description of the universal stability space \(V^{d}_{g,n}\) that follows from [10, Corollary 4.3]. The result requires that we introduce some notation for graphs of "vine curves". **Definition 3.3**.: A _vine curve triple_\((i,t,S)\) consists of two natural numbers \(i,t\) and a subset \(S\subseteq[n]\), such that \(0\leq i\leq g\), \(1\leq t\), \(i+t\leq g+1\), and such that if \((i,t)=(0,1)\) then \(|S|\geq 2\), if \((i,t)=(0,2)\) then \(|S|\geq 1\), if \((i,t)=(g,1)\) then \(|S^{c}|\geq 2\) and if \((i,t)=(g-1,2)\) then \(|S^{c}|\geq 1\). A _vine curve_ is a stable graph \(G(i,t,S)\) associated to a vine curve triple, which consists of two vertices of genus \(i\) and \(g-i\) respectively connected by \(t\) edges, and with marking \(S\) on the first vertex and \(S^{c}\) on the second vertex. We will always assume that \(S\) contains the first marked point. The stability space \(V^{d}_{\mathrm{stab}}(G(i,t,S))\) is an affine subspace of \(\mathbb{R}^{2}\). We can parameterize it by means of one variable \(x_{i,t,S}\), by taking the inverse image under the projection onto the first factor. That is, we write \[V^{d}_{\mathrm{stab}}(G(i,t,S))=\{(x_{i,t,S},d-x_{i,t,S}):x_{i,t,S}\in\mathbb{R }\}\subset\mathbb{R}^{2}.\] We now introduce the stability space of "vine curves" using the previous definition. **Definition 3.4**.: We let \[T^{d}_{g,n}:=\prod_{\begin{subarray}{c}(i,t,S)\\ \text{a vine curve triple}\end{subarray}}V^{d}_{\mathrm{stab}}(G(i,t,S)).\] Then we define: 1.
The vector space \(C^{d}_{g,n}\) as the quotient of \(T^{d}_{g,n}\) obtained as the product of all factors of the form \(V^{d}_{\mathrm{stab}}(G(i,1,S))\). 2. The vector space \(D^{d}_{g,n}\) as the quotient of \(T^{d}_{g,n}\) obtained as the product of all factors \(V^{d}_{\mathrm{stab}}(G(0,2,\{j\}))\) for \(j=1,\ldots,n\). Throughout we will use the coordinates \(x_{i,t,S}\) introduced at the end of Definition 3.3 on the space \(T^{d}_{g,n}\) and on its quotients \(C^{d}_{g,n}\) and \(D^{d}_{g,n}\). There are natural restriction affine linear maps: \[\tau_{d}\colon V^{d}_{g,n}\to T^{d}_{g,n},\quad\rho_{d}\colon V^{d}_{g,n}\to C ^{d}_{g,n}\times D^{d}_{g,n}\] One of the main results of [14, Section 3] is that the universal stability space embeds into the "vine curves" stability space, and that \(\rho_{d}\) is an isomorphism. **Proposition 3.5**.: _([14, Lemma 3.8, Corollary 3.4]) The affine linear map \(\tau_{d}\) is injective. The vector space homomorphism \(\rho_{0}\) is an isomorphism. Each morphism \(\rho_{d}\) is an isomorphism of affine spaces._ ### The stability hyperplanes We will later see in Section 3.c that for every universal stability condition \(\phi\in V^{d}_{g,n}\) there exists a compactified universal Jacobian parameterizing \(\phi\)-stable (rank 1, torsion free) sheaves on every (flat) family of \(n\)-pointed stable curves of genus \(g\). Here we combinatorially introduce the degenerate locus of \(V^{d}_{g,n}\), which will later be seen to be the locus of \(\phi\)'s such that there exist strictly semistable sheaves on some stable curves. We will introduce the degenerate locus as a union of hyperplanes (which one could think of as a finite, non-centered, toric hyperplane arrangement). This explicit description is taken from [14, Section 5]. **Definition 3.6**.: We say that a polarization \(\phi\in V^{d}_{\mathrm{stab}}(G)\) is _degenerate_ if for some subset \(\varnothing\subsetneq V_{0}\subsetneq V(G)\) the quantity \[\frac{|E(V_{0},V_{0}^{c})|}{2}+\sum_{v\in V_{0}}\phi(v) \tag{3.7}\] is an integer. We say that a universal stability condition \(\phi\in V^{d}_{g,n}\) is _degenerate_ if for some \(G\in G_{g,n}\), the \(G\)-component \(\phi(G)\) is degenerate in \(V^{d}_{\mathrm{stab}}(G)\). The degenerate locus is a locally finite union of affine hyperplanes, and we will soon describe these hyperplanes explicitly. Let us start with a simple example. **Example 3.8**.: (Vine curves) If \(G\) is a vine curve, after identifying \(V^{d}_{\mathrm{stab}}(G)=\mathbb{R}\) by projecting onto the first factor (as done at the end of Definition 3.3), we have that the degenerate locus is a locally finite collection of points that only depends on the parity of the number of nodes \(t\). If \(t\) is even, the degenerate locus corresponds to \(\mathbb{Z}\subset\mathbb{R}\). If \(t\) is odd, the degenerate locus corresponds to \(\frac{1}{2}+\mathbb{Z}\subset\mathbb{R}\). We now give an explicit description of the degenerate locus in \(V^{d}_{g,n}\), based on [14, Section 5]. By Proposition 3.5, we have that \(V^{d}_{g,n}\subset T^{d}_{g,n}\), where the latter is the stability space of vine curves (one for each topological type), with coordinates \(x_{i,t,S}\) for each vine curve triple \((i,t,S)\) (see Definition 3.3).
For each vine curve triple \((i,t,S)\) and integer \(k\), define the (translate of the coordinate) hyperplane \[T^{d}_{g,n}\supset H(i,t,S;k):=\begin{cases}\{x_{i,t,S}=k\}&\text{for even $t$,}\\ \{x_{i,t,S}=\frac{1}{2}+k\}&\text{for odd $t$.}\end{cases} \tag{3.9}\] One main result of [14, Section 5] is that the degenerate locus in the universal stability space is the pull-back of translates of coordinate hyperplanes in the stability space of vine curves. More precisely: **Proposition 3.10**.: _([14, Lemma 5.8]) The degenerate locus in \(V^{d}_{g,n}\) is a union of hyperplanes. Each hyperplane is the inverse image via the affine linear embedding \(\tau_{d}\colon V^{d}_{g,n}\subset T^{d}_{g,n}\) of a hyperplane of the form \(H(i,t,S;k)\)._ This description hides the difficulty that the embedding \(\tau_{d}\) has, in general, a very high codimension. A more explicit description of the degenerate locus can be obtained via the isomorphism \(V^{d}_{g,n}\cong C^{d}_{g,n}\times D^{d}_{g,n}\). When expressing the hyperplanes of (3.9) in terms of the coordinates \(x_{i,1,S}\) and the coordinates \(x_{j}:=x_{0,2,\{j\}}\), by [11, Theorem 2] we have (Footnote 1: note that the formula in loc. cit. is translated by the coordinates of a "degree-\(d\) canonical stability condition" -- a choice of an origin in \(V_{g,n}^{d}\) that we do not discuss here) \[x_{i,t,S}=\frac{2g-2i-t}{2g-2}\cdot\sum_{j\in S}x_{j}+\frac{2i-2+t}{2g-2}\cdot \left(d-\sum_{j\notin S}x_{j}\right) \tag{3.11}\] whenever \(t\geq 2\). Therefore, the stability hyperplanes take the following form \[H(i,1,S;k)=\left\{x_{i,1,S}=k+\frac{1}{2}\right\} \tag{3.12}\] for all vine curve triples (Definition 3.3) of the form \((i,1,S)\) (the boundary divisors in \(\overline{\mathcal{M}}_{g,n}\) that generically parameterize curves with \(2\) components) \[H(i,t,S;k)=\left\{\frac{2g-2i-t}{2g-2}\cdot\sum_{j\in S}x_{j}+\frac{2i-2+t}{2g -2}\cdot\left(d-\sum_{j\notin S}x_{j}\right)=k+\frac{t}{2}\right\} \tag{3.13}\] for all vine curve triples \((i,t,S)\) with \(t\geq 2\). Note that the degenerate locus parameterized by the hyperplanes in (3.12) and (3.13) may come with multiplicities. In other words, there exist \((i_{1},t_{1},S_{1};k_{1})\neq(i_{2},t_{2},S_{2};k_{2})\) such that \(H(i_{1},t_{1},S_{1};k_{1})=H(i_{2},t_{2},S_{2};k_{2})\). We will now analyse these hyperplanes and study when they may coincide. It is immediate to observe that a necessary condition for two hyperplanes of this form to coincide is that their corresponding subsets of marked points coincide: **Proposition 3.14**.: _If any two hyperplanes \(H(i_{1},t_{1},S_{1};k_{1})\) and \(H(i_{2},t_{2},S_{2};k_{2})\) coincide, then \(S_{1}=S_{2}\)._ Proof.: Straightforward. First we deal with the hyperplanes of (3.12), occurring on compact type vine curves (or divisorial vine curves). Those are all simple: **Proposition 3.15**.: _The hyperplanes in (3.12) are pairwise distinct and each of them is distinct from any of the hyperplanes in (3.13)._ Proof.: Straightforward. The next proposition is about hyperplanes of the form (3.13) with \(S\neq[n]\). As we shall discuss in Section 5.c, a stability hyperplane of this type witnesses a change of stability on loci of vine curves that are _disjoint_. **Proposition 3.16**.: _If \(S\neq[n]\) and \((i_{1},t_{1};k_{1})\neq(i_{2},t_{2};k_{2})\) are such that \(H(i_{1},t_{1},S;k_{1})\) and \(H(i_{2},t_{2},S;k_{2})\) are equal, then \(2i_{1}+t_{1}=2i_{2}+t_{2}\)._ Proof.: Straightforward.
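Definition 3.6 is a finite check on each graph; the following is a minimal sketch (our own encoding, with a floating-point tolerance) of the degeneracy test for a single graph:

```python
from itertools import combinations

def is_degenerate(vertices, edges, phi, tol=1e-9):
    """phi is degenerate if, for some proper nonempty V0 in V(G),
    |E(V0, V0^c)|/2 + phi(V0) is an integer (Definition 3.6).
    `edges` is a list of pairs; loops never cross the cut."""
    for r in range(1, len(vertices)):
        for V0 in map(set, combinations(vertices, r)):
            crossing = sum(1 for u, v in edges if (u in V0) != (v in V0))
            q = crossing / 2 + sum(phi[v] for v in V0)
            if abs(q - round(q)) < tol:
                return True
    return False
```

For a vine curve with \(t\) edges, `is_degenerate([0, 1], [(0, 1)] * t, {0: x, 1: d - x})` returns `True` exactly when \(x+t/2\in\mathbb{Z}\), matching Example 3.8.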
The most interesting vine curves from the point of view of the stability decomposition are those with \(S=[n]\). Over those vine curves it can occur that two stability hyperplanes of the form (3.13) coincide. For example, if \(d=0\), by fixing \(\sum_{j\in[n]}x_{j}=g-1\) one sees that all hyperplanes of the form \(H(i,t,[n];k)\) with \(i+\lceil t/2\rceil+k=g\) coincide (note that this is a finite collection, because of the constraints \(i+t\leq g+1\), \(i\geq 0\) and \(t\geq 2\)). ### Compactified Jacobians, universal and semistable family Here we define, for every nondegenerate \(\phi\in V_{g,n}^{d}\), a fine compactified universal Jacobian \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\), parameterizing \(\phi\)-stable sheaves. The construction is taken from [11, Section 4], in the language of pseudodivisors from [1, Section 4]. Each fine compactified Jacobian will come with a normal crossing stratification category (an abstract definition of this notion will be given and discussed in the next section). We also describe (Theorem 3.29) a quasistable modification of the universal curve \(\overline{\mathcal{C}}_{g,n}(\phi)\to\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) as a certain \((n+1)\)-pointed universal Jacobian \(\overline{\mathcal{J}}_{g,n+1}^{d}(\alpha(\phi))\). **Definition 3.17**.: For \(\phi\in V_{\mathrm{stab}}^{d}(G)\) we say that a pseudodivisor \((E,D)\) is \(\phi\)_-semistable_ if \[\phi(V_{0})-\deg_{V_{0}}(D)+\frac{|E(V_{0},V_{0}^{c})|}{2}\geq 0 \tag{3.18}\] for every \(V_{0}\subseteq V(G^{E})\). We say that \((E,D)\) is \(\phi\)_-stable_ if the inequality above is strict for every \(V_{0}\) such that \(V_{0}\neq V(G^{E})\) and \(V_{0}\) is not contained in the set of exceptional vertices. Given \(v_{0}\in V(G)\), we say that \((E,D)\) is \((\phi,v_{0})\)_-quasistable_ if the inequality is strict for every \(V_{0}\) such that \(V_{0}\neq V(G^{E})\) and \(v_{0}\in V_{0}\). As stipulated in Section 2.c, when \(E=\varnothing\), we will simply write \(D\) for \((\varnothing,D)\). **Remark 3.19**.: By [1, Proposition 4.6] if a pseudodivisor \((E,D)\) on \(G\) is \((\phi,v_{0})\)-quasistable for some \((\phi,v_{0})\), then \(E\subseteq E(G)\) does not disconnect \(G\). **Remark 3.20**.: We have introduced the degenerate locus of \(V_{\mathrm{stab}}^{d}(G)\) and of \(V_{g,n}^{d}\) in Definition 3.6. We claim that, in both cases, an element \(\phi\) is nondegenerate if and only if all semistable pseudodivisors are stable. The "only if" is immediate. The other implication is proved in [11, Section 5]. We now define stability for rank \(1\) torsion free sheaves on curves. **Definition 3.21**.: ([14, Definition 4.2]) Let \(C\) be a nodal curve with dual graph \(G(C)\) and let \(\phi\in V^{d}_{\mathrm{stab}}(G(C))\). A rank \(1\) torsion-free sheaf \(F\) of degree \(d\) on \(C\) is _\(\phi\)-(semi)stable_ if its multidegree \(\underline{\deg}(F)\) is a \(\phi\)-(semi)stable pseudodivisor. If \(P\in C^{\mathrm{sm}}\) is a nonsingular point of \(C\) in the component \(C_{v_{0}}\), we say that \(F\) is _\((\phi,P)\)-quasistable_ if \(\underline{\deg}(F)\) is \((\phi,v_{0})\)-quasistable. If \(f\colon C^{\prime}\to C\) is a semistable modification of \(C\) and \(F^{\prime}\) is a positively admissible line bundle on \(C^{\prime}\), we say that \(F^{\prime}\) is \(\phi\)-(semi)stable or \((\phi,P)\)-quasistable if so is \(f_{*}(F^{\prime})\).
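Definition 3.17 is again a finite check; the following is a minimal sketch (our own encoding; we extend \(\phi\) by zero on the exceptional vertices, which is our reading of \(\phi(V_{0})\) for subsets of \(V(G^{E})\)):

```python
from itertools import combinations

def slack(V0, edges, D, phi):
    """Left-hand side of (3.18): phi(V0) - deg_{V0}(D) + |E(V0, V0^c)|/2.
    phi is extended by zero on the exceptional vertices."""
    crossing = sum(1 for u, v in edges if (u in V0) != (v in V0))
    return (sum(phi.get(v, 0.0) for v in V0)
            - sum(D[v] for v in V0) + crossing / 2)

def is_phi_stable(vertices, exceptional, edges, D, phi, tol=1e-9):
    """(E, D) is phi-stable if (3.18) is strict for every V0 that is
    neither all of V(G^E) nor contained in the exceptional vertices."""
    all_vs = list(vertices) + list(exceptional)
    for r in range(1, len(all_vs)):
        for V0 in map(set, combinations(all_vs, r)):
            if V0 <= set(exceptional):
                continue  # no strictness required for these subsets
            if slack(V0, edges, D, phi) <= tol:
                return False
    return True
```

The \((\phi,v_{0})\)-quasistable variant is obtained by replacing the condition "\(V_{0}\) not contained in the exceptional vertices" with "\(v_{0}\in V_{0}\)" in the inner test.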
For \(\phi\in V^{d}_{\mathrm{stab}}(G(C))\) and \(P\in C\), we define \(\overline{\mathcal{J}}^{d}_{\phi,P}(C)\) to be the subscheme of \(\mathrm{Simp}^{d}(C)\) parameterizing \((\phi,P)\)-quasistable sheaves. Note that if \(F\) is a rank \(1\) torsion free sheaf on \(C\) then (1) if \(F\) is \((\phi,P)\)-quasistable then it is simple, and (2) the sheaf \(F\) is simple if and only if its multidegree \((E,D)\) has the property that \(E\subseteq E(G(C))\) is nondisconnecting. **Remark 3.22**.: Let \(\phi\in V^{d}_{\mathrm{stab}}(G(C))\) and \(P\in C^{\mathrm{sm}}\) be as above. Let \(\phi^{\prime}\in V^{d}_{\mathrm{stab}}(G(C))\) be a small perturbation of \(\phi\) obtained by subtracting a small \(\epsilon>0\) from \(\phi\) on the vertex of \(G(C)\) containing \(P\), and by adding a small positive amount on all other components (so that \(\sum\phi^{\prime}(v)=\sum\phi(v)=d\)). Then \((\phi,P)\)-quasistability coincides with \(\phi^{\prime}\)-stability, which in turn coincides with \(\phi^{\prime}\)-semistability. We are now ready to introduce the notion of _universal_ polarizations and compactified Jacobians. Each universal polarization will give rise to a fine compactified Jacobian, and to a stratification category. Recall that, for any \(1\leq i\leq n\), we denote by \(\sigma_{i}\colon\overline{\mathcal{M}}_{g,n}\to\overline{\mathcal{C}}_{g,n}\) the \(i\)-th smooth section. **Definition 3.23**.: Let \(\phi\in V^{d}_{g,n}\) be a universal polarization. We define \(\mathfrak{C}_{g,n}(\phi)\) to be the category whose objects are \((G,(E_{G},D_{G}))\) where \(G\) is an object of \(G_{g,n}\) and \((E_{G},D_{G})\) is a \(\phi\)-semistable pseudodivisor on \(G\). A morphism \((G,(\varnothing,D_{G}))\to(G^{\prime},(\varnothing,D_{G^{\prime}}))\) in \(\mathfrak{C}_{g,n}(\phi)\) is a morphism \(f\in\mathrm{Mor}_{G_{g,n}}(G,G^{\prime})\) such that the induced homomorphism \(f_{*}\colon\operatorname{Div}(G)\to\operatorname{Div}(G^{\prime})\) on divisors satisfies \(f_{*}(D_{G})=D_{G^{\prime}}\). We refer to [1, Section 2.1] for the notion of a morphism \((G,(E_{G},D_{G}))\to(G^{\prime},(E_{G^{\prime}},D_{G^{\prime}}))\) when \(E_{G},E_{G^{\prime}}\) are nonempty. Similarly, we define \(\mathfrak{C}_{g,n}(\phi,\sigma_{i})\) to be the category whose objects are pairs \((G,(E_{G},D_{G}))\) where \((E_{G},D_{G})\) is \((\phi,\sigma_{i})\)-quasistable. (By abuse of notation, \(\sigma_{i}\) gives the choice of the element \(\operatorname{leg}_{G}(i)\in V(G)\) for each stable graph \(G\)). We say that a family of rank \(1\) torsion-free simple sheaves of degree \(d\) on a family of stable curves is _\(\phi\)-(semi)stable_ or \((\phi,\sigma_{i})\)-quasistable if that property holds on all geometric fibers. We define \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\) to be the moduli stack parameterizing \(\phi\)-semistable sheaves on families of stable curves. We define \(\overline{\mathcal{J}}^{d}_{g,n}(\phi,\sigma_{i})\) to be the moduli stack parameterizing \((\phi,\sigma_{i})\)-quasistable sheaves on families of stable curves.
**Remark 3.25**.: For \(\phi\in V^{d}_{g,n}\) degenerate and for all \(1\leq i\leq n\) we can describe \(\mathfrak{C}_{g,n}(\phi,\sigma_{i})\) (resp. \(\overline{\mathcal{J}}^{d}_{g,n}(\phi,\sigma_{i})\)) as \(\mathfrak{C}_{g,n}(\phi^{\prime}_{i})\) (resp. \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{\prime}_{i})\)) for some nondegenerate perturbation \(\phi^{\prime}_{i}\) of \(\phi\). (As done in Remark 3.22 for a single curve). In order to achieve this, we define \(\phi^{\prime}_{i}\) by subtracting from \(\phi\) an arbitrarily small \(\epsilon>0\) on the irreducible component of each curve containing the section \(\sigma_{i}\), and by adding a small quantity on all other components (so that, for all curves, the sums over all irreducible components of the values of \(\phi\) and of \(\phi^{\prime}_{i}\) coincide). The fact that such \(\phi^{\prime}_{i}\) can be constructed in a way that is compatible with graph morphisms follows by using Proposition 3.5. The following guarantees the existence of universal moduli spaces. **Theorem 3.26**.: _([13, Corollary 4.4] and [11]/[11]) For all \(\phi\in V^{d}_{g,n}\) and \(1\leq i\leq n\) the stack \(\overline{\mathcal{J}}^{d}_{g,n}(\phi,\sigma_{i})\) is a nonsingular Deligne-Mumford stack, and the forgetful morphism \(\overline{\mathcal{J}}^{d}_{g,n}(\phi,\sigma_{i})\to\overline{\mathcal{M}}_{g,n}\) is representable, proper and flat._ The moduli stacks of Theorem 3.26 are called _fine compactified universal Jacobians_. As observed in [13, Remark 4.6], the fine compactified (universal) Jacobians produced by this construction are the same as those defined by Esteves and Melo [10, 11]. By virtue of its universal property, the universal family \(\pi\colon\overline{\mathcal{C}}_{g,n}(\phi)\to\overline{\mathcal{J}}^{d}_{g,n }(\phi)\) carries some tautological (or Poincaré) sheaves \(F_{\text{tau}}(\phi)\). These are of fiberwise total degree \(d\) and \(\phi\)-stable. They are not unique, but the difference of any two of them is the pullback of a line bundle from \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\). One way to make a definite choice of a tautological sheaf is to assume that it is trivial along a given smooth section. Note that, as described in [10], the total space \(\overline{\mathcal{C}}_{g,n}(\phi)\) is singular. A natural desingularization of \(\overline{\mathcal{C}}_{g,n}(\phi)\), carrying a tautological line bundle \(L_{\text{tau}}(\phi)\), was provided by Esteves-Pacini in [13] by using a semistable modification of the universal family. Here we will give an alternative description of it using a compactified universal Jacobian with one extra point. **Remark 3.27**.: We observe that there is a natural map \(\alpha\colon V^{d}_{g,n}\to V^{d}_{g,n+1}\), with image in the degenerate locus, defined as follows. If \(G\) is the stable graph obtained as the stabilization of the \((n+1)\)-pointed graph \(G^{\prime}\) after the \(n+1\) marking is removed, then there is a natural bijection between the vertices of \(G^{\prime}\) and those of \(G\), except possibly for \(1\) extra genus \(0\) vertex of \(G^{\prime}\). Then define \(\alpha(\phi)\) as the assignment on \(G^{\prime}\) that is defined by this bijection and that is \(0\) on the extra genus \(0\) vertex of \(G^{\prime}\) (when that exists). The extra genus \(0\) vertex could be a tail (when it is connected to the complement by \(1\) edge) or a bridge (when it is connected to the complement by \(2\) edges). The fact that \(\phi\) is compatible with graph morphisms implies the same property for \(\alpha(\phi)\).
**Notation 3.28**.: We will slightly abuse the notation and, for \(\phi\in V^{d}_{g,n}\), we will simply write \(\phi\in V^{d}_{g,n+1}\) in place of \(\alpha(\phi)\in V^{d}_{g,n+1}\). We now show that a quasistable modification of the universal curve can be described as the morphism \(\pi^{\prime}\colon\overline{\mathcal{J}}^{d}_{g,n+1}(\phi,\sigma_{i})\to \overline{\mathcal{J}}^{d}_{g,n}(\phi,\sigma_{i})\) that forgets the last point and stabilizes, thus mapping each \((C^{\prime},p_{1},\ldots,p_{n+1},F)\) to \((C,p_{1},\ldots,p_{n},f_{*}F)\), where \(f\colon C^{\prime}\to C\) is the stabilization of \((C^{\prime},p_{1},\ldots,p_{n})\). In order to do that, for each fixed \(i=1,\ldots,n\), we define a morphism \(\psi\colon\overline{\mathcal{J}}^{d}_{g,n+1}(\phi,\sigma_{i})\to\overline{ \mathcal{C}}_{g,n}(\phi,\sigma_{i})\) by \[(C^{\prime},p_{1},\ldots,p_{n+1},L)\mapsto(C,p_{1},\ldots,p_{n},f_{*}L,f(p_{n +1}))\] where \(f\colon C^{\prime}\to C\) is the stabilization of the curve \((C^{\prime},p_{1},\ldots,p_{n})\). Then we show that \(\psi\) is the stabilization of \(\pi^{\prime}\). **Theorem 3.29**.: _For each \(1\leq i\leq n\), the forgetful morphism \(\psi\) defined above is the unique positively admissible quasistable modification of the universal curve over \(\overline{\mathcal{J}}^{d}_{g,n}(\phi,\sigma_{i})\). A tautological line bundle on \(\overline{\mathcal{J}}^{d}_{g,n+1}(\phi,\sigma_{i})\) is_ \[\mathcal{L}_{\rm tau}:=\sigma_{n+1}^{*}(\mathcal{F}^{\prime}_{\rm tau}) \otimes\sigma_{1}^{*}(\mathcal{F}^{\prime-1}_{\rm tau}), \tag{3.30}\] _where \(\mathcal{F}^{\prime}_{\rm tau}=\mathcal{F}^{\prime}_{\rm tau}(\phi)\) is a tautological sheaf on the universal curve \(\pi\colon\overline{\mathcal{C}}_{g,n+1}(\phi,\sigma_{i})\to\overline{ \mathcal{J}}^{d}_{g,n+1}(\phi,\sigma_{i})\)._ Proof.: We apply Proposition 2.3 to show that the morphism \(\psi\) is a quasistable modification of the universal curve. Let us begin by proving that \(\psi\) is a quasistable modification of \(\overline{\mathcal{C}}_{g,n}(\phi,\sigma_{i})\). Firstly, if \(C=C^{\prime}\), then \(f\) is an isomorphism, so \(\psi\) is an isomorphism locally around \((C^{\prime},p_{1},\ldots,p_{n+1},L)\). If \(C\neq C^{\prime}\), then we have two cases. Either \(p_{n+1}\) lies on a rational tail; in this case \(L=f^{*}f_{*}(L)\), so \(\psi\) is again an isomorphism locally around \((C^{\prime},p_{1},\ldots,p_{n+1},L)\). Or \(p_{n+1}\) lies on a bridge \(E\subset C^{\prime}\) containing no other marked points. We will now focus on this case. If \(\deg_{E}(L)=0\), then by Proposition 2.3 we have that \(L=f^{*}f_{*}(L)\), and again the map \(\psi\) is an isomorphism locally around \((C^{\prime},p_{1},\ldots,p_{n+1},L)\). We are left with the case where \(\deg_{E}(L)=1\). In this case, \(f_{*}(L)\) is not locally free around \(f(p_{n+1})\), which is a node. Moreover, \(\psi^{-1}(C,p_{1},\ldots,p_{n},f_{*}(L),f(p_{n+1}))\) is isomorphic to \(\mathbb{P}^{1}\). Indeed, every \(L^{\prime}\) obtained from gluing \(L|_{E^{c}}\) and \(\mathcal{O}_{E}(1)\) has the property that \(\psi(C^{\prime},p_{1},\ldots,p_{n+1},L^{\prime})=(C,p_{1},\ldots,p_{n},f_{*}(L ),f(p_{n+1}))\). The possible gluings are parameterized by a \(\mathbb{P}^{1}\), and we are done. Secondly, we observe that \(\deg_{E}(\mathcal{L}_{\text{tau}})=1\) for each exceptional component \(E\) contracted by \(\psi\). In order to show that, it suffices to construct a nonconstant map \(\delta\colon\mathbb{P}^{1}\to E\) such that \(\delta^{*}(\mathcal{L}_{\text{tau}})=\mathcal{O}(1)\). 
Let \((C,p_{1},\ldots,p_{n},L,p)\) correspond to the point of the universal curve to which the exceptional component \(E\) is contracted. That means that \(p\) is a node of \(C\) and that \(L\) fails to be locally free at \(p\). We construct a family \(X/\mathbb{P}^{1}\) by gluing two sections on two families \(X_{1}/\mathbb{P}^{1}\) and \(X_{2}/\mathbb{P}^{1}\). The family \(X_{1}\) is the trivial family \(C^{\nu_{p}}\times\mathbb{P}^{1}\) (where \(\nu_{p}\) denotes the normalization at \(p\)) carrying the \(n\) trivial sections \(p_{1},\ldots,p_{n}\), and the gluing sections are the points \(q_{1},q_{2}\) such that \(\nu_{p}(q_{i})=p\). The family \(X_{2}\) is the blowup of the trivial family \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) at \([0:1]\times[0:1]\) and \([1:0]\times[1:0]\), with a further section \(p_{n+1}\) defined as the inverse image of a constant section (different from \([0:1]\) and \([1:0]\)); the two gluing sections are the strict transforms of the sections \([0:1]\) and \([1:0]\). Then we choose any line bundle \(F\) on \(X\) with the property that \(F|_{X_{1}}=L\) and \(F|_{X_{2}}=\mathcal{O}(\widetilde{\Delta})\), for \(\widetilde{\Delta}\) the strict transform of the diagonal in \(X_{2}\). Then \(F\) is \((\phi,\sigma_{i})\)-quasistable, and so the datum of \((X,F)\) defines a morphism \(\delta\). By construction, we have that \(\delta^{*}(\mathcal{L}_{\text{tau}})=\sigma_{n+1}^{*}(F)=\sigma_{n+1}^{*}(\mathcal{O}( \widetilde{\Delta}))\), which equals \(\mathcal{O}(1)\) because the section \(\sigma_{n+1}\) intersects \(\widetilde{\Delta}\) at \(1\) reduced point. Then we prove that the direct image \(\psi_{*}\mathcal{L}_{\text{tau}}\) of the line bundle defined in (3.30) equals \(F_{\text{tau}}\otimes\pi^{*}(M)\) for some line bundle \(M\). By the previous part combined with Proposition 2.3, we conclude that \(\psi_{*}\mathcal{L}_{\text{tau}}\) is rank \(1\) and torsion-free. By [1, Appendix 7], it is enough to prove that this equality occurs on an open subset \(U\) of \(\overline{\mathcal{C}}_{g,n}(\phi)\) whose complement has codimension at least \(2\). It is easy to show equality over the open set \(U\) that is the universal curve over the line bundle locus in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) (this follows because \(\psi_{|\psi^{-1}(U)}\) is an isomorphism over \(U\), and because the restriction of \(\mathcal{F}_{\text{tau}}^{\prime}\) to the open set \(\pi^{-1}(\psi^{-1}(U))\subset\overline{\mathcal{C}}_{g,n+1}(\phi,\sigma_{i})\) is a line bundle). This concludes the proof that \(\psi\) is a positively admissible quasistable modification. Uniqueness follows from Corollary 2.4. 
### Brill-Noether classes 
To the data of a flat family \(\pi\colon\mathcal{C}\to S\) of nodal curves of arithmetic genus \(g\) over a nonsingular scheme \(S\) and a rank \(1\) torsion-free sheaf \(\mathcal{F}\) on \(\mathcal{C}\) of fiberwise degree \(d\), we can associate the Brill-Noether (or Thom-Porteous) class \[\mathsf{w}_{d}(\mathcal{C}/S,\mathcal{F}):=c_{g-d}(-R^{\bullet}\pi_{*} \mathcal{F}). \tag{3.31}\] This class is supported on the subscheme \[\mathsf{W}_{d}(\mathcal{C}/S,\mathcal{F})=\{s\in S:\ h^{0}(\mathcal{C}_{s}, \mathcal{F}_{s})>0\}\subset S\] and, when the latter is of the expected codimension \(g-d\), it coincides with its fundamental class (with a suitably defined scheme structure; see [13, Chapter 14]). 
If \(h^{0}(\mathcal{C}_{s},\mathcal{F}_{s})=0\) for all \(s\in S\), then the complex \(-R^{\bullet}\pi_{*}\mathcal{F}=R^{1}\pi_{*}\mathcal{F}\) is a vector bundle of rank \(g-1-d\), so the Chern class \(\mathsf{w}_{d}(\mathcal{C}/S,\mathcal{F})\) equals zero (being the \((g-d)\)-th Chern class of a vector bundle of rank \(g-1-d\)). Here are a couple of further basic remarks on these classes. **Remark 3.32**.: If \(I\) is a line bundle on \(S\), then \(\mathsf{w}_{d}(\mathcal{C}/S,\mathcal{F})=\mathsf{w}_{d}(\mathcal{C}/S, \mathcal{F}\otimes\pi^{*}I)\). Indeed, by the projection formula and [13, Example 3.2.2] we have \[c_{j}\big((-R^{\bullet}\pi_{*}\mathcal{F})\otimes I\big)=\sum_{i=0}^{j}\binom{g-d-1-i}{j -i}c_{i}(-R^{\bullet}\pi_{*}\mathcal{F})\cdot c_{1}(I)^{j-i} \tag{3.33}\] for all \(j\geq 0\). The result follows because, for \(j=g-d\) and \(i<j\), the binomial coefficient \(\binom{g-d-1-i}{g-d-i}\) vanishes (its lower index exceeds its upper index), so only the term with \(i=j\) survives. **Remark 3.34**.: Let \(f\colon\mathcal{C}^{\prime}\to\mathcal{C}\) be a semistable modification of the family of nodal curves \(\pi\colon\mathcal{C}\to S\), and let \(\mathcal{L}\) be a positively admissible line bundle on \(\mathcal{C}^{\prime}\) (see Section 2.c). By Proposition 2.3, we deduce \[R^{\bullet}(\pi\circ f)_{*}\mathcal{L}=R^{\bullet}\pi_{*}(f_{*}\mathcal{L}). \tag{3.35}\] Conversely, if \(\mathcal{F}\) is a rank \(1\) torsion-free simple sheaf on a family of stable curves \(\mathcal{C}/S\), there exist a quasistable modification \(f\colon\mathcal{C}^{\prime}\to\mathcal{C}\) and a line bundle \(\mathcal{L}\) on \(\mathcal{C}^{\prime}\) such that \(R^{\bullet}f_{*}(\mathcal{L})=f_{*}\mathcal{L}=\mathcal{F}\), and thus (3.35) holds. The same construction and remarks apply to the case of the semistable modification \(\pi\colon\overline{\mathcal{C}}^{\prime}_{g,n}(\phi)\to\overline{\mathcal{J}} ^{d}_{g,n}(\phi)\) of the universal family, and its tautological line bundle \(\mathcal{L}=\mathcal{L}_{\text{tau}}\) (see [14]). We will denote by \(\mathsf{w}_{d}(\phi)\) the corresponding universal class in \(A^{g-d}(\overline{\mathcal{J}}^{d}_{g,n}(\phi))\) and by \(\mathsf{W}_{d}(\phi)\) the subscheme over which it is supported. **Remark 3.36**.: When restricted to smooth curves, the scheme \(\mathsf{W}_{d}(\phi)_{|\mathcal{J}^{d}_{g,n}}\) is reduced, irreducible, and of relative codimension \(g-d\) (it is the image of the \(d\)-th symmetric product via the Abel map). The closure of \(\mathsf{W}_{d}(\phi)_{|\mathcal{J}^{d}_{g,n}}\) in \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\) is contained in \(\mathsf{W}_{d}(\phi)\), and when the two coincide we have \(\mathsf{w}_{d}(\phi)=[\mathsf{W}_{d}(\phi)]\). **Remark 3.37**.: In general, the scheme \(\mathsf{W}_{d}(\phi)\) fails to be irreducible and of the expected dimension. For example, if \(\phi\) is a stability condition such that the line bundles of bidegree \((d_{1},d_{2})\) are \(\phi\)-stable on curves in the boundary divisor \(\Delta_{i,S}\), and either \(d_{1}>i\) or \(d_{2}>g-i\), then \(\mathsf{W}_{d}(\phi)\) contains the pullback of \(\Delta_{i,S}\) in \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\). We will discuss this matter further in Proposition 3.38 and Remark 4.18. We conclude this section by providing sufficient conditions for the Brill-Noether class (3.31) defined by the Thom-Porteous formula to coincide with the class of the Brill-Noether locus. Some parts of the proofs of the next two propositions require the fact that \(\mathfrak{C}_{g,n}(\phi)\) is a stratification of \(\overline{\mathcal{J}}^{d}_{g,n}(\phi)\), which we will discuss in the next section. For this reason, we postpone the proof of the next result to Section 4.b.1. 
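Before stating the result, it may help to recall the classical prototype (a standard fact, stated here only for orientation): for \(d=g-1\) and a single smooth curve, the locus \(\{F\in\operatorname{Pic}^{g-1}:h^{0}(F)>0\}\) is the theta divisor \(\Theta\), which is reduced, irreducible and of the expected codimension \(1=g-d\), and 
\[\mathsf{w}_{g-1}=c_{1}(-R^{\bullet}\pi_{*}\mathcal{F})=[\Theta].\] 
Case (1) of the next proposition identifies the stability conditions for which this picture persists over all of \(\overline{\mathcal{J}}^{g-1}_{g,n}(\phi)\). 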
As in Section 3.b, we fix coordinates for \(V^{d}_{g,n}\cong C^{d}_{g,n}\times D^{d}_{g,n}\), and let \(V^{d}_{g,n}\ni\phi=((x_{i,1,S})_{(i,S)},(x_{1},\ldots,x_{n}))\), where \(x_{j}=x_{0,2,j}\) for each \(1\leq j\leq n\). **Proposition 3.38**.: _We have that \(\mathsf{W}_{d}(\phi)\) is the closure of \(\mathsf{W}_{d}(\phi)_{|\mathcal{J}^{d}_{g,n}}\) (in particular it is reduced, irreducible and of the expected codimension, and \(\mathsf{w}_{d}(\phi)=[\mathsf{W}_{d}(\phi)]\)) if and only if \(\phi=((x_{i,1,S})_{(i,S)},(x_{1},\ldots,x_{n}))\) is as follows._ 1. _If_ \(d=g-1\)_: when_ \(i-3/2<x_{i,1,S}<i+1/2\) _for all_ \((i,S)\)_._ 2. _If_ \(d=g-2\)_: when_ \(i-3/2<x_{i,1,S}<i-1/2\) _for all_ \((i\geq 1,S)\) _and_ \(-3/2<x_{0,1,S}<1/2\) _for all_ \(S\)_, and_ \[i-2<\frac{2g-2i-t}{2g-2}\cdot\sum_{j\in S}x_{j}+\left(d-\sum_{j\notin S}x_{j} \right)\cdot\frac{2i-2+t}{2g-2}<i+1\] _for all vine curve triples_ \((i,t,S)\) _with_ \(t\geq 2\)_._ 3. _Never, if_ \(0<d\leq g-3\)_._ 4. _If_ \(d<0\)_: when_ \(d-1/2<x_{i,1,S}<1/2\) _for all_ \((i,S)\)_, and the coordinates_ \(x_{1},\ldots,x_{n}\) _satisfy_ (3.39) \[d-1<\frac{2g-2i-t}{2g-2}\cdot\sum_{j\in S}x_{j}+\left(d-\sum_{j\notin S}x_{j} \right)\cdot\frac{2i-2+t}{2g-2}<1\] _for all vine curve triples_ \((i,t,S)\) _with_ \(t\geq 2\)_._ 5. _If_ \(d=0\)_: when_ \(-1/2<x_{i,1,S}<1/2\) _for all_ \((i\geq 1,S)\)_, when_ \(-3/2<x_{0,1,S}<1/2\) _for all_ \(S\)_, and when the coordinates_ \(x_{1},\ldots,x_{n}\) _satisfy (_3.39_) for all vine curve triples_ \((i,t,S)\) _with_ \(t\geq 2\)_._ We now explicitly define a stability condition that, in each of the cases (1), (2), (4), (5) listed above, belongs to the ranges that we have identified for \(\mathsf{W}_{d}(\phi)\) to equal the closure of its restriction to the open part. (This shows, in particular, that these ranges are not empty.) **Definition 3.40**.: For \(G\in G_{g,n}\), define the stabilized-canonical divisor \(K^{s}_{G}\) to equal zero at every vertex contained in some rational tail (a rational tail is a complete subgraph whose genus is \(0\) and that is connected to its complement by exactly \(1\) edge), and for every other \(v\) to equal \(K^{s}_{G}(v)=2g(v)-2+\operatorname{val}^{\prime}(v)\), where \(\operatorname{val}^{\prime}(v)\) is the number of edges at \(v\) (counting each loop twice), except the edges that are contained in some rational tail. Then define the stabilized canonical element \(\phi^{d}_{\operatorname{scan}}(G)=\frac{d}{2g-2}\cdot K^{s}_{G}\in V^{d}_{g,n}\). Note that the above is different from the canonical stability \(\phi^{d}_{\text{can}}\in V^{d}_{g,n}\) chosen as the origin in [10]. 
#### 3.d.1. Pull-back via Abel-Jacobi sections 
Fix integers \(\mathbf{d}=(k;d_{1},\ldots,d_{n})\) such that \(d=k(2g-2)+d_{1}+\ldots+d_{n}\), and integers \(\mathbf{f}=(f_{i,S})_{i,S}\) for every boundary divisor \(G(g-i,1,S)\in G_{g,n}\). Define the universal line bundle \[\mathcal{L}=\mathcal{L}_{\mathbf{d},\mathbf{f}}=\omega^{k}_{\overline{\mathcal{ C}}_{g,n}/\overline{\mathcal{M}}_{g,n}}\left(\sum_{j=1}^{n}d_{j}x_{j}+\sum_{i,S}f_{i,S^{c}}\cdot C_{i,S^{c}}\right), \tag{3.41}\] where \(C_{i,S^{c}}\subset\overline{\mathcal{C}}_{g,n}\) is the component over the boundary divisor \(\Delta_{g-i,S}=\overline{\mathcal{M}}_{G(g-i,1,S)}\subset\overline{\mathcal{ M}}_{g,n}\) that contains the sections in \(S^{c}\) (these unnatural conventions will simplify the formulas in Remark 7.39 and Example 7.40). Then define \(\phi=\phi_{\mathbf{d},\mathbf{f}}\in V^{d}_{g,n}\) to be the multidegree of \(\mathcal{L}\). 
If \(\phi^{+}=\phi^{+}_{\mathbf{d},\mathbf{f}}\) is a nondegenerate small perturbation of \(\phi\), then the universal line bundle \(\mathcal{L}\) is \(\phi^{+}\)-stable, and it defines an (Abel-Jacobi) section \(\sigma=\sigma^{+}_{\mathbf{d},\mathbf{f}}\colon\overline{\mathcal{M}}_{g,n} \to\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\). **Remark 3.42**.: Assume \(d<0\), \(k=0\), and that \(\mathbf{d}\) satisfies \(d_{i}\leq 1\) for all \(i\), with at most one of the \(d_{i}\)'s equal to \(1\). Assume that \(\mathbf{f}\) satisfies \(d\leq f_{i,S^{c}}+\sum_{j\in S}d_{j}\leq 0\) for all \((i,S)\). Then \(\phi_{\mathbf{d},\mathbf{f}}\) satisfies the conditions of Item (4) in Proposition 3.38. By Proposition 3.38, the pullback \[\sigma^{*}_{\mathbf{d},\mathbf{f}}(\mathsf{w}_{d})=0\quad\in\operatorname{A}^ {g-d}(\overline{\mathcal{M}}_{g,n}) \tag{3.43}\] gives a relation. The LHS of (3.43) can be explicitly written as a linear combination of standard generators of the tautological ring of \(\overline{\mathcal{M}}_{g,n}\) by means of [10, Theorem 1] (see also Corollary 3.7 and Equation 3.9 of _loc. cit._). 
## 4. Normal crossing stratification categories and blowups 
In this section we define the axioms needed for a category of (resolved) strata of a space stratified by normal crossing divisors that are not necessarily simple normal crossing. We use this formalism to write some intersection-theoretic formulas (the excess intersection formula and the GRR formula for the total Chern class) that we will use to derive our main result, Theorem 7.4. Then we define the blow-up category at a stratum with transversal self-intersection. A construction of such strata categories starting from a normal crossing divisor, and more generally from a toroidal embedding, is given in [14, Definition 3.5]. The main examples we are generalizing are (a) the poset obtained by intersecting the components of a simple normal crossing divisor and (b) the stratification of \(\overline{\mathcal{M}}_{g,n}\) by topological type, induced by the boundary divisors \(\Delta=\Delta_{\text{irr}}\cup\bigcup_{i,S}\Delta_{i,S}\) (see Section 2.d). In the latter case, the relevant category is the category \(G_{g,n}\) of stable \(n\)-pointed graphs of genus \(g\), with morphisms given by graph contractions. 
### Categories of resolved strata for a normal crossing stratification 
Let \(\mathfrak{C}\) be a finite skeletal category with a terminal object \(\bullet\) such that every morphism is an epimorphism. **Remark 4.1**.: In \(\mathfrak{C}\) we have that \(\operatorname{Mor}(\alpha,\alpha)=\operatorname{Aut}(\alpha)\). Indeed, if \(f\in\operatorname{Mor}(\alpha,\alpha)\), then (by finiteness) there exist natural numbers \(a>b\) such that \(f^{a}=f^{b}\), and since \(f\) is an epimorphism we have that \(f^{a-b}=\operatorname{Id}_{\alpha}\), which proves that \(f\) is an isomorphism. If \(\alpha\) and \(\beta\) are distinct objects, we also have that if \(\operatorname{Mor}(\alpha,\beta)\neq\varnothing\) then \(\operatorname{Mor}(\beta,\alpha)=\varnothing\). Indeed, assume that there exist \(f\colon\alpha\to\beta\) and \(g\colon\beta\to\alpha\). By the observation above, both \(f\circ g\) and \(g\circ f\) would be automorphisms, which implies that both \(f\) and \(g\) are isomorphisms; this contradicts the fact that \(\mathfrak{C}\) is skeletal. 
This means that the set \(\operatorname{Obj}(\mathfrak{C})\) has a natural poset structure, given by \(\alpha\geq\beta\) if \(\operatorname{Mor}(\alpha,\beta)\neq\varnothing\). We say that such a \(\mathfrak{C}\) is a _(normal crossing) stratification category_ if its underlying poset is ranked, with a rank function \(\operatorname{cd}\) whose minimum element is the terminal object (so that \(\operatorname{cd}(\bullet)=0\)), and if it satisfies the following axiom: **Axiom 1**.: For each \(f\colon\alpha\to\beta\) there exist exactly \(\operatorname{cd}(f):=\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)\) pairs \[(\beta^{\prime},\operatorname{Aut}(\beta^{\prime})g)\ \in\ \operatorname{Obj}( \mathfrak{C})\times(\operatorname{Aut}(\beta^{\prime})\backslash \operatorname{Mor}(\alpha,\beta^{\prime}))\] such that, for each such pair, there exists \(i\colon\beta^{\prime}\to\beta\) with \(\operatorname{cd}(i)=1\) and \(f=i\circ g\). (Note that (a) the existence of such an \(i\) is independent of the choice of the representative \(g\) in the left coset \(\overline{g}:=\operatorname{Aut}(\beta^{\prime})g\), and (b) since \(g\) is an epimorphism, the morphism \(i\) is necessarily unique.) From now on we will also fix some notation on \(\mathfrak{C}\). 1. We write \(f_{\alpha}\) for the unique element of \(\operatorname{Mor}(\alpha,\bullet)\). 2. If \(f_{i}\colon\alpha\to\beta_{i}\) are morphisms for \(i=1,\dots,m\), we define \[\operatorname{Aut}(f_{1},\dots,f_{m}):=\{\tau\in\operatorname{Aut}(\alpha);f_ {i}\circ\tau=f_{i}\text{ for every }i=1,\dots,m\}.\] Note that \(\operatorname{Aut}(f_{\alpha})=\operatorname{Aut}(\alpha)\). 3. For each morphism \(f\colon\beta\to\gamma\) and object \(\alpha\in\operatorname{Obj}(\mathfrak{C})\), we define \(\overline{\operatorname{Mor}}(\alpha,f):=\operatorname{Aut}(f)\backslash \operatorname{Mor}(\alpha,\beta)\). When \(f=f_{\beta}\), we simply write \(\overline{\operatorname{Mor}}(\alpha,\beta):=\overline{\operatorname{Mor}}( \alpha,f_{\beta})=\operatorname{Aut}(\beta)\backslash\operatorname{Mor}( \alpha,\beta)\). 4. For each morphism \(f\colon\alpha\to\beta\), we let \(S_{f}\) denote the set of all pairs \((\beta^{\prime},\overline{g})\) satisfying the condition in Axiom 1. Moreover, for each \((\beta^{\prime},\overline{g})\in S_{f}\), we denote by \(i_{\overline{g},f}:=i\colon\beta^{\prime}\to\beta\) the morphism defined in Axiom 1. We define \(S_{\alpha}:=S_{f_{\alpha}}\). Here are the most relevant examples in this paper. **Example 4.2**.: (Simple normal crossing). Let \(X\) be a nonsingular variety and \(D=D_{1}+\ldots+D_{k}\) be a simple normal crossing divisor. To this we can associate a category \(\mathfrak{C}\) whose objects are the strata and whose morphisms are the inclusions. This category \(\mathfrak{C}\) is finite, skeletal, has a terminal object, every morphism is an epimorphism, it is ranked by codimension, and it satisfies Axiom 1. The category \(\mathfrak{C}\) is _simple normal crossing_ if, in addition to Axiom 1, it satisfies: **Axiom 2**.: For every \(\alpha,\beta\in\operatorname{Obj}(\mathfrak{C})\) the set \(\overline{\operatorname{Mor}}(\alpha,\beta)\) has at most one element. **Example 4.3**.: The second example is \(\mathfrak{C}=G_{g,n}\) introduced in Section 2.d. The terminal object here is the trivial graph with \(1\) vertex of genus \(g\) carrying all the markings, and no edges. The rank function is the number of edges. The set \(S_{f}\) of a morphism \(f\colon G\to G^{\prime}\) is naturally identified with the set of edges of \(G\) that are contracted by \(f\). 
In particular, \(S_{G}\) equals the edge set \(E(G)\). The rank \(1\) objects, the boundary divisors, are either graphs with two vertices connected by one edge (corresponding to the divisors \(\Delta_{i,S}\), see Section 2.d), or the graph consisting of \(1\) vertex of genus \(g-1\) with \(1\) loop (corresponding to \(\Delta_{\text{irr}}\)). **Example 4.4**.: The main example in this paper is the category \(\mathfrak{C}=\mathfrak{C}_{g,n}(\phi)\) that we introduced in Definition 3.23, an enhancement of the category \(G_{g,n}\) discussed above. The terminal object is the trivial graph endowed with the unique function that maps its unique vertex to the integer \(d\). The rank of an object \((G,(E_{G},D_{G}))\) equals \(|E(G)|+|E_{G}|\). For \(f\colon(G^{\prime},(E_{G^{\prime}},D_{G^{\prime}}))\to(G,(E_{G},D_{G}))\), the set \(S_{f}\) is naturally identified with the set of edges contracted by \(f\). The rank \(1\) objects are \((G,(E_{G},D_{G}))\) with \(G\) a rank \(1\) object of \(G_{g,n}\) and \((E_{G},D_{G})\) a \(\phi\)-stable pseudodivisor (which implies \(E_{G}=\varnothing\)). **Example 4.5**.: To a nonsingular variety \(X\) (or DM stack) endowed with a normal crossing divisor \(D\), [14, Definition 3.5] associates a stratification category that satisfies Axiom 1 above. Note that the construction of _loc. cit._ in the case \(X=\overline{\mathcal{M}}_{g,n}\) and \(D=\Delta_{\text{irr}}+\sum_{(i,S)}\Delta_{i,S}\), which we discussed in Example 4.3, produces the quotient of the category \(G_{g,n}\) of stable graphs where two morphisms are identified whenever they agree on the corresponding edge sets. (See [14, Figure 2] for examples of automorphisms that are identified with the identity.) A similar phenomenon happens for the category of Example 4.4. In this paper we prefer instead to work with the usual category of stable graphs (and its enhancements). The next proposition is the analogue of the fact that the set of strata that contain a given stratum is in natural bijection with the subsets of the divisors that define that stratum. **Proposition 4.6**.: _Given a morphism \(f\colon\alpha\to\beta\) and \(1\leq k\leq\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)\), there is a natural bijection between the set of pairs_ \[\{(\gamma,\overline{j})\in\operatorname{Obj}(\mathfrak{C})\times\overline{ \operatorname{Mor}}(\alpha,\gamma):\ \operatorname{cd}(\gamma)-\operatorname{cd}(\beta)=k\text{ and }\exists h\colon\gamma\to\beta,f=h\circ j\}\] _(note that \(h\) above is unique) and the set \(\mathcal{P}(k,S_{f})\) of subsets of \(S_{f}\) containing \(k\) elements._ We start by observing the following: **Remark 4.7**.: Given a factorization \(f\colon\alpha\xrightarrow{j}\gamma\xrightarrow{h}\beta\), there is a natural inclusion \(j^{*}\colon S_{h}\hookrightarrow S_{f}\) given by \((\beta^{\prime},\overline{g^{\prime}})\mapsto(\beta^{\prime},\overline{g^{ \prime}\circ j})\). Moreover, we claim that for each \(\overline{j}\in\overline{\operatorname{Mor}}(\alpha,h)\) we have a well-defined \(\overline{j}^{*}(S_{h})\). Indeed, the subset \(j^{*}(S_{h})\subseteq S_{f}\) coincides with \((\tau\circ j)^{*}(S_{h})\) for every \(\tau\in\operatorname{Aut}(h)\). Proof.: We first observe that, by Remark 4.7, there is a natural map \(\lambda_{k,f}\) from the set of pairs, call it \(X_{k,f}\), to the set \(\mathcal{P}(k,S_{f})\) of \(k\)-element subsets of \(S_{f}\), obtained by \(\lambda_{k,f}((\gamma,\overline{j})):=j^{*}(S_{h})\). 
Then we prove that the cardinality of \(X_{k,f}\) equals \(\binom{\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)}{k}\), which is also the cardinality of \(\mathcal{P}(k,S_{f})\). This is achieved by induction on \(\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)\) and double counting. For each \(c\in S_{f}\), let \(X_{k,f,c}\) be the subset of \(X_{k,f}\) of elements whose image via \(\lambda_{k,f}\) contains \(c\). By the induction hypothesis, we have that \(|X_{k,f,c}|=\binom{\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)-1}{k-1}\). By Axiom 1, the number of elements of \(\{(a,b):\ a\in X_{k,f},b\in\lambda_{k,f}(a)\}\) equals \(k\cdot|X_{k,f}|\), and it also equals \((\operatorname{cd}(\alpha)-\operatorname{cd}(\beta))\cdot\binom{\operatorname{ cd}(\alpha)-\operatorname{cd}(\beta)-1}{k-1}\). These two equalities prove that \(|X_{k,f}|=\binom{\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)}{k}\). Finally, we prove that each \(\lambda_{k,f}\) is surjective. First we prove that this is the case for \(k=\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)-1\) (or, equivalently, when \(\operatorname{cd}(\gamma)=\operatorname{cd}(\alpha)-1\)). By the previous paragraph, \(\lambda_{k,f}\) is, for this \(k\), a function between sets of the same cardinality, so it is equivalent to prove that it is injective. Let \(a_{1},a_{2}\in X_{k,f}\) be such that \(\lambda_{k,f}(a_{1})=\lambda_{k,f}(a_{2})\), and let \(c\) be the only element of \(S_{f}\setminus\lambda_{k,f}(a_{1})\). If \(a_{1}\neq a_{2}\), then at least two elements of \(X_{k,f}\) have image \(S_{f}\setminus\{c\}\), so \(X_{k,f,c}\) contains at most \(\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)-2\) elements; but in the previous paragraph we established that \(|X_{k,f,c}|=\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)-1\), and this contradicts the assumption \(a_{1}\neq a_{2}\). To prove surjectivity of each \(\lambda_{k,f}\) we argue by induction on \(\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)\). Let \(S\in\mathcal{P}(k,S_{f})\). Choose \(T\supset S\) with \(T\in\mathcal{P}(\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)-1,S_{f})\). By the previous paragraph, there exists \(d=(\delta,\overline{g})\) such that \(\lambda_{\operatorname{cd}(\alpha)-\operatorname{cd}(\beta)-1,f}(d)=T\), that is, a factorization \(f=h\circ g\) through \(\delta\) with \(T=g^{*}(S_{h})\); hence \(S=g^{*}(S^{\prime})\) for some subset \(S^{\prime}\subseteq S_{h}\). We have \(\operatorname{cd}(\delta)-\operatorname{cd}(\beta)=\operatorname{cd}(\alpha)- \operatorname{cd}(\beta)-1\). By applying the induction hypothesis to \(\lambda_{k,h}\), we find \(c\in X_{k,h}\) such that \(\lambda_{k,h}(c)=S^{\prime}\), and so \(\lambda_{k,f}(g^{*}c)=S\). This concludes the proof of surjectivity. We will now define some important geometric notions in the stratification category. **Definition 4.8**.: Fix \(f_{i}\colon\alpha_{i}\to\beta\) for \(i=1,\ldots,m\), and \(f\colon\gamma\to\beta\). Let \(g_{i}\colon\gamma\to\alpha_{i}\) for \(i=1,\ldots,m\) be a collection of morphisms such that \(f_{i}\circ g_{i}=f\). We say that the collection \((g_{i})\) is _generic_ with respect to the tuple \((f,(f_{i}))\) if \(S_{f}=\bigcup_{i=1}^{m}g_{i}^{*}(S_{f_{i}})\). We say that the collection \((f,(f_{i}))\) is _transversal_ at \((g_{i})\) if \(g_{i}^{*}(S_{f_{i}})\cap g_{j}^{*}(S_{f_{j}})=f^{*}(S_{\beta})\) for every \(i\neq j\). Following the above definition, we write \(\operatorname{Int}(f_{1},\ldots,f_{m})_{f}\) to denote the set of all generic tuples \((g_{1},\ldots,g_{m})\). **Remark 4.9**.: Fix the same data as in the above definition. 
Let \((\tau_{1},\ldots,\tau_{m})\) be a tuple in \(\prod\operatorname{Aut}(f_{i})\) and let \((g_{1},\ldots,g_{m})\) be a generic collection; then \((\tau_{1}\circ g_{1},\ldots,\tau_{m}\circ g_{m})\) is also generic. A similar result holds for an automorphism \(\tau\in\operatorname{Aut}(f)\): the collection \((g_{i})\) is generic if and only if \((g_{i}\circ\tau)\) is generic. This gives a natural left action of \(\prod\operatorname{Aut}(f_{i})\) and a natural right action of \(\operatorname{Aut}(f)\) on \(\operatorname{Int}((f_{i}))_{f}\). Following the remark, we define \[\overline{\operatorname{Int}}(f_{1},\ldots,f_{m})_{f}:=\prod\operatorname{Aut }(f_{i})\backslash\operatorname{Int}(f_{1},\ldots,f_{m})_{f}\] and \[\widetilde{\operatorname{Int}}(f_{1},\ldots,f_{m})_{f}:=\operatorname{Int}(f_ {1},\ldots,f_{m})_{f}/\operatorname{Aut}(f).\] Elements of \(\overline{\operatorname{Int}}(f_{1},\ldots,f_{m})_{f}\) will be denoted by \((\overline{g}_{1},\ldots,\overline{g}_{m})\), while elements of \(\widetilde{\operatorname{Int}}(f_{1},\ldots,f_{m})_{f}\) will be denoted by \((g_{1},\ldots,g_{m})\operatorname{Aut}(f)\). When \(f_{1}=\ldots=f_{m}=f^{\prime}\), we write \(\operatorname{SInt}((f^{\prime})^{m})_{f}\) to denote the set of sets (not tuples) \(\{\overline{g}_{1},\ldots,\overline{g}_{m}\}\) (here the \(\overline{g}_{i}\) must be pairwise distinct) such that \((\overline{g}_{1},\ldots,\overline{g}_{m})\in\overline{\operatorname{Int}}((f^{\prime})^{m})_{f}\). 
### Normal crossing stratifications 
We say that a category \(\mathfrak{C}\) as in the previous section is the category of strata of a nonsingular DM-stack \(X_{\bullet}\) if there exists a functor \[\mathfrak{C} \to\text{nonsingular DM-stacks}\] \[\alpha \mapsto X_{\alpha}\] \[f\colon\alpha\to\beta \mapsto X_{f}\colon X_{\alpha}\to X_{\beta}\] such that 1. The morphisms \(X_{f}\colon X_{\alpha}\to X_{\beta}\) are proper and local complete intersection of codimension \(\operatorname{cd}(f)\). 2. The quotient stack \(\left[\frac{X_{\alpha}}{\operatorname{Aut}(f)}\right]\) is the normalization of the image of \(X_{f}\). 3. The normal bundle \(N_{f}\) of \(X_{f}\) can be written as \(N_{f}=\bigoplus_{e\in S_{f}}\mathbb{L}_{e}\), where, for a pair \(e=(\beta^{\prime},\overline{g})\in S_{f}\), we define \(\mathbb{L}_{e}:=g^{*}(N_{i_{\overline{g},f}})\). 4. If \(f_{i}\colon\alpha_{i}\to\beta\) for \(i=1,2\) are two morphisms, then the diagram \[\begin{CD}\bigsqcup[X_{\gamma}/\operatorname{Aut}(g_{1},g_{2})]@>X_{g_{1}}>>X_{\alpha_{1}}\\ @VX_{g_{2}}VV@VVX_{f_{1}}V\\ X_{\alpha_{2}}@>X_{f_{2}}>>X_{\beta},\end{CD}\] where the disjoint union runs over all \(f\colon\gamma\to\beta\) and all \((g_{1},g_{2})\operatorname{Aut}(f)\in\widetilde{\operatorname{Int}}(f_{1},f_{2})_{f}\), is a fiber diagram. From now on we will abuse the notation and, for \(f\in\operatorname{Mor}(\alpha,\beta)\), simply write \(f\colon X_{\alpha}\to X_{\beta}\) in place of \(X_{f}\colon X_{\alpha}\to X_{\beta}\). **Notation 4.10**.: We will use a prime to denote the image of a morphism \(f\colon X_{\alpha}\to X_{\beta}\). In other words, \(X_{f}^{\prime}:=\operatorname{Im}(f)\subseteq X_{\beta}\) and, in particular, \(X_{\alpha}^{\prime}:=\operatorname{Im}(f_{\alpha})\subseteq X_{\bullet}\). We also say that the objects \(X_{\alpha}^{\prime}\) are the embedded strata and the objects \(X_{\alpha}\) are the (resolved) strata. The two main examples in this paper are that of \(\overline{\mathcal{M}}_{g,n}\) and that of \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\): **Example 4.11**.: The category \(G_{g,n}\) is a category of strata of the nonsingular DM-stack \(\overline{\mathcal{M}}_{g,n}\). 
If \(G\in G_{g,n}\), we write \(\overline{\mathcal{M}}_{G}\) for the corresponding stratum and \(\overline{\mathcal{M}}_{G}^{\prime}\) for its image in \(\overline{\mathcal{M}}_{g,n}\) ([1, Chapter XII, Section 10]). **Example 4.12**.: The category \(\mathfrak{C}_{g,n}(\phi)\) is a category of strata of the nonsingular DM-stack \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) ([13, Section 3]). If \((G,(E,D))\in\mathfrak{C}_{g,n}(\phi)\), we write \(\mathcal{J}_{G,(E,D)}\) for the corresponding stratum and \(\mathcal{J}_{G,(E,D)}^{\prime}\) for its image in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\). (Each such stratum also depends on \(\phi\), but we do not include this dependency, to ease the notation.) The point made in Example 4.12 allows us to complete the proof of Proposition 3.38, and the next subsection is devoted to that proof. 
#### 4.b.1. Proof of Proposition 3.38 
Proof.: 1. If \(d=g-1\), the result follows from [11, Theorem 4.1]. 2. Assume that \(d=g-2\). In order to reach our conclusion, we prove that \(\phi\) is in the claimed range if and only if the intersection of \(\mathsf{W}_{d}(\phi)\) with the boundary of \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) has codimension larger than the expected codimension \(2\). Note also that the strata that generically parameterize curves whose irreducible components are singular can be excluded, because the existence of a nonzero global section is unaffected when those components are smoothed. Firstly, we analyse the boundary divisors, which have the form \(\mathcal{J}_{(G(i,1,S),D)}\). The range of \(\phi\) in the claim is equivalent to constraining the divisor \(D\) to equal \((i-1,g-i-1)\). It is straightforward to verify that the locus cut out in \(\mathcal{J}_{(G(i,1,S),D)}\) by the condition of admitting a global section has codimension at least \(2\) in \(\mathcal{J}_{(G(i,1,S),D)}\), hence it has codimension at least \(3\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\). If, on the other hand, the divisor \(D\) is of the form \((k-1,g-k-1)\) for \(k\neq i\), then either \(\mathcal{J}_{(G(i,1,S),D)}\) is contained in \(\mathsf{W}_{d}(\phi)\), or their intersection has codimension \(1\) in \(\mathcal{J}_{(G(i,1,S),D)}\). In both cases, their intersection has codimension smaller than or equal to \(2\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\). Then we analyse the codimension \(2\) strata \(\mathcal{J}_{G,D}\). If \(G\) is a tree, the stability condition is uniquely determined by the stability condition on the boundary divisors, and so is the stable degree \(D\); this case was thus settled in the previous paragraph. We assume therefore that \(G=G(i,2,S)\) is a vine curve with \(2\) nodes. Using the change of coordinates (3.11), the range identified in our statement is equivalent to requiring that the stable divisor \(D\) on \(G(i,2,S)\) equal one of \((i-2,g-i)\), \((i-1,g-i-1)\), \((i,g-i-2)\) or \((i+1,g-i-3)\). In all these cases, one can check that the generic element of \(\mathcal{J}_{G,D}\) does not admit a global section. Conversely, if \(D\) is not one of those \(4\) cases, the stratum \(\mathcal{J}_{G,D}\) is contained in \(\mathsf{W}_{d}(\phi)\). This concludes the proof of this case. 3. Assume that \(1\leq d\leq g-3\). In order to reach our conclusion, it is enough to prove that, for every \(\phi\), the intersection of \(\mathsf{W}_{d}(\phi)\) with some boundary divisor has codimension smaller than or equal to the expected codimension \(g-d\). 
We take \(i=\lfloor\frac{g}{2}\rfloor\) and pick any \(S\subseteq[n]\), and show that the intersection of \(\mathsf{W}_{d}(\phi)\) with the preimage of \(\Delta_{i,S}\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) contains a locus of codimension smaller than or equal to \(g-d\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\). The stable bidegree \(D\) such that the intersection \(\mathcal{J}_{G(i,1,S),D}\cap\mathsf{W}_{d}(\phi)\) has largest codimension is \(D=(\frac{d}{2},\frac{d}{2})\) for \(d\) even (resp. \(D=(\frac{d-1}{2},\frac{d+1}{2})\) for \(d\) odd). The intersection has codimension \(\lceil\frac{g-d-1}{2}\rceil+2\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) and, for \(d\leq g-3\), this number is smaller than or equal to \(g-d\). (4-5) Assume that \(d\leq 0\). Assume first that \(\phi\) is not in the given range. Then, arguing as in the case \(d=g-2\) above, one can check that in some boundary divisor or in some codimension \(2\) vine curve locus (depending on which inequality \(\phi\) fails to satisfy) the intersection with \(\mathsf{W}_{d}(\phi)\) has codimension smaller than the expected one (which is \(g\) for \(d=0\); for \(d<0\), by this we mean that the locus is not empty). Assume now that \(\phi\) is in the given range. We will use the following result: **Proposition 4.13**.: _If \(\phi\in V_{g,n}^{d}\) is nondegenerate and such that the inequality_ \[\phi_{C_{0}}\leq\frac{\left|C_{0}\cap\overline{C_{0}^{c}}\right|}{2} \tag{4.14}\] _holds for all \((C,p_{1},\ldots,p_{n})\in\overline{\mathcal{M}}_{g,n}\) and for all subcurves \(C_{0}\subseteq C\), then_ (a) _if \(d=0\), then \(F\in\mathsf{W}_{d}(\phi)\) if and only if \(F\) is the trivial line bundle;_ (b) _if \(d<0\), then \(\mathsf{W}_{d}(\phi)=\varnothing\)._ Proof.: Part (a) is [1, Lemma 8, Lemma 9]. Part (b) follows from Lemma 4.15 below. By applying Proposition 4.13, observing that both \(\phi\) and the multidegree \(D\) of line bundles are compatible with graph morphisms, and arguing as in the proof of Proposition 3.5, we conclude that Inequality (4.14) is satisfied for all curves \(C\) and subcurves \(C_{0}\) if and only if it is satisfied for all vine curves \(C\) (taking \(C_{0}\) to be one of its irreducible components). After applying the change of coordinates (3.11), this is equivalent to the given range. The only remaining case to consider is when \(d=0\) and \(-3/2<x_{0,1,S}<-1/2\) for some \(S\). In that case, the intersection of \(\mathcal{J}_{G(0,1,S),(-1,1)}\) with \(\mathsf{W}_{d}(\phi)\) has codimension \(g+1\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\), hence the intersection is in the closure of the restriction of \(\mathsf{W}_{d}(\phi)\) to the open part. In the proof of Proposition 4.13 we used the following. **Lemma 4.15**.: _Assume \(d<0\). Let \(C\) be a nodal curve, and let \(\phi\in V_{\mathrm{stab}}^{d}(C)\) be such that Inequality (4.14) holds for all subcurves \(C_{0}\subseteq C\). Then every \(\phi\)-stable rank-\(1\) torsion-free sheaf \(F\) on \(C\) satisfies \(H^{0}(C,F)=0\)._ The lemma generalizes to the case \(d<0\) the argument given in [1, Lemma 3.1] and [1, Lemma 8]. Proof.: Let \(F\) be one such sheaf. Since \(F\) is \(\phi\)-stable, the inequality \[\deg_{C_{0}}(F)<\frac{\left|C_{0}\cap\overline{C_{0}^{c}}\right|}{2}-\delta_{C _{0}}(F)+\phi_{C_{0}} \tag{4.16}\] holds for all subcurves \(\varnothing\neq C_{0}\subsetneqq C\). 
The latter, combined with (4.14), implies the inequality \[\deg_{C_{0}}(F)<\left|C_{0}\cap\overline{C_{0}^{c}}\right|-\delta_{C_{0}}(F) \tag{4.17}\] for all subcurves \(\varnothing\neq C_{0}\subsetneqq C\). The fact that the latter inequality holds on all subcurves \(C_{0}\) implies that \(F\) admits no nonzero global sections. Indeed, if such a section \(s\) existed, denote by \(C^{\prime}\) its support; note that \(C^{\prime}\neq\varnothing\) because \(s\neq 0\), and \(C^{\prime}\neq C\) because the degree of \(F\) is negative. We then have the inequality \[\deg_{C^{\prime}}(F)\geq\left|C^{\prime}\cap\overline{C^{\prime c}}\right|- \delta_{C^{\prime}}(F),\] contradicting (4.17). We conclude this interlude by observing that for all degrees "in the middle", the Brill-Noether cycle cannot be of the expected codimension. **Remark 4.18**.: Assume that \(1\leq d\leq g-5\). We claim that there exists no \(\phi\) such that \(\mathsf{W}_{d}(\phi)\) has the expected codimension \(g-d\). To show this, we argue in a way very similar to the case \(1\leq d\leq g-3\) of the proof of Proposition 3.38. We let \(i=\lfloor\frac{g}{2}\rfloor\) and pick any \(S\subseteq[n]\). In the same way as discussed in _loc. cit._, for \(1\leq d\leq g-5\), the intersection of \(\mathsf{W}_{d}(\phi)\) with the preimage of \(\Delta_{i,S}\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\) contains a locus of codimension strictly smaller than \(g-d\) in \(\overline{\mathcal{J}}_{g,n}^{d}(\phi)\), and this proves our claim. 
### Intersection theory formulas 
In this section we state and prove some results concerning the intersection theory of this stratification. From now on in this section we fix \(X_{\bullet}\) and its stratification functor. Since most of our computations are done using Chern classes, we will abuse the notation as we now explain. Let \(p_{1},p_{2},q\) be polynomials in variables \(x_{i,j}\) such that \(p_{1}=qp_{2}\). Assume that the \(L_{i}\) are elements in the \(K\)-theory of \(X\), and that \(A=p_{1}(c_{j}(L_{i}))\) and \(B=p_{2}(c_{j}(L_{i}))\) are formal polynomials in the Chern classes of the \(L_{i}\). We will write \(\frac{A}{B}\) to mean the class \(q(c_{j}(L_{i}))\cap[X]\) in the Chow group of \(X\). More generally, we will write \(\frac{A}{B}\in A^{*}(X)\) to mean that there exist polynomials \(p_{1},p_{2},q\) and \(K\)-theory elements \(L_{i}\) satisfying the conditions in the previous paragraph. The main motivation for this is [11], which states that, for a vector bundle \(N\), \[\frac{c(L\otimes\wedge^{\bullet}N)}{c_{\operatorname{rk}N}(N)}\] is a polynomial in the Chern classes of \(L\) and of \(N\). In this language, we have the following excess intersection formula. **Proposition 4.19**.: _Let \(f_{i}\colon\alpha_{i}\to\beta\) for \(i=1,2\) be two morphisms in \(\mathfrak{C}\) and fix classes \(A_{i}/c_{\operatorname{cd}(f_{i})}(N_{f_{i}})\in A^{*}(X_{\alpha_{i}})\); then_ \[f_{1*}\Big{(}\frac{A_{1}}{c_{\operatorname{cd}(f_{1})}(N_{f_{1}})}\Big{)}f_{2* }\Big{(}\frac{A_{2}}{c_{\operatorname{cd}(f_{2})}(N_{f_{2}})}\Big{)}=\sum_{ \begin{subarray}{c}f\colon\gamma\to\beta\\ (g_{1},g_{2})\operatorname{Aut}(f)\in\widetilde{\operatorname{Int}}(f_{1},f_{ 2})_{f}\end{subarray}}\frac{f_{*}}{|\operatorname{Aut}(g_{1},g_{2})|}\Big{(} \frac{g_{1}^{*}A_{1}\,g_{2}^{*}A_{2}}{c_{\operatorname{cd}(f)}(N_{f})}\Big{)}\] _where \(g_{i}\), \(i=1,2\), are the base change morphisms (as in Section 4.b, Item (4))._ Proof.: This follows directly from Item (4) in Section 4.b and from the excess intersection formula (see [11, Proposition 17.4.1]). 
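As a consistency check (a minimal sketch, assuming a single codimension-\(1\) stratum \(\delta\) with trivial automorphism groups that appears in no deeper stratum twice), take \(f_{1}=f_{2}=f_{\delta}\colon\delta\to\bullet\) and \(A_{1}=A_{2}=c_{1}(N_{f_{\delta}})\). Under these assumptions the only generic pair is \((\operatorname{id}_{\delta},\operatorname{id}_{\delta})\), over \(f=f_{\delta}\), and Proposition 4.19 reads 
\[f_{\delta*}(1)\cdot f_{\delta*}(1)=f_{\delta*}\Big(\frac{c_{1}(N_{f_{\delta}})^{2}}{c_{1}(N_{f_{\delta}})}\Big)=f_{\delta*}\big(c_{1}(N_{f_{\delta}})\big),\] 
recovering the classical self-intersection formula \([X^{\prime}_{\delta}]^{2}=f_{\delta*}(c_{1}(N_{f_{\delta}}))\) for a divisor. 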
We will also be using the following corollary of the above formula. **Corollary 4.20**.: _Let \(f_{i}\colon\alpha_{i}\to\beta\) be two morphisms in \(\mathfrak{C}\) and let \(A_{i}/c_{\operatorname{cd}(f_{i})}(N_{f_{i}})\in A^{*}(X_{\alpha_{i}})\) be such that \(A_{i}\) is invariant under \(\operatorname{Aut}(f_{i})\); then_ \[\frac{f_{1*}(\frac{A_{1}}{c_{\operatorname{cd}(f_{1})}(N_{f_{1}})})}{| \operatorname{Aut}(f_{1})|}\cdot\frac{f_{2*}(\frac{A_{2}}{c_{\operatorname{cd}(f_{2 })}(N_{f_{2}})})}{|\operatorname{Aut}(f_{2})|}=\sum_{f\colon\gamma\to\beta} \frac{f_{*}}{|\operatorname{Aut}(f)|}\Big{(}\sum_{(\overline{g}_{1},\overline{ g}_{2})\in\overline{\operatorname{Int}}(f_{1},f_{2})_{f}}\frac{g_{1}^{*}A_{1}\,g_{2}^{*} A_{2}}{c_{\operatorname{cd}(f)}(N_{f})}\Big{)}\] Proof.: We expand the formula in Proposition 4.19 to obtain \[f_{1*}\Big{(}\frac{A_{1}}{c_{\operatorname{cd}(f_{1})}(N_{f_{1}})}\Big{)}f_{2* }\Big{(}\frac{A_{2}}{c_{\operatorname{cd}(f_{2})}(N_{f_{2}})}\Big{)}=\sum_{ \begin{subarray}{c}f\colon\gamma\to\beta\\ (g_{1},g_{2})\in\operatorname{Int}(f_{1},f_{2})_{f}\end{subarray}}\frac{f_{*}}{ |\operatorname{Aut}(f)|}\Big{(}\frac{g_{1}^{*}A_{1}\,g_{2}^{*}A_{2}}{c_{ \operatorname{cd}(f)}(N_{f})}\Big{)}\] because each class \((g_{1},g_{2})\operatorname{Aut}(f)\in\widetilde{\operatorname{Int}}(f_{1},f_{2})_{f}\) consists of exactly \(|\operatorname{Aut}(f)|/|\operatorname{Aut}(g_{1},g_{2})|\) tuples. From there, we have that \[f_{1*}\Big{(}\frac{A_{1}}{c_{\operatorname{cd}(f_{1})}(N_{f_{1}})}\Big{)}f_{2 *}\Big{(}\frac{A_{2}}{c_{\operatorname{cd}(f_{2})}(N_{f_{2}})}\Big{)}=\\ =\sum_{\begin{subarray}{c}f\colon\gamma\to\beta\\ (\overline{g}_{1},\overline{g}_{2})\in\overline{\operatorname{Int}}(f_{1},f_{ 2})_{f}\end{subarray}}|\operatorname{Aut}(f_{1})||\operatorname{Aut}(f_{2})| \frac{f_{*}}{|\operatorname{Aut}(f)|}\Big{(}\frac{g_{1}^{*}A_{1}\,g_{2}^{*}A_{2}}{c_{ \operatorname{cd}(f)}(N_{f})}\Big{)}\] and the result follows by dividing both sides by \(|\operatorname{Aut}(f_{1})||\operatorname{Aut}(f_{2})|\). Next, we apply the above to obtain a self-intersection formula. **Corollary 4.21**.: _Let \(f\colon\alpha\to\beta\); then_ \[\left(\frac{f_{*}}{|\operatorname{Aut}(f)|}\Big{(}\frac{A}{c_{\operatorname{ cd}(f)}(N_{f})}\Big{)}\right)^{k}=\sum_{f^{\prime}\colon\gamma\to\beta}\frac{f^{ \prime}_{*}}{|\operatorname{Aut}(f^{\prime})|}\Big{(}\sum_{(\overline{g}_{1},\dots, \overline{g}_{k})\in\overline{\operatorname{Int}}((f)^{k})_{f^{\prime}}}\frac{\prod_{ i=1}^{k}g_{i}^{*}(A)}{c_{\operatorname{cd}(f^{\prime})}(N_{f^{\prime}})}\Big{)}\] The latter will be used to prove the following GRR formula for the total Chern class (deduced from the usual one, involving the Chern character). **Proposition 4.22** (GRR for the total Chern class).: _Let \(f\colon\alpha\to\beta\) be a morphism and let \(\mathcal{F}\) be an element in the \(K\)-theory of \(X_{\alpha}\) with rational coefficients. Then_ \[c\Big{(}\frac{f_{*}(\mathcal{F})}{|\operatorname{Aut}(f)|}\Big{)}=1+\sum_{ \begin{subarray}{c}m\geq 1\\ f^{\prime}\colon\gamma\to\beta\end{subarray}}\frac{f^{\prime}_{*}}{| \operatorname{Aut}(f^{\prime})|}\bigg{(}\sum_{\{\overline{g}_{1},\dots, \overline{g}_{m}\}\in\operatorname{SInt}((f)^{m})_{f^{\prime}}}\frac{\prod_{j= 1}^{m}g_{j}^{*}\big{(}c(\bigwedge^{\bullet}N_{f}^{\vee}\otimes\mathcal{F})-1\big{)}}{c_{ \operatorname{cd}(f^{\prime})}(N_{f^{\prime}})}\bigg{)}\] (This is inspired by [11, Theorem 15.3].) 
Proof.: We begin with the usual GRR formula \[\operatorname{ch}\big{(}\frac{f_{*}(\mathcal{F})}{|\operatorname{Aut}(f)|} \big{)}=f_{*}\big{(}\frac{\operatorname{ch}(\mathcal{F})}{|\operatorname{Aut }(f)|}\operatorname{td}(N_{f})^{-1}\big{)},\] which, combined with the formula for the Todd class, implies \[\operatorname{ch}_{n}\big{(}\frac{f_{*}(\mathcal{F})}{|\operatorname{Aut}(f)|} \big{)}=\frac{f_{*}}{|\operatorname{Aut}(f)|}\bigg{(}\frac{\operatorname{ch}_ {n}(\bigwedge^{\bullet}N_{f}^{\vee}\otimes\mathcal{F})}{c_{\operatorname{cd}( f)}(N_{f})}\bigg{)}.\] By the inversion formula expressing the total Chern class in terms of the Chern character (see e.g. [12, Equation 3.9]), we deduce \[c\big{(}\frac{f_{*}(\mathcal{F})}{|\operatorname{Aut}(f)|}\big{)}=\exp\big{(} \frac{f_{*}}{|\operatorname{Aut}(f)|}\bigg{(}\frac{\sum_{n\geq 1}\frac{(-1)^{n-1}}{ n}\operatorname{ch}_{n}(\bigwedge^{\bullet}N_{f}^{\vee}\otimes\mathcal{F})}{c_{ \operatorname{cd}(f)}(N_{f})}\bigg{)}\big{)}.\] Setting \(A:=\sum_{n\geq 1}\frac{(-1)^{n-1}}{n}\operatorname{ch}_{n}(\bigwedge^{ \bullet}N_{f}^{\vee}\otimes\mathcal{F})\), we will then compute \[\star:=\exp\bigg{(}\frac{f_{*}}{|\operatorname{Aut}(f)|}\bigg{(}\frac{A}{c_{ \operatorname{cd}(f)}(N_{f})}\bigg{)}\bigg{)}.\] Using Corollary 4.21 in the second equality, and grouping the tuples \((\overline{g}_{1},\ldots,\overline{g}_{k})\) according to their underlying sets of \(m\) pairwise distinct classes occurring with multiplicities \(k_{1},\ldots,k_{m}\geq 1\) in the third (each such set arises from \(k!/\prod_{i}k_{i}!\) ordered tuples, with \(k=\sum_{i}k_{i}\)), we obtain \[\star =1+\sum_{k\geq 1}\left(\frac{f_{*}}{|\operatorname{Aut}(f)|}\Big{(} \frac{A}{c_{\operatorname{cd}(f)}(N_{f})}\Big{)}\right)^{k}\frac{1}{k!}\] \[=1+\sum_{k\geq 1}\sum_{f^{\prime}\colon\gamma\to\beta}\frac{f_{*}^{ \prime}}{|\operatorname{Aut}(f^{\prime})|}\Big{(}\sum_{(\overline{g}_{1},\ldots,\overline{g}_{k})\in\overline{\operatorname{Int}}((f)^{k})_{f^{\prime}}}\frac{\prod_{i=1}^{k}g_{i}^{*}(A)}{c_{ \operatorname{cd}(f^{\prime})}(N_{f^{\prime}})}\cdot\frac{1}{k!}\Big{)}\] \[=1+\sum_{f^{\prime}\colon\gamma\to\beta}\frac{f_{*}^{\prime}}{| \operatorname{Aut}(f^{\prime})|}\Big{(}\sum_{\begin{subarray}{c}m\geq 1\\ \{\overline{g}_{1},\ldots,\overline{g}_{m}\}\in\operatorname{SInt}((f)^{m})_{f^{\prime}}\\ k_{1},\ldots,k_{m}\geq 1\end{subarray}}\frac{\prod_{i=1}^{m}g_{i}^{*}(A)^{k_{i}}}{c_{ \operatorname{cd}(f^{\prime})}(N_{f^{\prime}})}\cdot\frac{1}{k_{1}!\cdots k_{m}!}\Big{)}\] \[=1+\sum_{f^{\prime}\colon\gamma\to\beta}\frac{f_{*}^{\prime}}{| \operatorname{Aut}(f^{\prime})|}\Big{(}\sum_{\begin{subarray}{c}m\geq 1\\ \{\overline{g}_{1},\ldots,\overline{g}_{m}\}\in\operatorname{SInt}((f)^{m})_{f^{\prime}}\end{subarray}}\frac{\prod_{i=1}^{m}(\exp(g_{i}^{*}(A))-1)}{c_{ \operatorname{cd}(f^{\prime})}(N_{f^{\prime}})}\Big{)}\] The claim is then obtained by applying again the inversion formula, in the form \[\exp(A)=c(\bigwedge^{\bullet}N_{f}^{\vee}\otimes\mathcal{F}).\] 
### Blow-up 
Starting from a category \(\mathfrak{C}\) as in Section 4.a, here we define the blow-up category at a stratum with transversal self-intersection. Then, for a fixed stratification functor \(X\), we interpret the blow-up category as the stratification of the blow-up of the nonsingular DM-stack \(X_{\bullet}\) at that stratum. **Definition 4.23**.: We say that an object \(\delta\in\operatorname{Obj}(\mathfrak{C})\)_has transversal self-intersection_ if for every pair \(g_{1},g_{2}\colon\gamma\to\delta\), the sets \(g_{1}^{*}(S_{\delta})\), \(g_{2}^{*}(S_{\delta})\) are either equal or disjoint. **Remark 4.24**.: If \(g_{1}^{*}(S_{\delta})=g_{2}^{*}(S_{\delta})\), then \(\overline{g}_{1}=\overline{g}_{2}\in\overline{\operatorname{Mor}}(\gamma,\delta)\). See Proposition 4.6. **Example 4.25**.: Let \(\mathfrak{C}=G_{2k+1,1}\) for some \(k\geq 1\). 
We claim that the vine curve graph \(G=G(k,2,\{1\})\) does not have transversal self-intersection. Indeed, let \(G^{\prime}\) be the "triangle" graph with \(2\) vertices of genus \(k\) and a third vertex of genus \(0\) carrying the marking. There are two different morphisms \(g_{1},g_{2}\colon G^{\prime}\to G\), and \(g_{1}^{*}(S_{G})\cap g_{2}^{*}(S_{G})\) consists of \(1\) edge. **Definition 4.26**.: Let \(\delta\) be an object of \(\mathfrak{C}\) with transversal self-intersection. We define _the blowup category \(\operatorname{Bl}_{\delta}\mathfrak{C}\) of \(\mathfrak{C}\) at \(\delta\)_ as follows. First consider the following category. Its set of objects consists of pairs \((\gamma,\mathbf{m})\) where \(\gamma\) is an object of \(\mathfrak{C}\) and \(\mathbf{m}\) is a function \(\overline{\operatorname{Mor}}(\gamma,\delta)\to\mathcal{P}(S_{\gamma})\) such that \[\varnothing\neq\mathbf{m}(\overline{g})\subseteq g^{*}(S_{\delta})\ \text{ for every }\ \overline{g}\in\overline{\operatorname{Mor}}(\gamma,\delta). \tag{4.27}\] Its morphisms \((\gamma_{1},\mathbf{m}_{1})\to(\gamma_{2},\mathbf{m}_{2})\) are morphisms \(f\colon\gamma_{1}\to\gamma_{2}\) such that for every \(\overline{g}_{1}\in\overline{\operatorname{Mor}}(\gamma_{1},\delta)\) one of the following conditions holds: 1. there exists \(\overline{g}_{2}\in\overline{\operatorname{Mor}}(\gamma_{2},\delta)\) such that \(\overline{g}_{1}=\overline{g_{2}\circ f}\) and \(\mathbf{m}_{1}(\overline{g}_{1})\subseteq f^{*}(\mathbf{m}_{2}(\overline{g}_ {2}))\), 2. or \(\mathbf{m}_{1}(\overline{g}_{1})\cap f^{*}(S_{\gamma_{2}})=\varnothing\). We then define \(\operatorname{Bl}_{\delta}\mathfrak{C}\) as a skeleton of the above category. **Proposition 4.28**.: _The category \(\operatorname{Bl}_{\delta}\mathfrak{C}\) is naturally ranked, and it satisfies Axiom 1 from Section 4.a._ Proof.: Straightforward. **Remark 4.29**.: The rank of \((\gamma,\mathbf{m})\) is \[\operatorname{rk}(\gamma)-\sum_{\overline{g}\in\overline{\operatorname{Mor}}(\gamma, \delta)}\big(|\mathbf{m}(\overline{g})|-1\big).\] Moreover, the set \(S_{(\gamma,\mathbf{m})}\) (of the codimension \(1\) strata that contain a fixed stratum \((\gamma,\mathbf{m})\)) is naturally identified with \(\big(S_{\gamma}\setminus\bigcup_{\overline{g}\in\overline{\operatorname{Mor}}(\gamma,\delta)}\mathbf{m}(\overline{g})\big)\cup\overline{\operatorname{Mor}}(\gamma,\delta)\). Recall Notation 4.10. We define \(h\colon\widetilde{X}_{\beta}\to X_{\beta}\) to be the blow up of \(X_{\beta}\) at the union of the images \(X^{\prime}_{g_{1}}\subseteq X_{\beta}\), for every \(g_{1}\colon\gamma\to\beta\) such that there exists \(g_{2}\colon\gamma\to\delta\) satisfying \((g_{1},g_{2})\in\operatorname{Int}(f_{\beta},f_{\delta})_{f_{\gamma}}\). We define \(X_{\beta,\mathbf{m}}\) to be \[\prod_{\overline{g}\in\overline{\operatorname{Mor}}(\beta,\delta)}\mathbb{P} \Big{(}\bigoplus_{e\in\mathbf{m}(\overline{g})}\mathbb{L}_{e}\Big{)}.\] **Proposition 4.30**.: _The functor_ \[\operatorname{Bl}_{\delta}\mathfrak{C} \to\text{nonsingular DM stacks}\] \[(\gamma,\mathbf{m}) \mapsto X_{\gamma,\mathbf{m}}\] _is a stratification of \(\operatorname{Bl}_{X^{\prime}_{\delta}}X_{\bullet}\)._ Proof.: This follows from [10, Section 4.5] (see also [11, Theorem 6, p.90]), where the nonsingular DM stack is constructed as the star subdivision of the cone stack associated to the stratification. 
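For orientation, here is a minimal sketch of Definition 4.26 in the simple normal crossing situation of Example 4.2: let \(D=D_{1}+D_{2}\) with \(\delta=D_{1}\cap D_{2}\) connected, so that \(\mathfrak{C}\) has the four objects \(\bullet,D_{1},D_{2},\delta\), all with trivial automorphism groups. In \(\operatorname{Bl}_{\delta}\mathfrak{C}\), the objects over \(\gamma=\delta\) are the pairs \((\delta,\mathbf{m})\) with \(\varnothing\neq\mathbf{m}(\operatorname{id})\subseteq S_{\delta}=\{e_{1},e_{2}\}\), of rank 
\[\operatorname{rk}(\delta,\mathbf{m})=2-(|\mathbf{m}(\operatorname{id})|-1)\in\{1,2\}:\] 
the exceptional divisor \(E=\mathbb{P}(\mathbb{L}_{e_{1}}\oplus\mathbb{L}_{e_{2}})\) for \(|\mathbf{m}(\operatorname{id})|=2\), and the two codimension-\(2\) strata \(E\cap\widetilde{D}_{2}\) and \(E\cap\widetilde{D}_{1}\) for \(\mathbf{m}(\operatorname{id})=\{e_{1}\}\) and \(\{e_{2}\}\) respectively (cf. Remark 4.32 below), where \(\widetilde{D}_{i}\) denotes the strict transform of \(D_{i}\). The strict transforms themselves correspond to the objects \((D_{i},\varnothing)\), as in Remark 4.31 below. 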
**Remark 4.31**.: When there exists no morphism \(\gamma\to\delta\), there exists a unique \(\mathbf{m}\) (the empty function) such that the pair \((\gamma,\mathbf{m})\in\operatorname{Bl}_{\delta}\mathfrak{C}\). The latter is the stratum that corresponds to the strict transform of the image \(X^{\prime}_{\gamma}\subset X_{\bullet}\). **Remark 4.32**.: Suppose that \(\delta\) is a stratum with transversal self-intersection and \(f\colon\gamma\to\beta\) is a morphism such that \(\operatorname{Mor}(\beta,\delta)=\varnothing\). Let \((\gamma,\mathbf{m})\) be an object in \(\operatorname{Bl}_{\delta}\mathfrak{C}\). Then the morphism \(f\) lifts to a morphism \((\gamma,\mathbf{m})\to(\beta,\varnothing)\) in \(\operatorname{Bl}_{\delta}\mathfrak{C}\) if and only if \(f^{*}(S_{\beta})\cap\bigcup_{\overline{g}\in\overline{\operatorname{Mor}}( \gamma,\delta)}\mathbf{m}(\overline{g})=\varnothing\). (That is, when \(X^{\prime}_{\gamma,\mathbf{m}}\) is contained in the strict transform of \(X^{\prime}_{\beta}\) in \(\operatorname{Bl}_{X^{\prime}_{\delta}}X_{\bullet}\).) In this paper, the main example of the above construction is going to be the case where \(\mathfrak{C}\) is the category \(\mathfrak{C}_{g,n}(\phi)\) of Example 4.12, or a blowup of the latter. We now describe the example of one blowup of \(\mathfrak{C}_{g,n}(\phi)\) at one of the centers that will be relevant for our main result. **Example 4.33**.: Let \(\phi\in V_{g,n}^{d}\) and let \((G,D)\in\mathfrak{C}_{g,n}(\phi)\) be the lift of a vine curve \(G=G(i,t,S)\) by some \(\phi\)-stable divisor \(D\). Morphisms \(f\colon(G^{\prime},(E^{\prime},D^{\prime}))\to(G,D)\) correspond to subsets \(T_{f}\subset V(G^{\prime E^{\prime}})\) such that the complete subgraphs \(G(T_{f}),G(T_{f}^{c})\) in \(G^{\prime E^{\prime}}\) are connected and of genus \(i\) and \(g-i-t+1\) respectively, the markings \(S\) are on \(G(T_{f})\) and the markings \(S^{c}\) are on \(G(T_{f}^{c})\), and \(D^{\prime}(G(T_{f}))=D(v_{1})\) and \(D^{\prime}(G(T_{f}^{c}))=D(v_{2})\) (for \(v_{1},v_{2}\) the two vertices of \(G\)). We let \(E(T_{f})\subseteq E(G^{\prime E^{\prime}})\) be the subset of \(t\) edges that separate \(G(T_{f})\) from \(G(T_{f}^{c})\). Assume that \((G,D)\) is a stratum with transversal self-intersection. (Because of Lemma 5.27, in this paper we will never need to blow up any strata of the form \((G,E,D)\) with \(E\neq\varnothing\).) The category \(\operatorname{Bl}_{(G,D)}\mathfrak{C}_{g,n}(\phi)\) defined above stratifies the blowup \(\operatorname{Bl}_{\mathcal{J}^{\prime}_{G,D}}\overline{\mathcal{J}}_{g,n}^{d}(\phi)\). We can describe its objects more explicitly as tuples \((G^{\prime},E^{\prime},D^{\prime},\alpha)\) such that \((G^{\prime},(E^{\prime},D^{\prime}))\in\operatorname{Obj}(\mathfrak{C}_{g,n}(\phi))\) and \(\alpha\) is a choice, for each morphism \(f\colon(G^{\prime},D^{\prime})\to(G,D)\) (up to automorphisms of \((G,D)\)), of a subset \(\varnothing\neq\alpha(\operatorname{Aut}(G,D)f)\subseteq E(T_{f})\). We now define the psi-classes associated to a given stratum \(\gamma\in\mathfrak{C}\). Recall that each \(e\in S_{\gamma}\) corresponds to a morphism \(j_{e}\colon\gamma\to\beta_{e}\), where the latter has rank \(1\). Then define the psi-classes \[\Psi_{\beta_{e}}:=-c_{1}(N_{X_{\beta_{e}}}X_{\bullet}),\qquad\psi_{\gamma,e}:=j_{e}^{*}\Psi_{\beta_{e}}=-c_{1}(\mathbb{L}_{e}) \tag{4.34}\] (see Item (3) at the beginning of Section 4.b for \(\mathbb{L}_{e}\)). 
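For instance, in the case \(\mathfrak{C}=G_{g,n}\) of Example 4.11 (a classical fact, recalled here only to motivate the name): if \(e\) is an edge with half-edges \(h,h^{\prime}\), the fiber of \(\mathbb{L}_{e}\) is the tensor product of the tangent lines of the two branches at the corresponding node, so that 
\[\psi_{\gamma,e}=-c_{1}(\mathbb{L}_{e})=\psi_{h}+\psi_{h^{\prime}},\] 
the sum of the cotangent (psi) classes at the two branches of the node. 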
We will now state and prove a pushforward formula for monomials in psi-classes under the blowdown morphism. We begin by introducing some notation. Recall Remark 4.29. For an object \((\gamma,\mathbf{n})\) in \(\operatorname{Bl}_{\delta}\mathfrak{C}\) we define the sets \[S_{\gamma\setminus\delta} :=S_{\gamma}\setminus\bigcup_{j\colon\gamma\to\delta}j^{*}(S_{ \delta}),\] \[\operatorname{FU}_{\delta}(\gamma,\mathbf{n}) :=\bigcup_{j\colon\gamma\to\delta}j^{*}(S_{\delta})\setminus \mathbf{n}(\overline{j}),\] \[\operatorname{CU}_{\delta}(\gamma,\mathbf{n}) :=\bigcup_{j\colon\gamma\to\delta}\mathbf{n}(\overline{j}).\] (The symbols \(\operatorname{FU}\) and \(\operatorname{CU}\) will acquire some meaning in Section 7 as certain collections of edges; see Equation (7.19).) Note that the unions can equivalently be taken over \(\overline{\operatorname{Mor}}(\gamma,\delta)\) instead of over all morphisms (see Remark 4.7). We define \(H^{\delta}_{\gamma,\mathbf{n}}((g^{\prime}_{e^{\prime}})_{e^{\prime}\in S_{\gamma, \mathbf{n}}})\) as the set of tuples \(((a_{e})_{e\in S_{\gamma}},(g_{e})_{e\in S_{\gamma}})\) of non-negative integers satisfying \(a_{e}=0\) for every \(e\notin\operatorname{FU}_{\delta}(\gamma,\mathbf{n})\), \[\sum_{e\in\mathbf{n}(\overline{j})}(g_{e}+1)=g^{\prime}_{\overline{j}}+1+\sum_ {e\in j^{*}(S_{\delta})\setminus\mathbf{n}(\overline{j})}a_{e}\] for every \(\overline{j}\in\overline{\operatorname{Mor}}(\gamma,\delta)\subseteq S_{ \gamma,\mathbf{n}}\), and \(g_{e}=g^{\prime}_{e}\) for every \(e\in S_{\gamma}\setminus\operatorname{CU}_{\delta}(\gamma,\mathbf{n})\subseteq S _{\gamma,\mathbf{n}}\). For a morphism \(h\colon(\gamma,\mathbf{n})\to(\beta,\mathbf{m})\) and a tuple \((g^{\prime}_{e^{\prime}})_{e^{\prime}\in S_{\beta,\mathbf{m}}}\) we define \(h^{*}(g^{\prime}_{e^{\prime}})\) as the tuple \[(h^{*}(g^{\prime}_{e^{\prime}}))_{\widetilde{e}}:=\begin{cases}g^{\prime}_{e^ {\prime}}&\text{ if }\widetilde{e}=h^{*}(e^{\prime})\text{ for some }e^{\prime}\\ -1&\text{ if }\widetilde{e}\in S_{\gamma,\mathbf{n}}\setminus h^{*}(S_{\beta, \mathbf{m}}).\end{cases}\] We define \(M_{\delta}(\gamma)\) to be the set of all functions \(\mathbf{m}\colon\overline{\operatorname{Mor}}(\gamma,\delta)\to\mathcal{P}(S _{\gamma})\) satisfying Equation (4.27). **Corollary 4.35**.: _Let \(p\colon\operatorname{Bl}_{X^{\prime}_{\delta}}X_{\bullet}\to X_{\bullet}\) be the blowdown morphism, and fix integers \((g^{\prime}_{e^{\prime}}\geq 0)_{e^{\prime}\in S_{\beta,\mathbf{m}}}\). Then the pushforward_ \[p_{*}\frac{f_{(\beta,\mathbf{m})*}}{|\operatorname{Aut}(\beta,\mathbf{m})|} \Big{(}\prod_{e^{\prime}\in S_{\beta,\mathbf{m}}}\Psi^{g^{\prime}_{e^{\prime}}}_{e^{ \prime}}\Big{)}\] _equals_ \[\sum_{\gamma\in\mathfrak{C}}\frac{f_{\gamma*}}{|\operatorname{Aut}(\gamma)|} \Bigg{(}\sum_{\begin{subarray}{c}\mathbf{n}\in M_{\delta}(\gamma)\\ h\in\overline{\operatorname{Mor}}((\gamma,\mathbf{n}),(\beta,\mathbf{m}))\end{subarray} }\sum_{\begin{subarray}{c}((a_{e}),(g_{e}))\in\\ H^{\delta}_{\gamma,\mathbf{n}}(h^{*}(g^{\prime}_{e^{\prime}}))\end{subarray}}\prod_{e\in S_{\gamma}}(- 1)^{a_{e}}\binom{g_{e}}{a_{e}}\Psi^{g_{e}-a_{e}}_{e}\Bigg{)}.\] Proof.: This follows from [1, Theorem 4.8] (or [1, Theorem 4.2]). 
## 5. Combinatorial aspects of Wall-Crossing 
## 5. Combinatorial aspects of Wall-Crossing

In this section we fix two stability conditions \(\phi^{+}\) and \(\phi^{-}\) "on opposite sides of a stability hyperplane \(H\)" (Definition 5.1), and give a description of what will turn out to be the stratification category \(\widetilde{\mathfrak{C}}=\widetilde{\mathfrak{C}}(\phi^{+},\phi^{-})\) of the resolution \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\), which we will formally construct in the next section. Objects of this category are defined in Definition 5.28 as triples \((G,D,\alpha)\) where \((G,D)\) is an object of \(\mathfrak{C}_{g,n}(\phi^{+})\) and \(\alpha\) is a certain "vine function" to be introduced in Definition 5.13. This generalizes the case of a single blowup, described in Example 4.33. Then we study the subcategory \(\widetilde{\mathfrak{C}}_{E}=\widetilde{\mathfrak{C}}_{E}(\phi^{+},\phi^{-})\) of the strata that are in the intersection of the exceptional divisors -- this is the category appearing in our main result Theorem 7.4. The condition that singles out objects of \(\widetilde{\mathfrak{C}}_{E}\subset\widetilde{\mathfrak{C}}\) is that the function \(\alpha\) should be _full_, as defined in Definition 5.15. We then see in Proposition 5.21 that the datum of a full vine function is equivalent to that of a full forest, and this gives a simpler description of the objects of \(\widetilde{\mathfrak{C}}_{E}\), which is the one that we will use in Theorem 7.4.

Recall from 3.b that there are 3 types of hyperplanes. If \(H\) is as in (3.12), then no blowup is required and \(\mathfrak{C}_{g,n}(\phi)=\widetilde{\mathfrak{C}}\). The second case is when \(H\) is as in (3.13) and \(S\neq[n]\). That case will be discussed in 5.c. The most difficult case is when \(H\) is as in (3.13) and \(S=[n]\).

**Definition 5.1**.: Let \(\phi^{+},\phi^{-}\in V^{d}_{g,n}\) be nondegenerate, and let \(H\) be a stability hyperplane (see Section 3.b). We say that _the polarizations \(\phi^{\pm}\) are on opposite sides of the hyperplane \(H\)_ if \(\phi^{0}=\frac{\phi^{-}+\phi^{+}}{2}\in H\) is the only degenerate point of the segment \([\phi^{+},\phi^{-}]\subset V^{d}_{g,n}\). In other words, semistability with respect to \(\phi^{0}\in H\) implies \(\phi^{+}\)- or \(\phi^{-}\)-stability, and \(\phi^{\pm}\) are small perturbations of \(\phi^{0}\).

Throughout we fix \(H\), \(\phi^{\pm}\) and \(\phi^{0}\) as in the above definition.

### Extremal sets, vine functions and full forests

In this section we prove the bulk of our combinatorial results that have to do with wall-crossing, focusing on the "extremal" multidegrees, i.e. those multidegrees that are \(\phi^{+}\)-stable but are not \(\phi^{-}\)-stable. _We shall fix an \(n\)-pointed graph \(G\) of genus \(g\) and a divisor \(D\) on \(G\) throughout_. (We are not imposing stability conditions here, but the cases we are interested in are either \(G=G^{\prime E^{\prime}}\) for some \(E^{\prime}\subseteq E(G^{\prime})\) where \(G^{\prime}\) is stable, or \(G\) is obtained by forgetting the last marking on a stable \((n+1)\)-pointed graph.) For a subset \(V\subseteq V(G)\) we define6

Footnote 6: We note that the existing definitions of \(\beta(V)\) in [10] and [1] are based on a different sign convention, however all relevant properties remain the same.

\[\beta^{\star}(V):=-\deg(D|_{V})+\phi^{\star}(V)+\frac{|E(V,V^{c})|}{2},\ \ \ \ \ \text{for}\ \star=+,-,0.\]

Note that \(D\) is \(\phi^{\star}\)-semistable (Definition 3.17) if and only if \(\beta^{\star}(V)\geq 0\) for every \(V\subset V(G)\).
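To fix ideas, here is the simplest instance of these quantities (the toy example, with its notation \(d_{1},d_{2}\), is ours). Let \(G\) be a vine curve with two vertices \(v_{1},v_{2}\) joined by \(t\) edges, and let \(D\) have bidegree \((d_{1},d_{2})\) with \(d_{1}+d_{2}=d\). Then

\[\beta^{\star}(\{v_{1}\})=-d_{1}+\phi^{\star}(v_{1})+\frac{t}{2},\]

and, using \(\phi^{\star}(v_{1})+\phi^{\star}(v_{2})=d\), the two semistability inequalities \(\beta^{\star}(\{v_{1}\})\geq 0\) and \(\beta^{\star}(\{v_{2}\})\geq 0\) together read

\[\phi^{\star}(v_{1})-\frac{t}{2}\leq d_{1}\leq\phi^{\star}(v_{1})+\frac{t}{2},\]

confining \(d_{1}\) to an interval of length \(t\).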
Moreover, we have the following relation for \(\beta^{\star}\) (see [1, Lemma 4.1])

\[\beta^{\star}(V)+\beta^{\star}(W)-|E(V\setminus W,W\setminus V)|=\beta^{\star}(V\cap W)+\beta^{\star}(V\cup W). \tag{5.2}\]

From now on in this section we will assume that \(D\) is \(\phi^{+}\)-semistable on \(G\). A subset \(V\subsetneq V(G)\) is called _extremal_ (with respect to \(\phi^{+},\phi^{-}\) and \(D\)) if

\[\beta^{+}(V)>0\ \text{and}\ \beta^{-}(V)<0. \tag{5.3}\]

In particular, this implies

\[\phi^{+}(V)>\phi^{-}(V).\]

Note that if \(V\) is extremal, then \(\beta^{0}(V)=0\). We are now ready to define the main object of study in this section.

**Definition 5.4**.: We define the poset \(\operatorname{Ext}(G,D)=\operatorname{Ext}_{\phi^{+},\phi^{-}}(G,D)\) as

\[\{V\subseteq V(G);V\text{ is extremal, connected and with connected complement}\}\]

with the ordering given by inclusion.

**Remark 5.5**.: If \(\iota\colon(G,D)\to(G^{\prime},D^{\prime})\) is a specialization and \(V^{\prime}\) is in \(\operatorname{Ext}(G^{\prime},D^{\prime})\), we have that \(\iota^{-1}(V^{\prime})\) is in \(\operatorname{Ext}(G,D)\).

**Remark 5.6**.: Each element \(V\) of \(\operatorname{Ext}(G,D)\) corresponds to a morphism from \(G\) to an extremal vine curve stratum \((G^{\prime},D^{\prime})\) (up to automorphisms of \((G^{\prime},D^{\prime})\)) obtained by contracting \(E(V,V)\) and \(E(V^{c},V^{c})\).

We have the following results for extremal subsets.

**Proposition 5.7**.: _Let \(V_{1}\) and \(V_{2}\) be two extremal subsets. Then \(V_{1}\cap V_{2}\) and \(V_{1}\cup V_{2}\) are either empty, extremal or equal to \(V(G)\). Moreover, we have that \(E(V_{1}\setminus V_{2},V_{2}\setminus V_{1})=\varnothing\) in all cases._

Proof.: Write \(H_{0}=V_{1}\cap V_{2}\), \(H_{1}=V_{1}\setminus H_{0}\), \(H_{2}=V_{2}\setminus H_{0}\) and \(H_{3}=V_{1}^{c}\cap V_{2}^{c}\) (see Figure 1). Define \(\alpha=|E(H_{1},H_{2})|=|E(V_{1}\setminus V_{2},V_{2}\setminus V_{1})|\). Since \(H_{0}\cup H_{1}=V_{1}\) and \(H_{0}\cup H_{2}=V_{2}\) are extremal, we have that \(\beta^{0}(H_{0}\cup H_{1})=\beta^{0}(H_{0}\cup H_{2})=0\). By (5.2) we have

\[\beta^{0}(H_{0}\cup H_{1})+\beta^{0}(H_{0}\cup H_{2})-\alpha=\beta^{0}(H_{0})+\beta^{0}(H_{0}\cup H_{1}\cup H_{2}),\]

and, because \(\beta^{0}(H)\geq 0\) for every \(H\), we deduce that \(\alpha=0\), and \(\beta^{0}(H_{0})=\beta^{0}(H_{0}\cup H_{1}\cup H_{2})=0\). If \(H_{0}\neq\varnothing\), then \(\beta^{+}(H_{0})>0\) and if \(H_{0}\cup H_{1}\cup H_{2}\neq V(G)\), then \(\beta^{+}(H_{0}\cup H_{1}\cup H_{2})>0\). Since \(\beta^{0}=\frac{\beta^{+}+\beta^{-}}{2}\), we have that \(\beta^{-}(H_{0})<0\) (respectively, \(\beta^{-}(H_{0}\cup H_{1}\cup H_{2})<0\)) if \(H_{0}\neq\varnothing\) (respectively, if \(H_{0}\cup H_{1}\cup H_{2}\neq V(G)\)). This finishes the proof.

**Proposition 5.8**.: _Let \(V\) be an extremal set._

_If \(V=V_{1}\sqcup V_{2}\) with \(E(V_{1},V_{2})=\varnothing\) and \(V_{1},V_{2}\neq\varnothing\), then \(V_{1}\) and \(V_{2}\) are extremal. Similarly, if \(V^{c}=W_{1}\sqcup W_{2}\) with \(E(W_{1},W_{2})=\varnothing\) and \(W_{1},W_{2}\neq\varnothing\), then \(W_{1}^{c}\) and \(W_{2}^{c}\) are extremal._

Figure 1.

Proof.: For the first part, we have that \(\beta^{0}(V)=\beta^{0}(V_{1})+\beta^{0}(V_{2})\); since \(\beta^{0}(V)=0\), then \(\beta^{0}(V_{1})=\beta^{0}(V_{2})=0\) as well. Since \(\beta^{+}(V_{1}),\beta^{+}(V_{2})\geq 0\) cannot vanish (as \(\phi^{+}\) is nondegenerate), we get \(\beta^{+}(V_{1}),\beta^{+}(V_{2})>0\) and hence \(\beta^{-}(V_{1}),\beta^{-}(V_{2})<0\). The second part is proven similarly.

In what follows we will also need an additional hypothesis.
**Hypothesis 1**.: If \(V\subseteq V(G)\) is extremal, then \(\operatorname{leg}(1)\in V\).

In particular, by Proposition 5.8, we have that if \(V\subseteq V(G)\) is extremal, then \(G(V)\) is connected. From now on in this section we will always assume Hypothesis 1. Hypothesis 1 fixes the following convention on \(\phi^{+},\phi^{-}\):

**Remark 5.9**.: If \(V\) is an extremal set in \(\operatorname{Ext}(G,D)\), then \(\phi^{+}(V)>\phi^{-}(V)\). In particular, if we set \(S:=\operatorname{leg}^{-1}(V)\), \(i:=g(V)\) and \(t:=|E(V,V^{c})|\), then

\[x_{i,t,S}^{+}>x_{i,t,S}^{-},\]

where \(\phi^{\pm}=(x_{i,t,S}^{\pm})_{(i,t,S)}\in V_{g,n}^{d}\) are the fixed coordinates discussed in Section 3.b. Also, since \(\phi^{+}\) and \(\phi^{-}\) are on opposite sides of a hyperplane \(H=H(i_{0},t_{0},S_{0})\), Hypothesis 1 is always satisfied upon possibly switching \(\phi^{+}\) and \(\phi^{-}\).

**Remark 5.10**.: The results in this section hold more generally for when \(\phi^{+}\) and \(\phi^{-}\) lie on opposite sides of a higher codimension stability plane (not necessarily a hyperplane, as in Definition 5.1), in which case Hypothesis 1 becomes restrictive.

Here are some important properties of \(\operatorname{Ext}(G,D)\) that follow from Hypothesis 1:

**Corollary 5.11**.: _Let \(V_{1},V_{2}\in\operatorname{Ext}(G,D)\), then either \(V_{1}\cup V_{2}=V(G)\) or there exists \(V\in\operatorname{Ext}(G,D)\) such that \(V_{1}\cup V_{2}\subseteq V\)._

Proof.: Assume that \(V_{1}\cup V_{2}\neq V(G)\). Then, by Proposition 5.7, we have that \(V_{1}\cup V_{2}\) is extremal. By Hypothesis 1, we have that \(V_{1}\cup V_{2}\) is also connected. Let \(W\) be a connected component of \((V_{1}\cup V_{2})^{c}\). By Proposition 5.8, we have that \(W^{c}\) is extremal. Moreover, since \(V_{1}\cup V_{2}\) is connected (and \(G\) is connected), so is \(W^{c}\). This proves that \(W^{c}\in\operatorname{Ext}(G,D)\) and \(V_{1}\cup V_{2}\subseteq W^{c}\).

**Corollary 5.12**.: _Let \(V_{1},V_{2}\) be elements of \(\operatorname{Ext}(G,D)\) such that \(V_{1},V_{2}\subseteq V\) for some \(V\in\operatorname{Ext}(G,D)\). Then \(V_{1}\cap V_{2}\in\operatorname{Ext}(G,D)\)._

Proof.: By Proposition 5.7, we have that \(V_{1}\cap V_{2}\) is extremal, and by Hypothesis 1 we have that \(V_{1}\cap V_{2}\) is nonempty and connected. All that is left is to prove that \((V_{1}\cap V_{2})^{c}=V^{c}\cup(V\setminus V_{1})\cup(V\setminus V_{2})\) is connected. But this follows from the fact that \(V^{c}\), \(V_{1}^{c}=V^{c}\cup(V\setminus V_{1})\) and \(V_{2}^{c}=V^{c}\cup(V\setminus V_{2})\) are connected.

We are now ready to introduce a key notion to describe the blowup category of \(\mathfrak{C}_{g,n}(\phi)\) at some vine curve strata.

**Definition 5.13**.: For each \((G,D)\) and lower set \(L\subseteq\operatorname{Ext}(G,D)\), we say that a function \(\alpha\colon L\to\mathcal{P}(E(G))\) is a _vine function_ if the following conditions hold

1. \(\alpha(V)\subseteq E(V,V^{c})\) for every \(V\in L\).
2. For all \(V\in L\) we have \(\alpha(V)=\varnothing\) if and only if there exists \(V^{\prime}\subsetneq V\) with \(V^{\prime}\in\operatorname{Ext}(G,D)\) such that \(\alpha(V^{\prime})\cap E(V,V^{c})\neq\varnothing\).

We regard \(L\) as part of the datum of \(\alpha\), and write \(L_{\alpha}\) for the domain of a vine function \(\alpha\). We also define \(|\alpha|=\bigcup_{V\in L_{\alpha}}\alpha(V)\subseteq E(G)\).
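As a small illustration of Definition 5.13 (the example is ours): suppose that \(\operatorname{Ext}(G,D)=\{V_{1}\subsetneq V_{2}\}\) is a chain with two elements and take \(L=\operatorname{Ext}(G,D)\). Condition (2) applied to the minimal element \(V_{1}\) (which admits no proper extremal subset) forces \(\alpha(V_{1})\neq\varnothing\), while applied to \(V_{2}\) it reads

\[\alpha(V_{2})=\varnothing\iff\alpha(V_{1})\cap E(V_{2},V_{2}^{c})\neq\varnothing.\]

Hence \(\alpha\) amounts to a choice of \(\varnothing\neq\alpha(V_{1})\subseteq E(V_{1},V_{1}^{c})\), together with a further nonempty choice \(\alpha(V_{2})\subseteq E(V_{2},V_{2}^{c})\) exactly when \(\alpha(V_{1})\) misses \(E(V_{2},V_{2}^{c})\).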
**Definition 5.14**.: Given a specialization \(\iota\colon(G,D)\to(G^{\prime},D^{\prime})\) we say that the vine functions \(\alpha\) and \(\alpha^{\prime}\) are _compatible with_ \(\iota\) if

1. \(\iota^{-1}(V^{\prime})\in L_{\alpha}\) for every \(V^{\prime}\in L_{\alpha^{\prime}}\).
2. if \(\alpha^{\prime}(V^{\prime})\neq\varnothing\), then \(\alpha(\iota^{-1}(V^{\prime}))\neq\varnothing\).
3. if \(e^{\prime}\notin|\alpha^{\prime}|\), then \(\iota_{E}(e^{\prime})\notin|\alpha|\).

In that case we write \(\iota\colon(G,D,\alpha)\to(G^{\prime},D^{\prime},\alpha^{\prime})\) and say that the first triple specializes to the second.

We will also need to introduce some subcategories of the stratification category of \(\mathfrak{C}_{g,n}(\phi)\), which will use only some vine functions, which we call "full". We now introduce those, and then discuss how this notion is equivalent to the combinatorial notion of a full forest.

**Definition 5.15**.: We say that \(\alpha\) is _full_ if \(L_{\alpha}=\operatorname{Ext}(G,D)\) and \(|\alpha|=E(G)\).

We will show in Proposition 5.21 how full vine functions are equivalent to the following notion.

**Definition 5.16**.: A forest \(V_{\bullet}\subseteq\operatorname{Ext}(G,D)\) is a _full forest_ in \(\operatorname{Ext}(G,D)\) if

1. it contains all maximal elements of \(\operatorname{Ext}(G,D)\), and
2. the edge set satisfies \(E(G)=\bigcup_{V\in V_{\bullet}}E(V,V^{c})\).

We first prove some intermediate results in that direction. For a forest \(V_{\bullet}\subseteq\operatorname{Ext}(G,D)\), and for each \(V^{\prime}\in\operatorname{Ext}(G,D)\), we define

\[\operatorname{next}(V^{\prime})=\operatorname{next}_{V_{\bullet}}(V^{\prime}):=\bigcap_{V^{\prime}\subsetneq V\in V_{\bullet}}V \tag{5.17}\]

(with the usual convention that the intersection over the empty set equals \(V(G)\)).

**Lemma 5.18**.: _Let \(V_{\bullet}\subseteq\operatorname{Ext}(G,D)\) be a full forest and let \(V_{1}\) and \(V_{2}\) be two incomparable elements in \(V_{\bullet}\). Then \(V_{1}\cup V_{2}=V(G)\) and \(E(V_{1}^{c},V_{2}^{c})=\varnothing\)._

Proof.: By Corollary 5.11 we have that either \(V_{1}\cup V_{2}=V(G)\), or there exists \(V\in\operatorname{Ext}(G,D)\) such that \(V_{1},V_{2}\subseteq V\). If the latter holds, Part (1) of Definition 5.16 implies that \(V_{1}\) and \(V_{2}\) are comparable, a contradiction. So \(V_{1}\cup V_{2}=V(G)\). The fact that \(E(V_{1}^{c},V_{2}^{c})=\varnothing\) follows from Proposition 5.7.

**Proposition 5.19**.: _Let \(V_{\bullet}\subseteq\operatorname{Ext}(G,D)\) be a full forest. Let \(V^{\prime}\in\operatorname{Ext}(G,D)\) be a nonmaximal element and let \(V_{1},\ldots,V_{m}\) be the elements of \(V_{\bullet}\) that are minimal among those containing \(V^{\prime}\). Then:_

1. _For every_ \(i=1,\ldots,m\)_,_ \[E(\operatorname{next}(V^{\prime})\setminus V^{\prime},V_{i}^{c})\neq\varnothing.\]
2. \(\operatorname{next}(V^{\prime})\neq V^{\prime}\)_._
3. \(G(\operatorname{next}(V^{\prime}))\) _is connected._
4. _If_ \(V^{\prime}\in V_{\bullet}\)_, then_ \(E(\operatorname{next}(V^{\prime})\setminus V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})=\varnothing\)_._
5. _If_ \(V\in\operatorname{Ext}(G,D)\) _is such that_ \(\operatorname{next}(V^{\prime})\subseteq V\)_, then there exists_ \(i\in\{1,\ldots,m\}\) _such that_ \(V_{i}\subseteq V\)_._
6. _All minimal elements of_ \(\operatorname{Ext}(G,D)\) _belong to_ \(V_{\bullet}\)_._
7.
_We have_ \(E(G)=\bigsqcup_{V\in V_{\bullet}}E(V,\operatorname{next}(V)\setminus V)\)_._

Proof.: For (1), we first notice that \(V_{i},V_{j}\) are incomparable if \(i\neq j\), because of the minimality condition. By Lemma 5.18, we have that \(E(V_{i}^{c},V_{j}^{c})=\varnothing\) for all \(i\neq j\). Since \(V^{\prime c}=(\operatorname{next}(V^{\prime})\setminus V^{\prime})\cup\bigcup_{i=1}^{m}V_{i}^{c}\) and \(V^{\prime c}\) is connected, we must have that \(E(\operatorname{next}(V^{\prime})\setminus V^{\prime},V_{i}^{c})\neq\varnothing\).

Item (2) follows immediately from (1).

For (3), just notice that \(\operatorname{next}(V^{\prime})\) is an extremal element by Proposition 5.7, then it is connected by Hypothesis 1.

For (4), the existence of an edge between vertices of \(\operatorname{next}(V^{\prime})\setminus V^{\prime}\) would witness the failure of \(V_{\bullet}\) to be full. Indeed, let \(V\in V_{\bullet}\) be an element. If \(V\) is incomparable with \(V^{\prime}\), by Lemma 5.18, we have that \(V\cup V^{\prime}=V(G)\), and that implies that \(\operatorname{next}(V^{\prime})\setminus V^{\prime}\subseteq V\) and hence \(E(\operatorname{next}(V^{\prime})\setminus V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})\cap E(V,V^{c})=\varnothing\). If \(V^{\prime}\subseteq V\), then \(V_{i}\subseteq V\) for some \(i\), and hence \(\operatorname{next}(V^{\prime})\subseteq V\), which implies that \(E(\operatorname{next}(V^{\prime})\setminus V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})\cap E(V,V^{c})=\varnothing\). If \(V\subset V^{\prime}\), then it is clear that \(E(\operatorname{next}(V^{\prime})\setminus V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})\cap E(V,V^{c})=\varnothing\). Since \(V_{\bullet}\) is a full forest, we have \(E(G)=\bigcup_{V\in V_{\bullet}}E(V,V^{c})\), so \(E(\operatorname{next}(V^{\prime})\setminus V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})=\varnothing\), and this completes the proof of Item (4).

Item (5). This follows from the fact that \(V(G)=\operatorname{next}(V^{\prime})\cup\bigcup_{j=1}^{m}V_{j}^{c}\) and hence \(V^{c}=\bigcup_{j=1}^{m}V^{c}\cap V_{j}^{c}\). Since \(G(V^{c})\) is connected and \(E(V_{i}^{c},V_{j}^{c})=\varnothing\) for \(j\neq i\) (Lemma 5.18), we have that there exists \(i\in\{1,\ldots,m\}\) such that \(V^{c}\cap V_{j}^{c}=\varnothing\) for every \(j\neq i\). This means that \(V_{i}\subseteq V\).

Item (6). Let \(V_{0}\) be a minimal element of \(\operatorname{Ext}(G,D)\). If \(V_{0}\) is also maximal, there is nothing to do. So we can assume that \(V_{0}\) is nonmaximal. Assume by contradiction that \(V_{0}\notin V_{\bullet}\). From Items (2) and (3) we deduce that \(E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})\neq\varnothing\). Let \(V\in V_{\bullet}\); we will prove that \(E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})\cap E(V,V^{c})=\varnothing\), thus getting a contradiction with fullness. If \(V_{0}\subseteq V\), then \(\operatorname{next}(V_{0})\subseteq V\) (recall that \(V_{0}\neq V\) because \(V_{0}\notin V_{\bullet}\)), so \(E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})\cap E(V,V^{c})=\varnothing\). If \(V_{0}\not\subseteq V\), we have that \(V_{0}\cap V\subsetneq V_{0}\) is extremal by Proposition 5.7 (it is nonempty since \(\operatorname{leg}(1)\in V_{0}\cap V\) by Hypothesis 1); by the minimality of \(V_{0}\) in \(\operatorname{Ext}(G,D)\) we have that \((V_{0}\cap V)^{c}=V_{0}^{c}\cup V^{c}\) is not connected. Since \(V_{0}^{c},V^{c}\) are connected, we have that \(V_{0}^{c}\cap V^{c}=\varnothing\) and hence \(V_{0}\cup V=V(G)\).
By Proposition 5.7 we have that \(E(V_{0}^{c},V^{c})=\varnothing\). Hence \(E(V_{0},V_{0}^{c})\cap E(V,V^{c})=\varnothing\); indeed \(E(V_{0},V_{0}^{c})=E(V_{0}\cap V,V_{0}^{c})\) and \(E(V,V^{c})=E(V_{0}\cap V,V^{c})\). In particular \(E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})\cap E(V,V^{c})=\varnothing\).

Item (7). Let \(e\) be an edge of \(G\) and let \(V^{\prime}\) be an element in \(V_{\bullet}\) that is maximal among those with the property that \(e\in E(V^{\prime},V^{\prime c})\) (such a \(V^{\prime}\) exists because \(V_{\bullet}\) is full). For each \(V^{\prime}\subsetneq V\in V_{\bullet}\), we must have that \(e\in E(V,V)\), so we have that \(e\in E(V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})\).

**Proposition 5.20**.: _Let \(\alpha\) be a vine function and define_

\[V_{\bullet}:=\{V\in\operatorname{Ext}(G,D):\ \alpha(V)\neq\varnothing\}\subseteq\operatorname{Ext}(G,D).\]

_Then the following hold._

1. _For every_ \(V_{0}\in\operatorname{Ext}(G,D)\)_, we have that_ \(V_{\bullet}\cap\operatorname{Ext}(G,D)_{\subseteq V_{0}}\) _is a chain._
2. _The poset_ \(V_{\bullet}\) _is a forest._
3. \(\alpha(V)\subseteq E(V,\operatorname{next}(V)\setminus V)\) _for every_ \(V\in V_{\bullet}\)_._

Proof.: Let \(V_{1},V_{2}\in V_{\bullet}\) be such that \(V_{1},V_{2}\subseteq V_{0}\) and \(V_{1}\) and \(V_{2}\) are incomparable. By Corollary 5.12, we have that \(V_{1}\cap V_{2}\in\operatorname{Ext}(G,D)\). Moreover, we have that

\[E(V_{1}\cap V_{2},(V_{1}\cap V_{2})^{c})\subseteq E(V_{1},V_{1}^{c})\cup E(V_{2},V_{2}^{c}).\]

By the fact that \(\alpha\) is a vine function, we have that there exists \(V^{\prime}\subseteq V_{1}\cap V_{2}\) such that \(\alpha(V^{\prime})\cap E(V_{1}\cap V_{2},(V_{1}\cap V_{2})^{c})\neq\varnothing\). This means that either \(\alpha(V^{\prime})\cap E(V_{1},V_{1}^{c})\neq\varnothing\), or \(\alpha(V^{\prime})\cap E(V_{2},V_{2}^{c})\neq\varnothing\), which contradicts the fact that \(\alpha\) is a vine function and \(\alpha(V_{1}),\alpha(V_{2})\neq\varnothing\). This concludes the proof of Item (1). Item (2) follows directly.

We now prove (3). If \(\operatorname{next}(V)=V(G)\) there is nothing to prove. Otherwise, we have

\[E(V,V^{c})\setminus E(V,\operatorname{next}(V)\setminus V)\subseteq\bigcup_{V\subsetneq V^{\prime}\in V_{\bullet}}E(V^{\prime},V^{\prime c}),\]

so \(\alpha(V)\cap(E(V,V^{c})\setminus E(V,\operatorname{next}(V)\setminus V))\neq\varnothing\) would imply \(\alpha(V)\cap E(V^{\prime},V^{\prime c})\neq\varnothing\) for some \(V^{\prime}\in V_{\bullet}\) with \(V\subsetneq V^{\prime}\), thus contradicting the assumption that \(\alpha\) is a vine function.

We are now ready to prove the equivalence of full vine functions and full forests.

**Proposition 5.21**.: _For each \((G,D)\), the mapping defined in Proposition 5.20 induces a natural bijection between full vine functions and full forests in \(\operatorname{Ext}(G,D)\)._

Proof.: Assume that \(\alpha\) is a full vine function and define \(V_{\bullet}=V_{\bullet}^{\alpha}\) as in Proposition 5.20. By _loc. cit._ we have that \(V_{\bullet}\) is a forest and that \(\alpha(V)\subseteq E(V,\operatorname{next}(V)\setminus V)\) for every \(V\). The condition that \(\bigcup_{V\in V_{\bullet}}\alpha(V)=\bigcup_{V\in\operatorname{Ext}(G,D)}\alpha(V)=E(G)\) implies that \(\bigcup E(V,\operatorname{next}(V)\setminus V)=E(G)\), which implies \(\bigcup E(V,V^{c})=E(G)\). This proves that \(V_{\bullet}\) is a full forest. We now study the inverse mapping.
For a full forest \(V_{\bullet}\), define

\[\alpha_{V_{\bullet}}(V_{0}):=\begin{cases}\varnothing&\text{if }V_{0}\notin V_{\bullet};\\ E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})&\text{if }V_{0}\in V_{\bullet},\end{cases}\]

and set \(\alpha:=\alpha_{V_{\bullet}}\). We claim that \(\alpha\) is a vine function. Condition (1) in Definition 5.13 follows from the fact that \(E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})\subseteq E(V_{0},V_{0}^{c})\) for every \(V_{0}\in V_{\bullet}\). Let us prove Condition (2).

First, we see that \(\alpha_{V_{\bullet}}(V_{0})=\varnothing\) if and only if \(V_{0}\notin V_{\bullet}\). By the definition of \(\alpha\) it is clear that if \(V_{0}\notin V_{\bullet}\), then \(\alpha(V_{0})=\varnothing\). On the other hand, if \(\alpha(V_{0})=\varnothing\) and \(V_{0}\in V_{\bullet}\), then \(E(V_{0},\operatorname{next}(V_{0})\setminus V_{0})=\varnothing\), but this contradicts the fact that \(\operatorname{next}(V_{0})\) induces a connected subgraph of \(G\) and \(\operatorname{next}(V_{0})\setminus V_{0}\neq\varnothing\) (see Proposition 5.19).

Now we show that if \(\alpha(V_{0})=\varnothing\), then we can find \(V^{\prime}\in\operatorname{Ext}(G,D)\) with \(V^{\prime}\subsetneq V_{0}\), such that \(\alpha(V^{\prime})\cap E(V_{0},V_{0}^{c})\neq\varnothing\). By the previous paragraph, we have that \(V_{0}\notin V_{\bullet}\). Since \(V_{\bullet}\) contains all minimal elements of \(\operatorname{Ext}(G,D)\) (by Proposition 5.19), we have that there exists \(V\in V_{\bullet}\) contained in \(V_{0}\). Let \(V^{\prime}\) be the maximum such element; this maximum exists because, by Lemma 5.18, the elements of \(V_{\bullet}\) contained in \(V_{0}\) form a chain. By Item (5) of Proposition 5.19 and the maximality of \(V^{\prime}\), we have that \(\operatorname{next}(V^{\prime})\not\subseteq V_{0}\). Moreover, by Item (4) of Proposition 5.19 we have that \(E(\operatorname{next}(V^{\prime})\setminus V_{0},\operatorname{next}(V^{\prime})\cap V_{0}\setminus V^{\prime})=\varnothing\). Since \(\operatorname{next}(V^{\prime})\) induces a connected subgraph (this is Item (3) of Proposition 5.19), we have that \(E(V^{\prime},\operatorname{next}(V^{\prime})\setminus V_{0})\neq\varnothing\), and since

\[E(V^{\prime},\operatorname{next}(V^{\prime})\setminus V_{0})\subseteq E(V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})\cap E(V_{0},V_{0}^{c})=\alpha(V^{\prime})\cap E(V_{0},V_{0}^{c}),\]

we have that \(\alpha(V^{\prime})\cap E(V_{0},V_{0}^{c})\neq\varnothing\) as needed.

We now prove that if there exists \(V^{\prime}\in\operatorname{Ext}(G,D)\) with \(V^{\prime}\subsetneq V_{0}\) such that \(\alpha(V^{\prime})\cap E(V_{0},V_{0}^{c})\neq\varnothing\), then \(\alpha(V_{0})=\varnothing\). Assume by contradiction that \(\alpha(V_{0})\neq\varnothing\) and that there exists \(V^{\prime}\subsetneq V_{0}\) with \(\alpha(V^{\prime})\cap E(V_{0},V_{0}^{c})\neq\varnothing\). Since \(\alpha(V_{0}),\alpha(V^{\prime})\neq\varnothing\), we have that \(V_{0},V^{\prime}\in V_{\bullet}\). Since \(V^{\prime}\subsetneq V_{0}\), we have that \(\operatorname{next}(V^{\prime})\subseteq V_{0}\) as well, so this means that \(E(V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})\cap E(V_{0},V_{0}^{c})=\varnothing\), a contradiction (recall that \(E(V^{\prime},\operatorname{next}(V^{\prime})\setminus V^{\prime})=\alpha(V^{\prime})\)). This concludes the proof that \(\alpha\) is a vine function.
The fact that \(\alpha\) is full follows from Item (7) of Proposition 5.19.

**Definition 5.22**.: A morphism \(\iota\colon(G,D,V_{\bullet})\to(G^{\prime},D^{\prime},V_{\bullet}^{\prime})\) is a morphism \(\iota\colon(G,D)\to(G^{\prime},D^{\prime})\) such that \(\iota^{-1}(V^{\prime})\in V_{\bullet}\) for every \(V^{\prime}\in V_{\bullet}^{\prime}\). This is equivalent to saying that \(\iota\) is compatible with \(\alpha_{V_{\bullet}}\) and \(\alpha_{V_{\bullet}^{\prime}}\).

Given morphisms \(\iota_{i}\colon(G,D,\alpha)\to(G_{i},D_{i},\alpha_{i})\) for \(i=1,\ldots,k\), we say that the collection \((\iota_{1},\ldots,\iota_{k})\) is _generic_ if

1. for every edge \(e\in E(G)\setminus|\alpha|\) there exists some \(i\in\{1,\ldots,k\}\) and some \(e^{\prime}\in E(G_{i})\setminus|\alpha_{i}|\) such that \(e=\iota_{i,E}(e^{\prime})\);
2. for every \(V\in L_{\alpha}\) such that \(\alpha(V)\neq\varnothing\), there exist \(i\) and \(V^{\prime}\in L_{\alpha_{i}}\) with \(\alpha_{i}(V^{\prime})\neq\varnothing\) such that \(\iota_{i}^{-1}(V^{\prime})=V\).

**Remark 5.23**.: We will see in the next section how this definition of "generic" matches the one given in Definition 4.8.

**Proposition 5.24**.: _Let \(\iota_{i}\colon(G,D,\alpha)\to(G_{i},D_{i},\alpha_{i})\) be a generic collection. If all \(\alpha_{i}\) are full, then \(\alpha\) is full as well._

Proof.: This follows directly from the definitions.

The next result will imply that all the strata that we blow up have transversal self-intersection.

**Proposition 5.25**.: _Assume \((G,D,\alpha)\) is such that \(G\) is a vine curve and \(L_{\alpha}=\varnothing\), and denote by \(v_{0}\) the vertex of \(G\) carrying the first marking. Let \(f_{i}\colon(G^{\prime},D^{\prime},\alpha^{\prime})\to(G,D,\alpha)\) for \(i=1,2\) be generic. Assume that for all \(V\in\operatorname{Ext}(G^{\prime},D^{\prime})\) such that \(V\subseteq f_{1}^{*}(\{v_{0}\})\cap f_{2}^{*}(\{v_{0}\})\), we have \(V\in L_{\alpha^{\prime}}\). Then \((f_{1},f_{2})\) is transversal._

Proof.: Denote by \(V_{i}=f_{i}^{*}(\{v_{0}\})\). Assume that \(V_{1}\cup V_{2}\neq V(G^{\prime})\). This means that \(V_{1}\cap V_{2}\in\operatorname{Ext}(G^{\prime},D^{\prime})\), and hence \(V_{1}\cap V_{2}\in L_{\alpha^{\prime}}\). Since \(E(V_{1}\cap V_{2},(V_{1}\cap V_{2})^{c})\subseteq E(V_{1},V_{1}^{c})\cup E(V_{2},V_{2}^{c})\), there would exist \(V^{\prime}\in L_{\alpha^{\prime}}\) such that \(\alpha^{\prime}(V^{\prime})\cap E(V_{1}\cap V_{2},(V_{1}\cap V_{2})^{c})\neq\varnothing\); picking \(i\) and \(e\in E(G)\) with \(f_{i}^{*}(e)\in\alpha^{\prime}(V^{\prime})\cap E(V_{i},V_{i}^{c})\), we would contradict condition (3) of Definition 5.14, since \(e\notin|\alpha|=\varnothing\) while \(f_{i}^{*}(e)\in|\alpha^{\prime}|\). So \(V_{1}\cup V_{2}=V(G^{\prime})\), which means that \(E(V_{1},V_{1}^{c})\cap E(V_{2},V_{2}^{c})=\varnothing\).

The next proposition will be used to prove that the category \(\widetilde{\mathfrak{C}}_{Y}\), defined later, is a simple normal crossing stratification (as in Example 4.2).

**Proposition 5.26**.: _Let \((G,D)\) be a stable, \((n+1)\)-marked vine curve such that \(\operatorname{leg}(n+1)\notin V\) for every \(V\in\operatorname{Ext}(G,D)\). Let \(V_{\bullet}\) be a full forest, and \(f_{1},f_{2}\colon(G^{\prime},D^{\prime},V^{\prime}_{\bullet})\to(G,D,V_{\bullet})\) be two morphisms. Then, \(f_{1}\in\operatorname{Aut}(G,D,V_{\bullet})f_{2}\)._

Proof.: Upon further contraction of \((G^{\prime},D^{\prime},V^{\prime}_{\bullet})\), we can assume that \(f_{1},f_{2}\) are generic. This means that either \((G^{\prime},D^{\prime},V^{\prime}_{\bullet})=(G,D,V_{\bullet})\) or \(V^{\prime}_{\bullet}=\{V^{\prime}_{1},V^{\prime}_{2}\}\).
We have two cases: either \(V^{\prime}_{1}\subseteq V^{\prime}_{2}\) (up to swapping \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\)) or \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\) are incomparable. In the latter case, we have that \(V^{\prime}_{1}\cup V^{\prime}_{2}=V(G^{\prime})\) by Corollary 5.11, but that contradicts the fact that \(\operatorname{leg}(n+1)\notin V^{\prime}_{1}\cup V^{\prime}_{2}\). In the former case, we have that \(g(V^{\prime}_{1})=g(V^{\prime}_{2})\), \(|E(V^{\prime}_{1},V^{\prime c}_{1})|=|E(V^{\prime}_{2},V^{\prime c}_{2})|\), \(\operatorname{leg}^{-1}(V^{\prime}_{1})=\operatorname{leg}^{-1}(V^{\prime}_{2})\) and \(D^{\prime}(V^{\prime}_{1})=D^{\prime}(V^{\prime}_{2})\). But that means that all the vertices in \(V^{\prime}_{2}\setminus V^{\prime}_{1}\) have genus \(0\), that there are no marked points on them, and that the degree of \(D^{\prime}\) on them equals \(0\). This is a contradiction with the fact that \((G^{\prime},D^{\prime})\) is \(\phi^{+}\)-stable.

We conclude this section by observing that the existence of a full function/forest rules out the presence of exceptional vertices.

**Lemma 5.27**.: _If \(\operatorname{Ext}(G,D)\) admits a full forest, then \(G\) is stable._

Proof.: Because \(D\) is \(\phi^{+}\)-semistable, if \(G\) fails to be stable, it contains an exceptional vertex \(v\) (meaning that \(v\) has genus \(0\), no marked points, valence \(2\), and \(D(v)=1\)). Let \(e_{1},e_{2}\) be the two edges of \(G\) that contain \(v\). If \((G,D)\) admits a full forest \(V_{\bullet}\), then there are \(V_{1},V_{2}\in V_{\bullet}\) such that \(e_{i}\in E(V_{i},V_{i}^{c})\) for \(i=1,2\). If \(v\notin V_{i}\) for some \(i\), then \(\beta^{\star}(V_{i})=\beta^{\star}(V_{i}\cup\{v\})+1\); in particular \(\beta^{0}(V_{i})=\beta^{0}(V_{i}\cup\{v\})+1\geq 1>0\), contradicting the fact that \(\beta^{0}\) vanishes on extremal sets. This means that \(v\in V_{1}\cap V_{2}\). On the other hand, we have that \(e_{1},e_{2}\in E(V_{1}\cap V_{2},(V_{1}\cap V_{2})^{c})\), that \(V_{1}\cap V_{2}\) is extremal (by 5.7), hence connected, and \(\operatorname{leg}(1)\in(V_{1}\cap V_{2})\). This is a contradiction (indeed \(v\) would be an isolated vertex of the connected subgraph \(G(V_{1}\cap V_{2})\), which also contains the vertex \(\operatorname{leg}(1)\neq v\)).

Recall that, as stipulated in Section 2.c, when \(E\) is empty, we simply write \((G,D)\) in place of \((G,(\varnothing,D))\).

### The stratification categories

In light of the results of the previous section, we are now ready to define the stratification category \(\widetilde{\mathfrak{C}}\), and some other categories \(\widetilde{\mathfrak{C}}_{E}\) and \(\widetilde{\mathfrak{C}}_{Y}\) that will play an important role in our proof of Theorem 7.4. In the next section we will interpret these categories as stratification categories of the resolution of the identity map \(\operatorname{\mathsf{Id}}\colon\overline{\mathcal{J}}_{g,n}^{d}(\phi^{+})\dashrightarrow\overline{\mathcal{J}}_{g,n}^{d}(\phi^{-})\).

**Definition 5.28**.: We define \(\widetilde{\mathfrak{C}}=\widetilde{\mathfrak{C}}(\phi^{+},\phi^{-})\) as a skeleton of the category whose objects are triples \((G,D,\alpha)\) such that \((G,D)\) is an object of \(\mathfrak{C}_{g,n}(\phi^{+})\) and \(\alpha\) is a vine function with the property that \(L_{\alpha}=\operatorname{Ext}(G,D)\). Morphisms are given as in Definition 5.14.

We define \(\widetilde{\mathfrak{C}}_{E}\) as the full subcategory of \(\widetilde{\mathfrak{C}}\) whose objects are the triples \((G,D,\alpha)\) with \(\alpha\) _full_. Equivalently (Proposition 5.21), objects are triples \((G,D,V_{\bullet})\) with \(V_{\bullet}\) a full forest in \(\operatorname{Ext}(G,D)\).
We then define the category \(\widetilde{\mathfrak{C}}_{Y}\) as a skeleton of the category whose objects are triples \((G,D,V_{\bullet})\) where \(G\) is an \((n+1)\)-pointed stable graph of genus \(g\), the divisor \(D\) is \((\phi^{+},\operatorname{leg}(1))\)-quasistable, and \(V_{\bullet}\) is a full forest such that \(\operatorname{leg}(n+1)\notin V\) for every \(V\in V_{\bullet}\). (By Lemma 5.18, this implies that \(V_{\bullet}\) is a chain.) Morphisms are specializations as in Definition 5.14. These categories will be interpreted geometrically in Remark 6.5.

Note that the rank \(1\) objects (the divisors) of \(\widetilde{\mathfrak{C}}_{E}\) are the triples \((G,D,V_{\bullet})\) such that \(G\) is a vine curve and \(V_{\bullet}\) contains a single element (by Hypothesis 1, the singleton consisting of the vertex containing the first marking).

### The case of "good" hyperplanes

In this section, we fix a hyperplane \(H=H(i,t,S;k)\) that satisfies \(S^{c}\neq\varnothing\). In this case, we prove that the corresponding exceptional vine curve loci in the compactified universal Jacobian have pairwise empty intersections. The main result of this section is:

**Proposition 5.29**.: _The objects of \(\widetilde{\mathfrak{C}}_{E}\) are triples \((G,D,V_{\bullet})\) satisfying either_

_(1) \(G\) has no edges and \(V_{\bullet}\) is empty. This is the terminal object._

_(2) \(G\) is a vine curve and \(V_{\bullet}\) has a single element \(V=\{\operatorname{leg}(1)\}\)._

By Proposition 3.16, each vine curve as in (2) above is necessarily of the form \(G(i-j,t+2j,S)\), for some \(j\) satisfying \(-t/2<j\leq\min(i,g+1-t-i)\).

**Proposition 5.30**.: _Let \(V\in\operatorname{Ext}(G,D)\), then \(\operatorname{leg}^{-1}(V)=S\)._

Proof.: Let \(G^{\prime}:=G/(E(V,V)\cup E(V^{c},V^{c}))\) be the vine curve associated to \(V\). If \(\operatorname{leg}^{-1}(V)\neq S\), then \((\phi^{0})_{G^{\prime}}\) is nondegenerate by Proposition 3.14. This contradicts the assumption that \(V\) is extremal.

For our next result, recall that the canonical divisor \(K_{G}^{\operatorname{log}}\) of a graph \(G\) is defined by \(K_{G}^{\operatorname{log}}(v)=2g(v)-2+|E(v)|+|\operatorname{leg}^{-1}(v)|\) for all \(v\in V(G)\).

**Lemma 5.31**.: _If \(V_{1},V_{2}\in\operatorname{Ext}(G,D)\), then \(K_{G}^{\operatorname{log}}(V_{1})=K_{G}^{\operatorname{log}}(V_{2})\)._

Proof.: We have that \(K_{G}^{\operatorname{log}}(V)=2g(V)-2+|E(V,V^{c})|+|\operatorname{leg}^{-1}(V)|\). By Proposition 5.30, we have that \(|\operatorname{leg}^{-1}(V_{1})|=|\operatorname{leg}^{-1}(V_{2})|\). By Proposition 3.16 we conclude that \(2g(V_{1})-2+|E(V_{1},V_{1}^{c})|=2g(V_{2})-2+|E(V_{2},V_{2}^{c})|\).

**Proposition 5.32**.: _If \((G,D)\) is a \(\phi^{+}\)-stable pair, then \(|\operatorname{Ext}(G,D)|\leq 1\)._

Proof.: Assume that we have \(V_{1}\neq V_{2}\) elements of \(\operatorname{Ext}(G,D)\). By Proposition 5.30, we have that \(\operatorname{leg}^{-1}(V_{1})=\operatorname{leg}^{-1}(V_{2})=S\neq\varnothing\), so \(V_{1}\cap V_{2}\neq\varnothing\). Moreover, \(\operatorname{leg}^{-1}(V_{1}^{c})=\operatorname{leg}^{-1}(V_{2}^{c})=S^{c}\neq\varnothing\), so \(V_{1}\cup V_{2}\neq V(G)\). By Propositions 5.7, 5.8 and 5.30 we have that \(V_{1}\cap V_{2}\in\operatorname{Ext}(G,D)\). This means that we can assume \(V_{1}\subseteq V_{2}\). By Lemma 5.31 we have that \(K_{G}^{\operatorname{log}}(V_{1})=K_{G}^{\operatorname{log}}(V_{2})\), which implies that \(K_{G}^{\operatorname{log}}(V_{2}\setminus V_{1})=0\), which is a contradiction with the fact that \(G\) is stable.
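For concreteness (the numerical values are ours): if \(g=3\), \(i=1\) and \(t=1\), the constraint after Proposition 5.29 reads \(-1/2<j\leq\min(1,3+1-1-1)=1\), so \(j\in\{0,1\}\), and the vine curves occurring in (2) of Proposition 5.29 are \(G(1,1,S)\) and \(G(0,3,S)\).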
**Corollary 5.33**.: _We have that \((G,D)\) has a full forest \(V_{\bullet}\) if and only if \(G\) is a vine curve and \(\beta_{D}^{-}(\{\operatorname{leg}(1)\})<0\)._

Proof.: By Proposition 5.32 we have that \(\operatorname{Ext}(G,D)\) has at most one element. Since \(V_{\bullet}\) must be nonempty, we have that \(V_{\bullet}=\operatorname{Ext}(G,D)\). Since \(V_{\bullet}=\{V\}\) is a full forest, we must have that \(E(G)=E(V,V^{c})\), and that \(G(V)\) and \(G(V^{c})\) are connected. This means that both \(V\) and \(V^{c}\) are singletons and hence that \(G\) is a vine curve.

Proof of Proposition 5.29.: Let \((G^{\prime},D^{\prime})\) be a pair with two different specializations

\[\iota_{1}\colon(G^{\prime},D^{\prime})\to(G_{1},D_{1})\ \text{ and }\ \iota_{2}\colon(G^{\prime},D^{\prime})\to(G_{2},D_{2})\]

to extremal pairs. By Remark 5.5 we have that \(\operatorname{Ext}(G^{\prime},D^{\prime})\supseteq\iota_{1}^{-1}(\operatorname{Ext}(G_{1},D_{1}))\cup\iota_{2}^{-1}(\operatorname{Ext}(G_{2},D_{2}))\), which means that \(\operatorname{Ext}(G^{\prime},D^{\prime})\) has at least \(2\) elements, contradicting Proposition 5.32.

## 6. Nonsingular resolution of the identity

Let \(\phi^{-},\phi^{+}\in V^{d}_{g,n}\) be on opposite sides of a stability hyperplane \(H\) (Definition 5.1). In this section we construct a nonsingular resolution \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) of the identity map \(\operatorname{Id}\colon\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\dashrightarrow\overline{\mathcal{J}}^{d}_{g,n}(\phi^{-})\). We construct \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) as an iterated blowup of \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\) at certain strata \((G^{i},D^{i})\) of vine curves with extremal bidegrees.

To define the order in which we blow up the vine curves, we first introduce a partial order. Let \((G_{i},D_{i})\), for \(i=1,2\), be pairs where \(G_{i}\) is a vine curve and \(D_{i}\) is an extremal bidegree, and set \(v_{i}\) to be the vertex of \(G_{i}\) such that \(\beta_{D_{i}}^{-}(\{v_{i}\})<0\) (i.e., the vertex with the first marked point; in particular \(\phi^{+}\) and \(\phi^{-}\) satisfy Hypothesis 1). We say that \((G_{1},D_{1})\leq(G_{2},D_{2})\) if there exist \((G,D)\in\mathfrak{C}_{g,n}(\phi^{+})\) and morphisms \(f_{i}\colon(G,D)\to(G_{i},D_{i})\) such that \(f_{1}^{-1}(v_{1})\subseteq f_{2}^{-1}(v_{2})\). Note that, in particular, \(f_{i}^{-1}(v_{i})\in\operatorname{Ext}(G,D)\) (see Remark 5.5). The next proposition guarantees that this preorder is indeed a partial order.

**Proposition 6.1**.: _Assume \((G_{1},D_{1}),(G_{2},D_{2})\) are vine curve strata with extremal bidegrees for \(\phi^{+},\phi^{-}\). Then \((G_{1},D_{1})\leq(G_{2},D_{2})\) if and only if \(\operatorname{leg}_{G_{1}}^{-1}(v_{1})\subseteq\operatorname{leg}_{G_{2}}^{-1}(v_{2})\), \(g_{G_{1}}(v_{1})\leq g_{G_{2}}(v_{2})\) and \(g_{G_{1}}(v_{1})+|E(G_{1})|\leq g_{G_{2}}(v_{2})+|E(G_{2})|\)._

Proof.: The "only if" part follows immediately from the existence of a common degeneration \((G,D)\), and the fact that if \(V_{1}\subseteq V_{2}\) and either \(V_{1},V_{2}\in\operatorname{Ext}(G,D)\) or \(V_{1}^{c},V_{2}^{c}\in\operatorname{Ext}(G,D)\), then \(g(V_{1})\leq g(V_{2})\) (because elements of \(\operatorname{Ext}(G,D)\) and complements of elements of \(\operatorname{Ext}(G,D)\) are connected).
For the "if" part, consider the graph \(G\) with \(3\) vertices \(w_{1}\), \(w_{2}\), \(w_{3}\) with \(|E(w_{1},w_{3})|=\lambda\) and \(|E(w_{1},w_{2})|=|E(G_{1})|-\lambda\) and \(|E(w_{2},w_{3})|=|E(G_{2})|-\lambda\). Set \(g_{G}(w_{1})=g_{G_{1}}(v_{1})\) and \(g_{G}(w_{2})=g_{G_{2}}(v_{2})-g_{G_{1}}(v_{1})+\lambda-|E(G_{1})|+1\), and \(g_{G}(w_{3})\) so that \(g(G)=g\). The numerical assumptions in the claim guarantee the existence of \(\lambda\) such that \(E(w_{i},w_{j})\geq 1\) for all \(i\neq j\) and \(g(w_{2}),g(w_{3})\geq 0\). It is then straightforward to check that the given graph \(G\) admits a morphism to \(G_{1}\) (by contracting \(E(w_{2},w_{3})\)) and to \(G_{2}\) (by contracting \(E(w_{1},w_{2})\)). We are now ready to construct our resolution of the identity map. **Construction 6.2**.: _Take any extension to a total order of the partial order defined above on the set of pairs of vine curves with an extremal bidegree, and denote this extension by \((G^{1},D^{1})<(G^{2},D^{2})<\ldots<(G^{m},D^{m})\)._ _Define \(J_{i}\) inductively as follows: \(J_{0}=\overline{\mathcal{J}}(\phi^{+})\) and_ \[J_{i}=\operatorname{Bl}_{J_{G^{i},D^{i},\alpha^{i}}}(J_{i-1})\] _where \(\alpha^{i}\) is the only vine function on \((G^{i},D^{i})\) with \(L_{\alpha^{i}}=\varnothing\), and \(J_{G,D,\alpha^{i}}\) is the strict transform of \(J_{G^{i},D^{i}}\subset\overline{\mathcal{J}}_{g,n}^{d}(\phi^{+})\). Following this, let \(\widetilde{\mathcal{J}}_{g,n}^{d}(\phi^{+},\phi^{-}):=J_{m}\)._ _Similarly, let \(G^{i}(P)\) be the same vine curve \(G^{i}\) with an additional marked point \(P\) on the vertex that is not \(\operatorname{leg}_{G_{i}}(1)\), and denote by \(D^{i}(P)\) and \(\alpha^{i}(P)\) the obvious lifts. Then define \(J_{i}(P)\) inductively, starting from \(J_{0}(P)=\overline{\mathcal{J}}_{g,n+1}(\phi^{+},P)\), and then_ \[J_{i}(P)=\operatorname{Bl}_{J_{G^{i}(P),D^{i}(P),\alpha^{i}(P)}}(J_{i-1}),\] _and finally \(\widetilde{\mathcal{J}}_{g,n+1}^{d}(\phi^{+},\phi^{-};P):=J_{m}(P)\)._ The first point to observe is that this blowup does not depend upon the chosen extension to a total order. **Proposition 6.3**.: _Let \((G_{i},D_{i})\), for \(i=1,2\), be two pairs where \(G_{i}\) is a vine curve and \(D_{i}\) is an extremal bidegree. Set \(\alpha_{i}\) to be the unique vine function on \((G_{i},D_{i})\) such that \(L_{\alpha_{i}}=\varnothing\). If \(f_{1},f_{2}\) are morphisms \(f_{i}\colon(G,D,\alpha)\to(G_{i},D_{i},\alpha_{i})\) where \(L_{\alpha}\) contains_ \[\{V\in\operatorname{Ext}(G,D);V\subsetneqq f_{i}^{-1}(v_{i})\},\] _then \(f_{1}^{*}(E(G_{1}))\cap f_{2}^{*}(E(G_{2}))=\varnothing\)._ Proof.: This follows from Proposition 6.1 and from a straightforward analysis of the possible common degenerations of two pairs \((G_{1},D_{1})\) and \((G_{2},D_{2})\) that satisfy the two inequalities given in _loc.cit_. We immediately deduce: **Corollary 6.4**.: _The blowup \(\widetilde{\mathcal{J}}_{g,n}^{d}(\phi^{+},\phi^{-})\to\overline{\mathcal{J}}_ {g,n}^{d}(\phi^{+})\) is independent of the chosen extension to a total order. (It only depends on the partial order). The same is true of \(\widetilde{\mathcal{J}}_{g,n+1}^{d}(\phi^{+},\phi^{-};P)\)._ Proof.: By Proposition 6.3, if two vine curves are incomparable under the partial order, their intersection is transversal, hence swapping the order of the two blowups does not change the result. 
Recall from Definition 5.28 that \(\widetilde{\mathfrak{C}}\) is the category whose objects are the triples \((G,D,\alpha)\), where \(\alpha\) is a vine function with \(L_{\alpha}=\operatorname{Ext}(G,D)\), and whose morphisms are given in Definition 5.14.

**Remark 6.5**.: The category \(\widetilde{\mathfrak{C}}\) defined in 5.28 is the category obtained by blowing up \(\mathfrak{C}_{g,n}(\phi^{+})\) (as in Definition 4.26) at \((G^{1},D^{1})\), then at \((G^{2},D^{2})\),..., and finally at \((G^{m},D^{m})\). The case \(m=1\) was discussed in Example 4.33, and the general case follows in the same way. The category \(\widetilde{\mathfrak{C}}_{E}\) is the subcategory of \(\widetilde{\mathfrak{C}}\) generated by the exceptional divisors only. The category \(\widetilde{\mathfrak{C}}_{Y}\) is the subcategory of the stratification category of \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\) (same as \(\widetilde{\mathfrak{C}}\), but with an extra marking), whose elements are the intersections of the components over the exceptional divisors that do not contain the first marking.

Our main result in this section is then

**Theorem 6.6**.: _The moduli stack \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) is nonsingular, and the category \(\widetilde{\mathfrak{C}}=\widetilde{\mathfrak{C}}(\phi^{+},\phi^{-})\) is its stratification category, obtained as the blowup stratification from \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\). The same result holds for \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\), and the forgetful morphism \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\to\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) is the quasistable modification of the universal curve._

This follows as a combination of results in Section 4.d and Section 5.a.

Proof.: To prove the first statement we apply Proposition 4.30 to the category \(\mathfrak{C}_{g,n}(\phi)\) of Example 4.12. The fact that each stratum \((G^{i},D^{i},\alpha^{i})\) has transversal self-intersection in \(J_{i-1}\) is Proposition 5.25 (see also Remark 6.5). The second part follows from Theorem 3.29, and the fact that blowup commutes with flat base change.

**Remark 6.7**.: Note that the vine curve strata \(G^{i}\) that are part of the datum of our blowup do not necessarily have transversal self-intersection in \(\overline{\mathcal{M}}_{g,n}\) themselves (see Example 4.25), so the procedure of Section 4.d cannot be applied to blow up the strata \(G^{1},\ldots,G^{m}\) in \(\overline{\mathcal{M}}_{g,n}\) to produce a nonsingular DM stack with a stratification. Moreover, similarly to Example 4.25, one can also see that the strata \((G^{i},D^{i})\) do not themselves have transversal self-intersection in \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\). In our construction each stratum \((G^{i},D^{i})\) only acquires a transversal self-intersection once lifted to a stratum of \(J_{i-1}\) by means of the function \(\alpha^{i}\).

Let \(Y^{\prime}\) be the Cartier divisor in \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\) given by the sum of all strata that correspond to triples \((G,D,\alpha)\) where \((G,D)\) is one of the vine curves \((G^{i},D^{i})\):

\[Y^{\prime}=\sum_{i=1}^{m}J^{\prime}_{G^{i}(P),D^{i}(P),\alpha^{i}(P)}.\]

Let \(\mathcal{L}\) be the sheaf on \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\) obtained by pulling back a tautological sheaf on \(\overline{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},P)\) (see Theorem 3.29). We have the following result.

**Theorem 6.8**.: _The line bundle \(\mathcal{L}(-Y^{\prime})\) is \(\phi^{-}\)-stable.
In particular, \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) comes with two morphisms that resolve the identity map \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\dashrightarrow\overline{\mathcal{J}}^{d}_{g,n}(\phi^{-})\). The first is the blow-down morphism (also defined by \(\mathcal{L}\)), and the second is the morphism defined by \(\mathcal{L}(-Y^{\prime})\)._

Proof.: By Proposition 3.10 and Remark 3.20, it is enough to check that \(\mathcal{L}(-Y^{\prime})\) is \(\phi^{-}\)-stable on all vine curves. This follows from Construction 6.2. The divisor \(Y^{\prime}\subset\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\) is supported on the strata \((G^{i},D^{i},\alpha^{i})\) of \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\), which are exactly the vine curves where \(\mathcal{L}\) fails to be \(\phi^{-}\)-stable. Moreover, over each \(J^{\prime}_{G^{i},D^{i}}\), the divisor \(Y^{\prime}\) fiberwise intersects the complementary component at \(t_{i}=|E(G^{i})|\) points. Thus tensoring by \(\mathcal{O}(-Y^{\prime})\) has the effect of modifying the bidegree of \(\mathcal{L}\) on each stratum \((G^{i},D^{i},\alpha^{i})\) by \((-t_{i},+t_{i})\) (where the first element of the pair is the degree on the component of the vine curve that contains the first marking). Thus, because the bidegree \(D^{i}\) is extremal on \(G^{i}\), the line bundle \(\mathcal{L}(-Y^{\prime})\) is \(\phi^{-}\)-stable on \((G^{i},D^{i},\alpha^{i})\) for all \(i=1,\dots,m\).

We conclude with the following observation.

**Corollary 6.9**.: _The Cartier divisor \(Y^{\prime}\subset\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\) is simple normal crossing, and the stratification category it generates (as in Example 4.2) is \(\widetilde{\mathfrak{C}}_{Y}\)._

Proof.: The first part follows from Proposition 5.26. The second part follows directly from the definition of \(Y^{\prime}\).

## 7. Wall-Crossing Formulas

Let \(\phi^{+},\phi^{-}\in V^{d}_{g,n}\) be on opposite sides of a stability hyperplane \(H\) (Definition 5.1). In this section we find a formula for the wall-crossing along \(H\) of Brill-Noether classes in terms of pushforwards of boundary strata classes. We first give a formula in Theorem 7.4 on the nonsingular resolution \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) of the identity map

\[\mathsf{Id}\colon\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\dashrightarrow\overline{\mathcal{J}}^{d}_{g,n}(\phi^{-})\]

that we defined in Section 6. Then we write a second formula in Corollary 7.24, by taking the pushforward along the blow-down morphism \(p\colon\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\to\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\). The universal quasistable family \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\to\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) carries two line bundles: the pullback \(\mathcal{L}\) of a tautological line bundle on \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\) and its modification \(\mathcal{L}(-Y^{\prime})\), the pullback of a tautological line bundle on \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{-})\) (see Theorem 6.8). Our main result in Theorem 7.4 is a formula for the difference of the total Chern classes of the \(K\)-theory classes \(-R^{\bullet}\pi_{*}\mathcal{L}(-Y^{\prime})\) and \(-R^{\bullet}\pi_{*}\mathcal{L}\), as an explicit pushforward of classes supported on the boundary.
Because the normal bundles split as a direct sum of line bundles on the (unprimed) "resolved" strata, our formula is better written on the resolved strata instead of the embedded ones. Before stating the main results, let us fix some notation for Theorem 7.4 and Corollary 7.24.

For each pair \((G,D)\in\mathfrak{C}_{g,n}(\phi^{+})\), denote by \(\pi_{G,D}\colon\mathcal{C}_{G,D}\to\mathcal{J}_{G,D}\) the pullback to \(\mathcal{J}_{G,D}\) of the universal quasistable family \(\overline{\mathcal{J}}^{d}_{g,n+1}(\phi^{+};P)\to\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\). The total space \(\mathcal{C}_{G,D}\) has one irreducible component \(\mathcal{C}^{+}_{v}:=\mathcal{C}_{G,D,v}\) for each vertex \(v\) of \(G\). We denote by \(\pi^{+}_{v}:=\pi_{G,D,v}\colon\mathcal{C}^{+}_{v}\to\mathcal{J}_{G,D}\) the induced map. Also, for each \(V\subset V(G)\), we denote by \(\pi^{+}_{V}\colon\bigcup_{v\in V}\mathcal{C}^{+}_{v}\to\mathcal{J}_{G,D}\) the induced map on the union. We write \(X^{+}=X^{+}_{G,D}:=\mathcal{C}^{+}_{\mathrm{leg}(1)}\) and \(\Sigma^{+}=X^{+}\cap\mathcal{C}^{+}_{\{\mathrm{leg}(1)\}^{c}}\). We also write \(Y^{+}_{V}=\mathcal{C}^{+}_{V^{c}}\) for every \(V\subset V(G)\).

We can extend these notations to \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\). Let us recall the geometry of \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) from Section 6. For each \(1\leq i\leq m\) set \(\beta_{i}:=(G_{i},D_{i})\) to be the vine curve strata from Construction 6.2, so the \(\mathcal{J}_{\beta_{i}}\) are the strata of \(\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\) whose images' strict transforms are blown up, in the given order, to obtain \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\). Recall that \(\widetilde{\mathfrak{C}}_{E}\) is the category of the (resolutions of the closed) strata of \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) that are in the intersection of the exceptional divisors \(E^{\prime}_{i}\). Objects are triples \((G,D,V_{\bullet})\) for \((G,D)\in\mathfrak{C}_{g,n}(\phi^{+})\). Each stratum \(\widetilde{\mathcal{J}}_{G,D,V_{\bullet}}\) admits a forgetful morphism \(p_{G,D,V_{\bullet}}\) to \(\mathcal{J}_{G,D}\), and we define \(\pi_{v}\), \(\pi_{V}\), \(X\), \(\Sigma\) and \(Y_{V}\) as the pullbacks via \(p_{G,D,V_{\bullet}}\) of the corresponding items defined in the previous paragraph for \((G,D)\). Also, we set \(Y_{G,D,V_{\bullet}}:=\bigcap_{V\in V_{\bullet}}Y_{V}\); note that, by Proposition 5.7, \(Y_{G,D,V_{\bullet}}\) is nonempty if and only if \(V_{\bullet}\) is a chain (as in the definition of \(\widetilde{\mathfrak{C}}_{Y}\) in Definition 5.28), and in that case we have that \(Y_{G,D,V_{\bullet}}=Y_{\max(V_{\bullet})}\).
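To unwind this notation in the basic case (the illustration is ours): if \((G,D)=\beta_{i}\) is itself one of the vine curves, with vertices \(v_{1}=\operatorname{leg}(1)\) and \(v_{2}\), then \(\mathcal{C}_{G,D}\) has exactly the two components \(\mathcal{C}^{+}_{v_{1}}\) and \(\mathcal{C}^{+}_{v_{2}}\), so that

\[X^{+}=\mathcal{C}^{+}_{v_{1}},\qquad Y^{+}_{\{v_{1}\}}=\mathcal{C}^{+}_{v_{2}},\qquad\Sigma^{+}=\mathcal{C}^{+}_{v_{1}}\cap\mathcal{C}^{+}_{v_{2}},\]

and \(\Sigma^{+}\) meets each fiber in the \(t_{i}=|E(G_{i})|\) nodes, matching the count used in the proof of Theorem 6.8.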
Also, for a triple \((G,D,V_{\bullet})\), we define

\[F^{X}_{+}:=-R^{\bullet}(\pi^{+}_{X})_{*}\mathcal{L}^{+}(-\Sigma^{+})_{|X^{+}};\ \ F^{+}_{V}:=-R^{\bullet}(\pi^{+}_{V^{c}})_{*}\mathcal{L}^{+}_{|Y^{+}_{V}};\ \ H^{+}_{V}:=F^{+}_{V}-\sum_{V^{\prime}\in V_{\bullet},V^{\prime}\supsetneq V}F^{+}_{V^{\prime}} \tag{7.1}\]

and

\[F^{X}:=-R^{\bullet}(\pi_{X})_{*}\mathcal{L}(-\Sigma)_{|X};\ \ F_{V}:=-R^{\bullet}(\pi_{V^{c}})_{*}\mathcal{L}_{|Y_{V}};\ \ \text{and}\ \ H_{V}:=F_{V}-\sum_{V^{\prime}\in V_{\bullet},V^{\prime}\supsetneq V}F_{V^{\prime}}. \tag{7.2}\]

Note also that \(\mathcal{L}\) is the pullback of \(\mathcal{L}^{+}\), hence \(F^{X}\), \(F_{V}\) and \(H_{V}\) are the pullbacks via \(p_{G,D,V_{\bullet}}\) of \(F^{X}_{+}\), \(F^{+}_{V}\) and \(H^{+}_{V}\) respectively.

Let \(E_{i}\) be the exceptional stratum of the blowup morphism \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\to\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\), so that \(E_{i}\to E^{\prime}_{i}\subset\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) is the exceptional divisor, and each \(E^{\prime}_{i}\) is contracted to \(\mathcal{J}^{\prime}_{\beta_{i}}\). Following the notation in the previous paragraph, we then let \(X^{\prime}_{i}\cup Y^{\prime}_{i}\) denote the two irreducible components of the restriction to \(E^{\prime}_{i}\) of the universal quasistable family \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\to\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\), where \(X^{\prime}_{i}\) is the component containing the first marked point and \(Y^{\prime}_{i}\) is the other component, and denote by \(X_{i}\) and \(Y_{i}\) the base change to \(E_{i}\) of \(X^{\prime}_{i}\) and \(Y^{\prime}_{i}\). Recall that the divisor \(Y^{\prime}\) in Theorem 6.8 is precisely \(\sum_{i}Y^{\prime}_{i}\).

We now define psi-classes following (4.34). Each edge \(e\in E(G)\) defines a morphism \(f_{e}\colon\mathcal{J}_{G,D}\to\mathcal{J}_{G^{\prime},D^{\prime}}\) to some codimension one stratum \((G^{\prime},D^{\prime})\), and we set

\[\Psi_{G^{\prime},D^{\prime}}:=-c_{1}\left(N_{\mathcal{J}_{G^{\prime},D^{\prime}}}\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\right),\quad\Psi_{(G,D,e)}:=f^{*}_{e}(\Psi_{G^{\prime},D^{\prime}}). \tag{7.3}\]

In Remark 7.30 we will discuss how these compare to the usual psi-classes on \(\overline{\mathcal{M}}_{g,n}\). Similarly, for a triple \((G,D,V_{\bullet})\) in \(\widetilde{\mathfrak{C}}_{E}\), we have that \(S_{G,D,V_{\bullet}}=V_{\bullet}\) (recall the definition of \(S_{G,D,V_{\bullet}}\) in Section 4.a Item (4)) and each \(V\in V_{\bullet}\) defines a morphism \(f_{G,D,V}\colon\widetilde{\mathcal{J}}_{G,D,V_{\bullet}}\to E_{i}\) for some \(i=1,\dots,m\). As in Section 4.b Item (3), we set \(\mathbb{L}_{V}:=f^{*}_{G,D,V}N_{E_{i}}\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) and define the psi-classes

\[\Psi_{i}:=-c_{1}\left(N_{E_{i}}\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\right),\quad\Psi_{G,D,V}:=f^{*}_{G,D,V}(\Psi_{i})=-c_{1}(\mathbb{L}_{V})\]

on \(E_{i}\) and on \(\widetilde{\mathcal{J}}_{G,D,V_{\bullet}}\) respectively. Finally, define the coefficient

\[b_{G,D,V}((j_{V})_{V\in V_{\bullet}};(k_{V})_{V\in V_{\bullet}}):=-\binom{k_{V}+g_{V}-d_{V}-\sum_{V^{\prime}\geq V}(j_{V^{\prime}}+k_{V^{\prime}}+1)}{k_{V}+1}\]

for all vectors \((j_{V}\geq 0)_{V\in V_{\bullet}}\) and \((k_{V}\geq 0)_{V\in V_{\bullet}}\) of nonnegative integers, where \(g_{V}\) and \(d_{V}\) denote the genus \(g(V)\) and the degree \(D(V)\), respectively.
**Theorem 7.4**.: _The difference of total Chern classes_

\[c_{t}(-R^{\bullet}\pi_{*}\mathcal{L})-c_{t}(-R^{\bullet}\pi_{*}\mathcal{L}(-Y^{\prime})) \tag{7.5}\]

_in \(A^{*}\left(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\right)\) equals the boundary class_

\[\sum_{\Gamma=(G,D,V_{\bullet})\in\widetilde{\mathfrak{C}}_{E}\setminus\{\bullet\}}\frac{-f_{\Gamma*}}{|\mathrm{Aut}(\Gamma)|}\Bigg{(}\sum_{\begin{subarray}{c}(j_{V}\geq 0)_{V\in V_{\bullet}},\\ (k_{V}\geq 0)_{V\in V_{\bullet}},\\ s\geq 0\end{subarray}}c_{s}(F^{X})\cdot\prod_{V\in V_{\bullet}}b_{G,D,V}((j_{V})_{V};(k_{V})_{V})\cdot c_{j_{V}}(H_{V})\cdot\Psi_{V}^{k_{V}}\Bigg{)}, \tag{7.6}\]

_where the sum runs over all resolved strata \(\Gamma\in\widetilde{\mathfrak{C}}_{E}\) (intersections of exceptional divisors) of \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) (see Section 6), except the terminal object (the open stratum)._

Note that Formula (7.6) gives a total Chern class from which one can immediately deduce the difference of the Brill-Noether classes on \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\). This is a more direct formula than e.g. the main result of [10], where an explicit formula is given for the Chern character, which then requires inversion to obtain the desired Chern class.

Proof.: We start our calculation by making use of the short exact sequence

\[0\to\mathcal{L}(-Y^{\prime})\to\mathcal{L}\to\mathcal{L}|_{Y^{\prime}}\to 0 \tag{7.7}\]

on the quasistable family \(\widetilde{\mathcal{J}}^{d}_{g,n+1}(\phi^{+},\phi^{-};P)\to\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\). For convenience, define

\[F:=-R^{\bullet}\pi_{*}(\mathcal{L}),\quad\widetilde{F}:=R^{\bullet}\pi_{*}(\mathcal{L}_{|Y^{\prime}}).\]

We then apply Whitney's formula

\[c_{t}(-R^{\bullet}\pi_{*}(\mathcal{L}(-Y^{\prime})))-c_{t}(-R^{\bullet}\pi_{*}(\mathcal{L}))=(c_{t}(\widetilde{F})-1)\cdot c_{t}(F) \tag{7.8}\]

to the total Chern classes of the three terms in (7.7). This computes the opposite of (7.5). From now on, we will mostly work on the term \(c_{t}(\widetilde{F})-1\). For each \(\Gamma^{\prime}\in\widetilde{\mathfrak{C}}_{E}\) we let

\[F^{Y}_{\Gamma^{\prime}}:=-R^{\bullet}\pi_{*}\mathcal{L}_{|Y_{\Gamma^{\prime}}}.\]

We now apply the following

**Lemma 7.9**.: _(Inclusion-exclusion principle for a simple normal crossing stratification.) Let \(\mathcal{D}\) be a simple normal crossing divisor in \(X\), let \(\mathfrak{C}\) be its category of strata, and let \(\mathcal{L}\) be a line bundle on \(X\)._
Then the following equality holds in the rational \(K\)-theory of \(X\):_ \[\mathcal{L}|_{\mathcal{D}}=\sum_{\alpha\in\mathfrak{C}}(-1)^{\operatorname{cd }(\alpha)-1}\mathcal{L}|_{\mathcal{D}_{\alpha}}\] By combining Lemma 7.9 with Lemma 6.9 (the fact that the \(Y^{\prime}_{i}\) are indeed simple normal crossing), and the multiplicativity of the total Chern class, together with the fact that \(Y_{\Gamma}\to Y^{\prime}_{\Gamma}\) is etale of degree \(|\operatorname{Aut}(\Gamma)|\) (by Corollary 6.9), we obtain \[c_{t}(F)\cdot(-1+c_{t}(\widetilde{F}))=c_{t}(F)\cdot\left(-1+\prod_{\Gamma^{ \prime}\in\widetilde{\mathfrak{C}}_{Y}\setminus\{\bullet\}}c_{t}\left((-1)^{ \operatorname{cd}\Gamma^{\prime}}\frac{f_{\Gamma^{\prime}*}F^{Y}_{\Gamma^{ \prime}}}{|\operatorname{Aut}(\Gamma^{\prime})|}\right)\right) \tag{7.10}\] where \(\widetilde{\mathfrak{C}}_{Y}\subseteq\widetilde{\mathfrak{C}}_{E}\) is the image in \(\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\) of the stratification induced by \(Y^{\prime}_{1},\dots,Y^{\prime}_{m}\) (and in the product we have removed its terminal object), and \[f_{\Gamma^{\prime}}\colon\widetilde{\mathcal{J}}_{\Gamma^{\prime}}\to \widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})=:\widetilde{\mathcal{J}}\] is the (resolution of the closed) stratum \(\widetilde{\mathcal{J}}^{\prime}_{\Gamma^{\prime}}\). For a fixed \(\Gamma^{\prime}\), we now aim to write each factor of the product in the RHS of (7.10) as a pushforward via the corresponding stratum. We apply Formula (4.22) (GRR for the total Chern class) to obtain that each factor \[c_{t}\left((-1)^{\operatorname{cd}\Gamma^{\prime}}\frac{f_{\Gamma^{\prime}*}F_{ \Gamma^{\prime}}^{Y}}{|\operatorname{Aut}(\Gamma^{\prime})|}\right)\] equals \[1+\sum_{\begin{subarray}{c}\Gamma\in\widetilde{\mathfrak{C}}_{E},\\ k\geq 1\end{subarray}}\frac{1}{|\operatorname{Aut}(\Gamma)|}f_{\Gamma*}\left( \sum_{\begin{subarray}{c}\{f_{1},\ldots,f_{k}\}\in\\ \operatorname{SInt}((f_{\Gamma})^{k})_{f_{\Gamma^{\prime}}}\end{subarray}} \frac{\prod_{j=1}^{k}\left(f_{j}^{*}c_{t}\left(\bigwedge^{\bullet}N_{ \widetilde{\mathcal{J}}_{\Gamma}}^{\vee}\widetilde{\mathcal{J}}\otimes(-1)^{ \operatorname{cd}(\Gamma^{\prime})}F_{\Gamma^{\prime}}^{Y}\right)-1\right)}{c _{\operatorname{top}}N_{\widetilde{\mathcal{J}}_{\Gamma}}\widetilde{\mathcal{J }}}\right). \tag{7.11}\] Note that intersections in \(\widetilde{\mathfrak{C}}_{Y}\) are not necessarily objects of the latter category, so the product is taken over \(\widetilde{\mathfrak{C}}_{E}\supseteq\widetilde{\mathfrak{C}}_{Y}\). After that, to continue our derivation from Formula (7.10), we aim to calculate the product of the terms in (7.11) for varying \(\Gamma^{\prime}\). 
We apply the excess intersection formula (Proposition 4.19) to the product (7.10) to deduce that it equals \[\sum_{\Gamma\in\widetilde{\mathfrak{C}}_{E}\backslash\{\bullet \}}\frac{1}{|\operatorname{Aut}(\Gamma)|}f_{\Gamma*}\Bigg{(}\sum_{ \begin{subarray}{c}\Gamma_{1},\ldots,\Gamma_{k}\in\widetilde{\mathfrak{C}}_{Y},\\ Q_{t}\subseteq\operatorname{Mor}(\Gamma,\Gamma_{t})\text{ for all }t=1,\ldots,k,\\ \text{such that }\cup_{t}Q_{t}\text{ is generic}\end{subarray}}\] \[\frac{c_{t}(F_{\Gamma})\cdot\prod_{\begin{subarray}{c}j=1, \ldots,k,\\ f\in Q_{j}\end{subarray}}\left(f^{*}c_{t}\left(\bigwedge^{\bullet}N_{ \widetilde{\mathcal{J}}_{\Gamma_{j}}}^{\vee}\widetilde{\mathcal{J}}\otimes(-1 )^{\operatorname{cd}(\Gamma_{j})}F_{\Gamma_{j}}^{Y}\right)-1\right)}{c_{ \operatorname{top}}N_{\widetilde{\mathcal{J}}_{\Gamma}}\widetilde{\mathcal{J }}}\Bigg{)}. \tag{7.12}\] Now we focus on simplifying the term inside the pushforward \(f_{\Gamma*}\). Following Proposition 4.6, for fixed \(\Gamma=(G,D,V_{\bullet})\), there is a natural bijection between the set \[\Big{\{}\{Q_{t}\subseteq\operatorname{Aut}(\Gamma)\backslash\operatorname{ Mor}(\Gamma,\Gamma_{t})\}_{t=1,\ldots,k}\text{ for some }\Gamma_{1},\ldots,\Gamma_{k}\in\widetilde{\mathfrak{C}}_{Y}:\ \cup_{t}Q_{t}\text{ is generic}\Big{\}}\] and the set \[\big{\{}\{\ell_{1},\ldots,\ell_{M}\}\subseteq\text{chains}(V_{\bullet})\text{ with pairwise distinct elements, and such that }V_{\bullet}=\cup_{i=1}^{M}\ell_{i}\big{\}}\] with \(M=|Q_{1}|+\ldots+|Q_{k}|\), given by \[\{Q_{1},\ldots,Q_{k}\}\mapsto\bigcup_{j=1}^{k}\{f^{*}(V_{\Gamma_{j},\bullet}) \}_{f\in Q_{j}}.\] Moreover, if \(f_{\ell}\colon\Gamma\to\Gamma_{t}\) is a contraction that corresponds to the chain \(\ell\subseteq V_{\bullet}\), then \(f_{\ell}^{*}(F_{\Gamma_{t}}^{Y})=F_{\Gamma,\max(\ell)}\) (in particular, the latter only depends on \(\max(\ell)\in V_{\bullet}\), and not on the whole chain). 
Furthermore, the pullback \(f_{\ell}^{*}(N_{\widetilde{\mathcal{J}}_{\Gamma_{t}}}\widetilde{\mathcal{J}})\) equals a direct sum of line bundles, which allows us to expand the wedge product \[\bigwedge^{\bullet}f_{\ell}^{*}(N^{\vee}_{\widetilde{\mathcal{J}}_{\Gamma_{t}}} \widetilde{\mathcal{J}})=\bigwedge^{\bullet}\bigoplus_{V\in\ell}\mathbb{L}^{\vee}_{V}= \sum_{S\subseteq\ell}(-1)^{|S|}\bigotimes_{V\in S}\mathbb{L}^{\vee}_{V}.\] In light of this, we rewrite the numerator inside the pushforward via \(f_{\Gamma}\) in (7.12) as \[\sum_{\begin{subarray}{c}\Gamma_{1},\ldots,\Gamma_{k}\in\widetilde{\mathfrak{C }}_{Y},\\ Q_{t}\subseteq\operatorname{Mor}(\Gamma,\Gamma_{t})\text{ for all }t=1,\ldots,k,\\ \text{such that }\cup_{t}Q_{t}\text{ is generic}\end{subarray}}c_{t}(F_{\Gamma}) \cdot\prod_{\begin{subarray}{c}j=1,\ldots,k,\\ f\in Q_{j}\end{subarray}}\left(f^{*}c_{t}\left(\bigwedge^{\bullet}N_{ \widetilde{\mathcal{J}}_{\Gamma_{j}}}^{\vee}\widetilde{\mathcal{J}}\otimes( -1)^{\operatorname{cd}\Gamma_{j}}F_{\Gamma_{j}}^{Y}\right)-1\right)=\] \[=c_{t}(F_{\Gamma})\cdot\sum_{\begin{subarray}{c}\{\ell_{1}, \ldots,\ell_{M}\}\subseteq\mathrm{chains}(V_{\bullet})\\ \text{such that }\ell_{1}\cup\ldots\cup\ell_{M}=V_{\bullet}\end{subarray}} \prod_{i=1}^{M}\prod_{S\subseteq\ell_{i}}\left(c_{t}\left((-1)^{|S|}\bigotimes _{V\in S}\mathbb{L}_{V}^{\vee}\otimes(-1)^{|\ell_{i}|}F_{\Gamma,\max( \ell_{i})}\right)-1\right) \tag{7.13}\] Next, we apply the inclusion-exclusion principle in the form \[\sum_{\begin{subarray}{c}\{\ell_{1},\ldots,\ell_{M}\}\subseteq\mathrm{chains}(V_{ \bullet})\\ \text{such that }\ell_{1}\cup\ldots\cup\ell_{M}=V_{\bullet}\end{subarray}} \varphi(\ell_{1},\ldots,\ell_{M})=\sum_{K\subseteq V_{\bullet}}(-1)^{|K|} \sum_{\{\ell_{1},\ldots,\ell_{M}\}\subseteq\mathrm{chains}(V_{\bullet} \setminus K)}\varphi(\ell_{1},\ldots,\ell_{M})\] for any function \(\varphi\) defined on collections of chains of \(V_{\bullet}\), to eliminate the condition that \(\bigcup_{i=1}^{M}\ell_{i}=V_{\bullet}\) in the last set of indices of (7.13). We thus obtain that (7.13) equals \[\sum_{K\subseteq V_{\bullet}}(-1)^{|K|}c_{t}(F_{\Gamma})\cdot\prod_{S\in \mathrm{chains}(V_{\bullet}\backslash K)}c_{t}\Bigg{(}\bigotimes_{V\in S} \mathbb{L}_{V}^{\vee}\otimes\sum_{\begin{subarray}{c}\ell\in\mathrm{chains}(V _{\bullet}\backslash K)\\ \text{such that }S\subseteq\ell\end{subarray}}(-1)^{|S|+|\ell|}F_{\Gamma,\max( \ell)}\Bigg{)} \tag{7.14}\] We now apply Lemma 7.18 to simplify (7.14), so it becomes \[\sum_{K\subseteq V_{\bullet}}(-1)^{|K|}c_{t}\left(F_{X}\right)\cdot\prod_{V_{ 0}\in V_{\bullet}\backslash K}c_{t}\Bigg{(}\bigotimes_{\begin{subarray}{c}V \in V_{\bullet}\backslash K\\ V\leq V_{0}\end{subarray}}\mathbb{L}_{V}^{\vee}\otimes H_{K,V_{0}}\Bigg{)}. \tag{7.15}\] After all these simplifications, we now go back and replace (7.15) as the numerator of the term in (7.12) that is pushed forward via \(f_{\Gamma}\), to obtain that (7.12) equals \[\sum_{\Gamma\in\widetilde{\mathfrak{C}}_{E}\setminus\{\bullet\}}\frac{1}{| \operatorname{Aut}(\Gamma)|}f_{\Gamma*}\Bigg{(}\frac{\sum_{K\subseteq V_{ \bullet}}(-1)^{|K|}c_{t}\left(F_{X}\right)\cdot\prod_{V_{0}\in V_{\bullet} \setminus K}c_{t}\left(\bigotimes_{V\leq V_{0}}\mathbb{L}_{V}^{\vee}\otimes H _{K,V_{0}}\right)}{c_{\operatorname{top}}N_{\widetilde{\mathcal{J}}_{\Gamma}} \widetilde{\mathcal{J}}}\Bigg{)}. 
\tag{7.16}\] Our final step is to repeatedly use Formula (3.33) for the total Chern class of the tensor product of a K-theory element with a line bundle, and then to divide by \[c_{\operatorname{top}}N_{\widetilde{\mathcal{J}}_{\Gamma}}\widetilde{ \mathcal{J}}=\prod_{V\in V_{\bullet}}-\Psi_{V}. \tag{7.17}\] After combining the binomial coefficients by means of Vandermonde's identity, we obtain that Formula (7.16) equals the final formula (7.6). (One way to obtain the formula is to consider only the case \(K=\varnothing\) in (7.15), expand it as a polynomial in \(\{\Psi_{V}\}_{V\in V_{\bullet}}\), keep only the monomials containing \(\prod_{V\in V_{\bullet}}\Psi_{V}^{a_{V}}\) with all \(a_{V}\geq 1\), and then lower each exponent \(a_{V}\) by one to account for the division by the term in (7.17).) We now prove the ancillary results used in the proof of Theorem 7.4. **Lemma 7.18**.: _Let \(V_{\bullet}\) be a rooted forest, and let \((x_{V})_{V\in V_{\bullet}}\) be formal variables. Let \(S\subseteq V_{\bullet}\) be a chain in \(V_{\bullet}\). Then_ \[\sum_{\begin{subarray}{c}\ell\in\operatorname{chains}(V_{\bullet})\\ \text{such that }S\subseteq\ell\end{subarray}}(-1)^{|S|+|\ell|}x_{\max(\ell)}= \begin{cases}-\sum_{V\in\min(V_{\bullet})}x_{V}&\text{if $S=\varnothing$;}\\ 0&\text{if there is $V\in V_{\bullet}\setminus S$ s.t. $V<\max(S)$ and $S\cup\{V\}$ is a chain;}\\ x_{\max(S)}-\sum_{\begin{subarray}{c}V\in V_{\bullet}\\ V\gg\max(S)\end{subarray}}x_{V}&\text{otherwise.}\end{cases}\] Proof.: Assume that \(S\) is nonempty and that there exists \(V\in V_{\bullet}\setminus S\) such that \(V<\max(S)\) and \(S\cup\{V\}\) still is a chain. Then we can write \[\sum_{\begin{subarray}{c}C\in\operatorname{chains}(V_{\bullet}) \\ S\subseteq C\end{subarray}}(-1)^{|S|+|C|}x_{\max(C)}=\sum_{\begin{subarray}{c}C\in \operatorname{chains}(V_{\bullet})\\ S\subseteq C,V\notin C\end{subarray}}\left[(-1)^{|S|+|C|}x_{\max(C)}+(-1)^{|S|+|C\cup \{V\}|}x_{\max(C\cup\{V\})}\right];\] since \(\max(C)=\max(C\cup\{V\})\), each bracket vanishes, so the sum is \(0\). Now assume that \(S\) is nonempty, write \(V_{0}:=\max(S)\), and assume moreover that \(S=\{V\in V_{\bullet}:V\leq V_{0}\}\). Then we can write \[\sum_{\begin{subarray}{c}C\in\operatorname{chains}(V_{\bullet})\\ S\subseteq C\end{subarray}}(-1)^{|S|+|C|}x_{\max(C)}=\sum_{V\in V_{\bullet}}x_{ V}\sum_{\begin{subarray}{c}C\in\operatorname{chains}(V_{\bullet})\\ S\subseteq C,\max(C)=V\end{subarray}}(-1)^{|S|+|C|}\] If \(V=V_{0}\), then the condition \(S\subseteq C\) and \(\max(C)=V_{0}\) is equivalent to \(C=S\), so \[\sum_{\begin{subarray}{c}C\in\operatorname{chains}(V_{\bullet})\\ S\subseteq C,\max(C)=V\end{subarray}}(-1)^{|S|+|C|}=1.\] If \(V\gg V_{0}\), then the condition \(S\subseteq C\) and \(\max(C)=V\) is equivalent to \(C=S\cup\{V\}\), so \[\sum_{\begin{subarray}{c}C\in\operatorname{chains}(V_{\bullet})\\ S\subseteq C,\max(C)=V\end{subarray}}(-1)^{|S|+|C|}=-1.\] If \(V<V_{0}\), or if \(V\) is incomparable with \(V_{0}\), then the sum is empty, and so it is \(0\). If \(V>V_{0}\) but \(V\not\gg V_{0}\), choose \(V^{\prime}\) such that \(V_{0}<V^{\prime}<V\), and then \[\sum_{\begin{subarray}{c}C\in\operatorname{chains}(V_{\bullet})\\ S\subseteq C,\max(C)=V\end{subarray}}(-1)^{|S|+|C|}=\sum_{\begin{subarray}{c}C \in\operatorname{chains}(V_{\bullet})\\ S\subseteq C,\max(C)=V,V^{\prime}\notin C\end{subarray}}\left[(-1)^{|S|+|C|}+(-1)^{ |S|+|C\cup\{V^{\prime}\}|}\right]\] which equals \(0\). The case \(S=\varnothing\) is similar. We conclude with the proof of the inclusion-exclusion principle: Proof.: (of Lemma 7.9). 
It is enough to prove the statement for the case of the structure sheaf \(\mathcal{L}=\mathcal{O}\). Assuming that \(\mathcal{D}=\mathcal{D}_{1}+\mathcal{D}_{2}\), we have the short exact sequences \[0\to\mathcal{O}(-\mathcal{D}_{1}-\mathcal{D}_{2})\to\mathcal{O} \to\mathcal{O}_{\mathcal{D}_{1}+\mathcal{D}_{2}}\to 0\] \[0\to\mathcal{O}(-\mathcal{D}_{1}-\mathcal{D}_{2})\to\mathcal{O} (-\mathcal{D}_{2})\to\mathcal{O}_{\mathcal{D}_{1}}(-\mathcal{D}_{2})\to 0\] \[0\to\mathcal{O}(-\mathcal{D}_{2})\to\mathcal{O}\to\mathcal{O}_{ \mathcal{D}_{2}}\to 0\] \[0\to\mathcal{O}_{\mathcal{D}_{1}}(-\mathcal{D}_{2})\to \mathcal{O}_{\mathcal{D}_{1}}\to\mathcal{O}_{\mathcal{D}_{1}\cap\mathcal{D}_{ 2}}\to 0.\] By combining these, we obtain the equality \[\mathcal{O}_{\mathcal{D}_{1}+\mathcal{D}_{2}}=\mathcal{O}_{\mathcal{D}_{1}}+ \mathcal{O}_{\mathcal{D}_{2}}-\mathcal{O}_{\mathcal{D}_{1}\cap\mathcal{D}_{2}}.\] The statement is then obtained by repeatedly decomposing \(\mathcal{D}\) until all summands are irreducible. Our next task is to take the push-forward of Formula (7.6) via the blow-down morphism \(p\colon\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\to\overline{ \mathcal{J}}^{d}_{g,n}(\phi^{+})\), to produce an explicit graph formula for the difference of the Brill-Noether classes. Recall the notation for the strata \(\mathcal{J}_{G,D}\) that was set at the beginning of this section. For \(V\in V_{\bullet}\), we define the "close upper edges" and "far upper edges" as \[\operatorname{CU}(V):=E(V,\operatorname{next}(V)\setminus V)\text{ and } \operatorname{FU}(V):=E(V,\operatorname{next}(V)^{c}) \tag{7.19}\] so there is a decomposition \[E(V,V^{c})=\operatorname{CU}(V)\sqcup\operatorname{FU}(V).\] For every collection \((g_{V})_{V\in V_{\bullet}}\) of nonnegative integers, define the class \[c_{(G,D,V_{\bullet})}((g_{V})_{V\in V_{\bullet}})\in A^{\bullet}(\overline{ \mathcal{J}}_{(G,D)})\] to equal \[\sum_{\begin{subarray}{c}(a_{e,V})_{V\in V_{\bullet},e\in\operatorname{FU}(V)},\\ (g_{e,V})_{V\in V_{\bullet},e\in\operatorname{CU}(V)}\\ \text{such that, for all }V\in V_{\bullet},\\ \sum_{e\in\operatorname{CU}(V)}(g_{e,V}+1)-\sum_{e\in\operatorname{FU}(V)}a_{e,V}=g_{V}+1\end{subarray}}\left(\prod_{e\in E(G)} \Psi_{(G,D,e)}^{g_{e,k(e)}-\sum_{V\in S(e)}a_{e,V}}\cdot\prod_{V\in S(e)}(-1) ^{a_{e,V}}\binom{g_{e,k(e)}-\sum_{\begin{subarray}{c}V^{\prime}\in S(e),\\ V\subseteq V^{\prime}\end{subarray}}a_{e,V^{\prime}}}{a_{e,V}}\right) \tag{7.20}\] where the \(a_{e,V}\) and \(g_{e,V}\) vary over the nonnegative integers, and for \(e\in E(G)\), we let \(k(e)\in V_{\bullet}\) be the unique (by Proposition 5.19) element such that \(e\in\operatorname{CU}(k(e))\), and we let \(S(e):=\{V\in V_{\bullet}:e\in\operatorname{FU}(V)\}\). For a given \(h\colon(G^{\prime},D^{\prime},V^{\prime}_{\bullet})\to(G,D,V_{\bullet})\), the pullback \(h^{*}((g_{V}))=h^{*}((g_{V})_{V\in V_{\bullet}})_{V^{\prime}\in V^{\prime}_{ \bullet}}\) is defined by: \[h^{*}((g_{V})_{V\in V_{\bullet}})_{V^{\prime}}:=\begin{cases}g_{V}&\text{ if }V^{\prime}=h^{-1}(V)\text{ for some }V\in V_{\bullet};\\ -1&\text{ otherwise.}\end{cases} \tag{7.21}\] We then have the following pushforward result. 
**Proposition 7.22**.: _The following pushforward formula holds_ \[p_{*}\left(\frac{f_{(G,D,V_{\bullet})*}}{|\operatorname{Aut}(G, D,V_{\bullet})|}\left(\prod_{V\in V_{\bullet}}\Psi^{g_{V}}_{V}\right) \right)=\\ \sum_{(G^{\prime},D^{\prime})\in\mathfrak{C}_{g,n}(\phi)}\frac{f_{(G^{ \prime},D^{\prime})*}}{|\operatorname{Aut}(G^{\prime},D^{\prime})|}\Bigg{(} \sum_{\begin{subarray}{c}V^{\prime}_{\bullet}\text{ a full forest in }\operatorname{ Ext}(G^{\prime},D^{\prime}),\\ h\in\operatorname{Mor}((G^{\prime},D^{\prime},V^{\prime}_{\bullet}),(G,D,V_{\bullet}))\end{subarray}}c _{(G^{\prime},D^{\prime},V^{\prime}_{\bullet})}(h^{*}((g_{V})))\Bigg{)} \tag{7.23}\] Proof.: Follows from Corollary 4.35. Our final step is to take the pushforward of our formula in Theorem 7.4 via \(p\colon\widetilde{\mathcal{J}}^{d}_{g,n}(\phi^{+},\phi^{-})\to\overline{ \mathcal{J}}^{d}_{g,n}(\phi^{+})\). Note first that the K-theory elements appearing in Theorem 7.4 are pull-backs via \(p\) of the corresponding classes defined in (7.1), which we will denote with the same name in the next result. The pushforward via \(p\) is then obtained by combining Theorem 7.4 and Proposition 7.22. **Corollary 7.24**.: _The difference \(\mathsf{w}_{d}(\phi^{+})-\mathsf{Id}^{*}\mathsf{w}_{d}(\phi^{-})\) in \(A^{g-d}\big{(}\overline{\mathcal{J}}^{d}_{g,n}(\phi^{+})\big{)}\) equals_ \[-\sum_{(G,D)\in\mathfrak{C}_{g,n}(\phi)}\frac{1}{|\operatorname{ Aut}(G,D)|}f_{(G,D)*}\\ \Bigg{(}\sum_{\begin{subarray}{c}V_{\bullet}\text{ a full forest in }\operatorname{Ext}(G,D),\\ s+\sum_{V}j_{V}+\sum g_{e}=g-d-|E(G)|\end{subarray}}\alpha(s,(j_{V})_{V\in V _{\bullet}},(g_{e})_{e\in E(G)})\cdot c_{s}(F^{X}_{+})\cdot\prod_{V\in V_{ \bullet}}c_{j_{V}}(H^{+}_{V})\prod_{e}\Psi^{g_{e}}_{(G,D,e)}\Bigg{)}, \tag{7.25}\] _where each coefficient \(\alpha(s,(j_{V})_{V\in V_{\bullet}},(g_{e})_{e\in E(G)})\) is defined by_ \[\sum_{(a_{e,V})_{(e,V)}}(-1)^{|V_{\bullet}|}\prod_{e\in E(G)}\prod _{V\in S(e)}(-1)^{a_{e,V}}\binom{g_{e}+\sum_{V^{\prime}\in S(e)}a_{e,V^{ \prime}}}{a_{e,V}}, \tag{7.26}\] _where \(\operatorname{CN}(V):=E(\operatorname{next}(V),\operatorname{next}(V)^{c})\), each \(a_{e,V}\) ranges over the integers, and the indices \((e,V)\) range over all \(V\in V_{\bullet}\) and over all \(e\in\operatorname{FU}(V)\)._ (Note that, because of the last binomial, the summand is zero except for finitely many natural number values of \(a_{e,V}\). Also, note that \(H^{+}_{V}\) depends on \(V_{\bullet}\).) Proof.: First observe that the result amounts to taking the degree \(g-d\) part of the pushforward via \(p\) of Formula (7.6). The calculation that we are attempting has the form \[p_{*}\left(\sum_{(G,D,V_{\bullet})\in\widetilde{\mathfrak{C}}_{E}}\frac{f_{(G,D,V_{\bullet})*}}{|\operatorname{Aut}(G,D,V_{\bullet})|}\left(\sum_{(g_{V})_{ V\in V_{\bullet}}}p_{(G,D)}^{*}(\beta_{(g_{V})_{V\in V_{\bullet}}})\prod_{V\in V _{\bullet}}\Psi^{g_{V}}_{V}\right)\right) \tag{7.27}\] for suitable classes \(\beta_{(g_{V})_{V\in V_{\bullet}}}\in A^{*}(\overline{\mathcal{J}}_{(G,D)})\) as in (7.6). 
Since \(F^{X}\) and \(H_{V}\) are pullbacks of \(F^{X}_{+}\) and \(H^{+}_{V}\), by the push-pull formula, and by Proposition 7.22, we obtain that (7.27) equals \[\sum_{(G^{\prime},D^{\prime})\in\mathfrak{C}_{g,n}(\phi)}\frac{f_{(G^{\prime},D^{ \prime})*}}{|\mathrm{Aut}(G^{\prime},D^{\prime})|}\Bigg{(}\sum_{\begin{subarray}{ c}V^{\prime}_{\bullet}\text{ a full forest in }\operatorname{Ext}(G^{\prime},D^{\prime});\\ h\in\operatorname{Mor}((G^{\prime},D^{\prime},V^{\prime}_{\bullet}),(G,D,V _{\bullet}));\\ (g_{V}\geq 0)_{V\in V_{\bullet}}\end{subarray}}h^{*}\beta_{(g_{V})_{V\in V _{\bullet}}}\cdot c_{(G^{\prime},D^{\prime},V^{\prime}_{\bullet} )}(h^{*}((g_{V})))\Bigg{)}. \tag{7.28}\] For a tuple \((g_{V^{\prime}}\geq-1)_{V^{\prime}\in V^{\prime}_{\bullet}}\) we define \(h\colon(G^{\prime},D^{\prime},V^{\prime}_{\bullet})\to(G,D,V_{\bullet})\) as the unique contraction with the property that \(g_{V^{\prime}}\geq 0\) if and only if \(V^{\prime}=h^{-1}(V)\) for some \(V\in V_{\bullet}\). That is, we contract each collection of vertices \(V^{\prime}\) such that \(g_{V^{\prime}}=-1\) (see (7.21)). We then define \(\beta_{(g_{V^{\prime}})_{V^{\prime}\in V^{\prime}_{\bullet}}}:=h^{*}(\beta_{( g_{V})_{V\in V_{\bullet}}})\). Formula (7.28) can then be simplified to \[\sum_{(G^{\prime},D^{\prime})\in\mathfrak{C}_{g,n}(\phi)}\frac{f_{(G^{\prime}, D^{\prime})*}}{|\mathrm{Aut}(G^{\prime},D^{\prime})|}\Bigg{(}\sum_{ \begin{subarray}{c}V^{\prime}_{\bullet}\text{ a full forest in }\operatorname{Ext}(G^{\prime},D^{ \prime});\\ (g_{V^{\prime}}\geq-1)_{V^{\prime}\in V^{\prime}_{\bullet}}\end{subarray}} \beta_{(g_{V^{\prime}})_{V^{\prime}\in V^{\prime}_{\bullet}}}\cdot c_ {(G^{\prime},D^{\prime},V^{\prime}_{\bullet})}((g_{V^{\prime}}))\Bigg{)}. \tag{7.29}\] Now, to obtain the final result, we rename \((G^{\prime},D^{\prime})\) and \(V^{\prime}_{\bullet}\) as \((G,D)\) and \(V_{\bullet}\). Then we eliminate the indices \((g_{V})\) by means of the equality \[g_{V}=-1+\sum_{e\in\operatorname{CU}(V)}(g_{e,V}+1)-\sum_{e\in\operatorname{FU}(V)}a_{e,V},\] and we replace the indices \((g_{e,V})\) with indices \((g_{e})\) defined by \(g_{e}:=g_{e,k(e)}-\sum_{V\in S(e)}a_{e,V}\). As promised earlier, here we compare the \(\psi\) classes on Jacobians with the classical ones on moduli of curves. **Remark 7.30**.: Denote by \(f\colon\mathcal{J}_{G,D}\to\overline{\mathcal{M}}_{G}\) the forgetful morphism. For every \(e\in E(G)\), we have: \[\Psi_{(G,D,e)}=f^{*}\Psi_{G,e}+\Delta_{G,D,e}\] where \(\Psi_{G,e}=-c_{1}(\mathbb{L}_{e})\) is defined via the normal line bundle \(\mathbb{L}_{e}\) corresponding to the node \(e\) on the stratum \(\overline{\mathcal{M}}_{G}\to\overline{\mathcal{M}}_{g,n}\), and \(\Delta_{G,D,e}\) is the divisor in \(\mathcal{J}_{G,D}\) whose points represent sheaves that fail to be locally free at the edge \(e\). ### The case of disjoint blowups Our main results, Theorem 7.4 and Corollary 7.24, simplify considerably in the case when the \(m\) vine curves \(\beta_{1},\ldots,\beta_{m}\) are disjoint. This is for example the case for all hyperplanes on divisorial (or compact type) vine curves (3.12) (Proposition 3.15), in which case \(m\) equals \(1\) and no blowup is required, and for all hyperplanes of the form (3.13) with \(S\neq[n]\) (Proposition 5.29). 
In each of these cases, the category \(\widetilde{\mathfrak{C}}_{E}\) only contains the terminal object and the resolved strata \((\beta_{i},V^{i}_{\bullet})\), where \(V^{i}_{\bullet}=\{V_{i}\}\) consists of the single vertex set \(V_{i}=\{\mathrm{leg}_{\beta_{i}}(1)\}\), for all \(i=1,\ldots,m\) (Proposition 5.29). Recall the notation from the previous section. We set \(X_{i}^{+}\), \(Y_{i}^{+}\) (respectively, \(X_{i}\), \(Y_{i}\)) as the two components over \(\beta_{i}\) (respectively, over \((\beta_{i},V_{\bullet}^{i})\)). We denote by \(g_{Y_{i}}\) the genus of the fiber of \(Y_{i}\) and by \(d_{Y_{i}}\) the degree of the universal line bundle on \(Y_{i}\). We also set \(F_{+}^{Y_{i}}=F_{V_{i}}^{+}\) and \(F^{Y_{i}}=F_{V_{i}}\). Let \(t_{i}\) be the number of nodes of a general curve in \(\beta_{i}\), so \(|\operatorname{Aut}(\beta_{i})|=t_{i}!\). Then we have: **Corollary 7.31**.: _When the hyperplane \(H=H(\phi^{+},\phi^{-})\) is such that the vine curves \(\beta_{1},\dots,\beta_{m}\) are pairwise disjoint, Formula (7.6) simplifies to_ \[\sum_{i=1}^{m}\sum_{s_{i},j_{i},k_{i}\geq 0}\frac{1}{t_{i}!}\binom{g_{Y_{i}}- d_{Y_{i}}-j_{i}-1}{k_{i}+1}\cdot f_{E_{i}*}\big{(}c_{s_{i}}(F^{X_{i}})\cdot c_{j_{i}}(F ^{Y_{i}})\cdot\Psi_{i}^{k_{i}}\big{)} \tag{7.32}\] Proof.: Follows immediately from Theorem 7.4. Note that the minus sign in the definition of \(b_{G,D,V}((j_{V});(k_{V}))\) cancels against the minus sign before \(f_{\Gamma*}\) in Equation (7.6). We can also recast the main result, Corollary 7.24, by simply taking the pushforward along each \(\mathbb{P}^{t_{i}-1}\) bundle \(p_{i}\colon E_{i}\to\beta_{i}\). Note that the K-theory elements \(F^{X_{i}}\) and \(F^{Y_{i}}\) are pull-backs of corresponding elements \(F_{+}^{X_{i}}\) and \(F_{+}^{Y_{i}}\) on \(\beta_{i}\). For each \(i\) and \(1\leq r_{i}\leq t_{i}\), let \(\Psi_{i,r_{i}}\) be the first Chern class of the conormal bundle to the \(r_{i}\)-th gluing on the resolved stratum \(\beta_{i}\). **Corollary 7.33**.: _When the hyperplane \(H=H(\phi^{+},\phi^{-})\) is such that the vine curves \(\beta_{1},\dots,\beta_{m}\) are pairwise disjoint, the formula in Corollary 7.24 equals_ \[\sum_{i=1}^{m}\sum_{s_{i}+j_{i}+\lambda_{i}=g-d-t_{i}}\frac{1}{t_{i}!}\binom{g_{Y_{i}}- d_{Y_{i}}-j_{i}-1}{g-d-j_{i}-s_{i}}\cdot f_{\beta_{i}*}\big{(}c_{s_{i}}(F_{+}^{X_{i}}) \cdot c_{j_{i}}(F_{+}^{Y_{i}})\cdot h_{\lambda_{i}}(\Psi_{i,1},\dots,\Psi_{i, t_{i}})\big{)} \tag{7.34}\] _where \(h_{\lambda_{i}}\) is the complete homogeneous symmetric polynomial of degree \(\lambda_{i}\) in its entries._ Proof.: This follows from Corollary 7.24 or, more directly, by applying the push-pull formula to Corollary 7.31, combined with the fact that the push-forward of \(\Psi_{i}^{k_{i}}\) along \(E_{i}\to\beta_{i}\) equals \(h_{k_{i}-t_{i}+1}(\Psi_{i,1},\dots,\Psi_{i,t_{i}})\). We now analyse some particularly simple special cases of this formula. **Remark 7.35**.: (The "compact type" hyperplanes). A special case of Corollaries 7.31 and 7.33 occurs when \(H(\phi^{+},\phi^{-})\) is a hyperplane of the form (3.12). In this case the generic locus where \(\phi^{+}\) differs from \(\phi^{-}\) is a compact type boundary divisor. In particular, \(m\) equals \(1\) and \(\widetilde{\mathcal{J}}_{g,n}^{d}(\phi^{+},\phi^{-})\to\overline{\mathcal{J}}_ {g,n}^{d}(\phi^{+})\) is the identity. 
The unique vine curve \(\beta=\beta_{1}\) consists of a boundary divisor \(\Delta_{g-g_{Y},S}\subset\overline{\mathcal{M}}_{g,n}\) decorated with a unique pair of \(\phi^{+}\)-stable bidegrees, say \((d-d_{Y},d_{Y})\). In this case the degree \(g-d\) part of the formula in 7.31 coincides with the formula in 7.33, and both equal \[\sum_{s+j+\lambda=g-d-1}\binom{g_{Y}-d_{Y}-j-1}{g-d-j-s}\cdot f_{\beta*}\big{(} c_{s}(F_{+}^{X})\cdot c_{j}(F_{+}^{Y})\cdot\Psi^{\lambda}\big{)}. \tag{7.36}\] ### Wall-crossing in low codimension We now analyse the first few cases of our main result, ordered by codimension. #### 7.b.1. Codimension \(1\) Let \(d=g-1\). In this case the classes \(\mathsf{w}_{g-1}(\phi)\) are divisors, also known under the name of _theta divisors_. This case was the main result of [10]. Because each \(\mathsf{w}_{g-1}(\phi)\) is a divisor class and \(\overline{\mathcal{J}}_{g,n}^{d}(\phi^{+})\) is nonsingular, the wall-crossing term equals zero across any hyperplane not of the form (3.12). Assume that the hyperplane crossed is \(H=H(g-g_{Y},1,S;d-d_{Y}+\frac{1}{2})\). Then Formula (7.36) collapses, giving \[\mathsf{w}_{g-1}(\phi^{+})-\mathsf{Id}^{*}\mathsf{w}_{g-1}(\phi^{-})=(g_{Y}-d _{Y}-1)\cdot[\mathcal{J}_{\beta}]\] for \(\beta=(G(g-g_{Y},1,S),(d-d_{Y},d_{Y}))\). This recovers [10, Theorem 4.1] after observing that the divisor \(\mathcal{J}_{\beta}\subset\overline{\mathcal{J}}_{g,n}^{d}(\phi^{+})\) is the pullback of \(\Delta_{g-g_{Y},S}\subset\overline{\mathcal{M}}_{g,n}\). #### 7.b.2. Codimension \(2\) When \(d=g-2\), the classes \(\mathsf{w}_{g-2}(\phi)\) have codimension \(2\). There are two types of hyperplanes where the wall-crossing term is not zero. If the hyperplane has the form (3.12), the vine curve \(\beta\) is a boundary divisor. If \(H\) and \(\beta\) are as in the previous paragraph, then Formula (7.36) reads \[f_{\beta*}\left(\binom{g_{Y}-d_{Y}-1}{g-d-1}c_{1}(F_{+}^{X})+\binom{g_{Y}-d_{Y }-2}{g-d-1}c_{1}(F_{+}^{Y})+\binom{g_{Y}-d_{Y}-1}{g-d}\Psi\right).\] If the hyperplane is of type (3.13), then the only case in which the formula is non-trivial is \(H=H(g-g_{Y}-1,2,S,d-d_{Y}-1)\). While this hyperplane might witness a change in stability on more than one vine curve, the intersection of any two would occur in codimension \(>2\) and is hence not relevant. We can read the wall-crossing term off Corollary 7.33: \[\sum_{i=1}^{m}\binom{g_{Y_{i}}-d_{Y_{i}}-1}{g-d}[\mathcal{J}_{\beta_{i}}]\] Here \(\beta_{i}\), for \(i=1,\ldots,m\), is a vine curve of the form \((G(g-g_{Y_{i}}-1,2,S),(d-d_{Y_{i}},d_{Y_{i}}))\) where the stability condition changes. (If \(S^{c}\) is not empty, then \(m=1\).) ### Pullbacks via Abel-Jacobi sections Fix integers \(\mathbf{d}=(k;d_{1},\ldots,d_{n})\) and \(\mathbf{f}=(f_{i,S})_{i,S}\), and let \(\mathcal{L}=\mathcal{L}_{\mathbf{d},\mathbf{f}}\) be the line bundle on the universal curve \(\overline{\mathcal{C}}_{g,n}\) defined in Section 3.d.1. Let then \(\phi^{+}\) and \(\phi^{-}\) be on opposite sides of a hyperplane \(H\) (Definition 5.1), and such that \(\mathcal{L}\) is \(\phi^{+}\)-stable. This defines an Abel-Jacobi section \(\sigma=\sigma_{\mathbf{d},\mathbf{f}}\colon\overline{\mathcal{M}}_{g,n}\to \overline{\mathcal{J}}_{g,n}^{d}(\phi^{+})\). We now compute the pullback of Formula (7.25) via \(\sigma\). For every \(G\in G_{g,n}\), define the divisor \(D=D_{\mathbf{d},\mathbf{f}}\) on \(G\) as the multidegree of \(\mathcal{L}\) on any curve whose dual graph equals \(G\). 
We then have a poset \(\operatorname{Ext}(G)=\operatorname{Ext}(G,D)\), depending on \(\phi^{+},\phi^{-}\), defined in Section 5. The total space \(\mathcal{C}_{G}\to\overline{\mathcal{M}}_{G}\) has one irreducible component \(\mathcal{C}_{v}:=\mathcal{C}_{G,v}\) for each vertex \(v\) of \(G\). We redefine \(\pi_{v}=\pi_{G,v}\colon\mathcal{C}_{v}\to\overline{\mathcal{M}}_{G}\). Also, for each \(V\subset V(G)\), we denote by \(\pi_{V}\colon\bigcup_{v\in V}\mathcal{C}_{v}\to\overline{\mathcal{M}}_{G}\) the induced map on the union. We write \(X=X_{G}=\mathcal{C}_{\operatorname{leg}(1)}\) and \(\Sigma=X\cap\mathcal{C}_{\{\operatorname{leg}(1)\}^{c}}\). We also write \(Y_{V}=\mathcal{C}_{V^{c}}\) for every \(V\subset V(G)\). We then define the following K-theory elements on \(\overline{\mathcal{M}}_{G}\) \[F_{\mathbf{d},\mathbf{f}}^{X}:=-R^{\bullet}(\pi_{X})_{*}\mathcal{L}(-\Sigma)_ {|X};\ \ F_{V}^{\mathbf{d},\mathbf{f}}:=-R^{\bullet}(\pi_{V^{c}})_{*}(\mathcal{L}_{|Y _{V}});\ \ H_{V}^{\mathbf{d},\mathbf{f}}:=F_{V}^{\mathbf{d},\mathbf{f}}-\sum_{V^{\prime}\in V_{\bullet},V^{ \prime}\gg V}F_{V^{\prime}}^{\mathbf{d},\mathbf{f}}.\] The line bundle \(\mathcal{L}\) also defines a section, possibly rational, \(\sigma^{-}\colon\overline{\mathcal{M}}_{g,n}\dashrightarrow\overline{ \mathcal{J}}_{g,n}^{d}(\phi^{-})\). **Corollary 7.37**.: _The difference_ \[\sigma^{*}(\mathsf{w}_{d}(\phi^{+}))-\sigma_{-}^{*}(\mathsf{w}_{d}(\phi^{-}))\] _equals_ \[-\sum_{G\in G_{g,n}}\frac{1}{|\operatorname{Aut}(G)|}f_{G*}\\ \left(\sum_{\begin{subarray}{c}V_{\bullet}\text{ a full forest in }\operatorname{Ext}(G),\\ s+\sum_{V}j_{V}+\sum g_{e}=g-d-|E(G)|\end{subarray}}\alpha(s,(j_{V})_{V\in V _{\bullet}},(g_{e})_{e\in E(G)})\cdot c_{s}(F_{\mathbf{d},\mathbf{f}}^{X}) \cdot\prod_{V\in V_{\bullet}}c_{j_{V}}(H_{V}^{\mathbf{d},\mathbf{f}})\prod_{ e}\Psi_{(G,e)}^{g_{e}}\right) \tag{7.38}\] _where the coefficient \(\alpha\) is defined in Equation (7.26)._ Proof.: Follows directly by pulling back (7.25) via \(\sigma\). When the hyperplane \(H\) is such that the vine curves that fail \(\phi^{-}\)-stability are disjoint, the latter can be simplified, as for Formula (7.34). **Remark 7.39**.: The pull-back of Formula (7.25) via the Abel-Jacobi section \(\sigma_{\mathbf{d},\mathbf{f}}\) can be explicitly computed via [14, Theorem 1]. For a full forest \(V_{\bullet}\) in \(\operatorname{Ext}(G)\), recall the definition of the next element from (5.17). Define \(Z_{V}:=\mathcal{C}_{\operatorname{next}(V)\setminus V}\) and set \(\Sigma_{V}=Z_{V}\cap\bigcup_{V^{\prime}\gg V}\mathcal{C}_{V^{\prime c}}\) and \(\Sigma_{V}^{\prime}=Z_{V}\cap\mathcal{C}_{V}\); we then have \[H_{V}^{\mathbf{d},\mathbf{f}}=-R^{\bullet}(\pi_{V})_{*}((\mathcal{L}_{\mathbf{d},\mathbf{f}})_{|Z_{V}}(-\Sigma_{V})),\] and the line bundle \((\mathcal{L}_{\mathbf{d},\mathbf{f}})_{|Z_{V}}(-\Sigma_{V})\) equals \[\omega_{Z_{V}/\overline{\mathcal{M}}_{G}}^{k}\Bigg{(}k\Sigma_{V}^{\prime}+(k-1) \Sigma_{V}+\sum_{\operatorname{leg}(j)\in\operatorname{next}(V)\setminus V}d_{ j}P_{j}+\sum_{i,\ S^{c}\subseteq\operatorname{leg}^{-1}(\operatorname{next}(V) \setminus V)}f_{i,S^{c}}C_{i,S^{c}}|_{Z_{V}}\Bigg{)}.\] We note that \(Z_{V}\) is the disjoint union of the \(\mathcal{C}_{v}\) for \(v\in\operatorname{next}(V)\setminus V\) (see Proposition 5.19). Moreover, the line bundle above, restricted to each one of these components, is precisely the pullback via the projection \(\overline{\mathcal{M}}_{G}\to\overline{\mathcal{M}}_{g(v),\operatorname{val}(v)}\) of a line bundle as in [12, Formula 0.1]. 
**Example 7.40**.: As an illustration of Remark 7.39, we write the simpler case where \(H\) corresponds to changing the stability condition on a single vine curve \(\beta\) with \(t\) nodes, components of genus \(g_{X}\) and \(g_{Y}\), with markings \(S_{\beta}\) and \(S_{\beta}^{c}\) respectively. Then \(\overline{\mathcal{M}}_{\beta}=\overline{\mathcal{M}}_{g_{X},|S_{\beta}|+t} \times\overline{\mathcal{M}}_{g_{Y},|S_{\beta}^{c}|+t}\) and we denote by \(p_{X}\) and \(p_{Y}\) the projections. In this case we have that \[F_{\mathcal{L}}^{X}= p_{X}^{*}\Bigg{(}-R^{\bullet}(\pi^{X})_{*}\Bigg{(}\omega_{X}^{k}(( k-1)\Sigma+\sum_{j\in S_{\beta}}d_{j}P_{j}+\sum_{\begin{subarray}{c}i\leq g_{X}\\ 1\in S\subseteq S_{\beta}\end{subarray}}f_{i,S_{\beta}\setminus S}\cdot C_{i, S_{\beta}\setminus S}^{X})\Bigg{)}\Bigg{)}\] \[F_{\mathcal{L}}^{Y}= p_{Y}^{*}\Bigg{(}-R^{\bullet}(\pi^{Y})_{*}\Bigg{(}\omega_{Y}^{k}(k \Sigma+\sum_{j\in S_{\beta}^{c}}d_{j}P_{j}+\sum_{\begin{subarray}{c}i\leq g_{ Y}\\ S\subseteq S_{\beta}^{c}\end{subarray}}f_{i,S_{\beta}^{c}\setminus S}\cdot C_{i, S_{\beta}^{c}\setminus S}^{Y})\Bigg{)}\Bigg{)}\] Formula (7.38) for the difference \(\sigma^{*}(\mathsf{w}_{d}(\phi^{+}))-\sigma_{-}^{*}(\mathsf{w}_{d}(\phi^{-}))\) in this case becomes \[\sum_{\begin{subarray}{c}s+j+\lambda\\ =g-d-t\end{subarray}}\binom{g_{Y}-d_{Y}-j-1}{g-d-j-s}\frac{f_{\beta*}}{t!} \Bigg{(}c_{s}(F_{\mathcal{L}}^{X})\cdot c_{j}(F_{\mathcal{L}}^{Y})\cdot h_{ \lambda}(\Psi_{1},\ldots,\Psi_{t})\Bigg{)}.\] The Chern classes above are computed in [12, Theorem 1].
2307.15610
Trends and Topics: Characterizing Echo Chambers' Topological Stability and In-group Attitudes
Social Network sites are fertile ground for several polluting phenomena affecting online and offline spaces. Among these phenomena are included echo chambers, closed systems in which the opinions expressed by the people inside are exacerbated for the effect of the repetition, while opposite views are actively excluded. This paper offers a framework to explore, in a platform-independent manner, the topological changes through time of echo chambers, while considering the content posted by users and the attitude conveyed in discussing specific controversial issues. The proposed framework consists of four steps: (i) data collection and annotation of users' ideology regarding a controversial topic, (ii) construction of a dynamic network of interactions, (iii) ECs extraction and analysis of their dynamics, and (iv) topic extraction and valence analysis. The paper then enhances the formalization of the framework by conducting a case study on Reddit threads about sociopolitical issues (gun control, American politics, and minorities discrimination) during the first two years and a half of Donald Trump's presidency. The results unveil that users often stay inside echo chambers over time. Furthermore, in the analyzed discussions, the focus is on controversies related to right-wing parties and specific events in American and Canadian politics. The analysis of the attitude conveyed in the discussions shows a slight inclination toward a more negative or neutral attitude when discussing particularly sensitive issues, such as fascism, school shootings, or police violence.
Erica Cau, Virginia Morini, Giulio Rossetti
2023-07-28T15:13:09Z
http://arxiv.org/abs/2307.15610v2
# Trends and Topics: Characterizing Echo Chambers' Topological Stability and In-group Attitudes ###### Abstract Social Network sites are fertile ground for several polluting phenomena affecting online and offline spaces. Among these phenomena are included echo chambers, closed systems in which the opinions expressed by the people inside are exacerbated for the effect of the repetition, while opposite views are actively excluded. This paper offers a framework to explore, in a platform-independent manner, the topological changes through time of echo chambers, while considering the content posted by users and the attitude conveyed in discussing specific controversial issues. The proposed framework consists of four steps: (i) data collection and annotation of users' ideology regarding a controversial topic, (ii) construction of a dynamic network of interactions, (iii) ECs extraction and analysis of their dynamics, and (iv) topic extraction and valence analysis. The paper then enhances the formalization of the framework by conducting a case study on Reddit threads about sociopolitical issues (gun control, American politics, and minorities discrimination) during the first two years and a half of Donald Trump's presidency. The results unveil that users often stay inside echo chambers over time. Furthermore, in the analyzed discussions the focus is on controversies related to right-wing parties and specific events in American and Canadian politics. The analysis of the attitude conveyed in the discussions shows a slight inclination toward a more negative or neutral attitude when discussing particularly sensitive issues, such as fascism, school shootings, or police violence. _Keywords--_ Echo chambers, Polarization, Social Network Analysis, Natural Language Processing, Topic Modeling ## 1 Introduction The emergence of Online Social Network sites (OSNs) in the _information age_[1] reshaped every aspect of life, spanning from what we show of ourselves in the infosphere to how we communicate, with no geographic or temporal constraints. Moreover, owing to the lack of these limitations, the Internet has made the exchange of opinions between users immediate, and the same has occurred with the information shown to users. Hence the emergence of new problems unknown in the past. An example is the _information overload_ to which users are exposed when accessing online spaces. The massive amount of conflicting information found online may lead users to experience a mental discomfort called _cognitive dissonance_[2]. Consequently, to avoid this discomfort, people are more prone to filter and choose only pieces of information confirming their beliefs and ideas, helped by the recommendation systems introduced in OSNs, which show content perfectly tailored to users according to their profiling. However, despite the importance of opinion heterogeneity for creating meaningful debates (and consequently allowing the unfolding of the dialectic process of _thesis, antithesis, and synthesis_), OSNs represent a perfect breeding ground for both human and algorithmic biases, which may interfere with discussions and knowledge formation. The rise and, most of all, the exacerbation of these biases contribute to creating pollution in online spaces. Furthermore, this issue has grown in importance because of the loss of an evident boundary between the online and offline worlds, thus resulting in potentially harmful consequences that may easily overflow into the real world. 
Among the pollutants, polarization has raised several concerns due to the features offered by OSNs, as they tend to exacerbate the ideological positions of users by allowing for easier connections with people with the same interests and exposing them to content aligned with their thoughts. This work focuses on one of the main consequences of polarization: the widely debated concept of _echo chamber_ (henceforth, EC). Although there is no agreement on a formal definition of echo chambers, their effects are noticeable. They are often argued to be involved in spreading misinformation and pseudo-science narratives, as well as worsening political debate, which may bring consequences affecting online and offline life. The discussions on _what_ an echo chamber really is converge toward picturing it as a closed system where opinions and ideas are reinforced and exacerbated as the only truthful view of reality, owing to the effect of repetition inside this environment and the active exclusion of other, opposing, beliefs. Existing works to date have analyzed the effects of ECs and assessed their presence in online spaces, but often from a modeling perspective or through case studies. Usually, they ignore the temporal unfolding of these systems, thus flattening the formation of ties between people over time and, consequently, providing an overestimation of users' sociality, leading to biased results. The aim of this work is threefold. First, it formalizes a platform-independent framework for investigating the dynamics of users' relations, which have often been ignored in the literature. Second, it defines a methodology to investigate the topics and the attitude of users inside and outside ECs after their identification. Third, it enriches the body of work on EC dynamics by offering a case study on Reddit sociopolitical discussions during the first two and a half years of Donald Trump's presidency. Here, the ECs are detected, tracked through time, and analyzed by considering their content and investigating the emotional component of the discussions. Additionally, we compare ECs and less polarized debates to gain insights into the possible similarities and differences in their behavior over time. The proposed framework expands the framework for EC detection described in [3]: it adopts the core of that work, namely, building on features common across OSNs while leveraging network science structures and algorithms to perform EC detection. The framework we present is composed of four steps, where the first three are related to the topological and ideological identification of ECs over time, while the last deals with the characterization of the contents discussed and the emotional characterization of users' discussions both inside and outside ECs. The paper is organized as follows. In Section 2, we introduce and discuss the literature on ECs, focusing on their detection. Subsequently, Section 3 describes the proposed framework for tracking and analyzing EC dynamics over time, emphasizing both the relations and the topics of discussion. Section 4 tests the framework on OSN data extracted from Reddit and discusses the obtained results. The final section, Section 5, summarizes the main findings of this project and discusses weaknesses and future developments. 
## 2 Related works ### Echo chamber detection As the concept of ECs itself is widely discussed, there is also much debate on how these polarized systems form and grow, e.g., by becoming increasingly polarized. This information is necessary to allow their recognition and, subsequently, their efficient mitigation to avoid potentially harmful outcomes. In the last decades, an ever-growing body of research has focused on quantifying the extent to which discussions are polarized [4, 5, 6] and, consequently, are deemed to be fertile ground for polluting phenomena. As for ECs, given their inner nature of online polarized environments, they primarily originate in online discussions about controversial topics taking place mainly in OSNs. Furthermore, traces of ECs have also been found in forums [7], blogs [8], and, generally, in those online spaces employing recommendation systems, such as e-commerce platforms [9]. Traditional EC detection methods may follow two different families of approaches, which can also be intertwined to study these polarized communities from two perspectives simultaneously. In the first case, the focus may be on the textual content shared by users (i.e., the _echo_ dimension), which corresponds to the debated opinion echoing among people with the same ideological alignment. Examples of this methodology include estimating users' views as conveyed through their words, without considering users' relations. For example, in [10], the authors attempted to understand whether users were exposed to cross-cutting content by investigating the news they shared on Facebook. Another example of this approach can be found in [11], in which over 10 million U.S. Facebook users with a public political leaning in their profile information were classified into two categories, depending on whether they discussed news or more generic and less polarizing topics. The contents were classified as liberal, neutral, or conservative, and then analyzed through a content network. The evidence is that the content shared by the inner circle of Facebook friends is more involved than the Facebook algorithm in determining the content a user decides to consume on OSNs. In [12], two methods to gain insight into the content of ECs are shown: one to estimate the stance on a particular topic and another to determine the type of emotion and its intensity. The main issue of content-based approaches is that the data need to be annotated, and this is often performed via unsupervised Natural Language Processing techniques, which do not ensure the correctness of the labels. The other family of methods addresses the issue by considering the network knit by users talking among themselves in OSN discussions. In this scenario, the _chamber_ dimension is investigated, i.e., the closed space that opens up for the reverberation of the opinion. The analysis consists of modeling the graph to inspect users' relations at various topological levels (i.e., _retweet network_, _comment network_), possibly enhancing the analysis by mining additional information from the text, such as in [13], where Garimella _et al._ estimated the users' political leaning and then proceeded with the construction of the interaction network, defining different roles for users in EC formation. The large number of approaches defined through the years can be further grouped based on the _topological_ scale of the detected ECs: _macro-scale_, _meso-scale_, and _micro-scale_. 
The first examines users' relations as a whole and involves looking at the interaction networks at an aggregate level to identify two well-distinguished clusters of users with opposite leanings in the network. For example, in [14], the study extracted two communities using the HITS algorithm, without leveraging any other information from external analysis, e.g., NLP. Another macro-scale study can be found in [15], in which the authors reconstructed the interaction network and then analyzed the interactions between Donald Trump and Hillary Clinton supporters. The meso-scale approach, instead, looks more in-depth at the topological division of nodes into clusters, usually by leveraging a community detection algorithm, with the aim of detecting echo-chamber-like structures composed of nodes sharing identical ideological leanings. An example of a hybrid meso-scale and content-based approach was described in [16], where the authors explored the presence of ECs in tweets about COVID-19 by constructing the interaction network and then applying METIS, a community detection algorithm, which allowed the network to be partitioned into two distinct communities. The communities were then evaluated according to different measures, both traditional community evaluation measures and controversy measures. Finally, the last category consists of investigating the leaning of each user and comparing it to the one adopted by the members of their neighborhood, such as in [17], where the authors leverage homophily to assess the presence of ECs, moved by the idea that users surrounded by people with a similar leaning are consequently exposed to similar content, thus increasing the likelihood of EC formation. One issue regarding these studies is the methodology employed: they are often structured as data-driven case studies that simply assess the presence of ECs in controversial topics of discussion, without specifically focusing on the analysis of the content or on its emotional counterpart. Another issue is that they usually rely on platform-specific features, thus making these methods difficult to reuse on other platforms to perform the same task. In addition, another invisible, yet fundamental, component of every complex system is often neglected: time. This issue is usually approached through case studies revolving around a short timespan or from a merely modeling perspective, and is addressed as an _Opinion Dynamics_ task. Regrettably, such flattened representations, keeping together interactions potentially distant in time and disregarding their temporal ordering, describe a complex phenomenon in a simplistic manner, risking an overestimation of users' sociality and failing to capture the real dynamics behind the appearance and evolution of ECs. ### Dynamic community detection The approaches performing EC detection at the meso-scale level of topology usually rely on community detection (henceforth, CD) algorithms, which can detect homogeneous clusters of users sharing a set of common features. Even if there is still no agreement on what a _community_ should be, several algorithms have been proposed to identify communities and, most importantly, to track their temporal unfolding. The attempt to have a glimpse into community dynamics adds another layer of complexity, because nodes and edges between users may undergo different events [18]. For example, the disappearance of a node or a link usually leads to a topological variation that contributes to the community's lifecycle [19]. 
While there are case studies employing CD algorithms in the context of polarized information systems detection [4], there is a scarcity of works employing dynamic community detection (DCD) to define frameworks or to present case studies about specific issues. Among the exceptions is the work by Kopacheva _et al._ [20], where the authors leverage DCD over a timespan ranging from 2012 to 2019 to analyze the evolution of users' communities on Twitter revolving around the refugee crisis in Sweden in 2015. Because users inside an EC are involved in debates that bring out their opinion, it is also necessary to consider the ideology of each user when modeling the interaction network and when performing the DCD task. This can be accomplished by modeling the interaction network using a _node-attributed graph_, where each node is associated with an attribute expressing the user's leaning. Formally, \(\mathcal{G}=(V,E,\Lambda)\), where \(V\) is the set of vertices, \(E\) is the set of edges, and \(\Lambda\) is the set of \(m\) attributes associated with each vertex \(v\in V\). In addition, for the extraction of communities in a network snapshot, we will leverage _Labeled Community Detection_, a specific instance of CD that considers both topological criteria and label homophily inside each community when extracting the partitions. ### Topic modeling and valence analysis Moved by the intention to analyze ECs in more depth once they have been identified, in this section we briefly present the state of the art of the two approaches employed to gain insights about the topics and the attitude of users when discussing online. _Topic modeling._ The first approach, namely _topic modeling_, is related to the extraction of the most relevant information, or topics, from a textual _corpus_, i.e., from a collection of documents. The potential and usefulness of this task have been widely recognized and exploited in the literature, even in fields of research not immediately related to linguistics or information retrieval [21], e.g., in bioinformatics [22] or in computer vision [23]. Many approaches and algorithms have been formalized and implemented to address this task. Among these approaches, one of the first and most well-known is Latent Dirichlet Allocation (LDA) [24], a probabilistic model that assumes that each document is generated by a statistical process. Therefore, each document has its own distribution of topics, and each topic is characterized by a probability distribution over its specific words. By its very nature, LDA lacks semantic information and disregards relations between words and syntactic structures. In recent years, many evolutions of LDA have been proposed, such as HDP [25], which removes the need to choose the number of topics to be found by including a probabilistic model that infers the most likely number of topics, or the more recent LDA2vec [26], which incorporates Word2Vec [27] into the LDA model. Topic modeling algorithms have also evolved to capture the dynamics of topics in documents over time, e.g., DTM [28] and TTM [29]. As an effect of the development of deep learning models, _Transformers_ have revolutionized the way documents are represented as vectors. In 2018, Bidirectional Encoder Representations from Transformers (BERT) [30] was released. Since then, considerably improved models have followed, such as RoBERTa [31], BART [32], and DistilBERT [33]. 
These models are well known for being trained on a massive amount of data and for implementing the so-called _attention_ mechanism [34], which builds for each word a contextual embedding while accounting for both the words to its right and to its left (hence the characterization of BERT as a _bidirectional_ model). This revolution in language representation models has led to their massive application in several NLP tasks, e.g., text generation, classification tasks, and Named-Entity Recognition. _Valence analysis._ Valence has often been investigated as one of the factors that define the meaning of words and has been adopted as a useful measure in Natural Language Processing, psychology, and cognitive science. Valence (V) is often paired with two other meaning-related dimensions, i.e., Arousal (A) and Dominance (D). According to [35], different values of these measures may be employed to extract primary emotions. Typically, measures quantifying Valence, Arousal, and Dominance are extracted through manually annotated datasets, as in the case of ANEW [36] and its extension by Warriner _et al._ [37]. Another dataset is the VAD lexicon [38], which consists of around 20,000 manually annotated English words. Their ratings were aggregated by leveraging Best-Worst scaling [39]. The resulting values lie in the range [0, 1] and quantify the negativeness and positiveness of the concept denoted by the word. ## 3 Echo chambers diachronic analysis In this section, we formalize a platform-independent framework for tracking and analyzing the dynamics of ECs, while also addressing the content-related aspect of the issue. Note that the first steps of the pipeline consist of the platform-agnostic framework from Morini _et al._ [3], to which we add two additional steps to handle the temporal analysis and the content characterization of ECs. The original framework that we enhance lies within the approaches that investigate meso-scale topologies. This fits well with the leading theory about ECs, according to which they are closed systems of users discussing mostly among themselves, with few social interactions with those outside. Before defining the four-step framework, it is worth fixing a formal definition of what we mean in this paper by _echo chamber_, as one of the main problems in detection is that there is still no agreement on a formal definition. **Definition 1** (Echo Chamber): _Given a network \(G=(V,E)\) describing users' interactions centered on a controversial topic, an echo chamber is a subset of the network nodes (users) who share the same ideology and tend to have dense connections primarily within the same group._ The proposed version collapses the four steps described in [3] into two, and proceeds by formalizing two additional steps. The framework is structured as follows: (i) data collection and opinion estimation in the context of online polarized discussions, (ii) complex network modeling of online debates, (iii) identification of users' groups (by means of Dynamic Community Detection) and analysis of their lifecycle, and (iv) topic extraction and valence analysis. ### Step 1: Data collection and annotation The starting point of the original pipeline is the identification of a controversial issue, which may concern a wide variety of topics, ranging from politics to social and environmental issues. This is necessary because users are more prone to assume an extreme ideology when discussing controversial topics, as these form the fertile ground for echo chamber formation. Moreover, users often adopt particular hashtags that make controversies easy to identify on the OSN, as on Twitter, whereas on Reddit users may join topic-based communities of discussion. Once the data have been obtained, it is necessary to focus on the ideological characterization of users. Typically, obtaining data with a clear user leaning toward a controversy is difficult; hence the need to define a user classification methodology to estimate their leaning on the debate according to the issue under analysis. In the framework that we are currently expanding, the task is modeled as a _text classification_ problem. The raw text of posts and comments (two features many OSNs have in common) is encoded into a vector, which becomes the training set of a classification model specific to the context. This choice grants a higher level of generalizability while keeping the framework independent of any platform-specific feature, such as the number of _likes_ or _retweets_, which are Twitter features often used by other frameworks. Among the various approaches to classification, the authors propose choosing between Deep Learning models and pre-trained Transformers while considering the amount of data and the type of information to be extracted. If the model needs to take into account the semantic aspect of the sentence, the choice should fall on Transformers; otherwise, if only specific information needs to be extracted from the text, Neural Networks such as LSTMs and CNNs may also be leveraged. In addition, the classification may be modeled as a multiclass problem; in that case, it is necessary to tackle the issue using a multiclass text classifier, as in the sketch below. 
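As a minimal illustration of how Step 1 can be set up, the following sketch trains a multiclass leaning classifier on a handful of manually annotated posts. Note that the framework recommends Deep Learning models or pre-trained Transformers; the TF-IDF plus logistic regression pipeline used here is only a simplified stand-in chosen for brevity, and the seed posts and label set are hypothetical.

```python
# Illustrative Step 1 sketch: multiclass leaning classification from raw text.
# TF-IDF + logistic regression stands in for the Transformers/deep models
# suggested by the framework; `seed_posts` and `seed_labels` are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed_posts = [
    "we need universal background checks now",
    "the second amendment protects law-abiding citizens",
    "both sides exaggerate the gun statistics",
]
seed_labels = ["pro-control", "pro-gun", "neutral"]  # any multiclass label set

# Encode each post as a vector and fit a multiclass classifier on top of it.
leaning_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
leaning_clf.fit(seed_posts, seed_labels)

# A user's leaning can then be estimated from their (unlabeled) posts,
# e.g., by majority vote over the per-post predictions.
print(leaning_clf.predict(["licensing gun owners is common sense"]))
```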
### Step 2: Modeling online debates In the previous section, we defined how to handle the two dimensions of the phenomenon, namely the _chamber_ and the _echo_ dimensions. The additional component we introduce is the modeling of the temporal unfolding of the phenomenon. In fact, our idea is that, because ECs are systems in constant evolution, it is necessary to define an actionable way to model their dynamic counterpart in order to unveil interesting hidden patterns that may also help in their mitigation within OSNs. Another point to keep in mind is that users inside polarized systems bring with them a strong leaning toward the analyzed controversy, which needs to be properly modeled to avoid the introduction of biases that may distort the results. We model the dynamic interaction network describing an online debate through static snapshots of the graph, as described in [40]. In this way, each static snapshot captures the specific state of the network in a certain period, e.g., describing weekly or monthly social interactions among users. Formally, \[\mathcal{G}=\langle G_{1},G_{2},\ldots,G_{t}\rangle \tag{1}\] where each snapshot \(G_{i}=(V_{i},E_{i},A_{i})\) is a feature-rich graph uniquely identified by its set of nodes \(V_{i}\), edges \(E_{i}\), and node labels \(A_{i}\). The timespan between adjacent snapshots is the key to a good modeling of interactions: if the timespan is too large, information about varying nodes and links is likely to be lost. Complementarily, if the temporal window is too short, the interaction graph may register only a few changes in the interactions, which may hide a possible temporal correlation with the processes occurring in the network [41]. The criterion suggested by [42] is to maintain a balance between the target to be studied and the temporal resolution. Given our working definition of ECs as closed systems made of people sharing a strongly polarized ideology, each snapshot graph must incorporate information on users' ideology. Therefore, each \(G_{i}\) describes a _feature-rich graph_, where the node attributes \(A_{i}\) are used to integrate the (inferred) leanings of the users into the model. 
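A minimal sketch of how the snapshot model of Equation (1) can be materialized is given below; the interaction records, the weekly window width, and the attribute name are illustrative assumptions rather than prescriptions of the framework.

```python
# Sketch of the snapshot model of Equation (1): interactions are bucketed
# into fixed-width time windows, and each snapshot G_i is a node-attributed
# graph whose "leaning" attribute A_i stores the users' (inferred) ideology.
from collections import defaultdict
import networkx as nx

WINDOW = 7 * 24 * 3600  # weekly snapshots, expressed in seconds

# (source user, target user, unix timestamp), e.g., reply interactions
interactions = [
    ("alice", "bob", 1_500_000_000),
    ("carol", "bob", 1_500_450_000),
    ("alice", "carol", 1_501_200_000),
]
leaning = {"alice": "pro-gun", "bob": "pro-control", "carol": "pro-gun"}

snapshots = defaultdict(nx.Graph)
t0 = min(t for _, _, t in interactions)
for u, v, t in interactions:
    g = snapshots[(t - t0) // WINDOW]  # index i of the snapshot G_i
    g.add_edge(u, v)
    for node in (u, v):  # attach the node attribute carrying the leaning
        g.nodes[node]["leaning"] = leaning.get(node, "unknown")

dynamic_graph = [snapshots[i] for i in sorted(snapshots)]  # <G_1, ..., G_t>
```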
The timespan between adjacent snapshots is the key to a good model of the interactions: if the timespan is too large, information is likely to be lost in terms of varying nodes and links; complementarily, if the temporal window is too short, the interaction graph may register only a few changes in the interactions, which may hide a possible temporal correlation with the processes occurring in the network [41]. The criterion suggested by [42] is to maintain a balance between the target to be studied and the temporal resolution. Given our working definition of ECs as closed systems made of people sharing a strongly polarized ideology, each snapshot graph must incorporate information on users' ideology. Therefore, each \(G_{i}\) describes a _feature-rich graph_, where the node attributes \(A_{i}\) are used to integrate the (inferred) leanings of the users into the model. ### Step 3: Identify Groups and their Dynamics So far, we have defined our reference model for dynamic interactions involving agents enriched by some semantic information (e.g., their stance in a debate). The next step is to describe how to handle the extraction of meso-scale partitions from the dynamic network through Dynamic Community Detection. As also suggested in the previous work, our choice for the partition extraction algorithm fell on EVA [43], a Labeled Community Detection algorithm, applied to the graph representing each snapshot, which is able to simultaneously optimize both structural cohesion and intra-community label homogeneity. The algorithm extends the Louvain algorithm [44] to node-attributed graphs: on the one hand, it maximizes Newman's modularity and, on the other, a measure defined in the EVA paper, known as _Purity_. The two measures considered by EVA are defined as follows. **Definition 2** (Modularity): _The modularity function quantifies the observed number of edges inside the given partition minus the expected number of edges if they were distributed following a null model of a random graph. Modularity values range from -1 to +1. It is formalized as follows:_ \[Q=\frac{1}{2m}\sum_{vw}\left[A_{vw}-\frac{k_{v}k_{w}}{2m}\right]\delta(c_{v},c_{w}) \tag{2}\] **Definition 3** (Purity): _Purity was defined in [43], and it is calculated as the product, over the attributes, of the frequency of the most frequent label carried by the community's nodes. This function lies within the range [0,1]._ \[P_{c}=\prod_{a\in A}\frac{\max\left(\sum_{v\in c}a(v)\right)}{|c|} \tag{3}\] Optimizing both measures allows partitions to be extracted by considering, at the same time, topological cohesion and the purity of the ideological clusters, a fertile ground for ECs. The LCD algorithm is applied to every snapshot graph and then, after extracting the partitions, ECs are detected by following the rationale in [3]. The idea behind this is that polarized systems, especially ECs, need a sort of closed space in which an opinion can reverberate, moving from one member to another. Therefore, [3] suggests evaluating the partitions in terms of _Conductance_ and _Purity_: the former estimates the volume of edges staying inside the community, while the latter assesses the goodness of the partitions in terms of attribute homogeneity. **Definition 4** (Conductance): _The conductance of a community \(C\) measures the volume of edges pointing out of it. The aim is to minimize the value of this function such that the average value across communities is as low as possible._ \[Conductance_{c}=\frac{2\left|E_{OC}\right|}{2\left|E_{C}\right|+\left|E_{OC}\right|} \tag{4}\] _where \(E_{OC}\) is the number of edges exiting the community and \(E_{C}\) is the number of edges remaining inside the community._ Both measures can be computed directly from a snapshot graph and a candidate community, as sketched below.
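The following is a minimal sketch of Definitions 3 and 4, together with the EC criterion introduced in the next paragraph, for a single snapshot. The attribute name `leaning` and the helper names are illustrative, and the thresholds are those adopted later in this framework.

```python
from collections import Counter
from math import prod

def conductance(g, community):
    """Definition 4, using the paper's formulation with E_C and E_OC."""
    c = set(community)
    e_c = sum(1 for u, v in g.edges() if u in c and v in c)      # internal edges
    e_oc = sum(1 for u, v in g.edges() if (u in c) != (v in c))  # exiting edges
    denom = 2 * e_c + e_oc
    return 2 * e_oc / denom if denom else 0.0

def purity(g, community, attrs=("leaning",)):
    """Definition 3: product over attributes of the most frequent label's share."""
    nodes = list(community)
    shares = []
    for a in attrs:
        counts = Counter(g.nodes[n][a] for n in nodes)
        shares.append(counts.most_common(1)[0][1] / len(nodes))
    return prod(shares)

def is_echo_chamber(g, community, cond_thr=0.5, pur_thr=0.7, min_size=20):
    """EC criterion with the thresholds adopted in this framework."""
    return (len(community) >= min_size
            and conductance(g, community) <= cond_thr
            and purity(g, community) >= pur_thr)
```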
According to these two measures, the risk of a community being an echo chamber is maximized when _Conductance_ is equal to 0 and _Purity_ is equal to 1. For the detection of ECs, in this case we consider as ECs (i.e., communities most at risk of being polarized) those communities having a _Conductance_ equal to or less than 0.5 and a _Purity_ equal to or greater than 0.7, following [3]. Notwithstanding, the two thresholds can be adjusted to better fit the dataset under analysis. Furthermore, we propose to retain only the communities with 20 or more users, so as to remove small or noisy communities that may yield unrepresentative results. The analysis of the evolution of ECs is performed by leveraging the Jaccard index computed on the node sets (i.e., communities) of adjacent snapshots: an approach that has already been used in the literature [40] to identify the most likely evolution of partitions based on similarity. \[J(A_{t},B_{t+1})=\frac{|A_{t}\cap B_{t+1}|}{|A_{t}\cup B_{t+1}|}\] Before proceeding with this step, and to reduce noise, we preprocess the community sets by removing those users who joined the online discussion by posting/commenting only once. Moreover, for each snapshot, we retain only the users it shares with adjacent ones, thus focusing on "stable" sub-populations. We analyze the temporal development of ECs and non-polarized communities using a line plot in which each line represents the evolution of the similarity between adjacent partitions through timestamps. In addition, each line is enriched with a marker representing the _status_ of the community at a specific timestamp: triangles represent communities labeled as ECs, dots communities that are not. This type of plot allows us, on the one hand, to assess the stability and evolution of _individual_ communities and, on the other, to observe the differences among all the ECs extracted using the approach previously described. ### Step 4: Topic extraction and analysis After assessing the stability of ECs (and not-ECs) over time, we define a methodology to (i) capture the topics discussed by the identified clusters of users and (ii) compute the cluster-wide attitude towards each topic. _Topic modeling._ Among the various approaches, ranging from more traditional ones, such as Latent Dirichlet Allocation, to more recent ones, we decided to employ an approach based on embeddings, i.e., BERTopic [45]. The motivations behind this choice are twofold: first, the nature of transformers, which allow for a better representation of words in context; second, the competitive results of BERTopic _w.r.t._ older topic modeling algorithms. In particular, BERTopic has the strength of being robust regardless of the language model employed and of whether a fine-tuning phase is performed. BERTopic extracts the topics in three steps. First, it generates the embedding of the input text using a language model and - to improve cluster quality - reduces the data dimensions via UMAP [46] to avoid the curse of dimensionality [47]. Secondly, it clusters the embeddings through HDBSCAN [46], which can treat noisy documents as outliers. Finally, leveraging a class-based version of _tf-idf_, it extracts the most meaningful words from each identified cluster. A minimal usage sketch of this pipeline follows.
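The sketch below assumes the `bertopic` and `umap-learn` packages; the embedding model name and all hyperparameters are illustrative placeholders, not the settings used in the case study.

```python
from bertopic import BERTopic
from umap import UMAP
from sklearn.feature_extraction.text import CountVectorizer

# docs: list of cleaned post/comment texts, one string per document
umap_model = UMAP(n_neighbors=15, n_components=5,
                  metric="cosine", random_state=42)
topic_model = BERTopic(
    embedding_model="all-MiniLM-L6-v2",  # illustrative sentence-transformer
    umap_model=umap_model,               # dimensionality reduction step
    vectorizer_model=CountVectorizer(stop_words="english"),
)
# HDBSCAN clustering and class-based tf-idf run inside fit_transform
topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())  # topic ids, sizes, top words
```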
To evaluate the quality of the obtained topics, we rely on two measures as proxies for an indicative - and subjective - human evaluation, as highlighted in [45]: namely, topic _coherence_ [45] and _diversity_ [48]. The former estimates the coherence of the extracted topics by using Normalized Pointwise Mutual Information (NPMI) [49]; it spans the range [-1, 1], where 1 indicates a perfect association with the scores given by human annotators. The latter describes, for each topic, the percentage of unique words and lies within [0, 1]. _Valence analysis._ Our aim is also to investigate the emotional component of ECs and of discussions outside these closed systems. To address this issue, we rely on the VAD Lexicon, described in Section 2.3, and on KeyBERT [50], a method for keyphrase extraction that exploits BERT embeddings and cosine similarity to identify the most likely keywords describing a raw text. The main idea is to extract a set of keywords describing each post/comment and then calculate the valence score of the topic they are associated with. As a first step, keywords are extracted from the cleaned texts included in each topic. Secondly, for each of these keywords, the respective valence score is looked up in the lexicon. The final valence score returned as output consists of the ratio between the sum of the valence scores of the keywords found in the VAD lexicon and the total number of these keywords included in the lexicon. In this way, we reduce the noise in the results caused by non-pertinent words interfering with the score. ## 4 Case study: Reddit socio-political dataset In this section, we apply the proposed framework to a specific case study, discuss the obtained results, and evaluate its effectiveness and limitations. The dataset we focus on is the one introduced in [3, 51]: the authors already assessed the presence of ECs there, which we decided to track further from a temporal perspective. The dataset comprises Reddit discussions about three socio-political topics and focuses on the pro-/anti-Trump debate during the first two and a half years of his presidency. Reddit is currently the seventh most used social network in the world [52], and it is particularly suitable as a source of data since it is composed of _subreddits_, forums devoted to a single topic where users may freely discuss both general issues and more specific niche interests. Since user anonymity is encouraged in these forums, users may be motivated to talk more openly and therefore reach more extreme positions in controversial discussions, making Reddit a valuable source of data for the case study. The data used for this case study are available in the following GitHub repository ([https://github.com/virgiiim/EC_Reddit_CaseStudy](https://github.com/virgiiim/EC_Reddit_CaseStudy)), while the code is available in this repository ([https://github.com/lyereth/Topics_trends_EC_case_study](https://github.com/lyereth/Topics_trends_EC_case_study)). ### Data collection and annotation The authors of [3] chose to investigate a controversial topic, namely the debate around Donald Trump's presidency from January 2017 to June 2019, which sharply exacerbated the clash between the two factions of Democrats and Republicans [53]. 
The three analyzed datasets are built on top of subreddits related to socio-political issues, categorized as follows: _gun control_, _minorities discrimination_, and _politics_. As a preliminary step, an additional dataset representing a _polarized ground truth_ was created, collecting posts and comments openly supporting or antagonizing Trump as president. To this end, and to maintain a balanced representation of the two-sided controversy, post/comment data were extracted from _r/The_Donald_ on the one side and from _r/Fuckthealtright_ and _r/EnoughTrumpSpam_ on the other. The resulting dataset was employed to train and test a classification model, namely \(BERT_{BASE}\). The model achieved an accuracy greater than 70% on the test set [3]. The classifier was then applied to the three Reddit socio-political datasets to infer, for each user, their leaning toward the specified controversy. Each post/comment was classified as either pro- or anti-Trump, and the prediction confidence (ranging in [0,1]) was used to assign a continuous value to the identified class. Posts/comments with prediction confidence equal to 1 were considered perfectly aligned with a pro-Trump ideology, while the ones on the other extreme were considered aligned with anti-Trump positions. Individual scores were subsequently averaged at the user level to compute the _leaning score_ \[L_{u}=\frac{\sum_{i=1}^{n}PredictionScore(p_{i})}{n}\] where \(p_{i}\in P_{u}\) corresponds to a post shared by the user \(u\) and \(n=|P_{u}|\) indicates the cardinality of the set of posts published by the user. Finally, the resulting leaning score values were discretized into intervals, as follows: a user is classified as _antitrump_ if \(L_{u}\leq 0.3\), as _protrump_ if \(L_{u}\geq 0.7\), and as _neutral_ otherwise. These thresholds, arbitrary in nature, have also been maintained in this study, but they may be increased or decreased according to the dataset. Table 1 reports, for each of the four datasets, the number of subreddits, posts, and users included. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Dataset & \# Subreddit & \# Post & \# User \\ \hline Ground Truth & 3 & 302,762 & 68,113 \\ Gun control & 6 & 180,170 & 65,111 \\ Minorities discrimination & 6 & 223,096 & 52,337 \\ Politics & 6 & 431,930 & 72,399 \\ \hline \end{tabular} \end{table} Table 1: Datasets statistics. ### Network modeling and EC identification From each of the three datasets, five snapshots were extracted, each covering a semester of the observed period. Starting from such a temporal discretization, a dynamic network was reconstructed as a snapshot sequence where a labeled user \(u\) has an edge pointing towards user \(v\) at time \(t\) if and only if \(u\) directly replied to a post or comment by user \(v\), or vice versa, during semester \(t\). Each edge \((u,v,t)\) is then enriched with the weight of that tie, equal to the number of times the interaction between \(u\) and \(v\) occurs during \(t\). Table 2 provides an overview of the networks. After network construction, communities were extracted from each snapshot through Labeled Community Detection. As discussed in Section 3.3, the chosen algorithm was EVA, because it can address both the optimization of modularity and label homogeneity. Further details on community detection in the snapshots are available in the Supplementary Materials. In addition, because of the temporal nature of the network, at each iteration from timestamp to timestamp the Jaccard similarity index is applied to each identified partition, which leads to the detection of the most likely evolution of a partition of \(t\) at \(t+1\), as sketched below. 
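A minimal sketch of this snapshot-to-snapshot matching step follows. The greedy best-match strategy shown here is one straightforward reading of the procedure, and the function names are illustrative.

```python
def jaccard(a, b):
    """Jaccard index between two communities, given as node sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def match_communities(partition_t, partition_t1):
    """Map each community at time t to its most similar community at t+1.

    partition_t, partition_t1: lists of node sets (one per community).
    Returns {index_at_t: (index_at_t1, jaccard_score)}.
    """
    matches = {}
    for i, c in enumerate(partition_t):
        best_score, best_j = max(
            (jaccard(c, c_next), j) for j, c_next in enumerate(partition_t1)
        )
        matches[i] = (best_j, best_score)
    return matches
```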
The next step consists of the detection of ECs by employing the evaluation measures discussed in Section 3.3. In this case study, we set the _Conductance_ score \(\leq 0.5\) and the _Purity_ \(\geq 0.7\), to ensure that most of the interactions in an EC involve users who stay within it, while maintaining a high ideological cohesion. ### Echo chambers' stability analysis In the previous section, we assessed the presence of ECs in every static snapshot of the network. The analysis now moves toward understanding the internal dynamics of ECs, to answer two research questions.

* RQ1: _Are echo chambers stable over time w.r.t. the users that compose them?_
* RQ2: _Do echo chambers keep or lose their polarization as time passes?_

The results, shown in Figure 1, differ slightly for each of the three main categories of discussion analyzed. It should be noted that the ECs belonging to _Politics_ and _Minorities discrimination_ appear to be stable even over a long timespan, with variations that may be ascribed to the different topics discussed and to the temporal segmentation chosen for the snapshots. ECs in _Gun control_, instead, behave differently from those in the other two topics. Figure 1a shows the evolution of the detected ECs: in this case, the overall Jaccard similarity between adjacent timestamps is very low. The only exception is an EC that turns into a less polarized community between the end of 2017 and the beginning of 2018, reaching a similarity value of 0.53, against the 0.24 of the previous pair of semesters. This behavior is less pronounced in _Minorities discrimination_ and _Politics_ (Figures 1b, 1c). First, in both cases, the internal stability is very high from the beginning of the monitoring, reaching its highest value, 93%, during the first year of discussions about _Minorities discrimination_. Then, as time passes, the internal similarity decreases slightly and, in certain cases, ECs become communities that do not bear as strong an ideological cohesion as in the previous pair of adjacent semesters; in other cases, they keep their EC status. In _Minorities discrimination_ (Figure 1b), for example, it is interesting to note that the EC with the lowest percentage of common users between the first two semesters, namely 75%, is also the one with the longest lifecycle as an EC, as it maintains its status until the very last pair of analyzed snapshots. Similar behavior can be identified in _Politics_ (Figure 1c), where an EC took root in the second semester of 2017 and remained almost unchanged until the end of the monitoring. Furthermore, it seems that the less polarized communities derived from ECs do not become ECs again. In addition, the discussed figures show the presence of ECs persisting over a shorter, but still significant, timespan of only six months.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline **Dataset** & **\# Nodes** & **\# Edges** & **avg. degree** & **avg. density** & **\# pro-Trump** & **\# anti-Trump** & **\# neutral nodes** \\ \hline **Gun Control** & 11,388 & 114,262.4 & 17.1087 & 0.00324 & 2803.2 & 7385.2 & 1199.6 \\ **Minorities** & 6,617.6 & 80,497 & 19.3608 & 0.0031 & 2150.4 & 3676.6 & 790.6 \\ **Politics** & 7,912.8 & 57,463.6 & 17.3634 & 0.0013 & 3837.6 & 2923.4 & 1151.8 \\ \hline \end{tabular} \end{table} Table 2: Averaged number (per semester) of: nodes, edges, degree, and density of the networks, and distribution of neutral and pro-/anti-Trump nodes' leaning attribute.

Figure 1: Communities evolution through pairs of adjacent semesters. For each topic, the plot shows the value of the Jaccard index (_y-axis_) through pairs of adjacent semesters (_x-axis_). Triangles mark the community as an echo chamber, circles as not an echo chamber.

Regarding not-ECs, from Figure 1a it can be noted that _Gun control_ communities experience an increase in their internal stability between the second and the third semester, without reaching a stability as strong as the one characterizing ECs. In both _Minorities discrimination_ and _Politics_, communities' internal stability tends to be very high, except for one community in the first dataset and two communities in the second one. Additionally, communities might become even more stable if they become ECs over time. ### Topic modeling on polarized systems After observing the overall persistence over time and the intrinsic stability of most ECs in terms of user composition, we inspect the textual production of the users. Among the various implementations of topic modeling, some of which are discussed in Section 2.3, we opted to rely on Transformer models, because of their improved ability to capture more varied shades of meaning. Our final choice fell on BERTopic ([https://maartengr.github.io/BERTopic/index.html](https://maartengr.github.io/BERTopic/index.html)), which implements the BERT model for topic modeling tasks. The first step before applying the algorithm was to create, for each topic and semester, a dataset containing the texts produced by users and information about the users themselves, including whether or not they are members of ECs. This was necessary since our goal is to characterize and distinguish between text produced inside ECs and inside non-polarized communities. We then applied BERTopic to all the documents (i.e., covering texts produced both by EC and not-EC users), thus identifying 13 topics. The corpus of textual data on which the model was trained was preprocessed as follows, to clean both the raw input text and the resulting output labels. First, the preprocessing involved the normalization and cleaning of the raw text; the cleaning step consisted of expanding abbreviations and short forms and removing markdown characters. In addition, words were lemmatized using WordNet [54]. Finally, a set of representative keywords for the entire text was extracted for each post of the dataset using KeyBERT [50], thus creating a fine-tuned vocabulary to better label the identified topics. To reduce the number of outliers originally identified by BERTopic on the available data, we decided to substitute its default clustering algorithm (HDBSCAN [46]) with K-Means [55]. The minimum cluster size was set to 120 to avoid smaller - and noisier - topics. Moreover, the clusters were further diversified by employing the _Maximal Marginal Relevance_ [56] ranking algorithm, which allowed the identification of the most meaningful and diverse words describing each topic. As for the results of topic modeling (see Table 3 for _Coherence_ and _Diversity_ values), in both the ECs and the less polarized communities belonging to the _Gun control_ dataset, it was possible to assess 
the presence of general discussions about guns and ammunition brands, as well as requests for advice from expert users. Interestingly, at the beginning of 2018, one EC focused mainly on a particularly controversial topic: the War in Syria. In the following semester, users discussed instead the 2018 Firearms Amendment Act; then, the focus shifted back to the War in Syria. Users falling inside the _Minorities discrimination_ dataset discussed a wider variety of topics. One interesting issue that emerged is Gamergate, an online social movement for which one of the subreddits included in the dataset, namely _r/KotakuInAction_, represents _the main hub_ on Reddit, as stated on the subreddit homepage. The campaign started in 2014 to harass female journalists and developers involved in the video game industry, who experienced doxing, rape threats, and death threats [57]. It rapidly evolved into a broader movement targeting _Social Justice Warrior_1 activists and the perceived excess of political correctness in video games. According to Massanari [58], who addressed the movement as an "echo chamber of anger", it comprises people sharing the same core values of toxic masculine gaming culture. In addition, it has also been addressed as ideologically close to the alt-right wing of the political spectrum [58]. Footnote 1: “Social Justice Warrior”, Cambridge Dictionary. Accessed July 18, 2023. Available: [https://dictionary.cambridge.org/dictionary/english/social-justice-warrior](https://dictionary.cambridge.org/dictionary/english/social-justice-warrior) Other controversial issues that emerged are _antifascist_ movements, often discussed along with the protests in Berkeley in 2017, where Trump supporters clashed with anti-Trump protesters. Outside ECs, other recurring discussions focused on more general, but still polarized, topics, e.g., _white privilege_, Canadian politics, and gender equality. Inside _Politics_, users belonging to ECs mainly discussed abortion and the _Mueller special counsel investigation_, conducted to assess the interference of Russia in the 2016 U.S. elections. For example, the second topic was the main focus of the discussion in the echo chamber with the longest lifecycle shown in Figure 1c. Outside ECs, the communities also discussed other issues in addition to those discussed in ECs, e.g., news about Trump and politics, Obamacare, and Libertarianism. Further details on topic modeling can be found in Section 2 of the Supplementary Materials. ### Valence analysis After assessing the presence of meaningful and polarized topics across all three macro-topics, the analysis was deepened to gain insights into the pleasantness or unpleasantness of the topics, as it emerged from the words chosen by users. This emotive attitude of users was investigated using the NRC-Valence Lexicon [38], which allowed for extracting an average Valence score from all the posts included in the macro-category represented by a topic. Specifically, for each topic, the average valence was calculated as the ratio between the sum of the valences of its keywords that are annotated in the VAD lexicon and the total number of its keywords included in the lexicon. 
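As a companion to this description, the following is a minimal sketch of the keyword-based valence scoring, assuming the `keybert` package and a VAD lexicon loaded as a word-to-valence dict. The number of keywords per text (`top_n=5`) is an illustrative choice, not necessarily the one used in the study.

```python
from keybert import KeyBERT

kw_model = KeyBERT()  # uses a default sentence-transformer under the hood

def topic_valence(texts, vad_valence):
    """Average valence over the lexicon-matched keywords of a topic's texts.

    texts:       cleaned posts/comments assigned to the topic
    vad_valence: dict mapping lexicon words to valence scores in [0, 1]
    """
    matched_scores = []
    for text in texts:
        for keyword, _ in kw_model.extract_keywords(text, top_n=5):
            if keyword in vad_valence:
                matched_scores.append(vad_valence[keyword])
    # ratio between the sum of matched valence scores and their count
    return sum(matched_scores) / len(matched_scores) if matched_scores else None
```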
The results were then visually investigated with the aid of a scatterplot in which each point is colored according to the average valence score, i.e., orange for negativeness, pearl white for neutrality, and blue for positiveness. The results reflect the inherently polarized nature of the topics under analysis, with only slight differences between ECs and other communities. In _Gun control_ (Figure 2a), the difference between the two systems is not as stark as we expected: users outside ECs appear to discuss using more negative words than users inside ECs, for instance when talking specifically about _Gun collections_, where users outside ECs often use negatively connoted terms. Conversely, in _Minorities discrimination_ (Figure 2b), it is worth noting that EC users discuss only one topic with a positive attitude, i.e., discussions about the center-left wing of the political spectrum. Moreover, we can also observe how topics that tend to be strongly negatively polarized outside ECs - e.g., fascism, racism, and police shootings against minorities - are more neutral inside ECs. Such a result, which might appear contradictory at first glance, may be related to the fact that in ECs users tend to have less negative or condemning opinions on fascism and sensitive issues. Furthermore, the Gamergate controversy is characterized by an increase in wording negativeness, which may be justified by the misogynistic nature of the movement. Thus, we can assume that users are prone to condemn and attack women and minorities using negatively connoted language. This result is reflected and magnified by an even more polarized negative attitude about the topic "OSNs censorship", as the discussions about censorship revolve around the Gamergate controversy. Focusing on the topics identified in the _Politics_ dataset (Figure 2c), a more negative connotation emerges in discussions about school shootings and the protests against them. Moreover, similarly to what was observed in the _Minorities_ dataset, a subset of topics is treated as less negative in ECs w.r.t. not-ECs - e.g., the War in Syria and abortion. At the same time, a more negative attitude emerges toward the border wall between Mexico and the United States. Despite these results, however, it can be argued that the average valence score alone seems insufficient to highlight a clear distinction in terms of sentiment between ECs and not-ECs.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**Topic coherence**} & \multicolumn{2}{|c|}{**Topic diversity**} \\ \hline & LDA & BERTopic & LDA & BERTopic \\ \hline **Gun control** & 0.0222 & 0.1661 & 0.923 & 0.8376 \\ **Minorities discrimination** & 0.0082 & 0.2533 & 0.9487 & 0.9145 \\ **Politics** & -0.009 & 0.1997 & 0.9487 & 0.9316 \\ \hline \end{tabular} \end{table} Table 3: Comparison between LDA (13 topics) and BERTopic in terms of topic coherence and diversity.

Figure 2: Topic valence (_x-axis_) for EC and not-EC (_y-axis_) users' clusters. Colors describe the attitudes conveyed in the texts. Strongly polarized topics are characterized by a blue or dark orange hue, the former for positively and the latter for negatively connotated topics. Circle sizes relate to the number of texts associated with each topic.

## 5 Conclusions

In this work, we proposed a platform-independent framework to capture the dynamics of ECs and inspect the content and the positiveness or negativeness conveyed in their discussions. The framework is composed of four steps and leverages only posts and comments, a feature common to most OSNs, to represent both the topology of relations and the analyzed content. As a result, the framework turns out to be highly reusable on OSNs other than Reddit, as long as a system of posts and comments is implemented in the platform. 
The framework has the advantage of considering one of the pillars common to most EC definitions, namely, the idea of closed systems of like-minded users mainly interacting with one another. This is obtained by extracting ECs at a meso-scale topological level from node-attributed snapshot graphs, allowing for a simpler, but at the same time effective, representation of the dynamics of relations over time. Furthermore, it considers another crucial component that makes ECs a _polarized system_: the ideology or leaning assumed by users during discussions. From the case study we presented, we observed different tendencies, but the main one seems to be that ECs keep a large portion of users trapped, while a small component leaves the polarized space as time passes. The most interesting result was observed in the _Politics_ and _Minorities discrimination_ discussions, as they were characterized by ECs with high stability for almost two years. Topic modeling allowed for the extraction of a wide range of interpretable discussion themes. Topic analysis was then enriched with emotion analysis by extracting scores describing the valence of the texts included in each topic, leading to particularly interesting observations. First, from the visual analysis of the results, it was possible to infer that in ECs most topics were discussed using words conveying a negative meaning. Secondly, and interestingly, we were able to observe that often, when highly divisive topics are taken into account - i.e., racism - users trapped within ECs tend to use neutral wording, while outside ECs a more negative one is used. ### Approach weaknesses and limitations The proposed framework has weaknesses and limitations that must be discussed and considered. Firstly, the echo chamber concept is one on which there is currently no consensus, even from a qualitative perspective. Similarly, the core concept behind the topological extraction of ECs in this framework, namely _community detection_, is well known in the network science literature to be an _ill-posed problem_. Secondly, the framework generally lacks rigorous validation, owing to the absence of ground truth for the labels describing user leaning. These labels are inferred through a classifier trained on a polarized ground truth and act as a mere _proxy_ for people's real - and multi-faceted - political leaning. The topic modeling results suffer from a similar problem, as they employ an unsupervised approach whose extracted topics were not annotated. Finally, an intrinsic limitation is due to the stochastic nature of the UMAP procedure employed by BERTopic. ### Future developments To better understand the complex nature of ECs, we plan to perform a more in-depth analysis of both the network topology and the textual data produced by users. In particular, we plan to move from pairwise to higher-order interactions, thus explicitly accounting for group dynamics. In this way, we might be able to capture a wider range of interactions that might provide insight into homophilic behaviors related to the phenomenon, e.g., peer pressure. 
Furthermore, we aim to enhance the content analysis by integrating and studying the _stance_ of users towards the controversy within which ECs are detected. Stance detection is an NLP problem that fits well with the concept of echo chambers, because it concerns the prediction of users' positions (pro, none, or against) toward a target [59]. Finally, the proposed framework will be applied to other case studies - both on controversial and less contentious issues - to properly observe whether the patterns characterizing ECs are also found outside polarized discussions. For example, testing the framework on case studies comparing polarizing issues and neutral topics would allow us to verify the hypothesis about the higher neutrality used when discussing controversial issues inside ECs. This would also increase the number of studies on the dynamic development of ECs, given their scarcity in the literature. ## Acknowledgment This work is supported by: the EU NextGenerationEU programme under the funding scheme PNRR-PE-AI FAIR (Future Artificial Intelligence Research); the EU Horizon 2020 Program under the scheme "INFRAIA-01-2018-2019 - Integrating Activities for Advanced Communities" (G.A. n.871042) "SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics" ([http://www.sobigdata.eu](http://www.sobigdata.eu)); and the PNRR project "SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics" - Prot. IR0000013.
2307.10768
Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory
Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, and neural clusters and correlates specialized for different domains and functionalities of WM. In the experiments, we also reveal some limitations in existing models to approximate human behavior. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities. Our source code and data are available at https://github.com/ZhangLab-DeepNeuroCogLab/WorM.
Ankur Sikarwar, Mengmi Zhang
2023-07-20T10:57:02Z
http://arxiv.org/abs/2307.10768v2
# Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory ###### Abstract Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive **W**orking **M**emory (**W**or**M) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, and neural clusters and correlates specialized for different domains and functionalities of WM. In the experiments, we also reveal some limitations in existing models to approximate human behavior. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities. Our source code and data are available at: link ## 1 Introduction Working memory (WM) defines a core cognitive process enabling temporary storage, integration, manipulation, and recall of information. It is vital for many downstream tasks involving reasoning [40; 31; 25], decision-making [34; 18], and language comprehension [4; 7; 1]. Understanding and modeling WM has significant implications for both neuroscience and AI research. In neuroscience and cognitive science, extensive investigations have been conducted to unravel the underlying mechanisms of WM. The non-exhaustive list of works involves (1) identifying brain regions such as the prefrontal cortex [11; 35; 27], hippocampus [5; 38], and parietal cortex [20; 23; 32] as key players in WM processes; (2) exploring the role of WM capacity in cognitive processes such as selective attention [12; 33], and executive control [13; 6]; and (3) investigating the individual differences, such as age [14; 37], intelligence [9], and neurological disorders [2; 16], in WM abilities. In the field of AI, WM has also attracted considerable attention. AI researchers have aimed to develop computational models that can emulate and augment human-like WM capabilities. Various memory architectures [8; 19; 17; 44; 39] have been proposed, such as neural networks with memory cells, recurrent neural networks, and memory-augmented neural networks. These models have demonstrated promising results in specific WM tasks, such as copying, sorting, and memory recalls, as well as real-world applications [15; 24], such as navigation in naturalistic environments, video recognition, and language processing. However, despite significant progress in individual subdomains, there remains a notable gap in the broader study of WM. To date, research efforts in neuroscience, cognitive science, and AI have mostly focused on specific aspects of WM or individual memory tasks. 
There is a lack of systematic, integrative, and quantitative exploration that covers multiple tasks, encompasses various functionalities and domains of WM, and provides a systematic benchmark for evaluating AI models on these memory tasks. In this work, we aim to address this gap by establishing a general framework to study WM. To capture the multifaceted nature of WM, owing to its complexity, flexibility, and variability, we include 10 WM tasks and curate 1 million trials. State-of-the-art recurrent neural networks and transformer models are each jointly trained and tested on these tasks with the following goals: (1) to assess model behaviors, such as the set size effect and the effect of retention intervals; (2) to investigate neural populations, such as task-specialized neural clusters and neural correlates of executive control; and (3) to examine and compare performance across multiple WM models when performing different memory functions: storage, integration, manipulation, and supervision. As an upper bound, human performance on these WM tasks is provided for comparison with the AI models. In the experiments, we observe that AI models replicate some human-like characteristics of WM; however, we also identify several limitations of the existing WM models, such as disparities between human and model behaviors and inferior model performance on tasks that humans excel at. Through this interdisciplinary exploration, we strive to enhance our understanding of WM processes, uncover novel insights into the cognitive architecture of human WM, and propel the development of AI systems with more robust and human-like WM capabilities. Our main contributions are highlighted below: **1.** We introduce WorM, a comprehensive large-scale WM benchmark dataset, covering 10 WM tasks and containing 1 million trials, encompassing 4 functionalities and 3 domains of WM. **2.** We introduce evaluation metrics for all our tasks and establish a general methodology to jointly train, study, and benchmark WM models on all tasks. **3.** We explore 8 WM behavioral benchmarks and find that recurrent neural networks are more closely aligned with humans, while transformers exhibit limitations in replicating human behavior. **4.** We examine the neural populations of WM models. Our analysis reveals task-specialized neural clusters and uncovers neural correlates of executive control in WM models. ## 2 Psychophysics Experiments in WorM **Phase definitions in WM experiments.** In this section, we refer to our 10 WM tasks as experiments. A WM experiment often consists of several, but not necessarily all, of the phases (**Fig. 1**) described below:

* **Presentation phase (P\({}_{present}\))**: A sequence of stimuli to remember is presented.
* **Retention phase (P\({}_{retention}\))**: A grey blank image is presented for a period of time, during which the patterns remembered in P\({}_{present}\) have to be maintained in memory.
* **Probing phase (P\({}_{probe}\))**: Responses to a series of questions asking to recall certain aspects of the memory content are required. Sometimes, there are no questions and free recall is required instead.
* **Distractor phase (P\({}_{distractor}\))**: A series of other distracting tasks is presented in this phase to interfere with the original WM experiments.
* **Memory updating phase (P\({}_{update}\))**: A series of instructions to update the memory content is given.
* **Task switching phase (P\({}_{switch}\))**: A cue image is presented to signal the transition between a pair of tasks. 
**Time steps in WM experiments.** The time duration of each phase in human psychophysics experiments is often measured in milliseconds (**ms**). However, computational WM models do not have the notion of time. We establish a mapping between the time step \(t\) in WM models and **ms** in human psychophysics experiments for all the phases of all the experiments (see **Supp.** for mapping details). For consistency, we describe all the main text with time steps \(t\). We introduce \(T\) to denote the total time steps for each experiment and \(T_{phase}\) to denote the total time steps of each phase, where \(phase\) could be any of the phases introduced above. For example, \(T_{present}=3\) indicates that there are 3 time steps for the presentation phase and \(\text{t}_{present}=1\) indicates the first time step of the presentation phase. **Nomenclatures in cognitive science and neuroscience.** In human psychophysics experiments [3; 10; 21], researchers often use **"list length"**, denoted as \(L\), to indicate the total number of stimuli presented during each \(\text{P}_{present}\), and **"serial position"** to indicate the position of the \(l\)th stimulus in \(L\). For WM models, each stimulus is presented at each time step of \(\text{P}_{present}\). Thus, \(L=T_{present}\) and \(\text{t}_{present}=l\) denotes the \(l^{th}\) serial position during \(P_{present}\). Following the psychology and neuroscience literature [26], we also define **"set size"**, denoted as \(S\), to be the total number of items that need to be held and manipulated in working memory at a given time step. For all the experiments, we use these terms interchangeably, and we also use **"agents"** to indicate that the test subjects can be both humans and computational WM models. **Introduction to ten psychophysics experiments.** We introduce 10 psychophysics experiments on Working Memory (WM) from cognitive science and neuroscience [10; 26; 21; 3; 43; 29; 36; 30; 28]. Based on the WM functionalities these experiments entail (**Fig. 2b**), we categorize them into four functional groups: **(A)** memory storage, **(B)** memory integration, **(C)** memory manipulation, and **(D)** memory supervision. We also show the overview of the three WM domains (visual, spatial, and temporal) that each experiment involves in **Fig. 2a**. See **Supp.** for detailed descriptions of each experiment.

Figure 1: **Schematic illustration of all 10 working memory tasks.** Each panel illustrates the schematic for a different task. In each panel, the arrow pointing from left to right denotes the progression of time. Overlapping frames indicate multiple time steps whereas single frames represent a single time step in the trial. In F, we further divide the task into four types. Two types are shown here. See Sec. 2 for a detailed description of each task.

**Experiment A1 - Spatial Free Recall (SFR) [10] (Fig. 2A)**. At every \(\text{t}_{present}\), a randomly chosen square on a green grid turns red. In \(\text{P}_{probe}\), a blank grey image is shown. The task for the agent is 
Followed by a sequence of blank grey images in P\({}_{retent}\), the agents are tested with a probe array in P\({}_{probe}\), which can be identical to the memory array or have a single bar that differs along one feature dimension. The task for the agents is to make a binary prediction about whether there is a change or not given the test array. **Experiment A3 - Visual Item Recognition (VIR) [21] (Fig. 2C)**. In P\({}_{present}\), a sequence of distinct \(6\times 6\) matrix patterns is displayed. In P\({}_{retent}\), a series of blank grey images is shown. In P\({}_{probe}\), a probe image is presented, consisting of two patterns side-by-side. One pattern matches a previously shown pattern, while the other is a distractor. The agents must perform a binary classification task, where 0 represents the left pattern and 1 represents the right pattern. **Experiment A4 - Visual Serial Recall (VSR) [3] (Fig. 2D)**. In P\({}_{present}\), a sequence of \(L\) matrix patterns is sequentially presented. These patterns consist of a \(6\times 6\) grid with half of the cells filled in green. In P\({}_{probe}\), a probe image is displayed, containing all \(L\) patterns presented during T\({}_{present}\). Agents perform a 9-way classification task at each t\({}_{probe}\), recalling the matrix patterns seen during T\({}_{present}\) in serial order. **Experiment A5 - Visual Serial Recognition (VSRec) [3] (Fig. 2E)**. P\({}_{present}\) is exactly the same as P\({}_{present}\) in the VSR experiment. Differences were introduced during P\({}_{probe}\). At each t\({}_{probe}\), a target pattern from T\({}_{present}\) and a distractor pattern were presented together. Distractor patterns were distinct from the initially presented matrix patterns and differ from the target pattern by \(n\) cells, where \(n\) denotes the pattern-distractor difference. Agents performed a binary classification task, with 0 indicating the left pattern and 1 indicating the right pattern. **Experiment A6 - Complex Span (CS) [43] (Fig. 2F)**. The experiment includes two types of P\({}_{present}\) and P\({}_{probe}\), as well as two types of P\({}_{distractor}\). (I) The first type involves memorizing visual patterns and recalling them in order (visual). (II) The second type involves memorizing spatial patterns of ball movements and recalling their directions (spatial). Two types of P\({}_{distractor}\) are introduced to interfere with memory. (III) color discrimination tasks (visual) and (IV) spatial symmetry discrimination tasks (spatial) are used as distractors. Four variations of experiment conditions are considered: visual storage + visual distractor, spatial storage + visual distractor, visual storage + spatial distractor, and spatial storage + spatial distractor. **Experiment B1 - Spatial Coordination (SC) [29] (Fig. 2G)**. P\({}_{present}\) involves a \(10\times 10\) grid where one cell is highlighted in green at each t\({}_{present}\). In P\({}_{probe}\), agents are presented with a blank gray image and asked to perform a binary discrimination task to determine whether the pattern formed by integrating all the patterns in P\({}_{present}\) is symmetric about the vertical axis or not. **Experiment B2 - Spatial Integration (SI) [36] (Fig. 2H)**. P\({}_{present}\) involves sequentially presenting partial line drawings on a \(4\times 4\) grid, eventually completing a multi-segment figure. The number of Figure 2: **WorM covers a wide spectrum of working memory (WM) tasks encompassing 3 domains, 4 functionalities, and 11 characteristics. 
line segments in the partial drawings determines the number of integration operations required, which is also equivalent to \(L\). In P\({}_{probe}\), agents perform a binary classification task to determine if the given figure matches the mentally integrated figure from P\({}_{present}\).

Figure 2: **WorM covers a wide spectrum of working memory (WM) tasks encompassing 3 domains, 4 functionalities, and 11 characteristics.** For all ten tasks, we categorize them based on their objectives in studying different (a) memory domains, and (b) memory functions. See Fig. 1 for task acronyms. In (c), we present word clouds representing the studied memory characteristics. Larger font sizes indicate that more tasks (the exact number of tasks in brackets) produce results in this category.

**Experiment C1 - Spatial Memory Updating (SMU) [30] (Fig. 2I)**. In P\({}_{present}\), a stimulus with green squares is presented, each containing a red marker in one of the nine possible locations on a 3\(\times\)3 grid. P\({}_{update}\) involves arrows indicating the direction for mentally updating the marker's location. In P\({}_{probe}\), agents perform a 9-way classification task to recall the marker's location in the highlighted square.

**Experiment D1 - Spatial Task Switching (STS) [28] (Fig. 2J)**. In P\({}_{switch}\), a cue image indicates the discrimination task for the subsequent P\({}_{probe}\). There are two tasks across which to switch: top-versus-bottom and left-versus-right. At every P\({}_{probe}\), a red marker appears in a \(2\times 2\) grid, and agents switch between the two tasks based on the cues presented during P\({}_{switch}\).

**Experimental trial splits for computational WM models.** For every WM psychophysics experiment introduced above, we generate 86,400 trials for training, 9,600 trials for validation, and 9,600 trials for testing, following a ratio of 9:1:1. These trials are distributed uniformly among the various experimental conditions for each individual experiment.

## 3 Computational Models of Working Memory

**Joint training and testing regime.** There exist multiple approaches to train and evaluate computational working memory (WM) models using our dataset. However, our aim here is not to exhaustively benchmark WM models across all possible training and testing regimes. Instead, we aim to lay the foundation for a systematic and quantitative methodology to comprehensively study the multiple facets of WM. In this regard, we propose a joint learning paradigm where we simultaneously train and test different models on all conditions of all ten WM tasks. This approach allows us to explore the intricate interactions and inter-dependencies among the different WM tasks, capturing the complex nature of WM performance.

**Architectures for working memory models.** We investigate several state-of-the-art variations of recurrent neural networks, including vanilla recurrent neural networks (RNNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks [8; 19]. Furthermore, considering the recent success of Transformers [42], we also include vanilla transformer-based encoders (TRF). Below, we introduce the details of our model architectures (**Fig. 3**).

Figure 3: **Overview of Working Memory (WM) models.** All WM models take a batch of stimuli in sequences from \(I_{t=1}\) to \(I_{t=T}\) as inputs and predict the task-specific responses at every single time step \(t\). These responses are compared against the batched and masked ground truths with classification losses. The gradients are only back-propagated based on the actual target values. For time steps without responses required (denoted as “NaN”), the predicted responses from the linear classifiers are not computed, and thus, no gradients are back-propagated. See legends for box notations. “shared” refers to learnable parameters shared jointly over tasks except for “task-specific shared” in linear classifiers, which refers to learnable weight parameters shared across time steps within the same task but different over tasks. See **Sec. 3** for model details.

**Feature Extraction.** At each time step \(t\) of a trial, the visual stimulus \(I_{t}\) comes as a \(32\times 32\times 3\) tensor, which is then fed to a 4-layer 2D-convolutional network (2D-ConvNet). See **Supp.** for the 2D-ConvNet architecture. 
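For concreteness, the following is a minimal PyTorch sketch of such a stimulus encoder. The channel widths, kernel sizes, and feature dimension \(D\) are hypothetical placeholders; the actual architecture is specified in the paper's supplementary material.

```python
import torch.nn as nn

class StimulusEncoder(nn.Module):
    """4-layer 2D-ConvNet followed by a linear projection to a D-dim F_t."""

    def __init__(self, d=100):  # d plays the role of D; value is illustrative
        super().__init__()
        layers, c_in = [], 3
        for c_out in (32, 64, 128, 128):  # illustrative channel widths
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                       nn.ReLU()]
            c_in = c_out
        self.conv = nn.Sequential(*layers)      # 32x32 input -> 2x2 feature maps
        self.proj = nn.Linear(128 * 2 * 2, d)   # flatten to K = 512, project to D

    def forward(self, x):  # x: (batch, 3, 32, 32)
        return self.proj(self.conv(x).flatten(1))  # F_t: (batch, D)
```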
The output feature maps from the 2D-ConvNet are then flattened to a vector of dimension \(K\) and linearly projected to a feature representation \(F_{t}\) of dimension \(D\) using a weight matrix \(\in\mathbb{R}^{K\times D}\). Different trials can be of different lengths; therefore, we pad all the trials with blank images to ensure that all trials have a consistent length of 20.

**Task embeddings.** In addition to extracting stimulus features, we also introduce learnable task-specific embeddings \(M\in\mathbb{R}^{14\times 14}\), informing the network about the task identity during joint training. At every \(t\) of each task, the corresponding task embedding is concatenated with the stimulus feature representation \(F_{t}\) from \(I_{t}\), resulting in a (14+D)-dimensional representation \(A_{t}\). \(A_{t}\) is further fed to the various WM models introduced below for further processing. See **Supp.** for details.

**Encoding Time-dependent Representations.** As we have RNN-based networks and transformer-based networks to process stimuli sequences, we introduce their model designs separately. For RNNs, GRUs, and LSTMs, we use only one recurrent layer. The recurrent networks take the (14+D)-dimensional representation \(A_{t}\) above as input at each \(t\). We define the capacity \(C\) of these models as the number of hidden units in this recurrent layer. At every \(t\), the recurrent networks output a hidden representation \(h_{t}\) of dimension \(C\) used for predicting task-specific responses. For TRFs, we introduce extra learnable positional embeddings \(P\in\mathbb{R}^{20\times(14+D)}\), which are shared across all tasks. Each row of \(P\) indicates a "time step" (see **Supp.** for details). At the \(t^{th}\) time step, \(P_{t}\) is incorporated into \(A_{t}\) via element-wise addition. We denote this positional-embedding-modulated stimulus feature representation as \(A_{TRF,t}\), which is then fed to the TRF. We include two standard transformer encoder blocks in TRF. We define the model's capacity \(C_{TRF}\) as the dimension \(14+D\) of the stimulus feature vector \(A_{t}\). Additionally, to simulate a more realistic scenario where the TRF model can only rely on past context to make predictions, we apply masking in the self-attention mechanism to prevent the model from accessing future information. We take the output \(h_{t,TRF}\) of TRF corresponding to the \(t^{th}\) time step and use it for predicting task-specific responses at \(t\).

**Response Generation.** The output of a WM network, \(h_{t}\) or \(h_{t,TRF}\), is fed to a task-specific linear classifier with weight matrix \(O_{task}\in\mathbb{R}^{(14+D)\times U_{task}}\) for generating final probability distributions over a fixed set of all possible response choices in the corresponding task. A minimal sketch of this masked, task-specific readout is given below. 
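This sketch illustrates the per-task readout together with the masked training objective described under Training Details below. The task names, `hidden_dim`, and the use of `-100` as the no-response ("NaN") marker are illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

hidden_dim = 256  # illustrative; corresponds to C (or 14+D for the TRF)

# One linear readout O_task per task, shared across time steps within a task
heads = nn.ModuleDict({
    "VSR": nn.Linear(hidden_dim, 9),  # 9-way serial recall choice
    "CD":  nn.Linear(hidden_dim, 2),  # change / no change
    # ... one entry per task, with U_task output units each
})

def masked_task_loss(h, targets, task):
    """h: (batch, T, hidden_dim) hidden states for one task's trials.
    targets: (batch, T) class ids, with -100 at time steps where no
    response is required, so no gradient flows there."""
    logits = heads[task](h)                       # (batch, T, U_task)
    return F.cross_entropy(logits.flatten(0, 1),  # merge batch and time
                           targets.flatten(),
                           ignore_index=-100)
```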
Note that \(O_{task}\) is shared across all \(T\) time steps within the same task, but it is independent over different tasks. For example, in the VSR task, during \(\text{P}_{probe}\), the WM model has to predict which pattern among the 9 shown patterns is the target pattern. In this case, \(U_{task}=9\), the linear classifier outputs a probability distribution of dimension 9, and the choice with the largest probability is the final response selected by the model.

**Training Details.** Practically, the WM models output responses for all \(T\) steps. However, training supervision with the corresponding losses is only applied to the set of time steps where actual responses are required. All model parameters, including the feature extractors and WM networks, are initialized using Xavier initialization and trained from scratch. See **Supp.** for more details.

## 4 Results and Analysis

We analyze multiple memory characteristics (**Fig. 2c**) individually for the corresponding tasks and for different WM models. We use "model-\(C\)" to refer to the WM model of memory capacity \(C\). For example, LSTM-128 refers to the model with an LSTM of memory capacity \(C=128\) as its backbone. As benchmarks, we also include human responses from [10; 26; 21; 3; 43; 29; 36; 30; 28] for direct comparison with WM models. Importantly, the WM models were not trained on human data or influenced by human biases. Therefore, our focus is not on comparing absolute performance disparities between humans and WM models. Instead, we aim to examine their qualitative trends across different conditions. Due to the extensive combinations of models and tasks, we present a condensed selection of results in this section. See **Supp.** for detailed result analysis.

**A. Primacy and recency effect across list lengths is an emergent phenomenon.** We report the top-1 accuracy as a function of serial positions across various list lengths \(L\) for LSTM-256 in the VSR task (**Sec. 2**, **Fig. 4A**). We make several observations. Firstly, as \(L\) increases, the memory load in WM also increases, resulting in a decrease in overall accuracy across different list lengths (indicated by the varying colors of the lines, from blue to brown). Secondly, our findings in WM models align with the well-known cognitive science and neuroscience phenomena of the primacy and recency effects. These effects demonstrate that items presented at the beginning (primacy) and end (recency) of a trial are better retained than those presented in the middle. This pattern holds true across different \(L\) values, indicating that the primacy and recency effects are an emergent phenomenon in our WM model. Thirdly, we observed an asymmetry in the strength of the primacy and recency effects, with the primacy effect being more prominent than the recency effect. This is evident when comparing accuracy at positions 1 and 9 for \(L\) = 9. Fourthly, the primacy and recency effects become more pronounced with longer sequences \(L\), suggesting that these effects are more prominent when the memory load is high. For instance, the effects are minimal for \(L\) = 3 but become significant for \(L\) = 7. Finally, we compared the behaviors of WM models with humans and found that, though the models were never trained on human data, they approximate human-like qualitative trends in the primacy and recency effects across different serial positions and \(L\) values.
**B. Short retention intervals have minimal effects on working memory.** We report the accuracy as a function of serial positions under the conditions of different retention intervals in the VIR task (**Sec. 2**, **Fig. 4B**). Aligning with human results, we observe that short retention intervals have minimal effects on WM performance, as indicated by the small performance gap between \(T_{retent}\) of 0 and 5 (blue versus orange lines). It is possible that the effect might be more dramatic with longer retention intervals. Moreover, as in **Fig. 4A**, we note that a small recency effect also exists for both models and humans. In particular, the effect is slightly stronger for \(T_{retent}=5\) than for \(T_{retent}=0\). However, this effect is less prominent than the ones observed in **Fig. 4A**, probably due to the short \(T_{retent}=5\) or the small list length.

**C. Domain conflicts and increased cognitive loads impair working memory performance.** To investigate whether domain conflicts impair WM performance, we report accuracy in the CS task as a function of cognitive load under different conditions involving combinations of visual and spatial domains (**Sec. 2**, **Fig. 4C**). In humans, regardless of the domain of the distractor task, we observe a monotonic decrease in accuracy with increasing cognitive load. This suggests that WM performance is impaired by the cognitive load imposed by the distractor tasks, rather than by the specific domain of the distractor task itself. However, in the WM model, we observe a monotonic decrease in accuracy only when the distractor task is of the spatial domain, implying that our spatial distractor task introduces a higher cognitive load than the visual distractor task, thereby hurting WM performance.

**D. Monotonic decrease in accuracy with increasing set size.** We report the agent's accuracy as a function of set sizes \(S\) in the SMU task (**Sec. 2**) in **Fig. 4D**. For recurrent models, the recall accuracy monotonically decreases with increasing set sizes. For different recurrent backbones with the same memory capacities, GRU outperforms LSTMs and RNNs (compare GRU-96 vs RNN-96 vs LSTM-96). Moreover, for the same recurrent backbone, larger memory capacity enhances overall recall accuracy across all set sizes (RNN-96 vs RNN-256). Humans show a similar qualitative set size effect as recurrent models. In contrast, TRF-128 shows an opposite trend to humans and recurrent models, with accuracy increasing with set size initially. Also, note that TRF-128 outperforms other RNN models in recall accuracy at set size 8, even with smaller memory capacity (e.g., RNN-256).

Figure 4: **Performance benchmarks and behavioral analysis for working memory models and humans.** We present the behavioral accuracy as a function of (A) list lengths \(L\) for LSTM-256 in the VSR task, (B) retention interval in the VIR task, (C) memory domain conflicts for LSTM-1024 in the CS task, (D) set sizes \(S\) in the SMU task, (E) memory resolution \(n\) in the VSRec task, and (F) number of features or conjunctions of features per item in the CD task. See **Sec. 2** for all the task introductions. See **Sec. 4** for the analysis of these results.

**E. Working memory stores fine-grained details of visual patterns.** We report the accuracy as a function of serial positions over various pattern-distractor differences \(n\) in VSRec (**Sec. 2**, **Fig. 4E**).
Intuitively, the easier it is to discern the differences between the correct pattern and the distractor, the more accurately the correct pattern can be recalled. Indeed, we see an increase in overall accuracy for larger \(n\). This trend is similar to the human results. Moreover, as also observed in **Fig. 4A**, we see a primacy effect for both WM models and humans, where the accuracy is highest at position 1 and slowly decreases over subsequent positions. Interestingly, despite the difficulty of VSRec, for both humans and the models, the accuracy of recalling 36 cells correctly from the distractors with only 2 cell differences remains well above chance. This suggests that both humans and WM models are very good at memorizing fine-grained details of complex patterns.

**F. Memory capacity is independent of the features or conjunctions of features.** We study the capacity for storing different features and conjunctions in the CD task based on the agent's accuracy over set sizes under different feature conditions (**Sec. 2**, **Fig. 4F**). Humans display minimal accuracy differences across feature conditions, suggesting that memory capacity is independent of specific features or feature combinations. Similarly, WM models exhibit comparable accuracy regardless of feature conditions. However, caution is advised in interpreting the model results, as accuracy saturates at 100% regardless of set size, likely due to overfitting. To draw stronger conclusions, further investigation in a low-data regime using our trials is needed.

**G. Neural population carries active task representation in task-switching experiment.** In the STS task (**Sec. 2**), we present the visualization results of clusters based on the neural activation of all hidden units across all trials and all conditions for LSTM-1024 in **Fig. 5A**. Specifically, we first take the hidden representation during P\({}_{probe}\) for all the trials within the task and then perform t-SNE [41] to project the hidden representation to 2D. On the left panel in **Fig. 5A**, we color-code the points based on the ground truth at that particular t\({}_{probe}\). In this case, we did not observe distinct colored clusters, whereas when we color-coded the points based on the task that was supposed to be performed at t\({}_{probe}\), we observed two distinct colored clusters. In other words, the neural representations in the recurrent model encode task identities within a task-switching paradigm, thus playing a supervisory role. See **Supp.** for more details.

Figure 5: **Visualization of neural correlates and task-specialized neural clusters.** (A) For LSTM-1024 in the STS task, the visualization results of neural clusters based on response behaviors (left) and task identities (right) are presented. See **Sec. 4G** for details. In (B), we present the t-SNE visualization of neural clusters based on the Task Variance (TV) of all hidden units in LSTM-256 over all 10 tasks. In (C), we present the neural selectivity of all hidden units for all the tasks. The neural selectivity is defined as the TV values. All hidden units from left to right are sorted based on the clusters in (B). See the colorbar for TV values. In (D), we present the matrix of histogram plots for any pair of tasks, where each histogram indicates the number of hidden units more selective to one task over the other. The selectivity for each hidden unit over a pair of tasks is defined as the fractional task variance (FTV).
The x-axis and the y-axis of the histogram denote the FTV and the frequency of hidden units, respectively. See **Sec. 4H** for details.

**H. Responses from neural populations reveal task selectivity during joint training.** To study the task-specialized neural clusters in recurrent models, we present the visualization results in **Fig. 5B-D**. See **Supp.** for the steps to obtain and interpret the visualization results. From **Fig. 5B, C**, we observed 8 distinct neural clusters, and each neural cluster is selective to a particular set of tasks. Some clusters are more active in different domains of WM. For example, cluster 2 is more active in the SC, SFR, and SI tasks, which often involve spatial WM, while cluster 5 is more active in VSR and VSRec, which involve temporal WM. We also note that clusters of neurons mostly emerge in networks of lower capacity. From **Fig. 5D**, we show diverse neural populations, where some have preferences for one task over the other, while some are equally active in both tasks. For instance, the entry in row 1, column 5 indicates that the majority of hidden units are more selective to the SC task compared with STS, as the distribution of the neural population shifts to the right, with the majority of the neurons having FTV \(=1\).

## 5 Discussion

Despite significant research progress in studying individual aspects of working memory (WM), there remains a huge gap in the broader study of WM. We take initial steps in this direction by establishing a systematic and quantitative methodology to study the multiple facets of WM. However, it is important to recognize that we are still at a preliminary stage in a broader exploration of WM. The complexity, flexibility, and variability of WM present numerous avenues for future research, such as establishing improved time mappings between human performance and model predictions, exploring the generalization capabilities of WM models across a wider range of tasks and domains, and expanding the benchmarking of bio-inspired WM models. In our work, we introduce a comprehensive benchmark dataset consisting of 10 tasks and 1 million trials, covering 4 functionalities and 3 domains of WM. Moreover, we benchmark WM models and humans on the 10 tasks and compare their 11 memory characteristics at the behavioral, performance, and neural levels with a set of newly introduced evaluation metrics. We obtained insights about the underlying mechanisms of WM from these tasks and identified the limitations of existing WM models. The methodology established in our work serves as a valuable resource, setting the stage for comprehensive investigations into WM and paving the way for further advancements in both biological and artificial intelligence systems.

## Acknowledgments and Disclosure of Funding

This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025), its NRFF award NRF-NRFF15-2023-0001, Mengmi Zhang's Startup Grant from the Agency for Science, Technology, and Research (A*STAR), and an Early Career Investigatorship from the Center for Frontier AI Research (CFAR), A*STAR. The authors declare that they have no competing interests.
List of Supplementary Sections

* A Psychophysics Experiments in WorM
  * A.1 Detailed experiment descriptions
  * A.2 Time mapping between WM models and Human psychophysics experiments
* B Computational Models of Working Memory
* C Results
* D Datasheet for WorM Dataset
  * D.1 Motivation
  * D.2 Composition
  * D.3 Collection Process
  * D.4 Preprocessing/cleaning/labeling
  * D.5 Uses
  * D.6 Distribution
  * D.7 Maintenance

List of Supplementary Figures

* S1 Detailed schematic of each experiment
* S2 Behavioral accuracy as a function of list lengths in VSR task
* S3 Behavioral accuracy as a function of retention interval in VIR task
* S4 Behavioral accuracy as a function of memory domain conflicts in CS task
* S5 Behavioral accuracy as a function of set size in SMU task
* S6 Behavioral accuracy as a function of memory resolution \(n\) in VSRec task
* S7 Behavioral accuracy as a function of features and conjunctions in CD task
* S8 Training curves for different model architectures and capacities
* S9 Training curve of individual tasks for LSTM-1024
* S10 Visualization of similarity matrix based on learned task embeddings from WM models with different memory capacities

List of Tables

* S1 Time mapping between WM models and Human psychophysics experiments

## Appendix A Psychophysics Experiments in WorM

### Detailed experiment descriptions

**Experiment A1 - Spatial Free Recall (SFR) [10]**. This experiment focuses on the spatial domain of WM and aims to assess the ability to remember spatial locations and engage in immediate free recall of that information. **Experiment schematic:** The experiment begins with P\({}_{present}\) followed by P\({}_{probe}\). In each trial, the stimulus image in P\({}_{present}\) contains 30 green squares randomly chosen from a \(10\times 10\) grid. At every t\({}_{present}\), one randomly selected square of the 30 green ones turns red. Finally, in P\({}_{probe}\), a grey blank image is presented, signaling the WM model to recall all the past spatial locations where the squares turned red. Humans, on the other hand, received an auditory cue to begin the recall. Humans indicate the highlighted squares by clicking on them in any order using a mouse cursor. The WM models, however, engage in a 100-way multi-label classification task with multi-hot encoded targets corresponding to the highlighted square locations. Afterward, we select the top-\(L\) squares from the WM model's prediction based on the output probabilities, sort them in descending order, and generate the model's recall response, where \(L=T_{present}\) refers to the list length in the trial. **Experiment conditions:** We vary the list length \(L=T_{present}\) over 1-8, 10, 12, 15, and 18. Refer to **Fig. S1A** for an example trial.

**Experiment A2 - Change Detection (CD) [26]**. This experiment involves the visual domain of WM and aims to evaluate the agent's memory capacity for simple features and conjunctions. **Experiment schematic:** The experiment consists of P\({}_{present}\), P\({}_{retent}\), and P\({}_{probe}\), in the respective order. In P\({}_{present}\), where T\({}_{present}=1\), a memory array, consisting of \(S\) bars with varying attributes in color (red or green), orientation (horizontal or vertical), size (small or big), and presence/absence of a gap in the middle of each bar, is presented. Here, \(S\) refers to the set size of the trial. During P\({}_{retent}\), a series of blank grey images is presented.
In P\({}_{probe}\), where T\({}_{probe}=1\), agents are probed with a test array that can be either identical to the memory array or have one bar that differs along a single feature dimension. The agents have to make a binary prediction about whether or not there is a change, given the test array. **Experiment conditions:** We formulate 5 main experiment conditions by varying the change of the test array in five aspects: color, orientation, bar size, presence or absence of the gap, and conjunctions of the four features, where any of the four features could change. Additionally, we vary \(S\) over 2, 4, 6, 8, 10, and 12, and also vary \(T_{retent}\) over 0, 6, 12, and 18. See **Fig. S1B** for an example trial from this experiment.

**Experiment A3 - Visual Item Recognition (VIR) [21]**. Here, we examine visual WM with lists of different lengths and investigate the effect of the retention interval on recognition performance. **Experiment schematic:** The experiment is structured with three phases presented in the following order: P\({}_{present}\), P\({}_{retent}\), and P\({}_{probe}\). In P\({}_{present}\), a series of distinct matrix patterns is presented in sequence. These patterns are composed of \(6\times 6\) square cells, with half of the cells being filled with red. In P\({}_{retent}\), a series of grey blank images is presented. In P\({}_{probe}\), where T\({}_{probe}=1\), a probe image is presented, consisting of two patterns presented side-by-side. One of these patterns is an exact match to one of the previously shown patterns, while the other pattern serves as a distractor. The distractor pattern is generated from the target pattern by changing 2 unfilled cells to filled ones and vice versa. The agents have to perform a binary classification task, wherein 0 represents the left pattern and 1 represents the right pattern. **Experiment conditions:** We vary the list length \(L=T_{present}\) over 4, 6, 8, and 10, and also vary \(T_{retent}\) over 0, 2, 4, 5, and 6. See **Fig. S1C** for an example trial.

**Experiment A4 - Visual Serial Recall (VSR) [3]**. This experiment evaluates the ability to accurately recall matrix patterns as well as the order in which they were presented. Therefore, this experiment encompasses both the visual and temporal domains of WM. **Experiment schematic:** The experiment consists of P\({}_{present}\) followed by P\({}_{probe}\). P\({}_{present}\) involves the presentation of \(L\) matrix patterns in a sequential manner. These matrix patterns consist of square cells arranged in a \(6\times 6\) grid, with half of the cells filled in green. In P\({}_{probe}\), a probe image is displayed containing all the \(L\) patterns presented during P\({}_{present}\). These \(L\) patterns are placed simultaneously at randomly chosen locations from a set of 9 evenly spaced locations in a \(3\times 3\) grid. Locations that are not used remain blank. At each t\({}_{probe}\), the agents perform a 9-way classification task to indicate the first pattern presented during P\({}_{present}\), followed by the second pattern, and so on. **Experiment conditions:** We vary the list length \(L=T_{present}\) over 2-9. An example trial is shown in **Fig. S1D**.

**Experiment A5 - Visual Serial Recognition (VSRec) [3]**. In this experiment, agents are required to recall sequentially presented matrix patterns in a forward serial order within a recognition paradigm. Here, we also investigate the resolution of the recalled patterns by varying the difference between target patterns and distractor patterns.
This experiment again involves both the visual and temporal domains of WM. **Experiment schematic:** The experiment is similar to the VSR experiment above in terms of setup. The experiment consists of P\({}_{present}\) and P\({}_{probe}\). P\({}_{present}\) is exactly the same as P\({}_{present}\) in the VSR experiment. The only differences are in P\({}_{probe}\). During P\({}_{probe}\), a series of two-alternative recognition tests is presented in consecutive time steps to assess memory for the presented patterns in forward serial order. Essentially, at every t\({}_{probe}\), the target pattern from the corresponding t\({}_{present}\) and a distractor pattern are presented side by side. The agents perform a binary classification task to indicate the target pattern, where 0 corresponds to the left pattern and 1 corresponds to the right pattern. The distractor patterns are generated from the target pattern by changing the values of \(n\) cells such that the total number of filled cells remains constant. We denote \(n\) as the pattern-distractor difference. We ensure that the distractor patterns are not among the initially presented matrix patterns. **Experiment conditions:** We vary the list length \(L=T_{present}\) over 2-9 and the pattern-distractor difference \(n\) over 2, 4, 6, 8, and 10. See **Fig. S1E** for an example trial.

**Experiment A6 - Complex Span (CS) [43]**. This experiment aims to investigate the interference between the visual and spatial domains of WM, as well as the effect of cognitive load on recall performance. Moreover, in this experiment, agents are required to perform a serial recall of information. Hence, this experiment encompasses all three domains of WM: visual, spatial, and temporal. **Experiment schematic:** The experiment consists of two types of P\({}_{present}\), two corresponding types of P\({}_{probe}\), and two types of P\({}_{distractor}\). (I) The first type of P\({}_{present}\) and P\({}_{probe}\) corresponds to visual storage and involves memorizing a sequence of visual patterns in P\({}_{present}\) and recalling the exact visual patterns in serial order in P\({}_{probe}\). The visual patterns are represented as a \(4\times 4\) grid, where half of the cells are in red. (II) The second type of P\({}_{present}\) and P\({}_{probe}\) corresponds to spatial storage and involves memorizing a sequence of spatial patterns involving ball movements in P\({}_{present}\) and recalling the exact directions of the ball movements in serial order in P\({}_{probe}\). To interfere with the spatial/visual memories in P\({}_{present}\), two types of P\({}_{distractor}\) are introduced: (III) The first type of P\({}_{distractor}\) corresponds to the visual domain and involves color discrimination tasks, where agents have to classify the color of the given panel as blue or red. (IV) The second type of P\({}_{distractor}\) corresponds to the spatial domain and involves symmetry discrimination tasks, where agents have to classify whether the given pattern is symmetric about the vertical axis or not. **Experiment conditions:** We consider four different variations: I(P\({}_{present}\)) + III(P\({}_{distractor}\)) + I(P\({}_{probe}\)); II(P\({}_{present}\)) + III(P\({}_{distractor}\)) + II(P\({}_{probe}\)); I(P\({}_{present}\)) + IV(P\({}_{distractor}\)) + I(P\({}_{probe}\)); and II(P\({}_{present}\)) + IV(P\({}_{distractor}\)) + II(P\({}_{probe}\)); namely, visual + visual, spatial + visual, visual + spatial, and spatial + spatial, respectively.
Additionally, we change the cognitive load by varying T\({}_{distractor}\) over 0, 1, 3, and 5. See **Fig. S1F** for an example trial.

**Experiment B1 - Spatial Coordination (SC) [29]**. The objective of this experiment is to assess the agents' WM capacity in the spatial domain, specifically focusing on their ability to coordinate and integrate spatial information across various time steps. **Experiment schematic:** The experiment consists of P\({}_{present}\) and P\({}_{probe}\), in the respective order. At every t\({}_{present}\), one cell in a \(10\times 10\) grid is highlighted in green. In P\({}_{probe}\), the agents are probed using a blank grey image to perform a binary discrimination task, indicating whether the pattern mentally integrated over all T\({}_{present}\) is symmetric about the vertical axis or not. **Experiment conditions:** We vary the list length \(L=T_{present}\) over 10, 12, 14, 16, and 18. Refer to **Fig. S1G** for an example trial.

**Experiment B2 - Spatial Integration (SI) [36]**. This experiment focuses on the spatial domain and aims to investigate the agents' ability to integrate spatial information. **Experiment schematic:** The experiment consists of P\({}_{present}\) and P\({}_{probe}\). In P\({}_{present}\), partial line drawings are presented sequentially, one at every t\({}_{present}\). In P\({}_{probe}\), where T\({}_{probe}=1\), a complete line drawing containing 12 line segments is presented. The task is to mentally integrate the partial drawings shown during P\({}_{present}\) and compare them to the complete line drawing shown during P\({}_{probe}\). Essentially, the agents perform a binary classification to indicate whether the figure shown in P\({}_{probe}\) is identical to the mentally integrated figure from P\({}_{present}\). All line drawings are generated by connecting neighboring points within an imaginary \(4\times 4\) grid. **Experiment conditions:** We vary the number of line segments in the partial drawings over 12, 6, 4, 3, 2, and 1, resulting in list lengths \(L=T_{present}\) of 1, 2, 3, 4, 6, and 12, respectively. As the list length increases, the number of integrations needed to solve the trial also increases. See **Fig. S1H** for an example trial.

**Experiment C1 - Spatial Memory Updating (SMU) [30]**. This experiment assesses the capabilities of memory manipulation in the spatial domain. **Experiment schematic:** The experiment consists of P\({}_{present}\), P\({}_{update}\), and P\({}_{probe}\), in the respective order. In P\({}_{present}\), where T\({}_{present}=1\), a stimulus is presented, consisting of \(S\) green squares arranged in an imaginary circular formation. Within each square, there is a red marker located in one of the nine possible locations within an invisible \(3\times 3\) grid. Agents are required to memorize the spatial locations of the markers within each square. Next, at every t\({}_{update}\) in P\({}_{update}\), an arrow is presented at the center of one of the squares, indicating the direction for mentally updating the marker's new location. The arrows can be oriented vertically, horizontally, or diagonally, with the condition that the marker never exits the square. A total of 8 memory update operations is presented, moving clockwise through the \(S\) squares. Hence, T\({}_{update}=8\). Finally, at every t\({}_{probe}\) of P\({}_{probe}\), one out of the \(S\) squares is highlighted in red.
This red square acts as a cue for the agents to recall and indicate the marker's location within that particular square through a 9-way classification task. The red marker locations for all \(S\) squares are probed in random order until t\({}_{probe}=S\). **Experiment conditions:** We vary \(S\) from 1 to 8. Refer to **Fig. S1I** for an example trial.

**Experiment D1 - Spatial Task Switching (STS) [28]**. This experiment involves the spatial domain of WM and investigates the agent's capacity to flexibly transition between two spatial discrimination tasks based on supervision cues. **Experiment schematic:** The experiment consists of two types of phases, P\({}_{switch}\) and P\({}_{probe}\), presented in random order. In P\({}_{switch}\), a cue image is presented to the agents to indicate the discrimination task to be performed in the subsequent P\({}_{probe}\). There are two binary discrimination tasks: top-versus-bottom and left-versus-right. For the left-versus-right discrimination task, the cue image contains dash markers placed on the left and right sides of the image. In contrast, for the top-versus-bottom discrimination task, the dash markers are positioned on the top and bottom sides of the image. At every t\({}_{probe}\) in P\({}_{probe}\), a red marker is presented in any of the four locations within a \(2\times 2\) grid. The agents are required to switch between the two discrimination tasks during every P\({}_{switch}\) and respond with 0 for top/left positions and 1 for bottom/right positions. **Experiment conditions:** We randomly vary the number of task switches, i.e., the number of P\({}_{switch}\) phases, across trials, as well as the time steps at which the task switches occur. See **Fig. S1J** for an example trial.

**Experimental trial splits**. In each WM psychophysics experiment, we generate a total of 86,400 trials for training, 9,600 trials for validation, and 9,600 trials for testing. Importantly, note that for the CD experiment, there are five distinct conditions, each of which consists of its own set of 86,400 trials for training, 9,600 trials for validation, and 9,600 trials for testing. A sketch of how a single trial can be generated programmatically is given below.
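As an illustration of the trial format shared by these experiments, the following is a minimal NumPy sketch of generating one VSR trial; the function and parameter names are hypothetical, the rendering of the composite probe image is omitted, and the released generator may differ in details.

```python
import numpy as np

def make_vsr_trial(L=5, grid=6, T=20, rng=None):
    """Sketch: one Visual Serial Recall trial (assumed names and parameters).

    Returns a (T, grid, grid) binary stimulus sequence padded to T steps and
    per-step targets, with NaN marking steps where no response is required."""
    if rng is None:
        rng = np.random.default_rng()
    n_filled = grid * grid // 2              # half of the cells are filled
    patterns = []
    while len(patterns) < L:                 # L distinct patterns for P_present
        p = np.zeros(grid * grid, dtype=np.uint8)
        p[rng.choice(grid * grid, size=n_filled, replace=False)] = 1
        if not any(np.array_equal(p, q) for q in patterns):
            patterns.append(p)
    slots = rng.choice(9, size=L, replace=False)   # 3x3 probe layout positions
    stimuli = np.zeros((T, grid, grid), dtype=np.uint8)
    for t, p in enumerate(patterns):
        stimuli[t] = p.reshape(grid, grid)   # presentation: one pattern per step
    # Probe phase (composite probe image omitted): the 9-way target at probe
    # step t is the layout slot holding the t-th presented pattern.
    targets = np.full(T, np.nan)
    targets[L:2 * L] = slots                 # supervision only at probe steps
    return stimuli, targets
```

The same scaffolding, namely sampling phase-specific stimuli, padding with blanks to 20 steps, and attaching NaN-masked targets, carries over to the other nine tasks.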
### Time mapping between WM models and Human psychophysics experiments

There could be multiple ways of establishing the time mapping between WM models and human psychophysics experiments (**Fig. 1**). Here, for the WM models, we adopt a one-stimulus-per-time-step rule. In **Tab. S1**, we report the mapping of a single time step in WM models to the duration of the corresponding stimulus presentation in the human experiments. For instance, in the SFR human experiment, the duration of each stimulus presentation during P\({}_{present}\) was 750 ms, whereas the WM models were exposed to each stimulus for only 1 time step. Therefore, within the P\({}_{present}\) phase of the SFR experiment, a single time step of the WM models corresponds to a duration of 750 ms in the human experiments.

Figure S1: **Detailed schematic of each experiment.** We expand the overviews of all ten experiments introduced in **Fig. 1** and **Sec. 2** with detailed experiment schematics. Each column represents the schematic of one experiment. The time goes from the top to the bottom. Each row presents the example stimulus at that time step \(t\). The legend and interpretations of this figure follow **Fig. 1**.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Task & P\({}_{present}\) & P\({}_{retent}\) & P\({}_{probe}\) & P\({}_{distractor}\) & P\({}_{switch}\) & P\({}_{update}\) \\
\hline
SFR & 750 ms & * & - & * & * & * \\
CD & 100 ms & 150 ms & 2000 ms & * & * & * \\
VIR & 1000 ms & 1000 ms & - & * & * & * \\
VSR & 1550 ms & * & - & * & * & * \\
VSRec & 1550 ms & * & - & * & * & * \\
CS & 500 ms & * & - & 1700 ms & * & * \\
SC & 1000 ms & * & - & * & * & * \\
SI & - & * & - & * & * & * \\
SMU & - & * & - & * & * & - \\
STS & * & * & 1500 ms & * & 203 ms & * \\
\hline \hline
\end{tabular}
\end{table}
Table S1: **Time mapping between WM models and Human psychophysics experiments.** We show the mapping of a single time step in WM models to **ms** in different phases of the corresponding human psychophysics experiments. Asterisks (*) indicate the absence of a specific phase in the experiment. Dashes (-) indicate that either humans were provided with an indefinite amount of time during that phase, or the duration of the stimulus presentation was variable and not standardized.

## Appendix B Computational Models of Working Memory

**Feature Extractor**. We employ a 4-layer 2D-convolutional network with 64, 128, 256, and 512 channels in the successive layers.

**Task Embeddings**. We learn task-specific embeddings \(M\in\mathbb{R}^{14\times 14}\), which inform the model of which task needs to be performed. There are a total of 10 WM tasks; however, the Change Detection (CD) task comprises five distinct task conditions, namely color, orientation, size, gap, and conjunction, each of which necessitates a unique task identifier. Consequently, there are a total of 14 (9 + 5) unique task embeddings.

**Training Objective**. During training, we sample 10 trials from each task, obtain the model's responses for all the sampled trials, and compute task-specific losses for each respective task. These individual losses are then aggregated, and the joint loss is utilized to perform back-propagation. In essence, the model parameters are optimized jointly over all tasks. Note that we formulate the SFR task as multi-label classification and the other tasks as multi-class or binary classification. Consequently, the task-specific losses are computed using either cross-entropy or binary cross-entropy, depending on the nature of the tasks.

**Training Details**. All our models were trained from scratch. We employed the Adam optimizer [22] with an initial learning rate of 1e-4 and a first-moment coefficient of 0.9. The learning-rate scheduler reduces the learning rate by a factor of 0.8 when the validation loss does not improve for 3 epochs. We report the average top-1 accuracy and standard error for various model architectures and capacities, computed across test trials from different conditions. For all our experiments, we used 16 NVIDIA RTX A5000 GPUs, each equipped with 24 GB of memory.

**Implementation Details of Visualizing Neural Correlates in Task Switching in Sec. 4G.** In the STS experiment, we first take the hidden representations at each t\({}_{probe}\) for all the test trials within the task and then perform t-SNE [41] to project these hidden representations to 2D. Next, we introduce two types of color schemes to label the hidden representations. First, on the left panel in **Fig. 5A**, we color-code the points based on the ground truth at that particular t\({}_{probe}\). We assign a blue color to label those hidden representations where the ground truth is 0, i.e.,
when the marker is either at the left or top of the grid, and we assign an orange color to label those hidden representations where the ground truth is 1, i.e., when the marker is either at the right or bottom of the grid. In essence, we color-code the points based on the ground truth response for that particular t\({}_{probe}\). Second, on the right panel in **Fig. 5A**, we color-code the hidden representations based on the task that was supposed to be performed at t\({}_{probe}\). We assign a green color to label those hidden representations when the task at hand is to discriminate top versus bottom, and we assign a red color to label those hidden representations when the task at hand is to discriminate left versus right. By color-coding the neural representations based on the task to be performed at t\({}_{probe}\), we observed the emergence of two distinct clusters corresponding to the two tasks involved, namely top-versus-bottom and left-versus-right. This observation highlights how the model's neural representations encode and maintain task identity over time, enabling accurate performance in task-switching experiments. Importantly, it should be noted that at t\({}_{probe}\), the model does not receive any explicit information regarding the task to be performed at that specific time step.

**Implementation Details of Visualizing Task-Specialized Neural Clusters in Sec. 4H**. Task Variance (TV) [45] is a scalar that quantifies the selectivity of a hidden unit within a recurrent model towards a specific task, where higher values indicate greater selectivity. To obtain the results in **Fig. 5B, C** in the main text, we first compute the TV of each hidden unit for each task, resulting in a TV matrix of size \(10\times 256\) for LSTM-256. We normalize the task variance of each hidden unit such that the maximum normalized TV across all tasks is 1. We then perform t-SNE along the TV dimensions and project the TV vector of each hidden unit to 2D. Subsequently, we identify 8 clusters among the hidden units and assign unique colors to each cluster (**Fig. 5B**). In line with [45], we chose the optimal number of clusters based on the highest silhouette score. We then group the hidden units based on their clusters. The sorted TV matrix based on the clusters is shown in **Fig. 5C** in the main text, where the intensity denotes the magnitude of the normalized TV values.

Given a pair of tasks A and B, the fractional task variance (FTV) [45] defines the preference of a hidden unit for one task over the other. For a particular hidden unit \(i\) in a WM model, the FTV with respect to task A and task B is defined as \(FTV_{i}(A,B)=\frac{TV_{i}(A)-TV_{i}(B)}{TV_{i}(A)+TV_{i}(B)}\), where \(TV_{i}(A)\) and \(TV_{i}(B)\) are the TVs for tasks A and B, respectively. FTV\((A,B)\) ranges from \(-1\) to \(1\), with 1 indicating stronger selectivity for task A over B, and vice versa. FTV \(=0\) when the hidden unit is neutral about both tasks and stays equally active during both tasks. To obtain the results in **Fig. 5D** in the main text, for each pair of tasks, we compute the FTV of each hidden unit and plot the histogram of the total number of hidden units based on their FTV values. In the histogram, the x-axis denotes the FTV, whereas the y-axis denotes the proportion of hidden units. If the histogram distribution is skewed towards FTV \(=1\), it means that the majority of the hidden units are more selective to task A, and vice versa if the distribution is skewed toward FTV \(=-1\). If the distribution is unimodal in the center, it implies that the majority of the hidden units do not have a task preference and are equally active in both tasks. In the case of a bimodal distribution where the two modes are located at both ends, there are two separate populations of neurons, dedicated to the two individual tasks, respectively. A short sketch of the TV and FTV computations is given below.
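The following NumPy sketch shows one way to compute these quantities from recorded hidden states; the variance estimator and array layout here are simplifying assumptions, as [45] defines TV over stimulus conditions.

```python
import numpy as np

def task_variance(h):
    """TV of each hidden unit on one task (sketch).

    h: (n_trials, n_steps, n_units) hidden states recorded on that task;
    TV is taken here as each unit's activation variance over trials/steps."""
    return h.reshape(-1, h.shape[-1]).var(axis=0)              # (n_units,)

def fractional_task_variance(tv_a, tv_b, eps=1e-12):
    """FTV_i(A, B) = (TV_i(A) - TV_i(B)) / (TV_i(A) + TV_i(B)), in [-1, 1]."""
    return (tv_a - tv_b) / (tv_a + tv_b + eps)

# hidden_by_task: hypothetical list of 10 arrays, one per task.
# tv = np.stack([task_variance(h) for h in hidden_by_task])    # (10, n_units)
# tv_norm = tv / tv.max(axis=0, keepdims=True)   # per-unit max over tasks = 1
# ftv = fractional_task_variance(tv[i], tv[j])   # histogram input for pair (i, j)
```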
## Appendix C Results

We analyzed various memory characteristics in **Fig. 4**. Here, we expand over different model architectures and model capacities in **Fig. S2, S3, S4, S5, S6, and S7**. We found that the performances in top-1 accuracy vary across architectures and memory capacities. In general, the larger the memory capacity, the better the behavioral performance. However, there are a few exceptions where the models with smaller capacities yield higher accuracy (e.g., **Fig. S6**, J versus K). We also notice that some tasks are extremely difficult for all WM models to learn, regardless of the architecture. For example, in **Fig. S4**, almost all the WM models fail. Even with a large memory capacity of 1024, which proves to be sufficient for performing other WM tasks, the model still performs poorly in the CS task. More surprisingly, even with state-of-the-art transformer architectures, these WM models still underperform in some WM tasks, such as the VSR task in **Fig. S2**. While the recurrent architectures generally perform better in top-1 accuracy and exhibit the primacy and recency effects (**Sec. 4A**), the transformer architectures fail to replicate this phenomenon, and their top-1 accuracy is much lower than that of the recurrent architectures, with some even close to chance performance.

We also visualize the task similarity matrix based on the learned task embeddings for each WM model of different memory capacities in **Fig. S10**. As expected, we observe the brightest values along the diagonal of the task similarity matrix, as each task's embedding is compared against itself. Interestingly, we found that in recurrent architectures, task similarities between different conditions within the CD task are high (**Fig. S10A-I**). However, we did not observe such strong similarities in transformers (**Fig. S10J, K**).

Figure S2: **Behavioral accuracy as a function of list lengths in VSR task.** In addition to **Fig. 4A** in the main text, we show the results of three recurrent architectures (first 3 rows) with three memory capacities (3 columns) and the transformer architecture with two memory capacities (J-K). The chance level is 0.11 for all the above plots.

Figure S3: **Behavioral accuracy as a function of retention interval in VIR task.** In addition to **Fig. 4B** in the main text, we show the results of four architectures with different memory capacities. The layout interpretations follow **Fig. S2**. The chance level is 0.5 for all the above plots.

Figure S4: **Behavioral accuracy as a function of memory domain conflicts in CS task.** In addition to **Fig. 4C** in the main text, we show the results of four architectures with different memory capacities. The layout interpretations follow **Fig. S2**. The chance level is 0.04 for all the above plots.

Figure S5: **Behavioral accuracy as a function of set size in SMU task.** In addition to **Fig. 4D** in the main text, we show the results of four architectures with different memory capacities. The layout interpretations follow **Fig. S2**. The chance level is 0.11 for all the above plots.

Figure S6: **Behavioral accuracy as a function of memory resolution \(n\) in VSRec task.** In addition to **Fig. 4E** in the main text, we show the results of four architectures with different memory capacities. The layout interpretations follow **Fig. S2**. The chance level is 0.5 for all the above plots.
Figure S7: **Behavioral accuracy as a function of features and conjunctions in CD task.** In addition to **Fig. 4F** in the main text, we show the results of four architectures with different memory capacities. The layout interpretations follow **Fig. S2**. The chance level is 0.5 for all the above plots.

Figure S8: **Training curves for different model architectures and capacities.** The y-axis shows the joint validation accuracy across all tasks, and the x-axis shows epochs.

Figure S9: **Training curve of individual tasks for LSTM-1024.** The y-axis shows the validation accuracy for the tasks, and the x-axis shows epochs. The training curves for other models show similar trends; thus, they are omitted for simplicity. Here, we plot the model with the highest joint validation accuracy, i.e., LSTM-1024.

Figure S10: **Visualization of similarity matrix based on learned task embeddings from WM models with different memory capacities.** We take the task embedding learned by each WM model with different memory capacities and compute the cosine similarity between a pair of task embedding vectors from two corresponding tasks. We present the similarity matrix between tasks. Each row of the similarity matrix is normalized such that the maximum similarity is 1 and the minimum is 0. See the color bar on the right for the normalized similarity values. Brighter values indicate that the learned task embeddings are more similar for the two corresponding tasks.
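A minimal sketch of this similarity computation is given below; `task_embeddings` stands in for the learned \(M\in\mathbb{R}^{14\times 14}\) parameter, and the name and layout are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def task_similarity_matrix(task_embeddings):
    """Row-normalized cosine-similarity matrix between task embeddings (sketch).

    task_embeddings: (14, d) tensor with one learned vector per task identifier."""
    e = task_embeddings
    sim = F.cosine_similarity(e[:, None, :], e[None, :, :], dim=-1)  # (14, 14)
    # Normalize each row to [0, 1], as described in the Fig. S10 caption.
    mn = sim.min(dim=1, keepdim=True).values
    mx = sim.max(dim=1, keepdim=True).values
    return (sim - mn) / (mx - mn)
```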
## Appendix D Datasheet for WorM Dataset

### Motivation

1. **For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.** Despite significant research progress in studying individual aspects of working memory (WM), there remains a huge gap in the broader study of WM. We take initial steps in this direction by establishing a systematic and quantitative methodology to study the multiple facets of WM. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities.
2. **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset was created by Ankur Sikarwar and Mengmi Zhang from the Deep NeuroCognition Lab, in affiliation with the Center for Frontier AI Research (CFAR) and the Institute for Infocomm Research (I2R), Agency for Science, Technology, and Research (A*STAR), Singapore.
3. **Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.** The creation of the dataset is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025) and its NRFF award NRF-NRFF15-2023-0001.
4. **Any other comments?** N/A

### Composition

1. **What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.** The dataset consists of 10 distinct working memory tasks. Each data instance in the dataset represents a trial that is time-based and includes stimuli and corresponding responses for various time steps within the trial.
2. **How many instances are there in total (of each type, if appropriate)?** The dataset consists of 10 working memory (WM) tasks. One of these tasks, i.e., the CD task, contains 5 separate task conditions, each of which has separate training, validation, and testing trials. For each of these task conditions within the CD task and the 9 remaining WM tasks, there are 86,400 trials for training, 9,600 trials for validation, and 9,600 trials for testing.
3. **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).** Since the dataset is synthetic, it is possible to generate an arbitrary number of samples with the provided code. For consistency, we limit the total number of generated samples, including training, validation, and testing, to 105,600 trials for each task.
4. **What data does each instance consist of? "Raw" data (e.g., unprocessed text or images) or features? In either case, please provide a description.** Every instance in the dataset includes raw image stimuli corresponding to each time step within the trial, along with the corresponding responses for that particular trial.
5. **Is there a label or target associated with each instance? If so, please provide a description.** The label for each instance is the ground-truth response corresponding to the task and the specific trial.
6. **Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.** There is no missing information; the data is complete.
7. **Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? If so, please describe how these relationships are made explicit.** N/A
8. **Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.** We propose our own training, validation, and testing splits for each task; these are available on the GitHub repository.
9. **Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.** We generate trials through random sampling. While it is theoretically possible to generate the same trial more than once, the probability of this occurrence is extremely low due to the vast number of possible combinations of stimuli across multiple time steps.
10. **Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?
If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.** The dataset is self-contained.
11. **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)? If so, please provide a description.** N/A. The dataset is synthetic.
12. **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why.** N/A. The dataset is synthetic.

### Collection Process

N/A. No data collection process was involved. The dataset has been synthetically generated, and we make our dataset generation code available on GitHub. The human data were collected from the existing literature; we directly use the published human data and plot them side by side for comparison with WM models.

### Preprocessing/cleaning/labeling

1. **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section.** No preprocessing was done.
2. **Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the "raw" data.** N/A
3. **Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.** N/A
4. **Any other comments?** N/A

### Uses

1. **Has the dataset been used for any tasks already? If so, please provide a description.** The dataset is used for the first time in this work.
2. **Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.** N/A
3. **What (other) tasks could the dataset be used for?** The dataset itself contains ten well-defined WM tasks and is intended to be used for those specific tasks.
4. **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms?** No, there are no such risks or harms; the dataset contains no human data.
5. **Are there tasks for which the dataset should not be used? If so, please provide a description.** No.
6. **Any other comments?** N/A

### Distribution
1. **Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.** The dataset is publicly available.
2. **How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?** The dataset and the supplementary code are available on the GitHub repository.
3. **When will the dataset be distributed?** The dataset is available from June 2023.
4. **Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.** The dataset is released under the MIT License.
5. **Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.** N/A
6. **Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.** N/A
7. **Any other comments?** N/A

### Maintenance

1. **Who will be supporting/hosting/maintaining the dataset?** Ankur Sikarwar is supporting and maintaining the dataset.
2. **How can the owner/curator/manager of the dataset be contacted (e.g., email address)?** [email protected]
3. **Is there an erratum? If so, please provide a link or other access point.** The GitHub repository will reflect any changes or improvements made to the dataset.
4. **Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)?** This information will be available on GitHub.
5. **If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.** N/A
6. **Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.** N/A
7. **If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description.** People who want to contribute are encouraged to get in touch with the authors.
8. **Any other comments?** N/A
2306.11987
Training Transformers with 4-bit Integers
Quantizing the activation, weight, and gradient to 4-bit is promising to accelerate neural network training. However, existing 4-bit training methods require custom numerical formats which are not supported by contemporary hardware. In this work, we propose a training method for transformers with all matrix multiplications implemented with the INT4 arithmetic. Training with an ultra-low INT4 precision is challenging. To achieve this, we carefully analyze the specific structures of activation and gradients in transformers to propose dedicated quantizers for them. For forward propagation, we identify the challenge of outliers and propose a Hadamard quantizer to suppress the outliers. For backpropagation, we leverage the structural sparsity of gradients by proposing bit splitting and leverage score sampling techniques to quantize gradients accurately. Our algorithm achieves competitive accuracy on a wide range of tasks including natural language understanding, machine translation, and image classification. Unlike previous 4-bit training methods, our algorithm can be implemented on the current generation of GPUs. Our prototypical linear operator implementation is up to 2.2 times faster than the FP16 counterparts and speeds up the training by up to 35.1%.
Haocheng Xi, Changhao Li, Jianfei Chen, Jun Zhu
2023-06-21T02:45:01Z
http://arxiv.org/abs/2306.11987v2
# Training Transformers with 4-bit Integers

###### Abstract

Quantizing the activation, weight, and gradient to 4-bit is promising to accelerate neural network training. However, existing 4-bit training methods require custom numerical formats which are not supported by contemporary hardware. In this work, we propose a training method for transformers with all matrix multiplications implemented with the INT4 arithmetic. Training with an ultra-low INT4 precision is challenging. To achieve this, we carefully analyze the specific structures of activation and gradients in transformers to propose dedicated quantizers for them. For forward propagation, we identify the challenge of outliers and propose a Hadamard quantizer to suppress the outliers. For backpropagation, we leverage the structural sparsity of gradients by proposing bit splitting and leverage score sampling techniques to quantize gradients accurately. Our algorithm achieves competitive accuracy on a wide range of tasks including natural language understanding, machine translation, and image classification. Unlike previous 4-bit training methods, our algorithm can be implemented on the current generation of GPUs. Our prototypical linear operator implementation is up to 2.2 times faster than the FP16 counterparts and speeds up the training by up to 35.1%. Our code is available at [https://github.com/xijiu9/Train_Transformers_with_INT4](https://github.com/xijiu9/Train_Transformers_with_INT4).

## 1 Introduction

Training neural networks is computationally demanding. Training with low-precision arithmetic (a.k.a., fully quantized training or FQT) is promising for improving computational and memory efficiency. FQT methods add some quantizers and dequantizers to the original full-precision computational graph and replace expensive floating-point operations with cheap low-precision ones. Research in FQT aims to reduce the training numerical precision without sacrificing much convergence speed or accuracy. The required numerical precision has been reduced from FP16 [32] to FP8 [53; 45], INT32+INT8 [3], and INT8+INT5 [7]. FP8 training is implemented in Nvidia's H100 GPU with Transformer Engine [34], achieving impressive speedup for the training of large-scale transformers. Recently, the training numerical precision has been pushed down to 4 bits. Sun et al. [46] successfully trained several modern networks with INT4 activations/weights and FP4 gradients; and Chmiel et al. [8] propose a custom 4-bit logarithmic numerical format to further improve the accuracy. However, these 4-bit training methods cannot be directly utilized for acceleration, as they require custom numerical formats which are not supported on contemporary hardware.

There are significant optimization challenges in training neural networks at an extremely low 4-bit level. First, the non-differentiable quantizers in forward propagation make the loss landscape rugged, where gradient-based optimizers can easily get stuck at local optima [30]. Second, gradients are only computed approximately in low precision. Such imprecise gradients slow down the training process and can even cause the training to become unstable or diverge.

In this work, we propose a novel INT4 training algorithm for a class of popular neural networks, transformers [51]. All the costly linear operations for training transformers can be written in a matrix multiplication (MM) form.
This MM form allows us to design more flexible quantizers, which better approximate FP32 matrix multiplications by utilizing the specific structures of the activations, weights, and gradients in transformers. Our quantizers leverage advances in the field of randomized numerical linear algebra (RandNLA) [14].

For forward propagation, we find that outliers in the activation are the main reason for accuracy degradation. To suppress the outliers, we propose a _Hadamard quantizer_, which quantizes a _transformed version_ of the activation matrix. The transformation is a block diagonal Hadamard matrix, which spreads the information carried in outliers to nearby entries of the matrix and thus reduces the numerical range of the outliers.

For backpropagation, we exploit the _structural sparsity_ of activation gradients. We find that the gradients of a few tokens are extremely large. Meanwhile, the gradients for the remaining majority of the tokens are very small, even smaller than the quantization residuals of the larger gradients. Rather than computing these small gradients, it is better to save the computational resources for calculating the residuals of the larger gradients. To utilize such sparsity, we propose _bit splitting_, which splits the gradient of each token into higher 4 bits and lower 4 bits. Then, we choose the most informative gradients by _leverage score sampling_, which is an importance sampling technique from RandNLA.

Combining the quantization techniques for forward and backward propagation, we propose an algorithm that uses INT4 MMs for all linear operations in transformers. We evaluate our algorithm for training transformers on a wide variety of tasks, including natural language understanding, question answering, machine translation, and image classification. Our algorithm achieves competitive or superior accuracy compared with existing works on 4-bit training [46; 8]. Moreover, our algorithm _is compatible with contemporary hardware_ like GPUs, since it does not require custom numerical formats like FP4 or logarithmic formats. Our prototypical quantization + INT4 MM operator implementation is up to 2.2 times faster than the FP16 MM baseline, and it speeds up the training by up to 35.1%.

## 2 Related Work

**Fully Quantized Training.** Fully quantized training (FQT) [32; 53; 45; 3; 15; 1; 56; 64; 28; 29; 58; 67] methods accelerate training by quantizing the activations, weights, and gradients to low precision, so linear and nonlinear operators during training can be implemented with low-precision arithmetic. Research on FQT designs novel numerical formats and quantization algorithms that better approximate full-precision tensors. The current research frontier is 4-bit FQT. FQT is challenging due to the vast numerical range of the gradient and the optimization issues of training quantized networks from scratch. Due to these challenges, existing 4-bit FQT algorithms [46; 8] still have a \(\sim\)1-2.5% accuracy drop on several tasks, and they are not supported by contemporary hardware.

**Other Efficient Training Methods.** Mixture-of-experts [42] improves the model capacity without increasing the training budget. Structural dropout [21; 17] exploits computationally efficient ways to regularize the model. Efficient attention [26; 10] reduces the quadratic time complexity of computing attention. Distributed training systems [38; 22] reduce training time by leveraging more computational resources. Our work on reducing numerical precision is orthogonal to these directions.
## 3 Forward Propagation Neural network training is an iterative optimization procedure with stochastic gradients computed by forward and back propagation. We accelerate forward and back propagation with 4-bit integer (INT4) arithmetic. We first describe the forward propagation of our training procedure. The forward propagation can be formulated as a composition of linear and non-linear (GeLU, normalization, softmax, etc.) operators. In our training procedure, we accelerate all the linear operators with INT4 arithmetic and leave all the less-computationally-intensive non-linear operators in the 16-bit floating-point (FP16) format. All linear operations in transformers can be written in a matrix multiplication (MM) form. For ease of presentation, we consider the acceleration of the following simple matrix multiplication throughout this paper: \[\mathbf{Y}=\mathbf{X}\mathbf{W}^{\top},\text{where }\mathbf{Y}\in\mathbb{R}^{N\times C},\mathbf{X}\in\mathbb{R}^{N\times D}\text{ and }\mathbf{W}\in\mathbb{R}^{C\times D}. \tag{1}\] The most predominant use case of such an MM is the fully-connected layer. Consider a transformer with an input shape of _(batch size \(S\), sequence length \(T\), dimensionality \(D\))_. The fully-connected layer can be written as Eq. (1), where \(\mathbf{X}\) is the activation for \(N=ST\) tokens, and \(\mathbf{W}\) is the weight matrix. For attention layers, batch matrix multiplications (BMMs) might be required. Our proposed techniques can be applied to BMMs, and we leave the discussion of BMMs to Appendix A.1. ### Learned Step Size Quantization To accelerate training, the forward propagation must be computed with integer arithmetic. We leverage the _learned step size quantizer_ (LSQ) [16] for this purpose. LSQ is a static quantization method whose quantization scale does not depend on the input, and is thus cheaper than dynamic quantization methods [23], which need to compute the quantization scale dynamically per iteration. Given an FP matrix \(\mathbf{X}\), LSQ _quantizes_ \(\mathbf{X}\) to integers with \[\text{int}_{s_{X}}\left(\mathbf{X}\right):=\left\lfloor\text{clamp}(\mathbf{X}/s_{X},-Q_{N},Q_{P})\right\rceil, \tag{2}\] where \(s_{X}\) is a learnable scalar parameter, clamp restricts its input to the range \([-Q_{N},Q_{P}]\), \(\lfloor\cdot\rceil\) is a rounding operation, and \(\mathbf{X}/s_{X}\) is computed elementwise. The resultant matrix takes values from \(\{-Q_{N},-Q_{N}+1,\ldots,Q_{P}\}\). Since we aim to perform INT4 MMs, we set \(Q_{N}=Q_{P}=7\). The integer matrix can be _dequantized_ back to FP through \(\text{float}\left(\text{int}_{s_{X}}\left(\mathbf{X}\right)\right)=s_{X}\,\text{int}_{s_{X}}\left(\mathbf{X}\right)\approx\mathbf{X}\). With LSQ, Eq. (1) can be computed approximately as \(\mathbf{Y}=\mathbf{X}\mathbf{W}^{\top}\approx s_{X}s_{W}\,\text{int}_{s_{X}}\left(\mathbf{X}\right)\text{int}_{s_{W}}\left(\mathbf{W}\right)^{\top},\) where the INT4 MM \(\text{int}_{s_{X}}\left(\mathbf{X}\right)\text{int}_{s_{W}}\left(\mathbf{W}\right)^{\top}\) can be implemented efficiently on hardware. Remark: Quantization-aware training (QAT) [9; 62; 66; 23; 12; 11; 43; 59; 44; 48; 63; 2; 18; 54] is an _inference acceleration_ technique which trains networks with quantizers inserted in the forward propagation graph, so the trained network can perform efficiently during inference. QAT can compress activations/weights to extremely low precision (e.g., 1-2 bits). 
It is tempting to think that directly applying a quantizer for QAT to FQT can lead to a similarly low activation/weight bit-width. However, even only quantizing the forward propagation for FQT is much more challenging than QAT because: (1) QAT requires a converged full-precision model as initialization [16] and/or as a teacher model for knowledge distillation [2]; (2) QAT can adopt expensive multi-stage training pipelines without worrying about the convergence speed [31], while an FQT algorithm must converge as fast as full-precision training algorithms to be useful; (3) QAT may approximate the discrete quantizer with continuous functions during training [19], which cannot be implemented with integer arithmetic. Due to these challenges, it is still an open problem to do FQT with 4-bit activations/weights. ### Activation Outliers Simply applying LSQ for FQT with 4-bit activations/weights leads to accuracy degradation due to _activation outliers_ [57]. As shown in Fig. 1(a), activations have some outlier entries, which are much larger in magnitude than other entries. In this case, the step size \(s_{X}\) poses a trade-off between quantization granularity and representable numerical range. If \(s_{X}\) is large, we can represent the outliers well at the expense of representing most other entries in a very coarse manner. On the other hand, if \(s_{X}\) is small, we have to truncate the entries outside the range \([-Q_{N}s_{X},Q_{P}s_{X}]\). Unfortunately, transformers tend to store information in these outliers, and such truncation would seriously harm accuracy (see Sec. 5.2 for details). The outlier problem is particularly significant when the training task is to fine-tune a pre-trained model on new downstream tasks, since the pre-trained model contains more outliers [57] than a random initialization. There exist some works handling activation outliers for post-training quantization (PTQ). Outlier Suppression [55] discovers that LayerNorms amplify outliers, and proposes Gamma Migration and Token-Wise Clipping to solve this issue, achieving 6-bit BERT PTQ without much degradation. SmoothQuant [57] migrates the quantization difficulty of activation outliers to weights and achieves 8-bit PTQ for large language models, such as OPT-175B. Outlier Channel Splitting [65] duplicates channels containing outliers with small overhead on the size of the network. However, these methods mainly focus on PTQ or QAT, and seldom successfully deal with ultra-low 4-bit training. ### Hadamard Quantization We propose a _Hadamard quantizer_ (HQ) to solve the outlier problem. Its main idea is to quantize the matrices _in another linear space_ which has fewer outliers. The outliers in activation matrices form a feature-wise structure [57]. They are typically concentrated on a few dimensions, i.e., only a few columns of \(\mathbf{X}\) are significantly larger than others. The Hadamard transform [47] is a linear transformation that can amortize the outliers into other entries. Specifically, the Hadamard transform \(\mathbf{H}_{k}\) is a \(2^{k}\times 2^{k}\) matrix, where \[\mathbf{H}_{0}=\begin{bmatrix}1\end{bmatrix},\quad\mathbf{H}_{k}=\frac{1}{\sqrt{2}}\begin{bmatrix}\mathbf{H}_{k-1}&\mathbf{H}_{k-1}\\ \mathbf{H}_{k-1}&-\mathbf{H}_{k-1}\end{bmatrix}.\] Hadamard matrices are orthogonal and symmetric: \(\mathbf{H}_{k}=\mathbf{H}_{k}^{\top}=\mathbf{H}_{k}^{-1}\), so \(\mathbf{H}_{k}\mathbf{H}_{k}=\mathbf{I},\forall k\geq 0\). 
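To make the construction concrete, the following is a minimal sketch of the Sylvester recursion and the block-diagonal transform defined above; it is an illustration of the definitions, not the authors' kernel:

```python
import torch

def hadamard(k: int) -> torch.Tensor:
    """Normalized Hadamard matrix H_k of size 2^k x 2^k (Sylvester construction)."""
    H = torch.ones(1, 1)
    for _ in range(k):
        # H_k = [[H_{k-1}, H_{k-1}], [H_{k-1}, -H_{k-1}]] / sqrt(2)
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0) / (2 ** 0.5)
    return H

def block_hadamard(D: int, k: int) -> torch.Tensor:
    """Block-diagonal H = BlockDiag(H_k, ..., H_k); assumes D is a multiple of 2^k."""
    Hk = hadamard(k)
    return torch.block_diag(*([Hk] * (D // 2 ** k)))

# Orthogonality and symmetry: H_k H_k = I.
Hk = hadamard(3)
assert torch.allclose(Hk @ Hk, torch.eye(8), atol=1e-5)
```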
Consider any coordinate row vector\({}^{1}\) \(\mathbf{e}_{i}^{\top}\in\mathbb{R}^{2^{k}}\); every entry of \(\mathbf{e}_{i}^{\top}\mathbf{H}_{k}\) has magnitude \(2^{-k/2}\), \(\forall i\). This demonstrates the extreme case when a single outlier dominates all the remaining dimensions: the Hadamard transformation effectively turns the vector into a quantization-friendly vector of uniform magnitudes. The practical effect of the Hadamard transform on suppressing activation outliers is demonstrated in Fig. 2(b). Footnote 1: A vector whose \(i\)-th dimension is 1, and all other dimensions are 0. HQ uses a block-diagonal transformation matrix \(\mathbf{H}\in\mathbb{R}^{D\times D}\): \(\mathbf{H}=\text{BlockDiag}(\mathbf{H}_{k},\ldots,\mathbf{H}_{k})\), where \(D\) is a multiple of \(2^{k}\). To suppress outliers, we quantize a transformed version of \(\mathbf{X}\) and \(\mathbf{W}\): \[\mathbf{X}=(\mathbf{X}\mathbf{H})\mathbf{H}^{\top}\approx s_{X}\,\text{int}_{s_{X}}\left(\mathbf{X}\mathbf{H}\right)\mathbf{H}^{\top},\quad\mathbf{W}=(\mathbf{W}\mathbf{H})\mathbf{H}^{\top}\approx s_{W}\,\text{int}_{s_{W}}\left(\mathbf{W}\mathbf{H}\right)\mathbf{H}^{\top}.\] Combining the quantized matrices, we get \[\mathbf{Y}=\mathbf{X}\mathbf{W}^{\top}\approx s_{X}s_{W}\,\text{int}_{s_{X}}\left(\mathbf{X}\mathbf{H}\right)\mathbf{H}^{\top}\mathbf{H}\,\text{int}_{s_{W}}\left(\mathbf{W}\mathbf{H}\right)^{\top}=s_{X}s_{W}\,\text{int}_{s_{X}}\left(\mathbf{X}\mathbf{H}\right)\text{int}_{s_{W}}\left(\mathbf{W}\mathbf{H}\right)^{\top}, \tag{3}\] where the inverse transformations cancel with each other, and the MM can be implemented as: **Procedure** HQ-MM 1. Compute \(\mathbf{X}\mathbf{H}\) and \(\mathbf{H}^{\top}\mathbf{W}^{\top}\) in FP16. 2. Quantize the resultant matrices to INT4 by LSQ. 3. Multiply the two INT4 matrices. 4. Dequantize the resultant INT32 matrix to FP16 by multiplying by \(s_{X}s_{W}\). For time complexity, Step 1 takes \(O(2^{k}N(D+C))\) FP16 multiply-accumulates (MACs); Steps 2 and 4 take \(O(N(D+C))\) FP16 MACs in total; and Step 3 takes \(O(NDC)\) INT4 MACs. Compared with plain LSQ in Eq. (2), the amount of FP16 MACs increases by \(2^{k}\) times, from \(O(N(D+C))\) to \(O(2^{k}N(D+C))\). However, our HQ-MM is still much cheaper than an FP16 MM given \(2^{k}\ll D\) and \(2^{k}\ll C\). The number \(k\) controls a tradeoff between the ability to suppress outliers and computational complexity: larger \(k\) allows for amortizing the outliers within a larger horizon, at the cost of being more expensive. We propose an adaptive algorithm to choose \(k\) for each activation depending on the outlier scale, as discussed in Appendix A.5. The typical value is \(k=5\), while the dimensionalities \(C\) and \(D\) range from 768 to 4096. ## 4 Backpropagation We now consider accelerating the backpropagation of the linear layer with INT4 operations. The linear operator HQ-MM defined in Eq. (3) has four inputs: activation \(\mathbf{X}\), weight \(\mathbf{W}\), and step sizes \(s_{X}\), \(s_{W}\). Given the output gradient \(\nabla_{\mathbf{Y}}\mathcal{L}\) w.r.t. some loss function \(\mathcal{L}\), we need to compute the gradient of all four inputs. We discuss the computation of activation/weight gradients in this section, and leave the discussion of step size gradients to Appendix A.3. For simplicity, we omit \(\mathcal{L}\) and simply use \(\nabla_{\mathbf{Y}}\) to denote the gradient in the following text. 
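Since the backward pass reuses the quantized matrices and step sizes produced by the forward pass, it helps to see HQ-MM end to end. The sketch below simulates the INT4 MM with rounded FP values (a real kernel would run Step 3 on INT4 tensor cores); `hadamard`/`block_hadamard` refer to the sketch in Sec. 3.3, and the fixed step sizes are illustrative stand-ins for the scales LSQ learns during training:

```python
import torch

QN = QP = 7  # symmetric INT4 range {-7, ..., 7}

def lsq_quantize(X: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """int_s(X) = round(clamp(X / s, -QN, QP)); see Eq. (2)."""
    return torch.clamp(X / s, -QN, QP).round()

def hq_mm(X, W, s_X, s_W, H):
    """Y = X W^T ~ s_X s_W int(XH) int(WH)^T; see Eq. (3)."""
    Xq = lsq_quantize(X @ H, s_X)  # Steps 1-2: transform, then quantize to INT4
    Wq = lsq_quantize(W @ H, s_W)
    Z = Xq @ Wq.t()                # Step 3: INT4 MM (simulated here in FP)
    return s_X * s_W * Z           # Step 4: dequantize the INT32 result

# Toy usage: N tokens, D input features, C output features.
N, D, C, k = 8, 32, 16, 3
H = block_hadamard(D, k)
X, W = torch.randn(N, D), torch.randn(C, D)
Y = hq_mm(X, W, torch.tensor(0.1), torch.tensor(0.05), H)
```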
By the straight-through estimator \(\lfloor x\rceil^{\prime}=1\) [5] and the chain rule, we have \[\nabla_{\mathbf{W}}=s_{X}\left(\nabla_{\mathbf{Y}}^{\top}\hat{\mathbf{X}}\circ\mathbb{I}_{W}\right)\mathbf{H}^{\top},\quad\nabla_{\mathbf{X}}=s_{W}\left(\mathbb{I}_{X}\circ\nabla_{\mathbf{Y}}\hat{\mathbf{W}}\right)\mathbf{H}^{\top}, \tag{4}\] where we define \(\hat{\mathbf{X}}=\text{int}_{s_{X}}\left(\mathbf{X}\mathbf{H}\right)\), \(\hat{\mathbf{W}}=\text{int}_{s_{W}}\left(\mathbf{W}\mathbf{H}\right)\), \(\mathbb{I}_{X}=\mathbb{I}(-Q_{N}\leq\mathbf{X}/s_{X}\leq Q_{P})\), and \(\mathbb{I}_{W}=\mathbb{I}(-Q_{N}\leq\mathbf{W}/s_{W}\leq Q_{P})\). For computing the gradients, three types of matrix multiplications are required: 1. The element-wise multiplication \(\circ\) of a \(0/1\) matrix \(\mathbb{I}_{X}\) (or \(\mathbb{I}_{W}\)) with another INT4 (or INT32) matrix. This operation has low time complexity. 2. The multiplication of an INT32 matrix with an FP16 block-wise Hadamard matrix \(s_{W}\mathbf{H}^{\top}\), which also has low time complexity, as discussed in Sec. 3.3. 3. The multiplication of the FP16 gradient \(\nabla_{\mathbf{Y}}\) with an INT4 matrix \(\hat{\mathbf{X}}\) or \(\hat{\mathbf{W}}\), which we will accelerate by quantizing \(\nabla_{\mathbf{Y}}\) to INT4. In the rest of this section, we will discuss quantization methods to compute the "type 3" MMs \(\nabla_{\mathbf{Y}}^{\top}\hat{\mathbf{X}}\) and \(\nabla_{\mathbf{Y}}\hat{\mathbf{W}}\). We quantize \(\nabla_{\mathbf{Y}}\) dynamically for each MM, while \(\hat{\mathbf{X}}\) and \(\hat{\mathbf{W}}\) have already been calculated during forward propagation in Sec. 3. We start by discussing the structure of the gradient. ### Structural Sparsity of Gradients We note that the gradient matrix \(\nabla_{\mathbf{Y}}\) tends to be very sparse along the training process. Furthermore, the sparsity has a structure: a few rows (i.e., tokens) of \(\nabla_{\mathbf{Y}}\) have large entries, while most other rows are close to an all-zero vector. We illustrate this by plotting the histogram of per-row norms \(\|(\nabla_{\mathbf{Y}})_{i,:}\|\) for all the rows \(i\) in Fig. 2. Such structural sparsity arises from the heavy overparameterization [61] of modern neural networks. During almost the entire training process, the network operates in the overparameterized regime [33], where it can fit most training data well, except for a few hard examples. Therefore, the (activation) gradient will be close to zero for well-fitted data points. We find that for pretraining tasks, such structural sparsity quickly emerges after only a few training epochs. For fine-tuning tasks, the gradient is always sparse during the whole training process. ### Bit Splitting and Leverage Score Sampling Here, we discuss how to design gradient quantizers to accurately compute the MMs during backpropagation by leveraging structural sparsity. The high-level idea is that many rows of the gradient are so small that they have little impact on the parameter gradient, yet they waste abundant computation. On the other hand, the large rows cannot be accurately represented with INT4. We drop some small rows and use the saved computation to represent large rows more accurately. 
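A quick way to inspect this row structure in practice is to measure how much of the squared Frobenius mass the largest rows carry; the toy tensor below is an illustrative stand-in for a real \(\nabla_{\mathbf{Y}}\):

```python
import torch

# Toy gradient in which ~2% of token rows carry almost all of the signal.
grad_Y = torch.randn(4096, 768) * (torch.rand(4096, 1) < 0.02).float()

row_norms = grad_Y.norm(dim=1)                           # ||(grad_Y)_{i,:}|| per token
top = row_norms.topk(k=row_norms.numel() // 50).values   # largest 2% of rows
share = (top ** 2).sum() / (row_norms ** 2).sum().clamp(min=1e-12)
print(f"top 2% of rows hold {share.item():.1%} of the squared gradient mass")
```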
First, we propose _bit splitting_ (BS), which splits a full-precision matrix into higher and lower 4 bits: \[\nabla_{\mathbf{Y}}\approx s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow}+s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow}, \tag{5}\] where \(s_{\uparrow},s_{\downarrow}\) are two floating-point scalars, and \(\nabla_{\mathbf{Y}}^{\uparrow}\), \(\nabla_{\mathbf{Y}}^{\downarrow}\) are INT4 matrices representing the higher and lower 4 bits, respectively. BS can be implemented by first quantizing \(\nabla_{\mathbf{Y}}\) to INT4 as \(\nabla_{\mathbf{Y}}\approx s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow}\) and then quantizing the residual to INT4 as \(\nabla_{\mathbf{Y}}-s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow}\approx s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow}\). BS can be viewed as an INT8 representation of a matrix, where \(\nabla_{\mathbf{Y}}^{\uparrow}\) and \(\nabla_{\mathbf{Y}}^{\downarrow}\) are the higher and lower 4 bits of the INT8 representation. Next, we discuss how to compute the weight and activation gradients. Weight Gradient: As discussed earlier, the weight gradient involves the matrix multiplication \(\nabla_{\mathbf{Y}}^{\top}\hat{\mathbf{X}}\), where \(\nabla_{\mathbf{Y}}\in\mathbb{R}^{N\times C}\) and \(\hat{\mathbf{X}}\) is an \(N\times D\) INT4 matrix. By Eq. (5): \[\nabla_{\mathbf{Y}}^{\top}\hat{\mathbf{X}}\approx\left(s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow\top}+s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow\top}\right)\hat{\mathbf{X}}=\nabla_{\mathbf{Y}}^{\ddagger\top}\hat{\mathbf{X}}^{\ddagger}, \tag{6}\] where we define \(\nabla_{\mathbf{Y}}^{\ddagger}=[s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow};s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow}]\in\mathbb{R}^{2N\times C}\) and \(\hat{\mathbf{X}}^{\ddagger}=[\hat{\mathbf{X}};\hat{\mathbf{X}}]\) to be a \(2N\times D\) INT4 matrix. Eq. (6) represents the product of an INT8 representation of \(\nabla_{\mathbf{Y}}\) and an INT4 \(\hat{\mathbf{X}}\), and can be implemented by two INT4 MMs, \(\nabla_{\mathbf{Y}}^{\uparrow\top}\hat{\mathbf{X}}\) and \(\nabla_{\mathbf{Y}}^{\downarrow\top}\hat{\mathbf{X}}\). Such an MM is rather accurate since \(\nabla_{\mathbf{Y}}\) is represented with 8 bits. However, compared to a naive quantization of \(\nabla_{\mathbf{Y}}\) to INT4, BS doubles the amount of INT4 operations for the MM. We propose _leverage score sampling_ (LSS) to cut the operations of Eq. (6) by half, to the same amount as the naive MM \(s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow\top}\hat{\mathbf{X}}\). Notice that the MM in Eq. (6) can be written as the sum of \(2N\) rank-1 matrices: \[\nabla_{\mathbf{Y}}^{\ddagger\top}\hat{\mathbf{X}}^{\ddagger}=\sum_{i=1}^{2N}(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}^{\top}(\hat{\mathbf{X}}^{\ddagger})_{i,:}=\sum_{i=1}^{2N}\nabla_{\mathbf{W}_{i}}, \tag{7}\] where \(\nabla_{\mathbf{W}_{i}}=(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}^{\top}(\hat{\mathbf{X}}^{\ddagger})_{i,:}\). Due to the sparsity of \(\nabla_{\mathbf{Y}}\), the matrices \(\nabla_{\mathbf{W}_{i}}\) differ in magnitude, and small matrices can be discarded without having a big influence on the result. Our proposed LSS assigns each \(\nabla_{\mathbf{W}_{i}}\) a probability \(p_{i}\in[0,1],i=1,\cdots,2N\), that satisfies \(\sum_{i=1}^{2N}p_{i}=N\). 
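Before turning to the sampling masks, here is a minimal sketch of the bit-splitting step in Eq. (5); the absmax-style choice of the scales is an illustrative assumption rather than the paper's exact recipe:

```python
import torch

QN = QP = 7

def int4_round(X: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    return torch.clamp(X / s, -QN, QP).round()

def bit_split(grad_Y: torch.Tensor):
    """grad_Y ~ s_up * G_up + s_lo * G_lo with INT4 G_up, G_lo; see Eq. (5)."""
    s_up = grad_Y.abs().max().clamp(min=1e-12) / QP   # illustrative scale choice
    G_up = int4_round(grad_Y, s_up)                   # higher 4 bits
    resid = grad_Y - s_up * G_up                      # quantization residual
    s_lo = resid.abs().max().clamp(min=1e-12) / QP
    G_lo = int4_round(resid, s_lo)                    # lower 4 bits
    return s_up, G_up, s_lo, G_lo
```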
We define random masks \(m_{i}\sim\text{Bern}(p_{i})\) and a mask matrix \(\tilde{\mathbf{M}}\), and approximate the MM as \[\nabla_{\mathbf{Y}}^{\ddagger\top}\hat{\mathbf{X}}^{\ddagger}\approx\nabla_{\mathbf{Y}}^{\ddagger\top}\tilde{\mathbf{M}}\hat{\mathbf{X}}^{\ddagger}=\sum_{i=1}^{2N}\frac{m_{i}}{p_{i}}(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}^{\top}(\hat{\mathbf{X}}^{\ddagger})_{i,:},\text{ where }\tilde{\mathbf{M}}=\text{diag}\left(\frac{m_{1}}{p_{1}},\ldots,\frac{m_{2N}}{p_{2N}}\right),\] which is an unbiased approximation since \(\mathbb{E}\left[\nabla_{\mathbf{Y}}^{\ddagger\top}\tilde{\mathbf{M}}\hat{\mathbf{X}}^{\ddagger}\right]=\nabla_{\mathbf{Y}}^{\ddagger\top}\mathbb{E}\left[\tilde{\mathbf{M}}\right]\hat{\mathbf{X}}^{\ddagger}=\nabla_{\mathbf{Y}}^{\ddagger\top}\hat{\mathbf{X}}^{\ddagger}\). In expectation, only \(N\) of the masks \(m_{i}\) are nonzero. Therefore, LSS reduces the cost of the MM by half. For LSS to be accurate, we minimize its variance. We have: **Proposition 4.1**.: _(LSS variance for the weight gradient)_ \[\mathrm{Var}\left[\sum_{i=1}^{2N}\frac{m_{i}}{p_{i}}(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}^{\top}(\hat{\mathbf{X}}^{\ddagger})_{i,:}\right]=\sum_{i=1}^{2N}\frac{1-p_{i}}{p_{i}}\|(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\|^{2}\|(\hat{\mathbf{X}}^{\ddagger})_{i,:}\|^{2},\text{ where }\mathrm{Var}\left[\mathbf{X}\right]:=\mathbb{E}\left[\|\mathbf{X}-\mathbb{E}\mathbf{X}\|_{F}^{2}\right].\] The coefficient \(c_{i}:=\|(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\|\|(\hat{\mathbf{X}}^{\ddagger})_{i,:}\|\) is called the _leverage score_, which can be easily computed in low time complexity. When \(p_{i}\propto c_{i}\), the variance attains its minimum by the Cauchy-Schwarz inequality: \[N\sum_{i=1}^{2N}\frac{1}{p_{i}}\|(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\|^{2}\|(\hat{\mathbf{X}}^{\ddagger})_{i,:}\|^{2}=\sum_{i=1}^{2N}\frac{c_{i}^{2}}{p_{i}}\sum_{i=1}^{2N}p_{i}\geq\Big(\sum_{i=1}^{2N}c_{i}\Big)^{2},\] where the equality holds when \(p_{i}\propto c_{i}\). Intuitively, LSS can approximate the MM in Eq. (7) well with significantly lower computational cost when the leverage scores \(\{c_{i}\}\) are diverse, which is indeed the case as shown in Fig. 2. Defining \(\tilde{\mathbf{M}}^{\uparrow}\) to be the top-left \(N\times N\) submatrix of \(\tilde{\mathbf{M}}\) and \(\tilde{\mathbf{M}}^{\downarrow}\) to be the bottom-right one, we have \[\nabla_{\mathbf{Y}}^{\ddagger\top}\tilde{\mathbf{M}}\hat{\mathbf{X}}^{\ddagger}=s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow\top}\tilde{\mathbf{M}}^{\uparrow}\hat{\mathbf{X}}+s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow\top}\tilde{\mathbf{M}}^{\downarrow}\hat{\mathbf{X}},\] which can be implemented by two INT4 MMs with sampled rows/columns. Putting everything together, we propose the following MM procedure to compute the weight gradient: **Procedure LSS-MM** 1. Quantize \(\nabla_{\mathbf{Y}}\) with BS to obtain \(\nabla_{\mathbf{Y}}^{\uparrow}\) and \(\nabla_{\mathbf{Y}}^{\downarrow}\) in INT4. 2. Compute the leverage scores \(\|(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\|\|(\hat{\mathbf{X}}^{\ddagger})_{i,:}\|\) in FP16. 3. Sample the masks \(\{m_{i}\}\). 4. Sample rows of \(\nabla_{\mathbf{Y}}^{\ddagger}\) and \(\hat{\mathbf{X}}^{\ddagger}\) given the masks \(\{m_{i}\}\). 5. Compute the INT4 MMs \(\nabla_{\mathbf{Y}}^{\uparrow\top}\tilde{\mathbf{M}}^{\uparrow}\hat{\mathbf{X}}\) and \(\nabla_{\mathbf{Y}}^{\downarrow\top}\tilde{\mathbf{M}}^{\downarrow}\hat{\mathbf{X}}\). 6. Dequantize and sum up the resultant INT32 matrices to obtain the FP16 result \(\nabla_{\mathbf{Y}}^{\ddagger\top}\tilde{\mathbf{M}}\hat{\mathbf{X}}^{\ddagger}\). 
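In code, the estimator looks roughly as follows; the normalization of \(p_i\propto c_i\) with \(\sum_i p_i=N\) and the clipping of probabilities to 1 are assumptions made for illustration, and the sampled INT4 MMs are again simulated in FP:

```python
import torch

def lss_weight_grad(s_up, G_up, s_lo, G_lo, X_hat):
    """Unbiased estimate of (grad_Y^++)^T X^++ in Eq. (7) by row sampling."""
    N = G_up.shape[0]
    G = torch.cat([s_up * G_up, s_lo * G_lo], dim=0)    # grad_Y^++: 2N x C
    X2 = torch.cat([X_hat, X_hat], dim=0)               # X_hat^++:  2N x D
    # Leverage scores c_i = ||grad row|| * ||X row|| (Prop. 4.1).
    c = G.norm(dim=1) * X2.norm(dim=1)
    p = (N * c / c.sum().clamp(min=1e-12)).clamp(max=1.0)
    m = torch.bernoulli(p)                              # ~N of the 2N rows survive
    keep = m.bool()
    w = 1.0 / p[keep]                                   # m_i / p_i on kept rows
    # sum_i (m_i/p_i) grad_{i,:}^T x_{i,:}; two INT4 MMs on sampled rows in practice.
    return (G[keep] * w.unsqueeze(1)).t() @ X2[keep]    # C x D
```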
As \(\tilde{\mathbf{M}}\) only has \(N\) non-zero elements in expectation, the two matrix multiplications in Step 5 take about \(2NCD\) INT4 MACs, which aligns with the cost of the naive MM \(s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow\top}\hat{\mathbf{X}}\). The overhead of all the other steps is \(O(NC+ND)\) in total. Activation Gradient: Similar to the previous discussion, the gradient of the input can be written as \[\nabla_{\mathbf{Y}}\hat{\mathbf{W}}\approx(s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow}+s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow})\hat{\mathbf{W}}=s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow}\hat{\mathbf{W}}+s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow}\hat{\mathbf{W}}=\left(\hat{\mathbf{I}}^{\ddagger}\nabla_{\mathbf{Y}}^{\ddagger}\right)\hat{\mathbf{W}}, \tag{8}\] where we define \(\nabla_{\mathbf{Y}}^{\ddagger}=[s_{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow};s_{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow}]\in\mathbb{R}^{2N\times C}\) and \(\hat{\mathbf{I}}^{\ddagger}=[\mathbf{I}\quad\mathbf{I}]\) to be an \(N\times 2N\) INT4 matrix, where \(\mathbf{I}\) is an \(N\times N\) identity matrix. The original product can also be implemented by two INT4 MMs, \(\nabla_{\mathbf{Y}}^{\uparrow}\hat{\mathbf{W}}\) and \(\nabla_{\mathbf{Y}}^{\downarrow}\hat{\mathbf{W}}\). But different from the weight gradient, we now focus on \(\hat{\mathbf{I}}^{\ddagger}\nabla_{\mathbf{Y}}^{\ddagger}\) in Eq. (8) and do leverage score sampling on this MM. A detailed discussion can be found in Appendix B.2, and we only present the leverage score here. Similarly, we write the MM as the sum of \(2N\) smaller multiplications: \[\hat{\mathbf{I}}^{\ddagger}\nabla_{\mathbf{Y}}^{\ddagger}=\sum_{i=1}^{2N}\hat{\mathbf{I}}_{:,i}^{\ddagger}(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\approx\sum_{i=1}^{2N}\frac{m_{i}}{p_{i}}\nabla_{\mathbf{Y}_{i}},\] where we define \(\nabla_{\mathbf{Y}_{i}}=\hat{\mathbf{I}}_{:,i}^{\ddagger}(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\) and associate the probability \(p_{i}\) and the Bernoulli mask \(m_{i}\sim\text{Bern}(p_{i})\) with the \(i\)-th multiplication. The leverage score for the activation gradient is \(c_{i}:=\|(\nabla_{\mathbf{Y}}^{\ddagger})_{i,:}\|\), and the variance attains its minimum when \(p_{i}\propto c_{i}\). More details about the algorithm can be found in Appendix A.3. On the implementation side, once the masks \(\{m_{i}\}\) are known, we can decompose the MM in Eq. (8) into two INT4 MMs: \(\left(\hat{\mathbf{I}}^{\ddagger}\tilde{\mathbf{M}}\nabla_{\mathbf{Y}}^{\ddagger}\right)\hat{\mathbf{W}}=s_{\uparrow}\tilde{\mathbf{M}}^{\uparrow}\nabla_{\mathbf{Y}}^{\uparrow}\hat{\mathbf{W}}+s_{\downarrow}\tilde{\mathbf{M}}^{\downarrow}\nabla_{\mathbf{Y}}^{\downarrow}\hat{\mathbf{W}}\). ## 5 Experiments We evaluate our INT4 training algorithm on a wide variety of tasks including language model fine-tuning, machine translation, and image classification. We implement our proposed HQ-MM and LSS-MM algorithms with CUDA and cutlass\({}^{2}\), and the implementation details can be found in Appendix A. We replace all the floating-point linear operators with our INT4 implementation, except that we simply use LSQ for embedding layers and leave the last classifier layer in full precision. We adopt default architectures, optimizers, schedulers, and hyper-parameters for all the evaluated models. Footnote 2: [https://github.com/NVIDIA/cutlass](https://github.com/NVIDIA/cutlass) ### Converged Model Accuracy We compare the accuracy of the converged models on various tasks in Table 1. 
The compared methods include full-precision training (FP), INT8 training [3] (INT8), FP4 training [46] ("Ultra-low"), 4-bit logarithm quantization [8] with LSQ for activations and weights (LSQ+LUQ), and our algorithm, which utilizes HQ for forward propagation and LSS for backpropagation (HQ+LSS). \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Train type} & \multirow{2}{*}{Model} & \multirow{2}{*}{Metric} & \multicolumn{2}{c}{Baselines} & \multicolumn{2}{c}{4-bit training methods} \\ \cline{5-8} & & & & FP & INT8 & LSQ+LUQ & HQ+LSS \\ \hline \multirow{2}{*}{GLUE-dev} & \multirow{2}{*}{FT} & Bert-base & Avg & \(82.67_{0.24}\) & \(81.45_{0.13}\) & \(75.29_{0.23}\) & \(80.81_{0.21}\) \\ & & Bert-large & Avg & \(84.57_{0.42}\) & \(82.74_{0.24}\) & \(55.93_{0.47}\) & \(\mathbf{82.25_{0.38}}\) \\ \hline SQUAD v1 & FT & Bert-base & F1 & \(88.32_{0.30}\) & \(88.42_{0.20}\) & \(85.75_{0.31}\) & \(\mathbf{87.60_{0.25}}\) \\ \hline SQUAD v2 & FT & Bert-base & F1 & \(76.04_{0.68}\) & \(75.63_{0.07}\) & \(71.02_{0.41}\) & \(\mathbf{74.63_{0.18}}\) \\ \hline Adversarial QA & FT & Bert-base & F1 & \(40.99_{0.38}\) & \(40.17_{0.58}\) & \(31.85_{0.30}\) & \(\mathbf{38.70_{0.77}}\) \\ \hline SWAG & FT & Bert-base & Acc & \(79.84_{0.10}\) & \(79.18_{0.19}\) & \(70.79_{1.20}\) & \(\mathbf{77.49_{0.16}}\) \\ \hline CoNLL & FT & Bert-base & Acc & \(93.38_{0.13}\) & -- & \(87.63_{0.38}\) & \(\mathbf{91.90_{0.48}}\) \\ \hline \multirow{2}{*}{WMT} & \multirow{2}{*}{PT} & \multirow{2}{*}{Transformer-base} & BLEU & \(27.5\) & \(25.4\) (Ultra-low) & \(27.17\) & -- \\ & & & SacreBLEU & \(26.5\) & -- & -- & \(25.57\) \\ \hline \multirow{2}{*}{CIFAR10} & \multirow{2}{*}{FT} & ViT-B/32 & \multirow{2}{*}{Top1 Acc} & \(98.77_{0.98}\) & \(98.59_{0.02}\) & \(97.76_{0.10}\) & \(98.36_{0.05}\) \\ & & ViT-L/32 & & \(98.98_{0.98}\) & \(98.76\) & \(98.38\) & \(\mathbf{98.74}\) \\ \hline \multirow{2}{*}{CIFAR100} & \multirow{2}{*}{FT} & ViT-B/32 & \multirow{2}{*}{Top1 Acc} & \(91.94_{0.11}\) & \(90.99_{0.07}\) & \(88.63_{0.05}\) & \(\mathbf{89.78_{0.06}}\) \\ & & ViT-L/32 & & \(93.07\) & \(92.2\) & \(90.97\) & \(\mathbf{91.13}\) \\ \hline \multirow{4}{*}{ImageNet1k} & \multirow{3}{*}{FT} & ViT-B/32 & \multirow{4}{*}{Top1 Acc} & \(81.88\) & \(80.42\) & \(77.25\) & \(\mathbf{79.18}\) \\ & & ViT-L/32 & & \(81.62\) & \(81.3\) & \(77.41\) & \(\mathbf{80.06}\) \\ & & ViT-L/16 & & \(84.55\) & \(83.05\) & \(82.4\) & \(\mathbf{82.61}\) \\ & PT & DeiT-small & & \(73.1\) & \(70.95\) & \(\mathbf{69.96}\) & \(69.18\) \\ \hline \hline \end{tabular} \end{table} Table 1: Results on language model fine-tuning, transformer pretraining, and vision transformer fine-tuning and pretraining. Standard deviations are reported as subscripts. FT refers to fine-tuning; PT refers to pre-training. For WMT, the result of 25.4 is the result of Ultra-low, not INT8. Ultra-low does not have a public implementation, so we only report its performance from its original paper on the machine translation task. Except for the large machine translation task and the large vision transformers, we repeat each run three times and report the standard deviation as subscripts in tables. 
We do not include any kind of knowledge distillation or data augmentation. Language model fine-tuning: We use the pretrained BERT-base-uncased and BERT-large-uncased [24] models, and evaluate the performance of our method on the GLUE dev-set [52], SQUAD [40], SQUADv2 [39], Adversarial QA [4], CoNLL-2003 [41] and SWAG [60] datasets. We present the average results of the bert-base-uncased and bert-large-uncased models on the GLUE dataset. The full results are listed in Appendix C.2. Compared with LSQ+LUQ, our method achieves a \(5.5\%\) accuracy improvement on average for the bert-base model and a \(>25\%\) accuracy improvement on average for the bert-large model. We further show results on the SQUAD, SQUAD 2.0, Adversarial QA, CoNLL-2003, and SWAG datasets. On all of these tasks, our method achieves better performance than LSQ+LUQ. We improve over LSQ+LUQ by \(1.8\%\) and \(3.6\%\) on SQUAD and SQUAD 2.0, respectively. On the more difficult Adversarial QA, we improve the F1 score by \(6.8\%\). On SWAG we improve accuracy by \(6.7\%\), and on CoNLL-2003 we improve accuracy by \(4.2\%\). Machine translation: We also apply our method to pretraining. We train a Transformer-base [51] model on the WMT 14 En-De dataset [6] for machine translation. Note that we reproduce this experiment with Fairseq's recipe\({}^{3}\), which reports the SacreBLEU score (26.5 for FP) [36], while Ultra-low and LUQ report the more optimistic original BLEU score (27.5 for FP) [35]. Our HQ+LSS has about \(1.0\%\) BLEU degradation, which is smaller than the \(2.1\%\) of Ultra-low and higher than the \(0.3\%\) reported in the LUQ paper. Nevertheless, HQ+LSS still performs comparably with existing methods for this pretraining task, and it supports contemporary hardware. Footnote 3: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq) Image Classification: We load ViT checkpoints pretrained on ImageNet21k [13], and fine-tune them on CIFAR-10, CIFAR-100 [27], and ImageNet1k. We use ViT-B/32 and ViT-L/32 for the CIFAR datasets and use ViT-B/32, ViT-L/32 and ViT-L/16 for ImageNet1k. On CIFAR10 we achieve \(<0.5\%\) accuracy degradation, while LSQ+LUQ has \(1\%\) degradation for ViT-B/32 and \(0.6\%\) degradation for ViT-L/32. On CIFAR100, INT8 already has \(\sim 1\%\) accuracy degradation, which shows its difficulty. We improve over LSQ+LUQ by \(1.1\%\) accuracy for ViT-B/32 and \(0.2\%\) accuracy for ViT-L/32. On ImageNet1k, we improve over LSQ+LUQ by \(2\%\) accuracy for ViT-B/32, \(2.6\%\) accuracy for ViT-L/32, and \(0.2\%\) for ViT-L/16. We further test the effectiveness of our algorithm by pretraining a DeiT-Small model [50] on ImageNet1K, where HQ+LSS can still converge to a similar accuracy level compared to LSQ+LUQ, while being more hardware friendly. ### Ablation Study Here, we conduct ablation studies to show the effectiveness of our forward and backward methods independently on the challenging CoLA dataset. To study the effectiveness of different quantizers for forward propagation, we leave backpropagation in FP16. The result is shown in Fig. 3(a). We first validate the claim in Sec. 3.2 that outliers are the main cause of accuracy degradation in quantized forward propagation. We test an "outlier" method which maintains the largest \(1\%\) of activation entries in FP. The "outlier" method achieves good performance, which proves that outliers are indeed the most significant challenge of the transformer's forward quantization. 
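This "outlier" baseline can be simulated roughly as follows; the sketch is our reconstruction for illustration (with a simple absmax INT4 quantizer for the non-outlier entries), not the authors' code:

```python
import torch

def keep_outliers_fp(X: torch.Tensor, frac=0.01, QN=7, QP=7):
    """Keep the largest `frac` of |entries| in FP; quantize the rest to INT4."""
    k = max(1, int(frac * X.numel()))
    thresh = X.abs().flatten().topk(k).values.min()   # magnitude cutoff for outliers
    mask = X.abs() >= thresh                          # ~1% outlier entries
    s = X[~mask].abs().max().clamp(min=1e-12) / QP    # absmax scale for the rest
    Xq = s * torch.clamp(X / s, -QN, QP).round()      # quantize-dequantize everything
    return torch.where(mask, X, Xq)                   # then restore outliers in FP
```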
The hardware-unfriendly "outlier" method serves as an upper bound for methods that handle outliers. Our HQ outperforms LSQ by better handling the outliers and achieves results comparable to maintaining the outliers. We also investigated whether more granular quantizers, such as per-token or per-channel quantization, could be used to handle outliers, or whether existing methods like SmoothQuant [57] could be used for INT4 FQT. The results are listed in Appendix C.3, and we find that without HQ, none of these methods achieve good accuracy under 4-bit quantization, while the result of HQ is not strongly affected when more granular quantization methods are applied. For backpropagation, we compare a simple minimax quantizer [3], LUQ [8], and our LSS, and leave forward propagation in FP16. The minimax quantizer divides the numerical range from the minimum to the maximum into equally large quantization bins. The result is shown in Fig. 3(b). When the bit-width is higher than 2, our LSS achieves results that are comparable to, and even slightly higher than, LUQ. Meanwhile, LSS is more hardware friendly as it requires only INT4 arithmetic. ### Computational and Memory Efficiency Finally, we demonstrate the potential of our method to accelerate neural network training by evaluating our prototypical implementation discussed in Appendix A.6. We emphasize that our implementation is not fully optimized. For example, the backward computation requires an INT4 MM in the form \(\mathbf{Y}=\mathbf{A}\mathbf{B}\), while cutlass only supports \(\mathbf{Y}=\mathbf{A}\mathbf{B}^{\top}\), so an explicit transpose is required. We also do not fuse the linear operators with nonlinearities and normalizations. Therefore, the results cannot fully reflect the potential of INT4 training algorithms. A fully optimized implementation requires heavy engineering, which exceeds the scope of our paper. Operator Speed: We compare the throughput of our proposed HQ-MM (HQ), LSS for computing the weight gradient (LSSWeight), LSS for computing the activation gradient (LSSAct), and their average throughput (INT4) with a baseline tensor-core FP16 GEMM implementation (FP16) provided by cutlass in Fig. 4, on an Nvidia RTX 3090 GPU which has a peak throughput of 142 FP16 TFLOPs and 568 INT4 TFLOPs. As the matrix size grows, the overhead of quantization diminishes and our INT4 operators can be up to 2.2 times faster than the FP16 MM. We further analyze the quantization overhead for each operator in Appendix C.5. Training Throughput: We compare the training throughput of FP16 PyTorch AMP and our INT4 training algorithm for training BERT [24]- and GPT [37]-style language models on a system of 8 Nvidia A100 GPUs. We vary the hidden layer size, intermediate fully-connected layer size, and batch size, and plot the speedup of INT4 training in Fig. 5. Our INT4 training algorithm can achieve up to 35.1% speedup for BERT-style models and up to 26.5% speedup for GPT-style models. The training times can be found in Appendix C.4. ## 6 Conclusions We propose a hardware-friendly INT4 training method for transformers. By analyzing the properties of MMs in transformers, we propose the HQ and LSS methods to quantize activations and gradients while preserving accuracy. On several important tasks, our method performs comparably to or better than existing INT4 methods. Our work can potentially be extended beyond transformers to other MM-only architectures, such as MLP-Mixer [49], graph neural networks [25], and recurrent neural networks [20]. 
We leave this as a future direction. Broader Impacts: Our algorithm can improve efficiency and reduce the energy consumption of training neural networks, which helps reduce the carbon footprint caused by deep learning. However, our efficient training algorithm might also facilitate the development of large language models with safety concerns for human beings, as well as malicious AI applications such as fake content generation. Limitations: The main limitation of this work is that it can only accelerate models with a large portion of matrix multiplications (linear layers), but cannot accelerate convolution layers. Moreover, the proposed method cannot yet work well for extremely large models such as OPT-175B. To the best of our knowledge, even INT8 training is still an open problem for these large models.
2302.03874
Participatory Personalization in Classification
Machine learning models are often personalized with information that is protected, sensitive, self-reported, or costly to acquire. These models use information about people but do not facilitate nor inform their consent. Individuals cannot opt out of reporting personal information to a model, nor tell if they benefit from personalization in the first place. We introduce a family of classification models, called participatory systems, that let individuals opt into personalization at prediction time. We present a model-agnostic algorithm to learn participatory systems for personalization with categorical group attributes. We conduct a comprehensive empirical study of participatory systems in clinical prediction tasks, benchmarking them with common approaches for personalization and imputation. Our results demonstrate that participatory systems can facilitate and inform consent while improving performance and data use across all groups who report personal data.
Hailey Joren, Chirag Nagpal, Katherine Heller, Berk Ustun
2023-02-08T04:24:19Z
http://arxiv.org/abs/2302.03874v2
# Participatory Systems for Personalized Prediction ###### Abstract Machine learning models are often personalized based on information that is protected, sensitive, self-reported, or costly to acquire. These models use information about people, but do not facilitate nor inform their _consent_. Individuals cannot opt out of reporting information that a model needs to personalize their predictions, nor tell if they would benefit from personalization in the first place. In this work, we introduce a new family of prediction models, called _participatory systems_, that allow individuals to opt into personalization at prediction time. We present a model-agnostic algorithm to learn participatory systems for supervised learning tasks where models are personalized with categorical group attributes. We conduct a comprehensive empirical study of participatory systems in clinical prediction tasks, comparing them to common approaches for personalization and imputation. Our results demonstrate that participatory systems can facilitate and inform consent in a way that improves performance and privacy across all groups who report personal data. Informed Consent, Personalization, Participation, Data Privacy, Algorithmic Fairness, Healthcare, Clinical Decision Support ## 1 Introduction Machine learning models routinely assign predictions to _people_ - be it to predict if a patient has a rare disease, the risk that a consumer will default on a loan or the likelihood that a student will matriculate. Models in such settings are often _personalized_, in that they use personal information to target heterogeneous subpopulations. Typically, models are personalized with categorical attributes that specify groups [i.e., categorization as per the taxonomy of 26]. In medicine, for example, clinical prediction models include _group attributes_ that are _protected_ (e.g., sex in the CHA\({}_{2}\)DS\({}_{2}\) Score for Stroke Risk), _sensitive_ (e.g., HIV status in the VA COVID-19 Mortality Score), _self-reported_ (e.g., first_menstral_period in the Gail Breast Cancer Risk Scores), or _costly_ to acquire (e.g., lab_values for Alvarado Acute Appendicitis Score). Online platforms that solicit personal data from individuals are designed to support _informed consent_: individuals can opt out of providing personal data, and understand how it will be used to support their experience [see e.g., GDPR consent banners 27, 34]. Personalized models do not provide such functionality: individuals cannot opt out of reporting data used to personalize their predictions, nor tell if it would improve their predictions. In effect, models are built under the assumption that data available at training time will also be available at prediction time. In practice, this has led to a proliferation of models that require individuals to report information they may be unwilling or unable to provide - see e.g., Denver HIV Risk Score, which requires individuals to report age, gender, sexual practices, and ethnicity [30]. In settings where individuals can input their data directly (e.g., online medical diagnostics), individuals may decline to report optional information that would improve their predictions, or report information that is wrong by reporting untruthfully or pigeonholing themselves into a category they do not identify with. The broader lack of support for informed consent in personalization is problematic because standard techniques for personalization do not improve performance for all groups who report personal data [see 44, 53]. 
In practice, a personalized model can perform _worse_ or the same as a _generic model_ trained without personal information for a group with specific characteristics. Such models violate the implicit promise of personalization - as individuals report personal information without receiving a tailored performance gain in return. These instances of "worsenalization" are prevalent, hard to detect, and hard to resolve [see 53] - but could easily be mitigated by allowing individuals to opt out of personalization, and informing them of its gains (see Fig. 1). In this paper, we introduce a family of machine learning models called _participatory systems_ that facilitate and inform consent. Participatory systems _facilitate_ consent by allowing individuals to report additional personal data at prediction time, and _inform_ consent by showing them how it will affect their predictions. Models that facilitate consent operate as _markets_ in which individuals report personal data in exchange for performance gains, and model developers promote participation by ensuring that reporting will lead to gains for each group. In the context of personalization, incentives are aligned as all parties benefit from more accurate predictions. In turn, the technical challenges stem from designing markets that will operate efficiently - i.e., models that facilitate and inform consent while performing as well as possible. This work addresses these challenges by developing systems that: (i) perform well when individuals opt in (to promote participation) or opt out (to safeguard against abstention); (ii) provide multiple opportunities for individuals to decide what to report and to understand its gain (to facilitate and inform consent). The resulting systems can produce large improvements in performance and privacy across all groups who report personal data, tailoring predictions when personal data improves them and limiting unnecessary data collection when it does not. Figure 1: Simple classification task where participation improves performance and limits data collection. We are given \(n^{+}=50\) positive and \(n^{-}=51\) negative examples for 4 groups defined by the attributes sex x age. We fit the best linear model with a one-hot encoding of group attributes \(h\), and evaluate the gains from personalization with respect to the best linear model without group attributes \(h_{0}\). In a traditional model (left), individuals must report group membership to \(h\). Here, personalization would reduce error from 50 to 24, but assigns the same predictions to [male, young] and detrimental predictions to [female, old]. In a minimal participatory system (right), individuals who opt into personalization receive predictions from \(h\) while those who opt out receive predictions from \(h_{0}\). In this case, individuals in groups [female, old] and [male, young] would opt out of personalization. The resulting system would achieve an overall error rate of 0 and reduce unnecessary data collection. The main contributions of this work are: 1. We introduce a family of prediction models to facilitate and inform consent in supervised learning tasks. 2. We develop a model-agnostic algorithm to learn participatory systems. Our approach can produce a variety of systems that promote participation in deployment and that handle constraints on data use and acquisition. 3. We conduct a comprehensive empirical study of participatory systems in clinical prediction tasks. 
Our results show how our approach can facilitate and inform consent in a way that improves performance and limits unnecessary data collection. 4. We provide a Python library to build and evaluate participatory systems, available here. Related Work. Data Privacy: Participatory systems support modern principles of responsible data use such as _informed consent_ and _collection limitation_ (i.e., data should be collected with the consent of a data subject, and restricted to only what is necessary). These principles are articulated in, e.g., OECD guidelines [42], the GDPR [27], and the California Consumer Privacy Act [17]. These principles stem from a long line of work on the right to data privacy [34]. They are motivated - in part - by a line of work showing that individuals care deeply about their ability to control personal data [5, 10, 12] but differ considerably in their desire or capacity to share it [see e.g. 7, 9, 11, 18, 19, 40, 43]. Personalization: We study personalization where models are personalized with categorical attributes that encode personal characteristics [i.e., "categorization" rather than "individualization" as per the taxonomy of 26]. Modern techniques build on extensive work for learning models with categorical data [see e.g., 3, 49] to improve model performance at a population level using group attributes - e.g., by accounting for higher-order interaction effects [15, 39, 56] and recursive partitioning [13, 14, 16, 25]. Our work provides an alternative approach to personalization in settings where we may wish to facilitate and inform consent - e.g., when we must assign predictions using features that are collected at prediction time [see e.g., 2, 8, 58]. Algorithmic Fairness: Our work is broadly related to algorithmic fairness in that it seeks to improve model performance at a group level. In particular, our goal is to build systems that perform as well as possible for each group that reports personal data [55]. These systems naturally ensure the "fair use" of group attributes for personalization [44, 53, 55] - which are necessary conditions for each group to report personal information voluntarily and truthfully. This line of work broadly complements research on preference-based fairness [24, 36, 55, 57, 59], on ensuring group fairness across complex group structures [28, 31, 35], and on the study of privacy across subpopulations [11]. ## 2 Participatory Systems We consider a supervised learning task where we personalize a model with categorical attributes. We start with a dataset \(\{(\mathbf{x}_{i},y_{i},\mathbf{g}_{i})\}_{i=1}^{n}\) where each example consists of a feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), a label \(y_{i}\in\mathcal{Y}\), and a vector of \(k\) categorical attributes \(\mathbf{g}_{i}=[g_{i,1},\ldots,g_{i,k}]\in\mathcal{G}_{1}\times\ldots\times\mathcal{G}_{k}=\mathcal{G}\) used for personalization. We refer to \(\mathcal{G}\) as _group attributes_ and to \(\mathbf{g}_{i}\) as the _group membership_ of person \(i\). We consider a setting where each person can opt out of personalization by declining to report group attributes at prediction time. We let \(\varnothing\) denote the value of a group attribute that a person does not report and let \(\mathbf{r}_{i}=[r_{i,1},\ldots,r_{i,k}]\in\mathcal{R}\subseteq\mathcal{G}\times\mathbf{\varnothing}\) denote the _reported group membership_ of person \(i\). 
For example, a person with \(\mathbf{g}_{i}=[\texttt{female},\texttt{HIV}=\texttt{+}]\) could report \(\mathbf{r}_{i}=[\texttt{female},\varnothing]\) by declining to report \(\texttt{HIV}\) and \(\mathbf{r}_{i}=\mathbf{\varnothing}:=[\varnothing,\ldots,\varnothing]\) by opting out of personalization entirely. Each model specifies a set of _reporting options_ \(\mathcal{R}\) that are available to individuals at prediction time. Thus, a model that did not allow individuals to opt out of personalization would have \(\mathcal{R}=\mathcal{G}\), and a model that allows individuals to report any subset of group attributes would have \(\mathcal{R}=\mathcal{G}\times\mathbf{\varnothing}\). We use the dataset to train a model \(h:\mathcal{X}\times\mathcal{R}\rightarrow\mathcal{Y}\) by empirical risk minimization with a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\). We denote the _empirical risk_ and _true risk_ of a model \(h\) as \(\hat{R}(h)\) and \(R(h)\), respectively. Given a model, we evaluate model performance for each _reporting group_ - i.e., over individuals who report specific values \(\mathbf{r}\in\mathcal{R}\). As part of this evaluation, we consider how model performance changes for group \(\mathbf{r}\) if they were to report \(\mathbf{r}^{\prime}\). Given a model \(h\), we measure its true risk and empirical risk for group \(\mathbf{r}\in\mathcal{R}\) when they report \(\mathbf{r}^{\prime}\in\mathcal{R}\) as: \[R_{\mathbf{r}}(h(\cdot,\mathbf{r}^{\prime})):=\mathbb{E}\left[\ell\left(h(\mathbf{x},\mathbf{r}^{\prime}),y\right)\mid\mathcal{R}=\mathbf{r}\right]\qquad\hat{R}_{\mathbf{r}}(h(\cdot,\mathbf{r}^{\prime})):=\frac{1}{n_{\mathbf{r}}}\sum_{i:\mathbf{r}_{i}=\mathbf{r}}\ell\left(h(\mathbf{x}_{i},\mathbf{r}^{\prime}),y_{i}\right).\] We compute the _gains of personalization_ for each reporting group by comparing the performance of the personalized model to that of a _generic model_ \(h_{0}:\mathcal{X}\rightarrow\mathcal{Y}\). The generic model represents the best model trained on a dataset without group attributes, \(h_{0}\in\operatorname*{argmin}_{h\in\mathcal{H}_{0}}\hat{R}(h)\). We denote the gains of personalization for a reporting group \(\mathbf{r}\) in terms of true risk and empirical risk as: \[\Delta_{\mathbf{r}}(h,h_{0}):=R_{\mathbf{r}}(h_{0})-R_{\mathbf{r}}(h)\qquad\hat{\Delta}_{\mathbf{r}}(h,h_{0}):=\hat{R}_{\mathbf{r}}(h_{0})-\hat{R}_{\mathbf{r}}(h).\] We wish to ensure gains in terms of true risk, but can only measure the gains in terms of empirical risk. We assume that individuals prefer to receive more accurate predictions. This assumption holds in personalization tasks where all individuals prefer to receive correct predictions - e.g., when predicting the risk of a serious illness [see e.g., 38, 50, 51]. It does not hold in applications where some individuals may prefer inaccurate predictions - e.g., predicting the risk of organ failure for an organ transplant [and other "polar" applications in 45]. System Architecture: In Fig. 2, we show three participatory systems that differ in terms of their reporting options, their ability to inform consent, and their training and implementation requirements: _Minimal systems_ let individuals opt out of receiving predictions from an existing personalized model \(h\). Individuals who opt out receive predictions from a generic model \(h_{0}\) trained without group attributes. These systems can be built by training one additional model. 
_Flat systems_ let individuals opt into partial personalization by reporting any subset of group attributes. This architecture allows individuals to receive personalized predictions without reporting specific characteristics. For example, a person with \(\mathbf{g}_{i}=\texttt{[old,female]}\) can report \(\mathbf{r}_{i}=\texttt{[old,\varnothing]}\). These systems can improve performance by using a distinct model to assign personalized predictions to each reporting group. _Sequential systems_ let individuals opt into partial personalization by reporting one group attribute at a time. This architecture is better suited for informing consent, as users can make each reporting decision by comparing one model at a time rather than \(2^{k}\) models at once. They are also well-suited for settings where group attributes encode information that must be acquired at prediction time (e.g., the outcome of a test result). We represent the interface of each system as an \(M\)-ary tree [i.e., a tree with at most \(M\) branches; 37] whose nodes map personalized models to a reporting group. Each tree starts with a generic model at its root and branches out as individuals make reporting decisions. A minimal system corresponds to a tree of depth 1 with \(M=|\mathcal{G}|+1\) leaves. A flat system corresponds to a tree of depth 1 with \(M=|\mathcal{R}|\) leaves. A sequential system corresponds to an \(M\)-ary tree of depth \(k\) where \(M=\max(|\mathcal{G}_{1}|,\ldots,|\mathcal{G}_{k}|)\) is the maximum number of categories for any group attribute. Desiderata: We stipulate that a participatory system \(h:\mathcal{X}\times\mathcal{R}\rightarrow\mathcal{Y}\) should meet the following key requirements: _Protect Abstention_: The system should ensure that individuals who opt out of personalization are assigned predictions that achieve the performance of a model trained without this information. This requires the system to perform at least as well as a generic model \(h_{0}\). In practice, this requirement sets a baseline level of performance that model developers can expect when individuals opt out and ensures that the gains used to inform individuals are measured with respect to a model trained in good faith. _Promote Participation_: The system should maximize the personalization gains for each reporting group \(\mathbf{r}\in\mathcal{R}\). Formally, this requires systems that maximize the gains \(\Delta_{\mathbf{r}}(h_{\mathbf{r}},h_{0})\) over a generic model \(h_{0}\). In practice, this requirement promotes participation across reporting groups by improving the relative benefits of opting in. ## 3 Learning Participatory Systems In this section, we describe a model-agnostic algorithm to learn participatory systems that meet the requirements. Our procedure takes as input a pool of candidate models \(\mathcal{M}\), a training dataset \(\mathcal{D}^{\text{train}}\), and a validation dataset \(\mathcal{D}^{\text{valid}}\). It outputs a collection of participatory systems that ensure personalization gains across reporting groups. By assigning personalized models over a reporting interface, our algorithm can produce the three types of participatory systems in Fig. 2. Our approach combines routines for generating a set of viable reporting interfaces (Line 1); assigning models over the interface (Line 3); and pruning the interface to limit data collection when it does not lead to gains (Line 4). We summarize the procedure in Algorithm 1 and present a detailed description of each routine in Appendix B. 
Model Pool: Our procedure takes as input a pool of personalized models \(\mathcal{M}\) that can be assigned to nodes in a reporting tree. At a minimum, \(\mathcal{M}\) should contain two models: a personalized model \(h\) for individuals who opt into personalization and a generic model \(h_{0}\) for individuals who opt out of personalization. The pool can contain models trained on different subsets of data (e.g., a model trained on female patients only), and fit from different model classes (e.g., linear classifiers and random forests). As the best personalized model can vary across intersectional groups, using a pool of models allows practitioners to personalize for each group. By default, we include "decoupled models" for each reporting group trained using only data for that group, as such models can perform well on heterogeneous subgroups [47, 53, 55]. Figure 3: Performance profile of participatory systems for the saps dataset for individuals with group membership \(\mathbf{g}_{i}=[30+,\texttt{HIV}+]\). We show test performance with respect to participation in the target population. Here, we control participation by varying the cost of reporting in a simulated model for individual disclosure described in Appendix A. As shown, minimal and sequential systems always outperform the generic model regardless of participation. In regimes where the cost of reporting is low, participation is high. Consequently, a minimal system will achieve the same performance as a personalized model, and a sequential system will achieve the performance of the component model for this subpopulation. We provide additional details and results for other groups in Appendix A. Figure 2: Participatory systems for a task with group attributes \(\mathcal{G}=\texttt{sex}\times\texttt{age}=[\texttt{male},\texttt{female}]\times[\texttt{old},\texttt{young}]\). Each system allows a person to opt out of personalization, informing their choice through comparisons between nested models. Systems limit unnecessary data collection by "pruning" reporting options that do not lead to gains - e.g., \([\texttt{young},\texttt{female}]\) is pruned in all systems as it leads to a \(\texttt{gain}\leq 0.0\%\).
```
Input: \(\mathcal{D}^{\text{assign}}=\{(\mathbf{x}_{i},\mathbf{g}_{i},y_{i})\}_{i=1}^{n^{\text{assign}}}\)  assignment (training) dataset
       \(\mathcal{D}^{\text{prune}}=\{(\mathbf{x}_{i},\mathbf{g}_{i},y_{i})\}_{i=1}^{n^{\text{prune}}}\)  pruning (validation) dataset
       \(\mathcal{M}=\{h:\mathcal{X}\times\mathcal{G}\rightarrow\mathcal{Y}\}\)  pool of candidate models
1: \(\mathcal{T}\leftarrow\texttt{ViableTrees}(\mathcal{G},\mathcal{D})\)    # \(|\mathcal{T}|=1\) for minimal & flat systems
2: for \(T\in\mathcal{T}\) do
3:   \(T\leftarrow\texttt{AssignModels}(T,\mathcal{M},\mathcal{D}^{\text{assign}})\)    # assign models
4:   \(T\leftarrow\texttt{PruneLeaves}(T,\mathcal{D}^{\text{prune}})\)    # prune models
5: end for
Output: \(\mathcal{T}\)
```
**Algorithm 1** Learning Participatory Systems Enumerating Viable Interfaces: We call the ViableTrees routine in Line 1 to enumerate all viable \(M\)-ary trees for sequential systems. We only call this routine for sequential systems because, for minimal and flat systems, \(\mathcal{T}\) contains a single tree that is known a priori. This routine can return trees that obey custom constraints on sample size, as well as on the order of reporting (e.g., users who are male should report age before HIV). This routine will produce at most \(|\mathcal{T}|\leq\prod_{i=1}^{k}i^{m^{k-i}}\) trees [29]. 
In general, the ViableTrees routine scales to tasks with \(\leq 8\) group attributes. Beyond this limit, one can reduce the size of the enumeration by specifying ordering constraints or a stopping condition based on the number of trees to enumerate. For a task with 3 binary group attributes, \(\mathcal{T}\) contains 24 3-ary trees of depth 3; given a complete ordering of all 3 group attributes, \(\mathcal{T}\) would contain 1 tree. The groups at the leaves of the tree should contain at least one positive label, one negative label, and \(n_{\mathbf{r}}\geq 30\) samples to avoid overfitting. The routine can filter trees during generation to ensure that these criteria are met among the final set of candidates.

**Model Assignment.** We assign each reporting group a model using the AssignModels routine in Line 3. Given a reporting group, we consider all models that could use its group membership. Thus, a group that reports age and sex could be assigned predictions from a model that requires age, sex, both, or neither. This implies that we can always assign the generic model to any reporting group, ensuring that the system performs at least as well as a generic model in terms of the assignment metric on the assignment dataset. By default, we assign each reporting group the model from \(\mathcal{M}\) that optimizes performance on the assignment sample \(\mathcal{D}^{\text{assign}}\). This rule can be customized to account for other criteria based on training data (e.g., one can filter \(\mathcal{M}\) so that we only consider models that generalize).

**Data Minimization by Pruning.** The preceding steps may output trees where a person reports personal information without receiving a gain in performance. This can happen when we assign the same model to nested reporting groups (see, e.g., the Flat system in Fig. 2, which assigns \(h_{0}\) to \([\texttt{female},\varnothing]\) and \([\texttt{female},\texttt{young}]\)), or when a model performs just as well as its parent (see, e.g., the Sequential system in Fig. 2, where \(h_{7}\) performs as well as \(h_{3}\) for \([\texttt{female},\texttt{old}]\)). We ensure that a participatory system will not solicit data in such cases using the Prune routine in Line 4. This routine takes as inputs a tree \(T\), the candidate models assigned to each node \(\mathcal{M}\), and the pruning (validation) sample \(\mathcal{D}^{\text{prune}}\). It outputs the pruned tree \(T\). The routine performs a one-sided hypothesis test to check if each reporting group \(\mathbf{r}\) prefers the parent model \(h\) to a leaf model \(h^{\prime}\):

\[H_{0}:R_{\mathbf{r}}(h)\leq R_{\mathbf{r}}(h^{\prime})\quad\text{vs.}\quad H_{A}:R_{\mathbf{r}}(h)>R_{\mathbf{r}}(h^{\prime})\]

Here, \(H_{0}\) assumes that a group prefers \(h\) over \(h^{\prime}\). Thus, we reject \(H_{0}\) when there is enough evidence to suggest that \(h^{\prime}\) performs better for \(\mathbf{r}\) on the pruning data. The testing procedure should be chosen based on the performance metric used to evaluate personalization gains. In general, we can use a bootstrap hypothesis test [22]. However, there may exist more powerful tests for salient performance metrics [see, e.g., 21, 23, 52, for accuracy and AUC].
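For error-rate gains, the one-sided pruning test described above can be approximated with a paired bootstrap. This is a minimal sketch; the number of resamples and the significance level are illustrative defaults rather than the paper's exact settings.

```python
import random

def bootstrap_prune_test(parent_errors, leaf_errors, n_boot=100, alpha=0.10):
    """One-sided bootstrap test for pruning one reporting group.

    parent_errors / leaf_errors: per-sample 0/1 losses of the parent model h and
    the leaf model h' on the pruning sample for that group (paired by sample).
    Returns True if we reject H0, i.e., the leaf model is significantly better,
    so the reporting option should be kept rather than pruned.
    """
    n = len(parent_errors)
    observed = sum(parent_errors) / n - sum(leaf_errors) / n  # gain of h' over h
    if observed <= 0:
        return False  # no observed gain: prune the reporting option
    # Resample paired losses; reject H0 if the gain stays positive in at least
    # (1 - alpha) of the bootstrap replicates.
    positive = 0
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        gain = sum(parent_errors[i] - leaf_errors[i] for i in idx) / n
        positive += gain > 0
    return positive / n_boot >= 1 - alpha

# Example: the leaf model fixes 10 mistakes the parent makes on 100 samples.
parent = [1] * 30 + [0] * 70
leaf = [1] * 20 + [0] * 80
print(bootstrap_prune_test(parent, leaf))
```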
**On Performance.** Our procedure allows practitioners to learn systems for prediction tasks by specifying the performance metric used in assignment and pruning. A suitable performance metric should represent the exact gains we would show users (e.g., error for a diagnostic; AUC for triage; ECE for risk assessment). Pruning should be done on a held-out dataset to ensure these gains hold in deployment. Using a pool of models allows practitioners to optimize performance across groups, which translates to gains at the population level. For sequential systems, the procedure outputs all configurations, allowing practitioners to choose between systems on the basis of criteria not known at training time. For example, one can swap the trees to use a system that always requests age before HIV status. By default, we select the configuration that minimizes data collection across groups, such that the ordering of attributes leads to the greatest number of data requests pruned.

**On Computation.** Our approach provides practitioners with various options to learn participatory systems under a limited computational budget (e.g., one can train only two models and build a minimal system, or train a flat or sequential system with a limited number of models in the pool). Nevertheless, the primary bottleneck when learning participatory systems is _data_ rather than _compute_. Given a finite-sample dataset, we are limited in the number of categorical attributes used for personalization. This is because we require a minimum number of samples for each intersectional group to train a personalized model and evaluate its performance. Given that the number of intersectional groups increases exponentially with each attribute, we quickly enter a regime where we cannot train models for a given group (e.g., because we lack sufficient labels) or reliably evaluate its gain for assignment and pruning [see 44].

## 4 Experiments

We benchmark participatory systems and personalized models on real-world clinical prediction tasks. Our goal is to evaluate these approaches in terms of their performance, data usage, and consent in applications where individuals have a low cost of reporting. We include code to reproduce our results in our anonymized repository.

### Setup

We consider six classification tasks for clinical decision support, in which we must train a model that is personalized with group attributes that are either protected or sensitive (sex, age, HIV). These are tasks where the information used for personalization is readily available, relevant to the prediction task, and unlikely to be leaked or misused due to laws surrounding the confidentiality, privacy, and use of medical data [1]. Given these conditions, we expect individuals to have a low cost of reporting, and therefore to report personal information so long as there is any benefit [6, 12]. We list the datasets for each prediction task in Table 2 and describe them in Appendix C. We split each dataset into a test sample (20%, used to evaluate out-of-sample performance) and a training sample (80%, used for training, assignment, pruning, and estimating the gains shown to users).

We train three kinds of personalized models for each dataset:

* _Static_: These models are personalized using a one-hot encoding of group attributes (1Hot) or a one-hot encoding of intersectional groups (mHot).
* _Imputed_: These are variants of personalized models that facilitate consent using imputation. We construct these models by pairing static models with KNN-imputation (KNN-1Hot, KNN-mHot). We report results for these models in an extreme case where all individuals opt out of personalization. In practice, the performance of this imputation will fall between 1Hot (100% opt-in) and KNN-1Hot (100% opt-out).
* _Participatory_: These are participatory systems built using our approach.
These include a minimal system built using 1Hot and its generic counterpart (Minimal), and flat and sequential systems built using 1Hot, mHot, and their generic counterparts (Flat, Seq).

We train all models (personalized models and the components of participatory systems) from the same model class. We evaluate all models on all datasets using the metrics in Table 1. We repeat these experiments four times, varying the model class (logistic regression, random forests) and the performance metric of interest (error rate, AUC). These variations are chosen to benchmark our approach on major prediction tasks (decision-making, ranking) and to understand the impact of model capacity on our results.
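To make the distinction between the two static encodings concrete, here is a short sketch of how 1Hot and mHot feature vectors could be built. The construction is our illustration, not the paper's exact feature pipeline.

```python
from itertools import product

def one_hot(attrs, categories):
    """1Hot: one indicator per attribute category (e.g., age in {old, young})."""
    vec = []
    for value, cats in zip(attrs, categories):
        vec += [int(value == c) for c in cats]
    return vec

def m_hot(attrs, categories):
    """mHot: one indicator per intersectional group (old*male, old*female, ...)."""
    groups = list(product(*categories))
    return [int(tuple(attrs) == g) for g in groups]

cats = (["old", "young"], ["male", "female"])
print(one_hot(("old", "female"), cats))  # [1, 0, 0, 1]
print(m_hot(("old", "female"), cats))    # [0, 1, 0, 0]
```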
### Results

We show results for logistic regression models and the error rate in Table 2, and results for other model classes and prediction tasks in Appendix D. In what follows, we discuss these results.

**On Performance Gains.** Our results show that participatory systems can improve performance for all groups who provide personal data. We find that Flat and Seq achieve the best overall performance on 6/6 datasets. These gains at the population level are often experienced across all groups who provide personal data, as shown by the fact that Flat and Seq improve both the worst-case and best-case gains of personalization on 5/6 datasets. In contrast, traditional approaches to personalization can improve performance at the population level while reducing performance at the group level. Our results highlight the prevalence of this effect, as we find that static methods exhibit rationality violations on 5/6 datasets in Table 2 (c.f. 1/6 datasets for Minimal, Flat, or Seq). In practice, the relative benefits in performance of a participatory system over a traditional static model stem from (i) allowing users to opt out of instances of detrimental personalization and (ii) assigning personalized predictions using multiple models (Flat and Seq). For example, on cardio_eicu, 1Hot improves performance at the population level but reduces performance at the group level. In particular, we find that 2 groups experience statistically significant rationality violations, meaning they would have been better off with a generic model that did not require them to report personal data. By comparing the performance of 1Hot to Minimal, we can gauge the performance gain that arises from allowing users to opt out of such instances (i.e., a reduction of test error from 22.4% to 21.7%). By comparing the performance of Minimal to Flat and Seq, we can gauge the performance gain that arises from the use of multiple models (i.e., a reduction of test error from 21.7% to 16.1%).

Table 1: Overview of metrics used to evaluate performance, data usage, and consent. We report performance on a held-out test sample. We assume that individuals report group membership to static models, never report group membership to imputed models, and only report to participatory systems when reporting leads to a positive gain. In the latter case, the gain shown to users is estimated using a validation set within the training sample.

| Metric | Definition | Description |
| --- | --- | --- |
| Overall Performance | \(\sum_{g\in\mathcal{G}}\frac{n_{g}}{n}R_{g}(h_{g})\) | Population-level performance of a personalized system/model, computed as a weighted average over all groups |
| Overall Gain | \(\sum_{g\in\mathcal{G}}\frac{n_{g}}{n}\Delta_{g}(h_{g},h_{0})\) | Population-level gain in performance of a personalized system/model over its generic counterpart |
| Group Gains | \(\min_{g\in\mathcal{G}}/\max_{g\in\mathcal{G}}\;\Delta_{g}(h_{g},h_{0})\) | Range of gains of a personalized system/model over its generic counterpart across all groups |
| Rationality | \(\sum_{g\in\mathcal{G}}\mathbb{1}[\text{reject }H_{0}]\) | Number of rationality violations detected using a bootstrap hypothesis test of \(H_{0}:\Delta_{g}(h,h_{0})\geq 0\) with 100 resamples and a significance level of 10% |
| Imputation Risk | \(\min_{g\in\mathcal{G}}\Delta_{g}(h_{g},h_{g^{\prime}})\) | Risk to performance from imputation, i.e., the worst possible performance for a group given that it is imputed the attributes of a group \(g^{\prime}\). Relevant for static models only |
| Options Pruned | \(\frac{|\mathcal{R}|-|\mathcal{R}(h)|}{|\mathcal{R}|}\) | Number of reporting options pruned out of the total reporting options for a model or system. Here, \(\mathcal{R}(h)\) denotes the options available after \(h\) has been pruned and \(\mathcal{R}\) denotes the options available before pruning |
| Data Use | \(\sum_{g\in\mathcal{G}}\frac{n_{g}}{n}\frac{\text{requested}(h,g)}{\dim(g)}\) | Proportion of the \(k\) group attributes requested by \(h\) from each group, averaged over all groups in \(\mathcal{G}\) |

**On Data Minimization.** Our results highlight how participatory systems limit data use. On 6/6 datasets, the participatory systems perform better across all groups while requesting less personal data. For example, on cardio_eicu, Seq reduces error by 6.3% compared to 1Hot while requesting, on average, 87.5% of the data needed by 1Hot. In general, participatory systems can reduce data collection where personalization does not improve performance, e.g., on lungcancer. Even if attributes like sex or age may be readily reported by patients for any performance benefit, the potential to curb data use is valuable when there is a tangible cost associated with data collection, e.g., when models make use of the outcome of a diagnostic test or a rating scale for a mental disorder that must be administered by a clinician [48].
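Two of the data-usage metrics in Table 1, Data Use and Options Pruned, are straightforward to compute. The sketch below is our illustration; the `PrunedInterface` class and its request counts are hypothetical stand-ins for a pruned reporting tree.

```python
class PrunedInterface:
    """Toy stand-in for a participatory system whose pruned tree determines how
    many attributes each group is asked to report (values are illustrative)."""
    def __init__(self, requests):
        self.requests = requests  # group -> number of attributes requested

    def requested(self, g):
        return self.requests[g]

def data_use(system, sizes, n_attributes):
    """Data Use metric from Table 1: proportion of attributes requested from
    each group, averaged over groups weighted by group size."""
    n = sum(sizes.values())
    return sum(sizes[g] / n * system.requested(g) / n_attributes for g in sizes)

def options_pruned(n_options_before, n_options_after):
    """Options Pruned metric from Table 1."""
    return (n_options_before - n_options_after) / n_options_before

# Example: after pruning, young patients are never asked for a second attribute.
system = PrunedInterface({("old",): 2, ("young",): 1})
print(data_use(system, {("old",): 60, ("young",): 40}, n_attributes=2))  # 0.8
print(options_pruned(9, 7))
```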
Our results show that the opportunities for data minimization may vary substantially across prediction tasks. On apnea, for example, we can prune 6 reporting options for a Seq system for decision making (error) but only 4 reporting options for Seq when we optimize for ranking (AUC) (see Appendix D).

**On Facilitating and Informing Consent.** Our results highlight the benefits of flat and sequential systems for facilitating and informing consent. These systems provide more opportunities for consent by allowing users to report a subset of group attributes. On saps, for example, we see that users who are HIV positive can report \([30+,\varnothing]\) or \([30+,\texttt{HIV+}]\). In Table 2, we show these opportunities through the total number of reporting options available to users at prediction time.

Table 2: Results for logistic regression models and the error rate on each dataset, comparing static models (1Hot, mHot), imputed models (KNN-1Hot, KNN-mHot), and participatory systems (Minimal, Flat, Seq).

Although flat and sequential systems provide the same reporting options, they differ in their capacity to inform consent. In a flat system, users may attempt to gauge the marginal benefit of reporting a specific attribute by comparing the gains between reporting options. For example, in Fig. 4,
users who are HIV positive would see a gain of \(3.7\%\) for reporting \([\varnothing,\texttt{HIV+}]\) and \(16.7\%\) for reporting \([30+,\texttt{HIV+}]\), thus concluding that the marginal gain of reporting age is \(16.7\%-3.7\%=13.0\%\). This estimate presumes that the gain of \(3.7\%\) was distributed equally across age groups. Sequential systems naturally overcome such issues by informing users of the exact gains for partial reporting. In the sequential system, group \([30+,\texttt{HIV+}]\) would see that the marginal gain of reporting age is \(21.5\%\). Likewise, group \([<\!30,\texttt{HIV+}]\) would see that the marginal gain of reporting age is \(0.0\%\).

**On the Value of a Model-Agnostic Approach.** As expected, we find that a more complex model class can produce considerable changes in overall accuracy; e.g., we can reduce the overall test error of a personalized model from \(20.4\%\) to \(14.1\%\) on saps by training a random forest rather than a logistic regression model (see Appendix D). However, a gain in overall performance does not always translate into gains at the group level. On saps, using a random forest also introduces a rationality violation for one group. These findings highlight the value of a model-agnostic approach, in which we can use a variety of models to achieve better performance while mitigating harm. For example, we can ensure generalization across reporting groups, e.g., by pairing a generic model fit from a complex model class with personalized models fit from a simpler model class.

**On the Pitfalls of Imputation.** One of the simplest approaches to allow individuals to opt out of personalization is to pair a personalized model with an imputation technique. Although this approach can facilitate consent, it does not meet the requirements in Section 2. Consider a personalized model that exhibits "worsenalization" as in Fig. 1. Even if one could correctly impute the group membership of every person, individuals would still receive more accurate predictions from a generic model \(h_{0}\). In practice, imputation can perform unreliably, as individuals who opt out of reporting their group membership to a personalized model may be imputed the group membership of a group that is assigned considerably different predictions. In such cases, opting out may be beneficial, making it difficult for model developers to promote participation while informing consent. Our results highlight the prevalence of these effects across model classes and prediction tasks. For example, on cardio_eicu our estimate of the "risk of imputation" is \(-5.3\%\), indicating that groups can experience an error rate up to \(5.3\%\) greater if their values are incorrectly imputed at the level of intersectional groups.

Figure 4: Participatory systems for the saps dataset. These models are trained to predict ICU mortality for groups defined by \(\mathcal{G}=\texttt{HIV}\times\texttt{age}=[+,-]\times[<\!30,30+]\). Here, \(h_{0}\) denotes the generic model, \(h_{1}\) denotes a 1Hot model fit with a one-hot encoding of \(\mathcal{G}\), and \(h_{2}\cdots h_{n}\) are 1Hot and mHot models fit for reporting groups. Grey stripes indicate pruned reporting options. Numbers above each box indicate the gain with reference to the parent node. For example, in the Sequential system, group (HIV+, 30+) sees an estimated \(21.5\%\) error reduction for reporting age after having reported HIV.
In contrast, group (HIV+, <30) sees no gain from reporting age in addition to HIV status, and this option is pruned.

Our results for KNN-1Hot show that this predicted loss in performance can be realized in practice with KNN-imputation, as we find that the imputed system leads to rationality violations on 5/6 datasets.

## 5 Concluding Remarks

In this work, we introduced a new family of prediction models that allow individuals to report personal data at prediction time. Our systems can facilitate and inform consent in a way that can produce large improvements in performance and privacy for each group that reports personal data. The systems in this work should be seen as foundational machinery for informing consent. In practice, the viability of reaping these benefits will hinge on individual preferences for disclosure, which can change based on the information solicited, the outcome predicted, and the ability to inform users effectively of these impacts [6]. Implementing these systems will require developing tailored approaches to communicate the gains of personalization (e.g., communicating risk and uncertainty).

One common concern is that allowing individuals to opt out of personalization could prevent us from collecting data that could be used to monitor or improve model performance. While this is a real possibility, the core issue stems from a lack of transparency surrounding the _purpose_ of data collection [42]. If the purpose of data collection is to monitor or improve a model, then individuals could be given the ability to report this information voluntarily for the sake of auditing or training. If the purpose of data collection is personalization, then individuals are within their rights to opt out.
2306.00652
Explanation Graph Generation via Generative Pre-training over Synthetic Graphs
The generation of explanation graphs is a significant task that aims to produce explanation graphs in response to user input, revealing the internal reasoning process. This task is challenging due to the significant discrepancy between unstructured user queries and structured explanation graphs. Current research commonly fine-tunes a text-based pre-trained language model on a small downstream dataset that is annotated with labeled graphs. However, due to the limited scale of available datasets, this approach may prove to be insufficient in bridging the gap between natural language text and structured graphs. In this paper, to alleviate the above limitations, we propose a novel pre-training framework EG3P (for Explanation Graph Generation via Generative Pre-training over synthetic graphs) for the explanation graph generation task. Specifically, we first propose a text-to-graph generative task to pre-train the model with the goal of bridging the text-graph gap. Additionally, we propose an automatic corpus synthesis strategy for synthesizing a large-scale, high-quality corpus, reducing the reliance on costly manual annotation methods. Experimental results on ExplaGraphs show the effectiveness of EG3P: our model surpasses all baseline systems by remarkable margins. Besides, further analysis demonstrates that EG3P is able to generate better explanation graphs on actual reasoning tasks such as CommonsenseQA and OpenbookQA.
Han Cui, Shangzhan Li, Yu Zhang, Qi Shi
2023-06-01T13:20:22Z
http://arxiv.org/abs/2306.00652v1
# Explanation Graph Generation via Generative Pre-training over Synthetic Graphs ###### Abstract The generation of explanation graphs is a significant task that aims to produce explanation graphs in response to user input, revealing the internal reasoning process. This task is challenging due to the significant discrepancy between unstructured user queries and structured explanation graphs. Current research commonly fine-tunes a text-based pre-trained language model on a small downstream dataset that is annotated with labeled graphs. However, due to the limited scale of available datasets, this approach may prove to be insufficient in bridging the gap between natural language text and structured graphs. In this paper, to alleviate the above limitations, we propose a novel pre-training framework **EG3P** (for **E**xplanation **G**raph **G**eneration via **G**enerative **P**re-training over synthetic graphs) for the explanation graph generation task. Specifically, we first propose a text-to-graph generative task to pre-train the model with the goal of bridging the text-graph gap. Additionally, we propose an automatic corpus synthesis strategy for synthesizing a large-scale, high-quality corpus, reducing the reliance on costly manual annotation methods. Experimental results on ExplaGraphs show the effectiveness of **EG3P**: our model surpasses all baseline systems by remarkable margins. Besides, further analysis demonstrates that **EG3P** is able to generate better explanation graphs on actual reasoning tasks such as CommonsenseQA and OpenbookQA.1 Footnote 1: Our code, checkpoints, and corpus are released at [https://github.com/cccccent/EG3P](https://github.com/cccccent/EG3P)

## 1 Introduction

Generating an explanation to probe why the model obtains its answers is a long-term goal in the development of intelligent systems, especially in reasoning-related tasks, such as E-SNLI (Camburu et al., 2018), ECQA (Aggarwal et al., 2021), HotpotQA (Yang et al., 2018), and ExplaGraphs (Saha et al., 2021). According to the types of explanations, existing explanation generation tasks can be mainly divided into three types: textual highlights explanation generation (Yang et al., 2018; Camburu et al., 2018), natural language explanation generation (Camburu et al., 2018; Wiegreffe et al., 2020; Inoue et al., 2021), and structured explanation generation (Xie et al., 2020; Saha et al., 2021). Among all these tasks, structured explanation generation has achieved growing attention recently, since the explanation in this task is usually a graph, which is clean and easy to evaluate from the perspective of structure and semantics (such a graph is denoted as an explanation graph). An example of a structured explanation generation task is shown in Figure 1.

Pre-trained language models, such as RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), and T5 (Raffel et al., 2020), have demonstrated their powerful capabilities in a great many language understanding tasks. As a result, when it comes to explanation graph generation, existing studies primarily fine-tune pre-trained language models on downstream tasks directly (Xie et al., 2020; Saha et al., 2020, 2022).

Figure 1: An example of the task of explanation graph generation (from the ExplaGraphs dataset). Given a piece of natural text, the model needs to generate a graph depicting the reasoning process.
While typical pre-trained language models (PLMs) are pre-trained on textual corpora only, fine-tuning on downstream tasks directly may leave a significant discrepancy between text-based language models and explanation graphs. To mitigate this issue, we argue that pre-training over task data can be an ideal way to bridge the above gap. Such a pre-training manner can be a subtle solution to inject inductive bias into PLMs. However, the scale of existing datasets is relatively small since it costs a lot to label explanation graphs, and pre-training on existing data is insufficient to bridge the gap. To this end, an appealing solution is to continually pre-train PLMs on a large-scale, automatically synthesized corpus containing explanation graphs, instead of relying on human labeling before fine-tuning. The explanation graph is highly structured and contains diverse entities and relations, so it is easily synthesized by randomly assigning different values to the entity and relation positions.

In this paper, we propose **EG\({}^{3}\)P** (**Ex**planation **G**raph **G**eneration via **G**enerative **P**re-training over Synthetic Graphs), a novel pre-training framework for explanation graph generation. Specifically, as shown in Figure 2, **EG\({}^{3}\)P** is composed of two key components: the "Text-to-Graph" pre-training task and the construction of the pseudo training data. Different from previous natural-language-based pre-training tasks, the "Text-to-Graph" task takes external knowledge sources and questions containing partial reasoning-process information as input, and its target is to generate the relevant explanation graphs. In addition, to avoid the high cost of retrieving graphs for the simulated questions from the knowledge base, we propose a novel approach that constructs questions from simulated graphs, which automatically produces a large amount of pseudo data. Experimental results on the ExplaGraphs benchmark demonstrate that our approach significantly improves the ability of the model to generate explanation graphs. Moreover, the model also shows excellent graph generation ability on other reasoning datasets.

Overall, we make the following key contributions:

* We propose a novel pre-training task that maps an input question to a structural explanation graph, which guides the model to learn the connections between natural language questions and structured graphs.
* We propose a novel approach to synthesize a corpus by automatically constructing structured graphs and queries to form a large-scale corpus.
* Among models of similar scale, our model achieves competitive results. Furthermore, the results of our experiments indicate that our model is capable of producing acceptable graphs on reasoning datasets without labeled graphs.

## 2 Overview and Background

In this paper, we concentrate on the task of explanation graph generation. An example is depicted in Figure 1. Given a piece of natural language text \(T\), the model needs to generate a graph \(G\) which encapsulates the internal reasoning path of the input text. The specific content of the input \(T\) is contingent upon the specific downstream task (belief + argument + stance in stance prediction, question + answer in QA, etc.). For the output \(G\), we organize the graph into a sequence of triples in depth-first search order. In practice, we employ a generative model and treat graph generation as a standard text generation task.
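A minimal sketch of this depth-first linearization is shown below. The graph encoding and the "(head; relation; tail)" delimiter format are our assumptions, since the paper's exact serialization format is not reproduced here.

```python
def linearize(graph, root):
    """Serialize an explanation graph into a triple sequence in DFS order.

    `graph` maps a head concept to a list of (relation, tail) edges.
    """
    triples, seen = [], set()

    def dfs(node):
        for relation, tail in graph.get(node, []):
            edge = (node, relation, tail)
            if edge not in seen:
                seen.add(edge)
                triples.append(edge)
                dfs(tail)

    dfs(root)
    return " ".join(f"({h}; {r}; {t})" for h, r, t in triples)

graph = {
    "factory farming": [("capable of", "animal cruelty")],
    "animal cruelty": [("is a", "crime")],
}
print(linearize(graph, "factory farming"))
# (factory farming; capable of; animal cruelty) (animal cruelty; is a; crime)
```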
A crucial point in this task is addressing the significant discrepancy in semantic expression structure between natural language texts and explanation reasoning graphs. An ideal way is to let the model learn this expression transfer on a large amount of aligned natural language text and graph data. However, due to the small size of the labeled datasets, training on these datasets alone can hardly address the issue. Based on all of the above, we propose the two modules of our model: the "text-to-graph" pre-training strategy introduced in Section 3, and the method for automatically constructing a synthetic corpus introduced in Section 4.

Figure 2: The overview of **EG\({}^{3}\)P**. The model is first pre-trained on a large amount of synthetic data in the form of "text2graph", and then fine-tuned on a downstream task with a small amount of data.

## 3 The Text2graph Pre-training Strategy

Typical pre-training strategies of various PLMs are based on corpora of natural language text (e.g., MLM and NSP for BERT, text denoising for BART, and text-to-text generation for T5). However, in the explanation graph generation task, the explanation graph differs from natural language text both in its representation form and in its semantic structure, which leads to a huge gap between the two kinds of representations. Apart from these typical tasks, some pre-training strategies are applied to recover the triples in a knowledge graph for knowledge graph question answering (KGQA) (Saxena et al., 2022). However, such a pre-training method is not able to cover the explanation graph generation task due to the separation of pre-training steps between the structured corpus and natural language, which is unable to bridge the gap between natural language and structured graphs.

Since the explanation graph generation task is required to translate natural language text into a graph, we believe the key point is to implicitly map natural language texts to structured graphs in the learning process. To this end, we set the form of the pre-training task as "text-to-graph" in **EG\({}^{3}\)P**. As depicted in Figure 3, the format of the pre-training task is analogous to that of a normal generation task. Given a query in the form of a natural language question and a simulated knowledge source, the model concatenates the two parts together as input and generates a sequence of triples representing the reasoning graph from the query to the answer. By learning aligned "text-to-graph" pairs, the model acquires the text-to-graph mapping in the process, and its capability for structured text generation is also enhanced. Real input samples are presented in Appendix B for further reference.

The query and the graph of the answer come from the auto-construction method we propose, which will be discussed in the next section. To construct the simulated knowledge source (a collection of triples), we take the triples of the gold answer as a starting point and add random triples that are not relevant to the reasoning process to disrupt the collection. The final size of the simulated knowledge source is approximately 1.5 to 2 times the length of the graph.
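The construction of the simulated knowledge source can be sketched as follows. The helper below is our illustration; `distractor_pool` stands in for a store of unrelated ConceptNet triples, and the exact sampling details are assumptions chosen to land in the 1.5x to 2x range described above.

```python
import random

def build_knowledge_source(gold_triples, distractor_pool, seed=0):
    """Build the simulated knowledge source for one pre-training instance:
    gold reasoning triples plus irrelevant distractors, totalling roughly
    1.5-2x the graph length, shuffled so relevance is not marked."""
    rng = random.Random(seed)
    n_distractors = rng.randint(len(gold_triples) // 2, len(gold_triples))
    source = list(gold_triples) + rng.sample(distractor_pool, n_distractors)
    rng.shuffle(source)
    return source

gold = [("dog", "is a", "animal"), ("animal", "capable of", "breathing")]
pool = [("car", "used for", "driving"), ("sun", "is a", "star"), ("ice", "made of", "water")]
print(build_knowledge_source(gold, pool))
```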
## 4 The Construction of Synthetic Corpus

Pre-training tasks necessitate the support of a large-scale corpus. However, all the existing datasets with human-labeled graphs are small in scale due to the high cost of manual annotation, which is not enough to support the pre-training process. To address this issue, we propose an automatic method of constructing pairs of natural language queries and explanation reasoning graphs. The conventional way to get a graph from a piece of natural language text is to search an external knowledge base. However, the complexity of searching would increase exponentially with the number of nodes and the length of edges in the graphs. Therefore, we invert this process, synthesizing a reasoning graph first and then constructing a query based on the graph.

Figure 3: The illustration of the "text-to-graph" task. The input comprises the synthetic question and a simulated knowledge source. In the simulated knowledge source, the triples related to the reasoning are marked in blue and those not related are in grey. In the training process, the triples in the knowledge source are randomly shuffled and are not marked as relevant or not.

### The Synthesizing of the Graph

Observing the reasoning process of downstream reasoning tasks, it is evident that the reasoning path of a specific instance depends not solely on the problem entity as the starting point of reasoning, but also on other entities in the problem as constraints, which ultimately lead to the sink point. So we construct the explanation graph from back to front, ensuring that there is only one sink (i.e., the answer) in the graph and that the relationship of each edge is used. The process of construction is shown in Figure 4. Initially, a concept is randomly selected as the sink of the graph (and as the answer to the query in the following steps). Subsequently, triples are retrieved recursively, and a random number of them (ranging from 0 to 2) are incorporated into the graph. All the triples are retrieved from ConceptNet (Speer et al., 2017), an external knowledge base containing concepts as nodes and relations as edges. Additionally, the relation "relatedTo" is so prevalent among the concepts that it would seriously affect the reasoning process, so it is deleted. Furthermore, certain other relations are merged, resulting in a total of 16 distinct relations. The distribution of the relations is introduced in Appendix A.

### The Construction of the Queries

Inspired by the work of Liu et al. (2021), we construct three queries with different difficulty levels (easy, normal, and hard) for each graph instance, as shown in Figure 4. The easy level involves retaining the start node and the relations in the intermediate stages of reasoning, while hiding the sink node (which is treated as the answer) and the nodes present in the intermediate stages. The relations are then replaced with natural language annotations based on a predefined template, and the resulting triples are subsequently concatenated in their original order. For the normal level, a similar amount of information is retained as in the easy level, but the concatenated query is further converted into a natural language expression using a predefined template, in order to simulate a realistic question-answering scenario. For the hard difficulty level, only the start node and the first relation are retained, with all other auxiliary information removed, and the question is formulated in natural language. All the templates are shown in Appendix B.
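A compact sketch of this corpus construction, covering the back-to-front graph synthesis and a hard-level query, is given below. The `kb` structure, the traversal details, and the question template are our assumptions; they are not the paper's actual templates.

```python
import random

# `kb` stands in for ConceptNet here: it maps a tail concept to the
# (head, relation) pairs of triples that point at it.
def synthesize_graph(kb, rng, max_depth=3):
    sink = rng.choice(list(kb))        # the sink doubles as the answer
    triples, frontier = [], [sink]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            incoming = kb.get(node, [])
            k = min(len(incoming), rng.randint(0, 2))  # add 0-2 triples per node
            for head, relation in rng.sample(incoming, k):
                triples.append((head, relation, node))
                next_frontier.append(head)
        frontier = next_frontier
    return triples, sink

def hard_query(triples, template="What concept can be reached from '{h}' via '{r}'?"):
    """Hard-level query: keep only the start node and the first relation."""
    head, relation, _ = triples[-1]  # a triple at the start of the reasoning chain
    return template.format(h=head, r=relation)

rng = random.Random(0)
kb = {"star": [("sun", "is a")], "sun": [("sky", "part of")]}
graph, answer = synthesize_graph(kb, rng)
print(graph, answer)
if graph:
    print(hard_query(graph))
```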
## 5 Experiments

**ExplaGraphs.** In this dataset, the nodes of each explanation graph are commonsense phrases, while the edges represent commonsense relations present in the dataset. Each edge in the graph is an expression of one of the reasoning steps, and the ordered links of all edges provide an overall representation of the reasoning process for the user. In terms of the metrics, the dataset defines 6 test metrics, two of which are selected as the main metrics by prior works (Saha et al., 2022): Structural Correctness Accuracy (StCA), evaluating whether the graphs satisfy all structural constraints, and Semantic Correctness Accuracy (SeCA), evaluating whether the graphs are both structurally and semantically correct. The structural constraints contain several parts: the graph should be a connected DAG, the relations should belong to the relation list defined by the dataset, and there should be at least two concepts from the belief and two from the argument. Semantic correctness is evaluated by a model-based metric (Saha et al., 2021), checking whether the semantics of the graph and the standard answer match. All the metrics are described in detail in Appendix D.

**Other reasoning datasets.** To prove the generalization ability of the model, we also conducted experiments on two other general commonsense reasoning datasets in addition to ExplaGraphs: CommonsenseQA (Talmor et al., 2019) and OpenbookQA (Mihaylov et al., 2018). CommonsenseQA is a 5-way multiple-choice question answering dataset that focuses on commonsense reasoning, while OpenBookQA is a 4-way multiple-choice question answering dataset that requires reasoning with elementary science knowledge. Since there are no labeled commonsense reasoning graphs for these datasets, we evaluate the results on the dev sets of these two datasets manually from the point of view of semantics and analyze the model on specific examples. The evaluation of semantics checks whether the semantics of the graph match the reasoning process properly.

### Generative Baseline

In line with previous work (Saha et al., 2021, 2022), we generate the explanation graphs in a post-hoc manner, conditioned on the belief, the argument, and the predicted stance. In order to compare the graph generation results objectively, the stance prediction part in all our experiments is performed by an identical RoBERTa-based model. The first baseline model is BART, the backbone of **EG\({}^{3}\)P**. Furthermore, we also implement other pre-training methods that have been introduced in recent studies (Saxena et al., 2022) on knowledge graph question answering (KGQA), such as link prediction and tail prediction. Link prediction is a common task in knowledge graph embedding (KGE) learning: given two parts of a knowledge triple (head+relation, head+tail, or relation+tail), the model is required to complete the missing element of the input. For the tail prediction task, the training process is basically the same as link prediction, but the model only needs to predict the tail entity in all instances, which is more similar to the process of step-by-step reasoning from front to back. In order to facilitate the model's understanding of the task, we add a prompt before the input triple: "Predict the head/relation/tail: xxx". The input samples of the two tasks are shown in Appendix B.
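A sketch of how such link and tail prediction instances could be built from a knowledge triple is shown below. The "[MASK]" placeholder and the triple delimiters are our assumptions, while the "Predict the head/relation/tail:" prompt follows the text above.

```python
import random

def make_link_prediction_instance(triple, rng):
    """Build a link-prediction pre-training pair: mask one of head/relation/tail
    and ask the model to restore it."""
    head, relation, tail = triple
    part = rng.choice(["head", "relation", "tail"])
    masked = {"head": ("[MASK]", relation, tail),
              "relation": (head, "[MASK]", tail),
              "tail": (head, relation, "[MASK]")}[part]
    source = f"Predict the {part}: ({masked[0]}; {masked[1]}; {masked[2]})"
    target = {"head": head, "relation": relation, "tail": tail}[part]
    return source, target

def make_tail_prediction_instance(triple):
    """Tail prediction always masks the tail, mimicking front-to-back reasoning."""
    head, relation, tail = triple
    return f"Predict the tail: ({head}; {relation}; [MASK])", tail

rng = random.Random(0)
print(make_link_prediction_instance(("dog", "is a", "animal"), rng))
print(make_tail_prediction_instance(("dog", "is a", "animal")))
```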
### Fine-tuning on Downstream Datasets

For the fine-tuning process on ExplaGraphs, we follow the pipeline outlined in previous work, as described above. For the fine-tuning process on CommonsenseQA and OpenbookQA, we did not use the model to generate graphs in a zero-shot style, because we found in comparison tests that BART-Large without any learning process can hardly generate an acceptable graph. To improve comparability, we fine-tune the model on the ExplaGraphs dataset before generating explanation graphs on the other datasets in the different groups of experiments. All the input samples are shown in Appendix B.

### Experimental Setup

The experiments include three parts: the construction of the corpus, the pre-training process, and the fine-tuning process. For corpus construction, we first synthesize 20 million reasoning graph instances and construct three questions of varying difficulty for each instance. Then, the "query-graph" pairs of the three difficulty levels are mixed in equal proportion, ensuring that the total amount of data meets the experimental requirements. Except for the experiments discussing the effect of corpus scale, the scale of the corpus in all other experiments is set to 0.3 million. For the pre-training process, we utilize the BART-Large (Lewis et al., 2020) model in fairseq (Ott et al., 2019), a widely-employed seq2seq model that follows the standard
(2022) treats the structured commonsense reasoning task as a code generation task and uses a code generation language model CODEXChen et al. (2021) to generate the graph with few-shot prompting. There are also other results of the same method on different natural language large language models(LLMs), such as CURIE and DAVINCI. We only compare with the best result of them. The results of the test set are summarized in Table 1. The comparison demonstrates that our proposed method, **EG\({}^{\textbf{3}}\)P**, outperforms both of the aforementioned methods, particularly in terms of semantic correctness accuracy (SeCA). The results show that the pre-training method on aligned "text-graph" pair could help the model learn the mapping between natural language and graphs better than training on a single downstream task. Besides, specific pre-training methods could also endow small models with a better ability of semantic understanding on the specific task (graph generation here) than large language models. ### Other Analysis Effect of the difficulty of the queryIn **EG\({}^{\textbf{3}}\)P** we construct a query in three different difficulties and mix the corpus in the main experiment as multi-task training. Table 2 shows the results on different queries. It is significant that the utilization of a mixed corpus leads to a more substantial improvement than training on a single sub-task alone. Due to the same graph generation form, the structural accuracy(StCA) of all sub-task is improved significantly; the benefits brought by the mixed corpus are mainly reflected in the semantic accuracy(SeCA). A comparison of different sub-tasks reveals that the results for queries of normal difficulty are the most favorable. The queries in normal difficulty retain the form of a natural language compared to easy and retain more intermediate reasoning information compared to hard. This suggests that, in the training process based on a large-scale synthetic corpus, the closer the training task is to the downstream task and the simpler it is, the better the model learns. The model pre-trained on simple corpus demonstrates superior performance in comparison to the one based on the easy corpus. Compared to easy difficulty, the pair of simple query and graph has a form that is more congruent to the explanation graph generation task. This finding aligns with previous workDevlin et al. (2019), which suggests that pre-training on a task that is more closely aligned to the downstream task leads to improved performance. Besides, the model pre-trained on simple corpus also outperforms the one based on the hard corpus, despite the fact that both present the same form. This highlights the importance of selecting an appropriate difficulty level for pre-training tasks in order to achieve optimal efficiency. Effect of the scale of corpusFigure 5 shows the results of the model pre-trained on a different scale of the corpus. We compare the effect of six different scales of corpus on the experiment. Within a certain range, the experimental results are improved by the scale of the corpus. However, when the corpus size exceeds a certain threshold, the marginal benefit of a larger corpus becomes increasingly diminishing, likely due to the limitations of computational resources and insufficient training on a large-scale corpus. Considering all factors, we select a corpus size of 0.3M as the optimal setting for our main experiments, as it yields the best results under the current conditions. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & SA\(\uparrow\) & StCA\(\uparrow\) & SeCA\(\uparrow\) & G-BS\(\uparrow\) & GED\(\downarrow\) & EA\(\uparrow\) \\ \hline BART-BaseSaha et al. (2021)\({}^{\diamondsuit}\) & 86.2 & 21.6 & 11.0 & 16.1 & 0.85 & 10.3 \\ BART-Large\({}^{\diamondsuit}\) & **88.19** & 36.43 & 26.13 & 28.42 & 0.74 & 20.77 \\ & **88.19** & 40.45 & 31.82 & 28.39 & 0.71 & 14.63 \\ & **88.19** & 41.21 & 32.04 & 29.15 & 0.71 & 22.54 \\ & **88.19** & **48.99** & **37.43** & **38.73** & **0.65** & **25.03** \\ \hline BART-LargeSaha et al. (2021)\({}^{\star}\) & 87.2 & 34.20 & 22.20 & 28.90 & 0.75 & 20.00 \\ Contrastive Learning Saha et al. (2022)\({}^{\star}\) & 87.2 & 40.7 & 26.30 & 31.30 & 0.71 & 22.30 \\ CoCoGenMadaan et al. (2022)\({}^{\star}\) & 87.2 & 45.20 & 23.74 & 34.68 & 0.69 & 23.58 \\ **EG\({}^{3}\)P\({}^{\star}\)** & **87.75** & **50.75** & **31.25** & **43.86** & **0.62** & **27.75** \\ \hline \hline \end{tabular} \end{table} Table 1: All the experimental results on the ExplaGraphs dataset. The line with \({}^{\diamondsuit}\) is the result on the dev set. The line with \({}^{\star}\) is the result on the test set. For the detailed disclosure of all evaluation metrics, please refer to the Appendix D. Figure 5: The results of the model pre-trained on the different difficulties of the corpus. We compared the 5 metrics for generated graphs. All the experiments use the same classifier model, reaching 88.19 on SA on the dev set. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & StCA\(\uparrow\) & SeCA\(\uparrow\) & G-BS\(\uparrow\) & GED\(\downarrow\) & EA\(\uparrow\) \\ \hline BART-Large & 36.43 & 26.13 & 28.42 & 73.84 & 20.77 \\ \hline + Easy & 47.99 & 33.16 & 38.71 & 66.23 & 14.23 \\ + Normal & 49.5 & 33.66 & 39.56 & 64.85 & 25.1 \\ + Hard & 45.98 & 27.63 & 36.52 & 67.74 & 23.07 \\ \hline + Mixed & 48.99 & 37.43 & 38.73 & 65.14 & 25.03 \\ \hline \hline \end{tabular} \end{table} Table 2: The results of the model pre-trained on a different scale of the corpus. All the results are on the dev set. As described above, we use the same classifier model in all the experiments, reaching 88.19 on SA. ### Results on other reasoning datasets Table 3 shows the results of human evaluation on CommonsenseQA(CSQA) and OpenbookQA(OBQA). The "text-to-graph" pre-training step improves the semantic accuracy by 10.5 on CSQA and improves the semantic accuracy by 12.0 on OBQA. The experimental results show that the model after "text-to-graph" pre-training is able to generate a fairly exciting explanation graph on other downstream tasks as well. Additionally, this serves as evidence that our methodology enhances the model's capacity for generalization. Observing the generated graph, we find that the explanation graph generated by the model without pre-training only mechanically merely connects the question and the answer with a short path, and even generates some meaningless relations in it. More case study on these two datasets is discussed in Appendix E.2. ## 7 Related Work ### Explanation generation In the task of explanation generation, the model takes a piece of natural language text as input and outputs an explanation in various formats, including (a) textual highlights using a subset of the input textZaidan et al. (2007); Lei et al. (2016); Yu et al. (2019); DeYoung et al. (2020), (b) natural language explanationCamburu et al. (2018); Wiegreffe et al. (2020); Zhang et al. (2020); Inoue et al. 
2021), and (c) structured explanations, including semi-structured text explanations (Khot et al., 2019; Jhamtani and Clark, 2020; Geva et al., 2021; Ye et al., 2020) and structured explanation graphs (Jansen et al., 2018; Xie et al., 2020; Saha et al., 2021). Explanations based on natural language are more expressive and more easily understood by readers, but their evaluation from the perspective of reasoning is often not standardized and rigorous (Wiegreffe and Marasovic, 2021). Therefore, structured explanations have attracted more and more attention from researchers, as they are better evaluated in terms of structure and semantics. In this paper, we choose ExplaGraphs (Saha et al., 2021) as the main experiment dataset because it is constructed based on commonsense knowledge and comes with relatively comprehensive automated evaluation metrics.

### Structured content generation from language models

There are many kinds of works that generate structured content through language models, one of which is graph generation. Graph generation methods can be combined with various tasks, such as event influence graph generation (Tandon et al., 2019; Madaan et al., 2020), temporal graph generation (Rajagopal et al., 2021; Madaan and Yang, 2021), entailment tree generation (Dalvi et al., 2021), knowledge graph completion (Li et al., 2016; Bosselut et al., 2019), and methods for generating graphs with no specific semantics attached (Simonovsky and Komodakis, 2018; Shi et al., 2020; Hwang et al., 2021). In some other semantic-parsing-related tasks, there is also generation of structured content, such as script generation (Sakaguchi et al., 2021; Dalvi et al., 2019; Shi et al., 2022) and program generation (Chen et al., 2021; Liu et al., 2021). The graphs generated in our paper cover all kinds of commonsense reasoning tasks. Besides, the main role of our generated graph is to explain the internal commonsense reasoning process based on the input.

## 8 Conclusion

In this paper, we propose a pre-training framework **EG3P** for the structured explanation generation task. Distinct from existing pre-training tasks based on natural language text, **EG3P** focuses more on training the mapping between natural language and graphs. Meanwhile, due to the high cost of manual tagging, we construct queries from synthetic graphs automatically to obtain a large-scale corpus to support the pre-training process. Using ExplaGraphs as the main benchmark, experimental results show that **EG3P** can significantly improve the ability of the model to generate explanations. In addition, on the other datasets, the results of the model after pre-training also show a considerable improvement. Our approach offers a new possibility for addressing the challenges of limited labeled data in natural language processing tasks. In the future, the ability of the model to generate explanation graphs will benefit from more datasets released with labels and from more objective evaluation indicators being put forward.

Table 3: The semantic accuracy of the graphs generated on CommonsenseQA and OpenbookQA by human evaluation. w/ (w/o) pre-training means with (without) the step of "text-to-graph" pre-training.

| Dataset | w/o pre-training | w/ pre-training |
| --- | --- | --- |
| CommonsenseQA | 29.0 | 39.5 |
| OpenbookQA | 34.0 | 46.0 |
Additionally, while our current approach processes graphs as strings, utilizing a model architecture that is more suitable for graph generation may further enhance the model's graph generation ability.

## Limitations

In our experiments, the most significant limitation is the lack of computational resources. Experimental results in this paper and previous work [14] have shown that larger models could lead to higher structural and semantic accuracy of explanation graphs in this task. Constrained by computational resources, BART-Large is the largest model on which we could perform the complete set of experiments. We believe that graph generation would benefit further if sufficient resources were available to perform synthetic-data-based pre-training on a larger model. In addition, since the evaluation metrics for graph generation tasks are still incomplete, we can only evaluate a few samples manually beyond the metrics of the dataset, which is more subjective. As more evaluation methods with standardized procedures are proposed, experimental results will be evaluated more objectively.

## Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments. This work was supported by the National Natural Science Foundation of China (No.61976068 and No.62277002).

## Ethics Statement

In this paper, we propose a pre-training framework based on synthetic data to improve the ability of the model to generate explanation graphs. The datasets and models we used are all open-source, and all references that draw on the work of others are marked with citations. In the process of constructing the corpus, we ensure that all the triples come from ConceptNet, an open-source knowledge base. All selection steps are completely random, so no bias or discrimination is introduced in subsequent steps. Finally, our approach is designed to improve the interpretability of the model and does not deviate from the semantics of the input text, so there are no ethical issues in this work.
2304.01335
Charting the Topography of the Neural Network Landscape with Thermal-Like Noise
The training of neural networks is a complex, high-dimensional, non-convex and noisy optimization problem whose theoretical understanding is interesting both from an applicative perspective and for fundamental reasons. A core challenge is to understand the geometry and topography of the landscape that guides the optimization. In this work, we employ standard Statistical Mechanics methods, namely, phase-space exploration using Langevin dynamics, to study this landscape for an over-parameterized fully connected network performing a classification task on random data. Analyzing the fluctuation statistics, in analogy to thermal dynamics at a constant temperature, we infer a clear geometric description of the low-loss region. We find that it is a low-dimensional manifold whose dimension can be readily obtained from the fluctuations. Furthermore, this dimension is controlled by the number of data points that reside near the classification decision boundary. Importantly, we find that a quadratic approximation of the loss near the minimum is fundamentally inadequate due to the exponential nature of the decision boundary and the flatness of the low-loss region. This causes the dynamics to sample regions with higher curvature at higher temperatures, while producing quadratic-like statistics at any given temperature. We explain this behavior by a simplified loss model which is analytically tractable and reproduces the observed fluctuation statistics.
Theo Jules, Gal Brener, Tal Kachman, Noam Levi, Yohai Bar-Sinai
2023-04-03T20:01:52Z
http://arxiv.org/abs/2304.01335v2
# Charting the Topography of the Neural Network Landscape with Thermal-Like Noise ###### Abstract The training of neural networks is a complex, high-dimensional, non-convex and noisy optimization problem whose theoretical understanding is interesting both from an applicative perspective and for fundamental reasons. A core challenge is to understand the geometry and topography of the landscape that guides the optimization. In this work, we employ standard Statistical Mechanics methods, namely, phase-space exploration using Langevin dynamics, to study this landscape for an over-parameterized fully connected network performing a classification task on random data. Analyzing the fluctuation statistics, in analogy to thermal dynamics at a constant temperature, we infer a clear geometric description of the low-loss region. We find that it is a low-dimensional manifold whose dimension can be readily obtained from the fluctuations. Furthermore, this dimension is controlled by the number of data points that reside near the classification decision boundary. Importantly, we find that a quadratic approximation of the loss near the minimum is fundamentally inadequate due to the exponential nature of the decision boundary and the flatness of the low-loss region. This causes the dynamics to sample regions with higher curvature at higher temperatures, while producing quadratic-like statistics at any given temperature. We explain this behavior by a simplified loss model which is analytically tractable and reproduces the observed fluctuation statistics. The optimization of neural networks lies at the core of modern learning methodology, with the goal of minimizing a loss function that quantifies model performance. Naturally, the landscape of the loss function plays a critical role in guiding the optimization process and its properties are closely linked to its performance and generalization capacities [1, 2]. However, the high dimensionality of the parameter space, the non-convexity of the loss function, and the presence of various sources of noise make it challenging to characterize its geometry [3, 4] and subsequently to analyze the optimization process over this complicated landscape. Previous works have studied the topography of the loss landscape and found a number of interesting features. Firstly, it was established that there exists a wealth of global minima, all connected by low-loss paths, a phenomenon referred to as Linear Mode Connectivity [5, 6, 7, 8, 9, 10, 11]. In the final stages of training the network explores this low-loss region and gradient descent predominantly occurs within a small subspace of weight space [12, 13, 14]. In addition, it was seen that the curvature of the explored region sharpens progressively and depends on the learning rate through a feedback mechanism termed "Edge of Stability" [15, 16, 17]. In this work we study the low loss region by injecting noise in a controlled manner during training. Many previous works have studied the importance of noise in the optimization process, modeling it as a stochastic process. Noise sources might include sampling noise in the estimation of the gradient [18, 19, 20], the numerical discretization of gradient flow [21], noisy data [22, 23], stochastic regularization schemes [24] or other sources. Each such noise source gives rise to different noise properties, which qualitatively affect the optimization dynamics [25, 26]. 
We take a different approach than those described above: we do not use noise to mimic noisy training dynamics, but rather as a probe that allows inferring quantitative geometrical insights about the loss landscape [22, 23]. This is done using standard tools of statistical physics to analyze loss fluctuations, while ensuring that the thermal noise is the only noise source in the system, so that the stochasticity is completely known.

To study the local landscape, we let the system evolve, starting at the minimum, under over-damped Langevin dynamics, defined by the stochastic differential equation \[\mathrm{d}\theta_{t}=-\nabla_{\theta}\mathcal{L}(\theta_{t})\,\mathrm{d}t+\sqrt{2T}\,\mathrm{d}W_{t}, \tag{1}\] where \(\theta\in\mathbb{R}^{N}\) is the vector of the neural weights and biases, \(\mathcal{L}\) is the loss function (to be specified below), \(T\) is the exploration temperature and \(W_{t}\) is a standard \(N\)-dimensional Wiener process. In terms of statistical physics, this is analogous to a system whose phase space coordinates are \(\theta\) and which is described by a Hamiltonian \(\mathcal{L}(\theta)\) in contact with a thermal bath at temperature \(T\). As is well known [27], the long time limit of the probability distribution of \(\theta\) is a Boltzmann distribution, \(p(\theta)\propto e^{-\mathcal{L}(\theta)/T}\), which balances between the gradient and the random noise terms in Eq. (1).

Specifically, we explore the topography of the loss function in the vicinity of a typical minimum, for a simple fully connected network performing a classification task on random data in the over-parameterized regime. Our analysis shows that, for the networks that we studied, the minimum is constrained only in a small number of directions in weight-space, as was previously observed in various contexts and is generally expected in the over-parameterized regime [28, 29, 12, 13, 7, 14, 2]. Furthermore, and in line with previous studies, we find that at a given exploration temperature the fluctuations behave as if \(\mathcal{L}\) is effectively quadratic, with \(N_{c}\) independent degrees of freedom with non-vanishing stiffness. In other words, \(N_{c}\) is the co-dimension of the low-loss manifold in the vicinity of the minimum, which our method allows us to measure directly. However, contrary to previous works and quite counter-intuitively, we show that this picture _does not_ arise from a simple quadratic approximation of \(\mathcal{L}\) around its minimum, as one might naively interpret these observations. Instead, we find that the stiffness associated with the \(N_{c}\) constrained eigendirections depends linearly on \(T\) over many orders of magnitude, which is a distinctly nonlinear feature. As we explain below, this dependence stems from the exponential nature of the "confining walls" surrounding the low-loss region, and the flatness of the landscape far from these walls. This exponential nature is also what gives rise to the seemingly quadratic properties of the loss fluctuations, but this happens through a delicate balance between the exponential walls and the noise, which cannot be captured with a model of a quadratic loss function.
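To make the exploration protocol of Eq. (1) concrete, the following is a minimal NumPy sketch, not the code used in this work: the function name `langevin_explore`, the gradient callback, the toy stiffnesses, the temperature and all step counts are our own illustrative choices, and the update rule anticipates the Euler-Maruyama discretization given in Eq. (7) below.

```python
import numpy as np

def langevin_explore(grad_loss, theta0, T, eta=1e-3, n_steps=100_000, seed=0):
    """Integrate d(theta) = -grad(L) dt + sqrt(2T) dW with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    traj = np.empty((n_steps, theta.size))
    for s in range(n_steps):
        theta = theta - eta * grad_loss(theta) \
                + np.sqrt(2 * eta * T) * rng.standard_normal(theta.size)
        traj[s] = theta
    return traj

# Sanity check on a toy quadratic loss L = 0.5 * sum_i k_i theta_i^2,
# for which equipartition predicts <L> = N_c * T / 2 (cf. Eq. (3) below).
k = np.array([1.0, 4.0, 9.0])          # illustrative stiffnesses
T = 0.1
traj = langevin_explore(lambda th: k * th, np.zeros(3), T)
loss = 0.5 * (k * traj**2).sum(axis=1)
print(loss[20_000:].mean(), 0.5 * k.size * T)  # both close to 0.15
```

On this toy landscape the empirical mean of the sampled loss indeed settles at \(N_{c}T/2\), which is exactly the equipartition result derived in the next section.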
## I Exact predictions for a quadratic loss

Before describing our results, it would be useful to remind the reader what they would expect to observe in the case of a positive-definite quadratic loss function, \(\mathcal{L}=\sum_{i=1}^{N_{c}}\frac{1}{2}k_{i}\Theta_{i}^{2}\), where \(\{\Theta_{i}\}\) are the coefficients of the Hessian's eigenvectors and \(\{k_{i}\}\) are their associated stiffnesses. \(N_{c}\) is the number of dimensions with non-vanishing stiffness. Plugging this into Eq. (1) yields a multivariate Ornstein-Uhlenbeck process which is fully tractable analytically [31]. We briefly summarize here the main results, whose derivations can be found in the supplementary information.

First, the fluctuations of \(\mathcal{L}\) follow a \(\Gamma\)-distribution \[P(\mathcal{L};\alpha,\beta)=\frac{\beta^{\alpha}\mathcal{L}^{\alpha-1}}{\Gamma(\alpha)}\exp(-\beta\mathcal{L}), \tag{2}\] where \(\alpha=N_{c}/2\), \(\beta=1/T\) and \(\Gamma\) is the Gamma function. Second, a direct corollary of Eq. (2) is that the mean and standard deviation of \(\mathcal{L}\) are both proportional to \(T\): \[\begin{split}\mu_{\mathcal{L}}&=\left\langle\mathcal{L}\right\rangle=\tfrac{1}{2}N_{c}T,\\ \sigma_{\mathcal{L}}^{2}&=\left\langle\mathcal{L}^{2}\right\rangle-\left\langle\mathcal{L}\right\rangle^{2}=\tfrac{1}{2}N_{c}T^{2}.\end{split} \tag{3}\] This result, a standard example of the equipartition theorem [32], means that each eigendirection contributes \(\tfrac{1}{2}T\) to the total loss, regardless of its associated stiffness. The "heat capacity" \(C_{h}=\partial\mathcal{L}/\partial T\) simply equals \(N_{c}/2\) and is \(T\)-independent. Lastly, in terms of dynamics, the evolution of each eigendirection is uncorrelated with the others and shows an exponentially decaying correlation. This is quantified by the two-point correlation \[\chi_{g}(t)=\sigma_{g}^{-2}\left[\left\langle g(t_{0})g(t_{0}+t)\right\rangle-\mu_{g}^{2}\right] \tag{4}\] where \(g\) is any time-dependent quantity. For a quadratic loss we have \(\chi_{\Theta_{i}}=\exp\left(-|t|/\tau_{i}\right)\) and the correlation time \(\tau_{i}\) is simply the inverse of the stiffness, \(\tau_{i}=1/k_{i}\). We note that in these terms, the stiffness of the "soft directions" does not need to strictly vanish: \(k_{i}\) should only be low enough that the correlation time \(\tau_{i}\) is so long that the dynamics in this eigendirection do not equilibrate during the simulation time. The auto-correlation of \(\mathcal{L}\) is a sum of such exponentials, \(\chi_{\mathcal{L}}=\sum_{i}e^{-k_{i}|t|}\).

## II Numerical experiment

We consider a classification problem with \(C=3\) classes using a multi-layer perceptron [33], represented by the function \(f(x;\theta)\!:\!\mathbb{R}^{d}\to\mathbb{R}^{C}\). The network is trained on a training dataset \(\{x^{i},y^{i}\}_{i=1}^{D}\) where \(x^{i}\in\mathbb{R}^{d}\) are the inputs and \(y^{i}\in\{0,1\}^{C}\) are one-hot vectors indicating a randomly assigned correct class. The \(\{x^{i}\}\) are drawn from a standard \(d\)-dimensional normal distribution. Full details regarding the architecture of the network and the dataset are given in the supplementary information. The network's output is transformed to a classification prediction via a softmax function.
That is, the estimated probability that an input \(x^{i}\) belongs to class \(k\) is \[p_{k}(x^{i};\theta)=\frac{\exp(f(x^{i};\theta)_{k})}{\sum_{m=1}^{C}\exp(f(x^{i};\theta)_{m})}, \tag{5}\] where \(f(\cdot)_{k}\) denotes the \(k\)-th entry in \(f\). Finally, the loss is taken to be the cross entropy between the predicted and true labels: \[\mathcal{L}=\frac{1}{D}\sum_{i=1}^{D}\ell(x^{i},y^{i},\theta),\qquad\ell=-\sum_{k=1}^{C}y_{k}^{i}\log(p_{k}(x^{i};\theta)). \tag{6}\]

Our main objective is to explore the topography of the loss function in the vicinity of a typical minimum. To find such a minimum, we train the network using the ADAM optimizer [34] for a predefined number of epochs. Since the problem is over-parameterized, after some training, the data is perfectly fitted and the loss becomes essentially zero, up to numerical noise. This stage is denoted as "Adam" in Fig. 1a. To explore the vicinity of this minimum, we then let the system evolve under Eq. (1) using the Euler-Maruyama discretization scheme [35], \[\theta_{s+1}=\theta_{s}-\eta\nabla\mathcal{L}(\theta_{s})+\sqrt{2\eta T}\xi_{s}, \tag{7}\] where \(s\) is the step number, \(\eta=t_{s+1}-t_{s}\) is the discrete time step and \(\xi_{s}\) is a Gaussian random variable with zero mean and unit variance. This exploration is denoted as "Langevin" in Fig. 1. It is seen that the loss increases quickly before reaching a \(T\)-dependent steady state ("thermodynamic equilibrium").

We stress that while the parameter \(\eta\) is reminiscent of the "learning rate" in the machine learning literature, the two are not exactly equivalent. Importantly, in our formalism \(\eta\) serves only as the time discretization and appears explicitly in the noise term, whose \(\sqrt{\eta}\) scaling is necessary in order for the dynamics to converge to the Boltzmann distribution in the limit \(\eta\to 0\) [27]. We also note that the convergence of the probability distribution \(p(\theta)\) in the limit \(\eta\to 0\) is a different concept from the convergence of the gradient descent trajectory to that of gradient flow [21]. As such, \(\eta\) is not a parameter of our exploration protocol but rather of the numerical implementation of Eq. (1), and meaningful results should not depend on \(\eta\).

## III Results: Loss fluctuation statistics

We begin by inspecting the moments of the loss fluctuations, \(\mu_{\mathcal{L}}\) and \(\sigma_{\mathcal{L}}\), shown in Fig. 1c. It is seen that both of them scale linearly with \(T\). First, we note that our measurements of \(\mu_{\mathcal{L}}\) at a given temperature are independent of \(\eta\), as expected. Furthermore, a basic prediction of statistical mechanics relates the variance of \(\mathcal{L}\) in equilibrium with the heat capacity, namely \(\sigma_{\mathcal{L}}^{2}=T^{2}C_{h}(T)\) [32]. In our case of a \(T\)-independent heat capacity this relation reads \(\sigma_{\mathcal{L}}=\sqrt{C_{h}}\,T\), which is numerically verified in Fig. 1c. These results support our claim that the dynamics are thermally equilibrated and follow Boltzmann statistics. Going beyond the moments, Fig. 1b shows the full distribution of the loss fluctuations, which is well described by a Gamma distribution. Fig. 1d shows the distribution parameters \(\alpha\) and \(\beta\), defined in Eq. (2), which are estimated from the empirical loss distributions using standard maximum likelihood estimators. It is seen that the distribution parameter \(\beta\) agrees with the exploration temperature \(T\), i.e.
\(\beta T\approx 1\), over several orders of magnitude in \(T\) and independently of \(\eta\). The number of stiff dimensions, \(N_{c}=2\alpha\), seems to weakly depend on the temperature, decreasing as \(T\) grows. Lastly, we note that the linear dependence of \(\mu_{\mathcal{L}}\) and \(\sigma_{\mathcal{L}}\) on \(T\) is a property of the low-loss region explored by the dynamics at low \(T\), and it is not observed if the thermal dynamics are started immediately after initializing the network. This is shown explicitly in the supplementary information.

Figure 1: (a) Observed loss dynamics during the exploration. First, the network is trained using the ADAM algorithm (black line). Then, the learning algorithm is changed to Eq. (1), where the noise amplitude is controlled by a temperature-like parameter \(T\) (colored lines). Each curve corresponds to a different temperature, all using \(\eta=10^{-2}\). (b) Distribution of the loss fluctuation in steady state, normalized by the temperature. For each distribution, the dashed black line corresponds to a gamma distribution, cf. Eq. (2), whose parameters are found using maximum likelihood estimation. The inset shows the same data in log-linear axes. (c) Temperature dependence of \(\mu_{\mathcal{L}}\) (circles) and \(\sigma_{\mathcal{L}}\) (squares). Each point corresponds to an average over multiple runs. The solid line shows a fit to \(\mu_{\mathcal{L}}=\Delta T\). The dashed line shows the equilibrium prediction \(\sigma_{\mathcal{L}}=\sqrt{C_{h}}\,T\) with the obtained value of \(C_{h}\). (d) Corresponding parameters \(\alpha\) and \(\beta\) for the gamma distribution. The symbols and error bars show the average and standard deviation, respectively, over multiple runs.

All these observations are _quantitatively_ consistent with a picture of a (locally) quadratic loss function. In other words, at each temperature we can interpret the loss statistics as if they were generated by an effective quadratic loss, which has a \(T\)-dependent number of stiff directions, \(N_{c}(T)=2\alpha(T)\). This number, \(N_{c}\approx 20-60\), is significantly lower than both the dimensionality of \(\theta\) (\(N=900\)) and the number of elements in the dataset, \(D=300\). It is also much _larger_ than the number of classes \(C=3\), which was suggested by Fort et al. [7] as the number of outlying large Hessian eigenvalues.

We find that the effective dimension of the low loss manifold is directly related to the number of points that lie close to the decision boundary. To demonstrate this, we examine the loss \(\mathcal{L}\) of Eq. (6) as a sum over the losses of individual sample points, \(\mathcal{L}=D^{-1}\sum_{i}\ell_{i}\). We find numerically that most of the sample points are well classified, contributing negligibly to the total loss. A common way to quantify how many points contribute non-negligibly is the ratio of the \(L_{1}\) and \(L_{2}\) norms of the loss vector [36], \[\phi\left(\{\ell_{i}\}\right)=\frac{\left(\sum_{i=1}^{D}\ell_{i}\right)^{2}}{\sum_{i=1}^{D}\ell_{i}^{2}}, \tag{8}\] where \(\ell_{i}\) is the contribution of the \(i\)-th example to the loss. \(\phi\) is a measure of sparsity, which counts how many entries of \(\{\ell_{i}\}\) contribute appreciably to the sum. For instance, if \(\ell_{1}=\ell_{2}=\cdots=\ell_{k}\) and all other \(\ell_{i}\) vanish then \(\phi=k\). We calculate \(\phi\) for random snapshots of the network during the dynamics, and plot the averaged results in Fig. 2a.
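As a concrete illustration, the sparsity measure of Eq. (8) and the correlation diagnostics of Eq. (4) reduce to a few lines of NumPy. This is a hedged sketch rather than the analysis code of the paper: the function names, the biased autocorrelation estimator, and the numbers in the checks are our own illustrative choices.

```python
import numpy as np

def participation_ratio(per_sample_losses):
    """Sparsity phi of Eq. (8): (sum_i l_i)^2 / sum_i l_i^2."""
    l = np.asarray(per_sample_losses, dtype=float)
    return l.sum() ** 2 / (l ** 2).sum()

def autocorrelation(g):
    """Normalized two-point correlation chi_g of Eq. (4) for a time series g.
    Biased estimator; adequate for lags much shorter than the series length."""
    g = np.asarray(g, dtype=float) - np.mean(g)
    c = np.correlate(g, g, mode="full")[g.size - 1:]
    return c / c[0]

def half_time(chi):
    """First lag at which chi drops below 1/2 (the tau_1/2 of the text)."""
    below = np.nonzero(chi < 0.5)[0]
    return int(below[0]) if below.size else None

# If k per-sample losses are equal and the rest vanish, phi equals k:
l = np.zeros(300)
l[:17] = 0.42
print(participation_ratio(l))            # -> 17.0

# Toy correlation check on an arbitrary oscillating series:
chi = autocorrelation(np.sin(np.linspace(0.0, 20.0, 500)))
print(half_time(chi))                    # lag index where chi < 0.5
```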
It is seen that \(\phi\), the effective number of sample points pinning the decision boundary, quantitatively agrees with \(\alpha\), half the effective number of constrained dimensions in weight space.

### Temperature dependence

However, while the time-independent statistics suggest an effective quadratic loss, the dynamic properties show that the picture is not as simple. Examining again Fig. 1a, one may notice that the temporal dynamics of \(\mathcal{L}\) seem to slow down at lower temperatures. This is readily verified by looking at the loss auto-correlation, cf. Eq. (4), which shows a distinct slowing down at low \(T\), as seen in Fig. 2c. To quantify this, we define the correlation half-time \(\tau_{\frac{1}{2}}\) as the lag time at which \(\chi_{\mathcal{L}}\) decays to \(\frac{1}{2}\). Plotting \(\tau_{\frac{1}{2}}\) as a function of temperature, cf. Fig. 2b, shows a clear dependence \(\tau_{\frac{1}{2}}\propto T^{-1}\).

The fact that \(\tau_{\frac{1}{2}}\) scales as \(T^{-1}\) yields three interesting insights. First, and most importantly, it is inconsistent with a picture of a quadratic loss, which implies that the dynamic timescales are \(T\)-independent, \(\tau_{i}=k_{i}^{-1}\). In contrast, we observe that \(\tau_{\frac{1}{2}}\) changes over 4 orders of magnitude with \(T\). Secondly, while the quadratic analogy might not hold, one may still relate the temporal timescale to the local stiffness, i.e. \(k\sim\tau^{-1}\). If this scaling relation holds, we should expect the eigenvalues of the loss Hessian to scale linearly with \(T\). To test this, we measured the Hessian of the loss at 1000 randomly selected points during the exploration at steady state and calculated their eigenvalues using standard numerical procedures [37, 38]. The distribution of these eigenvalues is plotted in Fig. 3a, clearly showing a linear scaling with \(T\). These observations are manifestly inconsistent with a picture of an effectively quadratic loss: in the quadratic picture \(\mu_{\mathcal{L}}\) increases linearly with \(T\) because the system climbs slightly higher up the confining parabolic walls, whose stiffness is constant. Our observation suggests that the picture is quite different: \(\mu_{\mathcal{L}}\) increases in tandem with the stiffness of the confining walls, and due to a delicate balance the net result is indistinguishable from a quadratic picture, as far as static properties are considered. Below we explain this balance and show that it is related to the exponential nature of the confining walls.

Lastly, we remark that the relation \(\tau_{\frac{1}{2}}\sim T^{-1}\) gives rise to a distance scale \(L\), defined by \(L^{2}=T\tau_{\frac{1}{2}}\). \(L\) is the distance, in parameter space, that \(\theta\) would diffuse over the time \(\tau_{\frac{1}{2}}\), if subject only to isotropic Gaussian noise. Since the diffusion coefficient scales with \(T\), \(L\) is \(T\)-independent. Furthermore, since \(\tau_{\frac{1}{2}}\) is the correlation time, one can also interpret \(L\) as a correlation length, or the distance that two nearby networks need to diffuse away from each other in order for their loss to decorrelate, i.e. produce significantly different predictions. Since this distance scale does not depend on \(T\), we conclude that it is an intrinsic property of the loss landscape, i.e. a characteristic length scale in weight space. To demonstrate the effect of this length scale, we performed another numerical experiment: Starting from the minimum (the end of training phase I in Fig.
2), we let the system diffuse freely, i.e. evolve in time according to Eq. (1) but without the gradient term. This procedure samples points uniformly and isotropically around the starting point. Indeed, Fig. 3b shows that for distances smaller than \(L\) the loss does not deviate significantly from its minimum value. At larger distances, \(\mathcal{L}\) changes by orders of magnitude over a relatively small distance.

Figure 2: (a) The sparsity \(\phi\), cf. Eq. (8), as a function of \(T\). In gray we overlay our estimations of \(\alpha\), plotted in Fig. 1. It is seen that \(\phi\) quantitatively agrees with \(\alpha\), half the effective number of constrained dimensions of the low-loss manifold. (b) Temperature dependence of \(\tau_{\frac{1}{2}}\). The measurement was repeated over multiple runs, and the plot shows the average (points) and maximum and minimum values (color shading). The black line shows a power law dependence \(\tau_{\frac{1}{2}}=\frac{L^{2}}{T}\). (c) Autocorrelation of the loss (cf. Eq. (4)) in steady-state. The correlation half-time \(\tau_{\frac{1}{2}}\) is the time for which \(\chi_{\mathcal{L}}=0.5\). It is seen that the auto-correlation decays logarithmically at large \(\Delta t\). (d) The same data as in panel c, plotted as a function of the rescaled time lag \(2T\Delta t\). The curves for different \(T\) collapse to a single curve, except at high temperature and long times.

### Summary of the numerical observations

We summarize here the main properties of the loss fluctuations in the vicinity of the minimum, described above:

* Both \(\mu_{\mathcal{L}}\) and \(\sigma_{\mathcal{L}}\) scale linearly with the temperature \(T\), as one would expect from a quadratic loss, cf. Fig. 1c.
* Interpreting the fluctuations as if they were generated from a quadratic loss, the effective number of degrees of freedom is found to be small and weakly \(T\)-dependent, cf. Fig. 1d. In addition, it is closely related to the number of sample points that lie close to the decision boundary, cf. Fig. 2a.
* The correlation time \(\tau_{1/2}\) scales as \(1/T\) and the Hessian eigenvalues scale as \(T\), which is inconsistent with a quadratic loss and gives rise to an emergent \(T\)-independent length scale \(L\), cf. Fig. 2 and Fig. 3.

## IV An analytical toy model

In order to explain our numerical observations, one needs to inspect the cross entropy loss, Eq. (6). For simplicity, consider a network performing binary classification on a single training example \(\{x,y\}\in\mathbb{R}\times\mathbb{R}\). Since the network is overparameterized, the networks in the low-loss region that we explore classify most of the training samples perfectly. These examples contribute negligibly to the total gradient. However, some samples lie close to the decision boundary. We focus on one such sample \(\{x,y\}\) and assume without loss of generality that the correct class is \(y=1\). Taking a linear approximation of \(f\), the contribution of this sample to the loss is (see supplementary material for derivation) \[\ell(x;\theta)=\log\left(1+e^{f(x;\theta)}\right),\qquad f=\sum_{i}a_{i}\theta_{i}+b \tag{9}\] In this description, the only property of \(\theta_{i}\) that affects the loss is its projection on \(a_{i}\), the direction in weight-space that moves the decision boundary towards the sample point. Since all other directions in weight space are irrelevant, we ignore them and examine a one-dimensional loss function \[\ell_{1D}(\theta)=\log\left(1+e^{a\theta+b}\right)\approx Be^{a\theta}. \tag{10}\]
The approximation in Eq. (10) holds in the vicinity of the minimum because the point is well classified and the exponent is expected to be small. We define \(B\equiv e^{b}\), and assume for concreteness that \(a>0\). The statistical mechanics of \(\ell_{1D}\) can be obtained in closed form by calculating the partition function \(Z(\beta)=\frac{1}{\theta_{0}}\int_{-\infty}^{\infty}e^{-\beta\ell_{1D}(x;\theta)}d\theta\), where \(\theta_{0}\) is a resolution scale required to ensure that the partition function is dimensionless. Formally, Eq. (10) is minimized at \(\theta\to-\infty\), which effectively sets the decision boundary at infinity and prevents the integral which defines \(Z\) from converging. To avoid this unphysical behavior we impose a hard cut-off at \(\theta=-\theta_{*}\), where \(\theta_{*}>0\), which would realistically arise when the decision boundary wanders far away and meets another sample point. With this cutoff, the partition function \(Z(\beta)\) can be obtained analytically in closed form and consequently all other "thermodynamic" quantities can be calculated (see supplementary information for the derivations).

Figure 3: (a) The cumulative distribution function of the Hessian eigenvalues sampled during dynamics with \(\eta=10^{-2}\), for various values of \(T\). Very small negative eigenvalues are excluded from this plot. It is seen that at higher temperatures the network explores regions with larger eigenvalues. Inset: the same data plotted as a function of \(\lambda/T\) shows a collapse of the distributions, suggesting that the eigenvalues scale linearly with \(T\). (b) The loss as a function of distance in weight space during the exploration. The warm-colored curves show Langevin exploration (same color code as panel a). The black line shows the behavior in the case of pure diffusion (without gradient descent). The dashed line marks \(L\), the characteristic distance in weight space obtained from Fig. 2c.

Figure 4: The loss function \(\ell_{1D}\), cf. Eq. (10), is plotted in blue. The probability distribution of \(\theta\), \(p(\theta)\propto e^{-\ell_{1D}/T}\), is shown for three temperatures. It is seen that the probability distributions are qualitatively different from the probability distribution generated by a quadratic loss, \(p_{Q}\), which is Gaussian. For comparison, we plot \(p_{Q}\) obtained from a quadratic approximation at \(T=10^{-1}\). For this figure we chose \(B=1\) and \(\theta_{*}=20\).

The main finding is that this model reproduces the properties of the loss fluctuations described above. Namely, in the limit \(a\theta_{*}\gg 1\) and \(T\ll 1\), both \(\mu_{\mathcal{L}}\), \(\sigma_{\mathcal{L}}\) _and_ the average curvature scale linearly with \(T\), up to logarithmic corrections: \[\begin{split}&\mu_{\ell_{1D}}\simeq\frac{T}{a\theta_{*}-\gamma+\log(T/B)},\\ &\sigma_{\ell_{1D}}^{2}\simeq\frac{T^{2}\left(a\theta_{*}-\gamma+\log\left(T/B\right)-1\right)}{\left(a\theta_{*}-\gamma+\log\left(T/B\right)\right)^{2}},\\ & H_{\ell_{1D}}=\left\langle\nabla_{\theta}^{2}\ell_{1D}\right\rangle\simeq\frac{a^{2}T}{a\theta_{*}-\gamma+\log(T/B)}.\end{split} \tag{11}\] Here \(\gamma\simeq 0.577\) is the Euler-Mascheroni constant. Finally, because the loss is approximately exponential in \(\theta\), it features an intrinsic length scale \(L\simeq a^{-1}\). We note that this length scale depends on the gradient of the network and therefore in general might differ between two different sample points that reside near the decision boundary. In Fig.
4, we show the full loss given in Eq. (10), and the resulting probability distribution \(p(\theta)\propto e^{-\ell_{1D}/T}\) for various temperatures. It is seen that, due to the flatness of the loss, \(p(\theta)\) is essentially constant at negative \(\theta\) and drops sharply at the decision boundary. As \(T\) grows, the probability explores regions with higher loss and, due to the exponential dependence on \(\theta\), higher curvature. We compare these results against a quadratic approximation for \(\ell_{1D}\), expanded around \(\theta_{0}\) defined by \(\ell_{1D}(\theta_{0})=\mu_{\ell_{1D}}(T)\). It is seen that a quadratic loss is an extremely poor approximation in the low temperature limit. ## V Summary and Conclusions To summarize our findings, we have used Langevin dynamics to investigate the geometry of the low-loss manifold of an overparameterized neural net. We find that the fluctuation statistics of the loss are a powerful probe that allows inferring geometrical insights about the loss topography. For the network studied here - an overparameterized fully connected neural net performing a classification task on randomly distributed data - the picture that emerges is that in the low loss region, which is explored at low temperatures, most of the sample points are well classified and do not contribute significantly to the loss. However, a small number of sample points "pin" the decision boundary, which fluctuates around them. At a given temperature, these fluctuations have the same statistics as fluctuations produced by a quadratic loss function, whose effective number of degrees of freedom is directly related to the number of data points constraining the decision boundary and can be immediately read off the fluctuation statistics. However, we find that a quadratic description of the loss is fundamentally inadequate: the effective stiffness scales linearly with \(T\), and correspondingly the characteristic time scale of loss fluctuations grows at low temperatures as \(1/T\). These observations cannot be reconciled with a quadratic approximation of the loss. Rather, we suggest that this behavior is due to the exponential nature of the cross-entropy loss in the low \(T\) regime. As we demonstrate analytically, an exponential loss function in 1D reproduces the observed fluctuation statistics in the limit of low temperature. These conclusions, of course, pertain to the simplified case studied here - a fully connected network classifying random data. Understanding how they apply to structured data or more complicated network architectures is left for future studies. ## VI Acknowledgements We thank Nadav Cohen, Boaz Barak, Zohar Ringel and Stefano Recanaetsi for fruitful discussions. YBS was supported by research grant ISF 1907/22 and Google Gift grant. NL would like to thank the Milner Foundation for the award of a Milner Fellowship. TK would like to acknowledge Lineage logistics for their funding. TJ was partly supported by the Raymond and Beverly Sackler Post-Doctoral Scholarship.
2303.12488
The calculation of the distribution function of a strictly stable law at large X
The paper considers the problem of calculating the distribution function of a strictly stable law at $x\to\infty$. To solve this problem, an expansion of the distribution function in a power series was obtained, and an estimate of the remainder term was also obtained. It was shown that in the case $\alpha<1$ this series was convergent for any $x$, in the case $\alpha=1$ the series was convergent at $N\to\infty$ in the domain $|x|>1$, and in the case $\alpha>1$ the series was asymptotic at $x\to\infty$. The case $\alpha=1$ was considered separately and it was demonstrated that in that case the series converges to the generalized Cauchy distribution. An estimate for the threshold coordinate $x_\varepsilon^N$ was obtained which determined the area of applicability of the obtained expansion. It was shown that in the domain $|x|\geqslant x_\varepsilon^N$ this power series could be used to calculate the distribution function, which completely solved the problem of calculating the distribution function at large $x$.
Viacheslav V. Saenko
2023-03-22T11:57:58Z
http://arxiv.org/abs/2303.12488v1
# The Calculation of the Distribution Function of a Strictly Stable Law at Large X

###### Abstract

The paper considers the problem of calculating the distribution function of a strictly stable law at \(x\to\infty\). To solve this problem, an expansion of the distribution function in a power series was obtained, and an estimate of the remainder term was also obtained. It was shown that in the case \(\alpha<1\) this series was convergent for any \(x\), in the case \(\alpha=1\) the series was convergent at \(N\to\infty\) in the domain \(|x|>1\), and in the case \(\alpha>1\) the series was asymptotic at \(x\to\infty\). The case \(\alpha=1\) was considered separately and it was demonstrated that in that case the series converges to the generalized Cauchy distribution. An estimate for the threshold coordinate \(x_{\varepsilon}^{N}\) was obtained which determined the area of applicability of the obtained expansion. It was shown that in the domain \(|x|\geqslant x_{\varepsilon}^{N}\) this power series could be used to calculate the distribution function, which completely solved the problem of calculating the distribution function at large \(x\).

## 1 Introduction

The main method for calculating the probability density and the distribution function of stable laws is the use of integral representations of these quantities. The reason for this situation is the impossibility of obtaining expressions for these quantities in terms of elementary functions in the general case. The exception comprises only five cases: the Levy distribution (\(\alpha=1/2,\theta=1\)), the symmetric Levy distribution (\(\alpha=1/2,\theta=-1\)), the Cauchy distribution (\(\alpha=1,\theta=0\)), the Gaussian distribution (\(\alpha=2,\theta=0\)) and the generalized Cauchy distribution (\(\alpha=1,-1\leqslant\theta\leqslant 1\)). When performing the inverse Fourier transform of the characteristic function, it is possible to obtain two types of integral representations. The first type includes representations expressing the probability density and the distribution function in terms of an improper integral of an oscillating function. The works [1, 2] are devoted to obtaining and studying such representations. The second type includes integral representations expressing the probability density and distribution function in terms of a definite integral of a monotone function. The works [3, 4, 5] are devoted to obtaining and studying integral representations in the parameterization "B", the works [6, 7] are devoted to integral representations in the parameterization "M", and the paper [8] is devoted to the parameterization "C". Here, to label the various parameterizations of the characteristic function of a stable law, we use the notation introduced in the book [4], and we will adhere to this notation throughout the text. Integral representations of the second type are the most widely used because of their convenience. The method of the inverse Fourier transform which leads to this type of integral representation is called the stationary phase method. The convenience of such representations lies in the fact that the integrand is a monotonic function, and in a wide range of coordinates and parameters there are no difficulties in calculating the definite integral of such a function. The integral representations for the parameterization "M" of the characteristic function have gained particular popularity.
These integral representations served as the basis for the development of several software products [9, 10, 11, 12, 13]. Both the first and the second type of integral representation of a stable law have their disadvantages. The main difficulty in using integral representations of the first type is the oscillating integrand. In some cases, numerical methods cannot calculate the integral of such a function. In particular, the work [1] indicates the following problems for the integral representation in the parameterization "M":

1) in the case \(\alpha<0.75\) the integration domain becomes very large, which leads to difficulties in numerical integration;
2) if \(\beta\neq 0\) and \(0<|\alpha-1|<0.001\) there are problems in calculating the term with \(\tan(\pi\alpha/2)(t-t^{\alpha})\);
3) when \(x\) is very large, the integrand oscillates very quickly.

In the paper [2] the authors propose modifying the standard quadrature numerical integration algorithm to adapt it to the calculation of integrals of an oscillating function. This made it possible to reduce the lower limit of the parameter \(\alpha\) from \(0.75\) to \(0.5\). To calculate the probability density at large \(x\) it is proposed to use the expansion of the probability density in a power series. However, the paper points out that the proposed scheme is not applicable for symmetric distributions in the case \(\alpha<0.5\), and for asymmetric distributions in the cases \(\alpha<0.5\) and \(0.9<\alpha<1.1\).

The second type of integral representation also has some features that lead to difficulties in numerical integration. The cause of the calculation difficulties is the behavior of the integrand at very small and very large values of the coordinate \(x\). In the case of the integral representation for the probability density, in these two regimes the integrand has the form of a very narrow peak. As a result, numerical integration algorithms cannot resolve this peak and give an incorrect integration result. This behavior of the integrand is pointed out in the articles [2, 6, 10, 14]. To eliminate this problem, various numerical algorithms are proposed in the papers [6, 10, 11]. However, all these algorithms increase the accuracy of calculations but do not eliminate the problem completely. To solve this problem, it is expedient to use approaches that do not possess any features in these regions that could lead to computational difficulties. The most suitable idea is to use expansions of the probability density and distribution function in power series at \(x\to 0\) and \(x\to\infty\). The articles [15, 16] show that the use of expansions of stable laws in power series at \(x\to 0\) and \(x\to\infty\) makes it possible to completely solve the problem of calculating stable laws at very small and very large \(x\). However, in these articles the problem of calculating the distribution function of a strictly stable law in the case \(x\to\infty\) was left out of consideration. Therefore, the main purpose of this paper is to fill this gap. This paper considers the problem of calculating the distribution function in the case \(x\to\infty\) with the characteristic function \[\hat{g}(t,\alpha,\theta,\lambda)=\exp\left\{-\lambda|t|^{\alpha}\exp\{-i\frac{\pi}{2}\alpha\theta\,{\rm sign}\,t\}\right\},\quad t\in{\bf R}, \tag{1}\] where \(\alpha\in(0,2]\), \(|\theta|\leqslant\min(1,2/\alpha-1)\), \(\lambda>0\).
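For illustration, the characteristic function (1) is straightforward to evaluate numerically. The following Python sketch is not part of the software products cited above; the function name and the sanity checks are our own illustrative choices, the checks using two of the elementary cases listed in the introduction: the Cauchy distribution (\(\alpha=1,\theta=0\)) and the Gaussian case (\(\alpha=2,\theta=0\), for which \(\hat{g}(t)=e^{-t^{2}}\) at \(\lambda=1\)).

```python
import numpy as np

def cf_strictly_stable(t, alpha, theta, lam=1.0):
    """Characteristic function (1) in the parameterization 'C'."""
    t = np.asarray(t, dtype=float)
    phase = np.exp(-1j * (np.pi / 2) * alpha * theta * np.sign(t))
    return np.exp(-lam * np.abs(t) ** alpha * phase)

t = np.linspace(-5.0, 5.0, 11)
print(np.allclose(cf_strictly_stable(t, 1.0, 0.0), np.exp(-np.abs(t))))  # Cauchy
print(np.allclose(cf_strictly_stable(t, 2.0, 0.0), np.exp(-t**2)))       # Gaussian case
```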
According to the terminology introduced in the book [4], this characteristic function corresponds to the parameterization "C". In the paper [8] the inverse Fourier transform of this characteristic function was performed using the stationary phase method, and integral representations for the probability density and distribution function were obtained (see Appendix A). As one can see, formula (32) expresses the distribution function in terms of a definite integral and belongs to the second type of integral representations. In the general case, the integrand in (33) is a monotonic function varying from \(0\) to \(1\) as the integration variable changes from the lower limit point \(\varphi=-\pi\theta/2\) to the upper limit point \(\varphi=\pi/2\). In the case \(0<\alpha<1\) it is a decreasing function, and in the case \(1<\alpha\leqslant 2\) it is an increasing function. However, at very small and very large values of \(x\) the change in the function from \(0\) to \(1\) occurs so fast that numerical integration algorithms cannot recognize it. As a result, this leads to an incorrect calculation of the integral and indicates that in this range of coordinates it is no longer possible to use the integral representation (32) to calculate the distribution function. In this paper, to calculate the distribution function in the specified range of coordinates, it is proposed to use the expansion of the distribution function in a power series at \(x\to\infty\). To do this, such an expansion of the distribution function will be obtained and the conditions for the applicability of this expansion will be determined.

## 2 Representation of the distribution function as a power series

We will obtain the expansion of the distribution function in a series at \(x\to\infty\) for a strictly stable law with the characteristic function (1). Without loss of generality, we will assume that the scale parameter is \(\lambda=1\). It is generally accepted to call strictly stable laws with the scale parameter \(\lambda=1\) standard strictly stable laws, and shorthand notation is used for them: the characteristic function is denoted as \(\hat{g}(t,\alpha,\theta,1)\equiv\hat{g}(t,\alpha,\theta)\), the probability density as \(g(x,\alpha,\theta,1)\equiv g(x,\alpha,\theta)\), the distribution function as \(G(x,\alpha,\theta,1)\equiv G(x,\alpha,\theta)\), and a strictly stable random variable as \(Y(\alpha,\theta,1)\equiv Y(\alpha,\theta)\). Further in the text we will use this notation. It should be noted that to transform a standard strictly stable law into a strictly stable law with an arbitrary \(\lambda\) one can use remark 5 and remark 7 from the paper [8] (see also [4, 17]). We also need the inversion property, which for a standard strictly stable law with the characteristic function (1) has the form

**Property 1.**_For any admissible parameters \((\alpha,\theta)\)_ \[Y(\alpha,-\theta)\stackrel{{ d}}{{=}}-Y(\alpha,\theta).\]

The proof of this property was given in the paper [8] (see also [4, 5]). In terms of the distribution function \(G(x,\alpha,\theta)\) this property takes the form \[G(-x,\alpha,\theta)=1-G(x,\alpha,-\theta). \tag{2}\] The convenience of this property lies in the fact that, when studying the distribution function, it allows us to confine ourselves to considering only the case \(x\geqslant 0\). Expressions for the case \(x<0\) are obtained using this formula. To solve the stated problem, we need to expand the probability density in a series at \(x\to\infty\).
A similar expansion was obtained in the article [16], where the following theorem was proved. **Theorem 1.**_In the case \(x\to\pm\infty\) for any admissible set of parameters \((\alpha,\theta)\) except for the values \(\theta=\pm 1\) for the probability density \(g(x,\alpha,\theta)\) the representation in the form of a power series is valid_ \[g(x,\alpha,\theta)=g_{N}^{\infty}(|x|,\alpha,\theta^{*})+R_{N}^{\infty}(|x|, \alpha,\theta^{*}),\] _where \(\theta^{*}=\theta\,{\rm sign}(x)\) and_ \[g_{N}^{\infty}(x,\alpha,\theta)=\frac{1}{\pi}\sum_{n=0}^{N-1}\frac{(-1)^{n+1} }{n!}\Gamma(\alpha n+1)\sin\left(\tfrac{\pi}{2}\alpha n(1+\theta)\right)x^{- \alpha n-1},\quad x>0, \tag{3}\] \[|R_{N}^{\infty}(x,\alpha,\theta)|\leqslant\frac{x^{-\alpha N-1}}{\pi N!} \left(\Gamma(\alpha N+1)+x^{-\alpha}\Gamma(\alpha(N+1)+1)\right),\quad x>0.\] Using this theorem, one can obtain an expansion of the distribution function at \(x\to\infty\). As a result, the following theorem turns out to be true. **Theorem 2.**_For any admissible values of parameters \((\alpha,\theta)\) except for the values \(\theta=\pm 1\) at \(x\to\pm\infty\) for the distribution function \(G(x,\alpha,\theta)\) the representation in the form of a power series is valid_ \[G(x,\alpha,\theta)=\tfrac{1}{2}(1+{\rm sign}(x))-{\rm sign}(x)\left(G_{N}^{ \infty}(|x|,\alpha,\theta^{*})+\mathcal{R}_{N}^{\infty}(|x|,\alpha,\theta^{*} )\right), \tag{4}\] _where \(\theta^{*}=\theta\,{\rm sign}(x)\),_ \[G_{N}^{\infty}(x,\alpha,\theta) =\frac{1}{\pi}\sum_{n=1}^{N-1}\frac{(-1)^{n+1}}{n!}\Gamma(\alpha n )\sin\left(\tfrac{\pi}{2}\alpha n(1+\theta)\right)x^{-\alpha n},\quad x>0, \tag{5}\] \[|\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)| \leqslant\frac{x^{-\alpha N}}{\pi N!}\left(\Gamma(\alpha N)+x^{- \alpha}\Gamma(\alpha(N+1))\right),\quad x>0. \tag{6}\] **Proof** To prove it we will use theorem 1. Without loss of generality, we will consider the case \(x>0\). The case \(x<0\) can be obtained using the inversion property (2). By definition, at \(x>0\) the distribution function has the form \[G^{(+)}(x,\alpha,\theta)=1-\int_{x}^{\infty}g(\xi,\alpha,\theta)d\xi,\quad x>0, \tag{7}\] where \(g(x,\alpha,\theta)\) is the probability density of a strictly stable law and the superscript "\((+)\)" shows that this expression determines the distribution function on the positive part of the semiaxis. Thus, the expansion of the distribution function in a series is determined by the expansion of the probability density in a series. It is known that the expansion of any function in a Taylor series consists of the \(N\)-th partial sum and the remainder term. Consequently, the expansion of the probability density \(g(x,\alpha,\theta)\) can be written in the form \[g(x,\alpha,\theta)=g_{N}^{\infty}(x,\alpha,\theta)+R_{N}^{\infty}(x,\alpha, \theta),\quad x>0, \tag{8}\] where \(g_{N}^{\infty}(x,\alpha,\theta)\) is the \(N\)-th partial sum and \(R_{N}^{\infty}(x,\alpha,\theta)\) is the remainder term of the series. In the case \(x\to\infty\) the first summand is determined by the expression (3) and for the remainder term, we use the expression obtained in the article [16] \[R_{N}^{\infty}(x,\alpha,\theta)=\frac{1}{\pi x}\Re ie^{-i\frac{\pi}{2}\theta} \int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}R_{N} \left(-\left(\frac{i\tau}{x}\right)^{\alpha}\right)d\tau,\quad x>0, \tag{9}\] where \(R_{N}(y)=\frac{y^{N}}{N!}e^{y\zeta}\), \((0<\zeta<1)\) is the remainder term in the Lagrange form. 
It should be noted that in the case \(x\to\infty\) the expression (8), as well as theorem 1 and the expression (9), are valid when the condition \(\tau/x\to 0\) is satisfied, where \(\tau\) is the integration variable in the inverse Fourier transform formula; in particular, the integration variable in the formula (9). See the paper [16] for details. Hence, here and further in the text, we will assume everywhere that \(\tau/x\to 0\).

To obtain the expansion of the distribution function in a power series at \(x\to\infty\) we substitute the expression (8) into the expression (7). As a result, we get \[G^{(+)}(x,\alpha,\theta)=1-\int_{x}^{\infty}g_{N}^{\infty}(\xi,\alpha,\theta)d\xi-\int_{x}^{\infty}R_{N}^{\infty}(\xi,\alpha,\theta)d\xi,\quad x>0, \tag{10}\] where \(g_{N}^{\infty}(x,\alpha,\theta)\) has the form (3), and \(R_{N}^{\infty}(x,\alpha,\theta)\) is determined by the expression (9). Interchanging the order of integration and summation in the second summand, we obtain \[G_{N}^{\infty}(x,\alpha,\theta)=\int_{x}^{\infty}g_{N}^{\infty}(\xi,\alpha,\theta)d\xi=\frac{1}{\pi}\sum_{n=0}^{N-1}\frac{(-1)^{n+1}}{n!}\Gamma(\alpha n+1)\sin\left(\frac{\pi}{2}\alpha n(1+\theta)\right)\int_{x}^{\infty}\xi^{-\alpha n-1}d\xi=\\ =\frac{1}{\pi}\sum_{n=0}^{N-1}\frac{(-1)^{n+1}}{n!}\Gamma(\alpha n)\sin\left(\frac{\pi}{2}\alpha n(1+\theta)\right)x^{-\alpha n},\quad x>0.\] Note that at \(n=0\) the corresponding summand in the sum is equal to zero. Therefore, the summation can be started from \(n=1\). As a result, we arrive at the expression (5).

Now we obtain an expression for the remainder term. Using the expression (9) and changing the order of integration where necessary, for the third summand in (10) we obtain \[\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)=\int_{x}^{\infty}R_{N}^{\infty}(\xi,\alpha,\theta)d\xi=\frac{1}{\pi}\Re ie^{-i\frac{\pi}{2}\theta}\int_{x}^{\infty}\frac{d\xi}{\xi}\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}R_{N}\left(-\left(\frac{i\tau}{\xi}\right)^{\alpha}\right)d\tau=\\ =\frac{1}{\pi}\Re ie^{-i\frac{\pi}{2}\theta}\int_{x}^{\infty}\frac{d\xi}{\xi}\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}\frac{1}{N!}\left(-\left(\frac{i\tau}{\xi}\right)^{\alpha}\right)^{N}\exp\left\{-\left(\frac{i\tau}{\xi}\right)^{\alpha}\zeta\right\}d\tau=\\ =\frac{1}{\pi N!}\Re ie^{-i\frac{\pi}{2}\theta}\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}(-(i\tau)^{\alpha})^{N}d\tau\int_{x}^{\infty}\xi^{-\alpha N-1}\exp\left\{-\left(\frac{i\tau}{\xi}\right)^{\alpha}\zeta\right\}d\xi \tag{11}\] Unfortunately, we cannot calculate this integral, since the exact value of the variable \(\zeta\) is unknown. It is only known that this variable takes values from the interval \(0<\zeta<1\). Nevertheless, it is possible to estimate the value of this integral. We consider \(|\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)|\). Taking into account that the case \(\tau/x\to 0\) is being considered, we can expand the multiplier \(\exp\left\{-\left(\frac{i\tau}{x}\right)^{\alpha}\zeta\right\}\) in a Taylor series and retain only the terms of the first order of smallness.
We have \[\exp\left\{-\left(\frac{i\tau}{x}\right)^{\alpha}\zeta\right\}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!}\left(\left(\frac{i\tau}{x}\right)^{\alpha}\zeta\right)^{k}\approx 1-\zeta\left(\frac{i\tau}{x}\right)^{\alpha}.\] To calculate the obtained integral we will also need the following formula given in [18] (see §1.5, formula (31)) \[\int_{0}^{\infty}t^{\gamma-1}e^{-ct\cos\beta-ict\sin\beta}dt=\Gamma(\gamma)c^{-\gamma}e^{-i\gamma\beta},\ -\frac{\pi}{2}<\beta<\frac{\pi}{2},\ \Re\gamma>0\ \mbox{or}\ \beta=\pm\frac{\pi}{2},\ 0<\Re\gamma<1.\] If we use the Euler formula \(\cos\beta+i\sin\beta=e^{i\beta}\), then this integral can be represented in the form \[\int_{0}^{\infty}t^{\gamma-1}e^{-ct\exp\{i\beta\}}dt=\Gamma(\gamma)c^{-\gamma}e^{-i\gamma\beta},\quad-\frac{\pi}{2}<\beta<\frac{\pi}{2},\ \Re\gamma>0\ \mbox{or}\ \beta=\pm\frac{\pi}{2},\ 0<\Re\gamma<1. \tag{12}\] Taking into account that \(x>0\), the following estimates for (11) turn out to be valid \[|\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)| =\frac{1}{\pi N!}\left|\Re ie^{-i\frac{\pi}{2}\theta}\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}(-(i\tau)^{\alpha})^{N}d\tau\int_{x}^{\infty}\xi^{-\alpha N-1}\exp\left\{-\left(\frac{i\tau}{\xi}\right)^{\alpha}\zeta\right\}d\xi\right|\] \[\leqslant\frac{1}{\pi N!}\left|\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}(-(i\tau)^{\alpha})^{N}d\tau\int_{x}^{\infty}\xi^{-\alpha N-1}\left(1-(i\tau)^{\alpha}\xi^{-\alpha}\zeta\right)d\xi\right|=\] \[=\frac{1}{\pi N!}\left|\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}(-(i\tau)^{\alpha})^{N}\left(\frac{x^{-\alpha N}}{\alpha N}-(i\tau)^{\alpha}\frac{\zeta x^{-\alpha(N+1)}}{\alpha(N+1)}\right)d\tau\right|\leqslant\] \[\leqslant\frac{1}{\pi N!}\frac{x^{-\alpha N}}{\alpha N}\left|\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}\tau^{\alpha N}d\tau\right|+\frac{1}{\pi N!}\frac{\zeta x^{-\alpha(N+1)}}{\alpha(N+1)}\left|\int_{0}^{\infty}\exp\left\{-\tau e^{-i\frac{\pi}{2}\theta}\right\}\tau^{\alpha(N+1)}d\tau\right|=\] \[=\frac{x^{-\alpha N}}{\pi N!}\left(\frac{\Gamma(\alpha N+1)}{\alpha N}+\zeta x^{-\alpha}\frac{\Gamma(\alpha(N+1)+1)}{\alpha(N+1)}\right)\leqslant\frac{x^{-\alpha N}}{\pi N!}\left(\Gamma(\alpha N)+x^{-\alpha}\Gamma(\alpha(N+1))\right).\] Here, in the passage to the last equality, formula (12) was used to calculate the integrals, and in the passage to the last inequality it was assumed that \(\zeta=1\). To substantiate the validity of using formula (12) when calculating the integrals in this expression, we examine the range of the argument \(-\frac{\pi}{2}\theta\). The range of admissible values of the parameter \(\theta\) is determined by the inequality \(|\theta|\leqslant\min(1,2/\alpha-1)\). From this it follows that if \(\alpha\leqslant 1\), then \(-1\leqslant\theta\leqslant 1\), and if \(1<\alpha\leqslant 2\), then \(-(2/\alpha-1)\leqslant\theta\leqslant 2/\alpha-1\). Thus, for any \(0<\alpha\leqslant 2\) we obtain \(-\frac{\pi}{2}\leqslant-\frac{\pi}{2}\theta\leqslant\frac{\pi}{2}\). The extreme values of this interval, \(\pm\frac{\pi}{2}\), are attained at the values \(\alpha\leqslant 1\) and \(\theta=\mp 1\). Now we compare the integral in (12) with the integrals appearing in the passage to the last equality. We see that the integral (12) coincides with these integrals except for the case \(-\frac{\pi}{2}\theta=\pm\frac{\pi}{2}\).
These two points are outside the range of admissible values of the argument \(\beta\) in the formula (12). Therefore, they should be excluded from consideration. Now, getting back to (10), we obtain \[G^{(+)}(x,\alpha,\theta)=1-G_{N}^{\infty}(x,\alpha,\theta)-\mathcal{R}_{N}^{\infty}(x,\alpha,\theta),\quad x>0, \tag{13}\] where for \(\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)\) the estimate \[|\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)|\leqslant\frac{x^{-\alpha N}}{\pi N!}\left(\Gamma(\alpha N)+x^{-\alpha}\Gamma(\alpha(N+1))\right),\quad x>0\] is valid. Since the case \(x>0\) has been considered so far, these expressions are valid at \(x>0\). To obtain the expansion of the distribution function at \(x<0\) we use the inversion property. Substituting the expression (13) into the formula (2), we obtain \[G^{(-)}(-x,\alpha,\theta)=G_{N}^{\infty}(x,\alpha,-\theta)+\mathcal{R}_{N}^{\infty}(x,\alpha,-\theta),\quad x>0.\] If we now introduce the notation \(\theta^{*}=\theta\,\mathrm{sign}(x)\) and take the coordinate \(x\) in absolute value, then we can combine the formulas for \(G^{(+)}(x,\alpha,\theta)\) and \(G^{(-)}(x,\alpha,\theta)\) into one formula. As a result, we obtain the expression (4). Thus, the theorem is proved.

The proved theorem determines the expansion of the distribution function of a strictly stable law with the characteristic function (1) in a power series at \(x\to\infty\). Now we examine the issue of the convergence of the obtained expansion. Since this expansion was obtained by integrating the expansion for the probability density, taking into account the results of Corollary 1 proved in the paper [16], one can state that this series converges in the case \(\alpha<1\) for all \(x\), in the case \(\alpha=1\) only for \(|x|>1\), and in the case \(\alpha>1\) the series is an asymptotic one at \(x\to\infty\). A more precise formulation is given by the following corollary.

**Corollary 1.**_In the case \(\alpha<1\) the series (5) converges for any \(x\) at \(N\to\infty\). In this case for the distribution function \(G(x,\alpha,\theta)\) for any admissible \(\theta\) the representation is valid in the form of an infinite series_ \[G(x,\alpha,\theta)=\tfrac{1}{2}(1+\mathrm{sign}(x))-\frac{\mathrm{sign}(x)}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n!}\Gamma(\alpha n)\sin\left(\tfrac{\pi}{2}\alpha n(1+\theta^{*})\right)|x|^{-\alpha n}. \tag{14}\]

_In the case \(\alpha=1\) the series (5) converges for the values \(|x|>1\) at \(N\to\infty\). In this case the representation is valid in the form of an infinite series for the distribution function \(G(x,1,\theta)\) for any admissible \(\theta\)_ \[G(x,1,\theta)=\tfrac{1}{2}(1+\mathrm{sign}(x))-\frac{\mathrm{sign}(x)}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin\left(\tfrac{\pi}{2}n(1+\theta^{*})\right)|x|^{-n},\quad|x|>1. \tag{15}\]

_In the case \(\alpha>1\) the series (5) diverges for any \(x\) at \(N\to\infty\). In this case the asymptotic expansion is valid for the distribution function \(G(x,\alpha,\theta)\) for any admissible \(\theta\)_ \[G(x,\alpha,\theta)\sim\tfrac{1}{2}(1+\mathrm{sign}(x))-\frac{\mathrm{sign}(x)}{\pi}\sum_{n=1}^{N-1}\frac{(-1)^{n+1}}{n!}\Gamma(\alpha n)\sin\left(\tfrac{\pi}{2}\alpha n(1+\theta^{*})\right)|x|^{-\alpha n},\quad x\to\pm\infty. \tag{16}\]

_Here, everywhere \(\theta^{*}=\theta\,\mathrm{sign}(x)\)._

**Proof** Without loss of generality, we first consider the case \(x>0\). The expansion for the case \(x<0\) will be obtained using the inversion property (2).
It was previously established that at positive \(x\) the representation (13) is valid. From this expression and from (6) it follows that \[|G^{(+)}(x,\alpha,\theta)-1+G_{N}^{\infty}(x,\alpha,\theta)|\leqslant\frac{x ^{-\alpha N}}{\pi N!}\left(\Gamma(\alpha N)+x^{-\alpha}\Gamma(\alpha(N+1)) \right),\quad x>0. \tag{17}\] We examine the convergence of the series (5). Since this series is sign-alternating, the following inequalities hold \[G_{N}^{\infty}(x,\alpha,\theta)\leqslant|G_{N}^{\infty}(x,\alpha,\theta)|\leqslant\frac{1}{\pi}\sum_{n=1}^{N-1}\left|\frac{(-1)^{n+1}}{n!} \Gamma(\alpha n)\sin\left(\tfrac{\pi}{2}\alpha n(1+\theta)\right)x^{-\alpha n }\right|\\ \leqslant\frac{1}{\pi}\sum_{n=1}^{N-1}\frac{\Gamma(\alpha n)}{ \Gamma(n+1)}x^{-\alpha n},\quad x>0.\] We make use of the Cauchy root test in its limiting form and of Stirling's formula \[\Gamma(z)\sim e^{-z}z^{z-\frac{1}{2}}\sqrt{2\pi},\quad z\to\infty,|\arg z|<\pi. \tag{18}\] As a result, we obtain \[\lim_{n\to\infty}\left(\frac{1}{\pi}\frac{\Gamma(\alpha n)}{\Gamma (n+1)}x^{-\alpha n}\right)^{1/n}=\lim_{n\to\infty}\left(\frac{e^{-\alpha n}( \alpha n)^{\alpha n-1/2}\sqrt{2\pi}x^{-\alpha n}}{\pi e^{-n-1}(n+1)^{n+1-1/2} \sqrt{2\pi}}\right)^{1/n}=\lim_{n\to\infty}\frac{e^{-\alpha}(\alpha n)^{\alpha -\frac{1}{2n}}x^{-\alpha}}{\pi^{\frac{1}{n}}e^{-1-\frac{1}{n}}(n+1)^{1+\frac{1 }{2n}}}\\ =e^{1-\alpha}\alpha^{\alpha}x^{-\alpha}\lim_{n\to\infty}n^{\alpha }(n+1)^{-1}=e^{1-\alpha}\alpha^{\alpha}x^{-\alpha}\lim_{n\to\infty}n^{\alpha- 1}=\begin{cases}0,&\text{if }\alpha<1,\\ x^{-1},&\text{if }\alpha=1,\\ \infty,&\text{if }\alpha>1.\end{cases} \tag{19}\] From this it is clear that in the case \(\alpha<1\) the series (5) converges for any \(x\); in the case \(\alpha=1\) it converges at \(x>1\) and diverges at \(x\leqslant 1\); and in the case \(\alpha>1\) it diverges for any \(x\). We now examine the behavior of the remainder term (6) as \(N\to\infty\). Using Stirling's formula (18) and taking into account that \(N+1\approx N\) at \(N\to\infty\), we obtain \[\lim_{N\to\infty}\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)\leqslant \lim_{N\to\infty}\frac{x^{-\alpha N}}{\pi N!}\left(\Gamma(\alpha N)+x^{-\alpha }\Gamma(\alpha(N+1))\right)\\ =\frac{1}{\pi}\lim_{N\to\infty}\frac{x^{-\alpha N}\Gamma(\alpha N )+x^{-\alpha(N+1)}\Gamma(\alpha(N+1))}{\Gamma(N+1)}\approx\frac{2}{\pi}\lim_{ N\to\infty}x^{-\alpha N}\frac{\Gamma(\alpha N)}{\Gamma(N)}\\ =\frac{2}{\pi}\lim_{N\to\infty}x^{-\alpha N}\frac{e^{-\alpha N}( \alpha N)^{\alpha N-1/2}\sqrt{2\pi}}{e^{-N}N^{N-1/2}\sqrt{2\pi}}=\frac{2}{\pi }\lim_{N\to\infty}x^{-\alpha N}e^{N(1-\alpha)}\alpha^{\alpha N-1/2}N^{N(\alpha -1)}\\ =\frac{2}{\pi\sqrt{\alpha}}\lim_{N\to\infty}\exp\left\{N(1- \alpha)(1-\ln N)+\alpha N(\ln\alpha-\ln x)\right\}=\begin{cases}0,&\text{if } \alpha<1\\ \infty,&\text{if }\alpha=1,x\leqslant 1,\\ 0,&\text{if }\alpha=1,x>1,\\ \infty,&\text{if }\alpha>1.\end{cases} \tag{20}\] We first consider the case \(\alpha<1\). Combining the results obtained, we see that in this case the series (5) converges and the remainder term \(\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)\) tends to zero. It follows that the right-hand side of (17) tends to zero as \(N\to\infty\). In turn, this means that for any fixed \(x\) the sequence \(1-G_{N}^{\infty}(x,\alpha,\theta)\) converges to the distribution function \(G^{(+)}(x,\alpha,\theta)\) at \(N\to\infty\).
Consequently, in the case \(\alpha<1\) the distribution function \(G^{(+)}(x,\alpha,\theta)\) admits the representation in the form of an infinite series \[G^{(+)}(x,\alpha,\theta)=1-\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{ n!}\Gamma(\alpha n)\sin(\tfrac{\pi}{2}\alpha n(1+\theta))x^{-\alpha n},\quad x>0.\] Using the inversion property (2) for negative \(x\) we obtain \[G^{(-)}(x,\alpha,\theta)=\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n! }\Gamma(\alpha n)\sin(\tfrac{\pi}{2}\alpha n(1-\theta))(-x)^{-\alpha n},\quad x <0.\] If we now introduce the parameter \(\theta^{*}=\theta\operatorname{sign}(x)\) and take the coordinate \(x\) in absolute value, then the last two formulas can be combined into one, which gives the formula (14). This proves the first item of the corollary. Now we consider the case \(\alpha=1\) and \(x>1\). As follows from the expression (19), in this case the series (5) converges. It also follows from the expression (20) that \(\lim_{N\to\infty}\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)=0\). Therefore, the right-hand side of the expression (17) tends to zero. In turn, this means that for any fixed \(x>1\) the sequence \(1-G_{N}^{\infty}(x,1,\theta)\) converges to the distribution function \(G^{(+)}(x,1,\theta)\). Therefore, the representation in the form of an infinite series is valid for the distribution function \(G^{(+)}(x,1,\theta)\) \[G^{(+)}(x,1,\theta)=1-\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \left(\tfrac{\pi}{2}n(1+\theta)\right)x^{-n},\quad x>1. \tag{21}\] Using the formula (2) for negative \(x\) we obtain \[G^{(-)}(x,1,\theta)=\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \left(\tfrac{\pi}{2}n(1-\theta)\right)(-x)^{-n},\quad x<-1.\] Using now the notation \(\theta^{*}=\theta\operatorname{sign}(x)\) and taking the coordinate \(x\) in absolute value, we combine the last two formulas into one expression, which yields (15). This proves the second item of the corollary. Now we consider the case \(\alpha>1\). From the expression (19) it follows that in this case the series (5) diverges at \(N\to\infty\). However, from the expression (6) it is clear that at some fixed \(N\) the estimate \[\mathcal{R}_{N}^{\infty}(x,\alpha,\theta)=O(x^{-\alpha N}),\quad x\to\infty\] is valid. Thus, for each fixed \(N\) at \(x>0\) from the expression (13) we obtain \[G^{(+)}(x,\alpha,\theta)=1-\frac{1}{\pi}\sum_{n=1}^{N-1}\frac{(-1)^{n+1}}{n!} \Gamma(\alpha n)\sin(\tfrac{\pi}{2}\alpha n(1+\theta))x^{-\alpha n}+O\left(x^ {-\alpha N}\right),\quad x\to\infty.\] Using the formula (2) we obtain the representation for negative \(x\) \[G^{(-)}(x,\alpha,\theta)=\frac{1}{\pi}\sum_{n=1}^{N-1}\frac{(-1)^{n+1}}{n!} \Gamma(\alpha n)\sin(\tfrac{\pi}{2}\alpha n(1-\theta))(-x)^{-\alpha n}+O \left((-x)^{-\alpha N}\right),\quad x\to-\infty.\] Introducing now the notation \(\theta^{*}=\theta\operatorname{sign}(x)\) and taking the coordinate \(x\) in absolute value, we combine the last two expressions into one. As a result, we obtain \[G(x,\alpha,\theta)=\tfrac{1}{2}(1+\operatorname{sign}(x))-\frac{\operatorname {sign}(x)}{\pi}\sum_{n=1}^{N-1}\frac{(-1)^{n+1}}{n!}\Gamma(\alpha n)\sin( \tfrac{\pi}{2}\alpha n(1+\theta^{*}))|x|^{-\alpha n}+O\left(|x|^{-\alpha N} \right),\quad x\to\pm\infty.\] This is precisely the definition of an asymptotic series. Therefore, this expression can be written as (16). This proves the third item of the corollary.
It should be noted that we do not consider the case \(\alpha=1\) and \(|x|<1\), since in this case the series (5) diverges. Thus, the corollary is completely proved. It should be noted that the case \(\alpha=1\) is one of the few cases in which both the probability density and the distribution function are expressed in terms of elementary functions. In this case, the distribution function is given by the formula (34). The derivation of this formula, as well as the proof of corollary 2 (see Appendix A), can be found in the paper [8]. In the article [16] it was shown that the expansion of the probability density in a series in the case \(\alpha=1\) at \(x\to\infty\) converges at \(N\to\infty\) to the density \(g(x,1,\theta)=\frac{\cos(\pi\theta/2)}{\pi(x^{2}-2x\sin(\pi\theta/2)+1)}\). Similarly, for the distribution function one can show that the expansion (15) converges to the distribution function (34) at \(|x|>1\). We formulate this result in the form of a remark. **Remark 1.** In the case \(\alpha=1\) for any \(-1<\theta<1\) in the domain \(|x|>1\) the series (15) converges to the distribution function (34). **Proof** To prove this, we consider the distribution function (34) and show that its expansion in a Taylor series at \(x\to\infty\) has the form (15). Using the reduction formulas \(\cos\left(\frac{\pi}{2}\theta\right)=\sin\left(\frac{\pi}{2}+\frac{\pi}{2} \theta\right)\), \(\sin\left(\frac{\pi}{2}\theta\right)=-\cos\left(\frac{\pi}{2}+\frac{\pi}{2}\theta\right)\), we write the distribution function (34) in the form \[G(x,1,\theta)=\frac{1}{2}+\frac{1}{\pi}\arctan\left(\frac{x+\cos\left(\frac{ \pi}{2}(1+\theta)\right)}{\sin\left(\frac{\pi}{2}(1+\theta)\right)}\right) \tag{22}\] Further, since we need the expansion of the distribution function in a Taylor series at \(x\to\infty\), we substitute \(x=1/y\) in this expression and consider the case \(x\geqslant 0\). We denote the resulting expression by \(G^{(+)}(y,1,\theta)\). As a result, we obtain \[G^{(+)}(y,1,\theta)=\frac{1}{2}+\frac{1}{\pi}\arctan\left(\frac{\frac{1}{y}+ \cos\left(\frac{\pi}{2}(1+\theta)\right)}{\sin\left(\frac{\pi}{2}(1+\theta) \right)}\right),\quad y\geqslant 0. \tag{23}\] Hence it is clear that the behavior of the function (23) at \(y\to 0\) corresponds to the behavior of the function (22) at \(x\to\infty\). Therefore, expanding the function (23) in a Taylor series in the vicinity of the point \(y=0\) and returning to the variable \(x\), we obtain the expansion of the distribution function (22) into a power series at \(x\to\infty\). We take into account that the function \(\arctan(x)\) is infinitely differentiable. Consequently, the expansion of the function (23) in a Taylor series in the vicinity of the point \(y=0\) has the form \[G^{(+)}(y,1,\theta)=G^{(+)}(0,1,\theta)+\sum_{n=1}^{\infty}\frac{1}{n!}\left. \frac{d^{n}G^{(+)}(y,1,\theta)}{dy^{n}}\right|_{y=0}y^{n}. \tag{24}\] We first calculate the first derivative of the function (23). We obtain \[\frac{dG^{(+)}(y,1,\theta)}{dy}=-\frac{\sin\left(\frac{\pi}{2}(1+\theta) \right)}{\pi\left(y^{2}+2y\cos\left(\frac{\pi}{2}(1+\theta)\right)+1\right)}.\] We represent this expression in the form \[\frac{dG^{(+)}(y,1,\theta)}{dy}=-\frac{\sin\left(\frac{\pi}{2}(1+\theta) \right)}{\pi}f(g(y)),\] where \[f\equiv f(g)=1/g,\quad g\equiv g(y)=y^{2}+2y\cos\left(\frac{\pi}{2}(1+\theta) \right)+1.
\tag{25}\] Thus, for the \(n\)-th derivative of the function \(G^{(+)}(y,1,\theta)\) we get \[\frac{d^{n}G^{(+)}(y,1,\theta)}{dy^{n}}=\frac{d^{n-1}}{dy^{n-1}} \frac{dG^{(+)}(y,1,\theta)}{dy}=-\frac{\sin\left(\frac{\pi}{2}(1+\theta) \right)}{\pi}\frac{d^{n-1}f(g(y))}{dy^{n-1}}\] \[=-\frac{\sin\left(\frac{\pi}{2}(1+\theta)\right)}{\pi}\sum_{k=0} ^{\left[\frac{n-1}{2}\right]}\frac{(-1)^{n-1-k}(n-1)!(n-1-k)!}{k!(n-1-2k)!} \frac{\left(2y+2\cos\left(\frac{\pi}{2}(1+\theta)\right)\right)^{n-1-2k}}{ \left(y^{2}+2y\cos\left(\frac{\pi}{2}(1+\theta)\right)+1\right)^{n-k}},\] where the formula \[\frac{d^{n}f(g(y))}{dy^{n}}=\sum_{k=0}^{\left[\frac{n}{2}\right]}\frac{(-1)^{n -k}n!(n-k)!}{k!(n-2k)!}\frac{\left(2y+2\cos\left(\frac{\pi}{2}(1+\theta) \right)\right)^{n-2k}}{\left(y^{2}+2y\cos\left(\frac{\pi}{2}(1+\theta)\right) +1\right)^{n-k+1}},\] obtained in the article [16], was used. At the point \(y=0\) this derivative takes the value \[\left.\frac{d^{n}G^{(+)}(y,1,\theta)}{dy^{n}}\right|_{y=0}=-\frac{ \sin\left(\frac{\pi}{2}(1+\theta)\right)}{\pi}(-1)^{n-1}(n-1)!\\ \times\sum_{k=0}^{\left[\frac{n-1}{2}\right]}\frac{(-1)^{k}(n-k-1 )!}{k!(n-2k-1)!}\left(2\cos\left(\frac{\pi}{2}(1+\theta)\right)\right)^{n-2k-1 }=\frac{(n-1)!}{\pi}(-1)^{n}\sin\left(\frac{\pi}{2}n(1+\theta)\right). \tag{26}\] Here it was taken into account that \((-1)^{-k}=(-1)^{k}\), and the following formula for \(\sin(n\varphi)\) was used (see, for example, [19]) \[\sin(n\varphi)=\sin\varphi\sum_{k=0}^{\left[\frac{n-1}{2}\right]}(-1)^{k} \frac{(n-k-1)!}{k!(n-2k-1)!}(2\cos\varphi)^{n-2k-1}.\] We now substitute the expression (26) into the formula (24) and take into account that \(G^{(+)}(0,1,\theta)=\frac{1}{2}+\frac{1}{\pi}\arctan(\infty)=1\). As a result, the expression (24) becomes \[G^{(+)}(y,1,\theta)=1-\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n} \sin\left(\frac{\pi}{2}n(1+\theta)\right)y^{n}.\] Thus, we have obtained the expansion of the function (23) in a Taylor series in the vicinity of the point \(y=0\). To obtain the expansion of the distribution function (22) at \(x\to\infty\), one must return to the variable \(x\). Substituting \(y=1/x\), we obtain \[G^{(+)}(x,1,\theta)=1-\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n} \sin\left(\frac{\pi}{2}n(1+\theta)\right)x^{-n}.\] This expression is the expansion of the distribution function (22) at \(x\to\infty\). As one can see, it coincides completely with the expansion (21) obtained earlier. Corollary 1 shows that this series converges in the domain \(x>1\). Hence \[G^{(+)}(x,1,\theta)=1-\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n} \sin\left(\frac{\pi}{2}n(1+\theta)\right)x^{-n},\quad x>1.\] To obtain the expansion of the distribution function for negative \(x\) we use the inversion property and, in particular, the formula (2). As a result, we get \[G^{(-)}(x,1,\theta)=\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \left(\frac{\pi}{2}n(1-\theta)\right)(-x)^{-n},\quad x<-1.\] If we now introduce the parameter \(\theta^{*}=\theta\,\mbox{sign}(x)\) and take the variable \(x\) in absolute value, then the formulas for \(G^{(+)}(x,1,\theta)\) and \(G^{(-)}(x,1,\theta)\) can be combined into one formula. As a result, we obtain \[G(x,1,\theta)=\frac{1}{2}(1+\mbox{sign}(x))-\frac{\mbox{sign}(x)}{\pi}\sum_{n =1}^{\infty}\frac{(-1)^{n+1}}{n}\sin\left(\frac{\pi}{2}n(1+\theta^{*})\right) |x|^{-n},\quad|x|>1.\] This formula coincides completely with the expression (15).
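This convergence can also be verified numerically. The following minimal sketch is an illustrative addition (not part of the original derivation), assuming NumPy is available; the values \(x=2\) and \(\theta=0.3\) are arbitrary choices. It compares partial sums of the series (15) with the closed form (34):

```python
import numpy as np

def G_closed(x, theta):
    # Closed form (34) for alpha = 1 (generalized Cauchy distribution).
    return 0.5 + np.arctan((x - np.sin(np.pi * theta / 2))
                           / np.cos(np.pi * theta / 2)) / np.pi

def G_series(x, theta, N):
    # Partial sum of the expansion (15); valid for |x| > 1.
    theta_star = theta * np.sign(x)
    n = np.arange(1, N + 1)
    terms = ((-1.0) ** (n + 1) / n
             * np.sin(np.pi / 2 * n * (1 + theta_star)) * np.abs(x) ** (-n))
    return 0.5 * (1 + np.sign(x)) - np.sign(x) / np.pi * terms.sum()

x, theta = 2.0, 0.3
for N in (5, 10, 20, 40):
    print(N, abs(G_series(x, theta, N) - G_closed(x, theta)))
# The error shrinks roughly like |x|**(-N), as expected for |x| > 1.
```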
Thus, the expansion of the distribution function (22) coincides exactly with the expansion (15) obtained earlier. Therefore, in the domain \(|x|>1\) the series (15) converges to the distribution function (34). The remark is proved. Theorem 2 makes it possible to calculate the distribution function for large values of the coordinate \(x\) using the power series (4). As mentioned in the Introduction, for large values of the coordinate \(x\) the use of the integral representation (32) no longer allows one to calculate the distribution function correctly, and other calculation methods must be applied. The most obvious, and probably the only, way to calculate the distribution function at large values of \(x\) is to use the expansion (4). However, before using this expansion, it is necessary to obtain a criterion that determines, for a specified value of \(\alpha\) and number of summands \(N\) in the sum (4), the coordinates \(x\) at which the specified accuracy of calculating the distribution function is achieved. Such a criterion can be obtained by estimating the remainder term (6). From the formulas (4) and (6) it follows that \[\left|G(x,\alpha,\theta)-\tfrac{1}{2}(1+\operatorname{sign}(x))+\operatorname {sign}(x)G_{N}^{\infty}(|x|,\alpha,\theta^{*})\right|\leqslant\frac{|x|^{- \alpha N}}{\pi N!}\left(\Gamma(\alpha N)+|x|^{-\alpha}\Gamma(\alpha(N+1)) \right).\] If now, for the specified \(\alpha\) and \(N\), we fix an absolute error level \(\varepsilon\) which the true absolute error of the calculation using the expansion (4) must not exceed, i.e. \[\left|G(x,\alpha,\theta)-\tfrac{1}{2}(1+\operatorname{sign}(x))+\operatorname {sign}(x)G_{N}^{\infty}(|x|,\alpha,\theta^{*})\right|\leqslant\varepsilon,\] then this makes it possible to introduce the threshold coordinate \(x_{\varepsilon}^{N}\). The value of the threshold coordinate is found from the solution of the equation \[\frac{\left|x_{\varepsilon}^{N}\right|^{-\alpha N}}{\pi N!}\left(\Gamma( \alpha N)+\left|x_{\varepsilon}^{N}\right|^{-\alpha}\Gamma(\alpha(N+1))\right) =\varepsilon. \tag{27}\] Unfortunately, this equation cannot be solved analytically, so an explicit expression for the threshold coordinate \(x_{\varepsilon}^{N}\) is not available. Nevertheless, numerical methods find a solution to this equation for specified \(\alpha\), \(N\) and \(\varepsilon\) without much difficulty. As a result, we obtain the condition \[\left|G(x,\alpha,\theta)-\tfrac{1}{2}(1+\operatorname{sign}(x))+\operatorname {sign}(x)G_{N}^{\infty}(|x|,\alpha,\theta^{*})\right|\leqslant\varepsilon, \quad|x|\geqslant x_{\varepsilon}^{N}. \tag{28}\] This means that for \(|x|\geqslant x_{\varepsilon}^{N}\) the absolute error of calculating the distribution function using the power series (4) will not exceed the specified accuracy level \(\varepsilon\). Here \(\alpha,N\) and \(\varepsilon\) are specified, and \(x_{\varepsilon}^{N}\) is found from the solution of the equation (27). Taking all of the foregoing into account, we obtain the formula for calculating the distribution function \[G_{N}(x,\alpha,\theta)=\tfrac{1}{2}(1+\operatorname{sign}(x))-\operatorname {sign}(x)G_{N}^{\infty}(|x|,\alpha,\theta^{*}),\quad|x|\geqslant x_{ \varepsilon}^{N}. \tag{29}\] Here \(G_{N}^{\infty}(x,\alpha,\theta)\) is determined by the expression (5), and the threshold coordinate \(x_{\varepsilon}^{N}\) is found from the solution of the equation (27).
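A minimal numerical sketch of this scheme is given below; it is an illustrative addition, assuming NumPy and SciPy are available, and the root-finding bracket is an ad hoc choice. It evaluates the partial sum (5) in log-gamma form to avoid overflow, assembles \(G_{N}\) via (29), and solves the equation (27) for the threshold coordinate with a bracketing root finder:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def G_N_inf(x, alpha, theta, N):
    # Partial sum (5) with n = 1, ..., N-1; gammaln avoids Gamma overflow.
    n = np.arange(1, N)
    terms = ((-1.0) ** (n + 1)
             * np.exp(gammaln(alpha * n) - gammaln(n + 1) - alpha * n * np.log(x))
             * np.sin(np.pi / 2 * alpha * n * (1 + theta)))
    return terms.sum() / np.pi

def G_N(x, alpha, theta, N):
    # Formula (29); to be used only for |x| >= x_eps^N.
    ts = theta * np.sign(x)
    return 0.5 * (1 + np.sign(x)) - np.sign(x) * G_N_inf(abs(x), alpha, ts, N)

def log_remainder_bound(x, alpha, N):
    # Logarithm of the right-hand side of the estimate (6).
    lx = alpha * N * np.log(x)
    a = gammaln(alpha * N) - gammaln(N + 1) - lx
    b = gammaln(alpha * (N + 1)) - gammaln(N + 1) - lx - alpha * np.log(x)
    return np.logaddexp(a, b) - np.log(np.pi)

def threshold_x(alpha, N, eps, lo=1e-6, hi=1e9):
    # Solves equation (27) for x_eps^N (in log form, for numerical safety).
    return brentq(lambda x: log_remainder_bound(x, alpha, N) - np.log(eps), lo, hi)

x_eps = threshold_x(0.9, 30, 1e-5)
print(x_eps, G_N(2 * x_eps, 0.9, 0.0, 30))
```

For \(\alpha=0.9\), \(N=30\), \(\varepsilon=10^{-5}\) the root should land near \(x_{\varepsilon}^{30}\approx 1\), consistent with the values quoted later in the text.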
The accuracy level \(\varepsilon\) and the number of summands \(N\) are specified beforehand. In this case, it can be guaranteed that in the domain \(|x|\geqslant x_{\varepsilon}^{N}\), at the specified \(N\), the absolute error of calculating the distribution function using this formula will not exceed \(\varepsilon\). Figures 1, 2, and 3 show the results of calculating the distribution function using the formula (29), together with the corresponding absolute errors, for the values \(\alpha=0.7,1,1.3\). Figures 1a, 2a, and 3a present the results of calculating the distribution function. In these figures, the solid curve corresponds to the exact value of the distribution function: in the case \(\alpha\neq 1\) the integral representation (32) was used for the calculation, and in the case \(\alpha=1\) the formula (34) was used. The dash-dotted curves in these figures correspond to the results of the calculation using the expansion (29) for the values \(N=3,10,30,60,90\). The circles in these figures show the position of the threshold coordinate \(x_{\varepsilon}^{N}\) for the chosen values of \(N\) and the specified accuracy level \(\varepsilon=10^{-5}\). The value of the threshold coordinate for each \(N\) and the selected \(\varepsilon\) was found by solving the equation (27). Figures 1b, 2b, and 3b give the absolute error of calculating the distribution function using the expansion (29). In these figures, the solid curve is the exact value of the absolute error \(|G(x,\alpha,\theta)-G_{N}(x,\alpha,\theta)|\). Here, to calculate \(G(x,\alpha,\theta)\), the integral representation (32) was used in the case \(\alpha\neq 1\) and the formula (34) in the case \(\alpha=1\); to calculate \(G_{N}(x,\alpha,\theta)\) the formula (29) was used. The dash-dotted curves show the estimate of the remainder term (6), the dotted line shows the position of the specified accuracy level \(\varepsilon\), and the circles mark the position of the threshold coordinate \(x_{\varepsilon}^{N}\) for each value of \(N\). Figures 1b, 2b, and 3b clearly show that in the domain \(x>x_{\varepsilon}^{N}\) the exact value of the absolute error (solid curves) is smaller than the estimate of the remainder term (6) (dash-dotted curves). It is also clearly seen that in the domain \(x>x_{\varepsilon}^{N}\) both the exact value of the absolute error and the remainder term estimate lie below the selected accuracy level \(\varepsilon\). This means that in the domain \(|x|>x_{\varepsilon}^{N}\) the formula (29) can be used to calculate the distribution function. At the same time, it can be guaranteed that the absolute error of calculating the distribution function using this formula will not exceed the selected accuracy level \(\varepsilon\); in reality it will be much smaller than this value. If we now analyze the behavior of the threshold coordinate \(x_{\varepsilon}^{N}\) as a function of the number of terms \(N\), we see that it differs in each of the three cases \(\alpha<1\), \(\alpha=1\) and \(\alpha>1\). Fig. 1 shows that in the case \(\alpha=0.7\) the value \(x_{\varepsilon}^{N}\) decreases as \(N\) increases. This behavior of the threshold coordinate is a consequence of corollary 1. Indeed, in the first item of this corollary it is proved that in the case \(\alpha<1\) the series (14) converges for any \(x\) at \(N\to\infty\).
Figure 1: a) The distribution function \(G(x,\alpha,\theta)\) for the parameter values shown in the figure. The solid curve is the integral representation (32), the dash-dotted curves are the representation in the form of a power series (29) for different values of the number of summands \(N\) in the sum. The circles show the position of the threshold coordinate \(x_{\varepsilon}^{N}\) for the corresponding values of \(N\) and the specified accuracy level \(\varepsilon\). (b) Graph of the absolute error of calculating the distribution function \(G(x,\alpha,\theta)\) with the use of a power series (29). The solid curves are the exact value of the absolute error \(|G(x,\alpha,\theta)-G_{N}(x,\alpha,\theta)|\), the dash-dotted curves are the remainder term estimate (6), the dotted line is the specified accuracy level \(\varepsilon\), the circles show the position of the threshold coordinate \(x_{\varepsilon}^{N}\)

Figure 2: a) The distribution function \(G(x,\alpha,\theta)\) for the parameter values shown in the figure. The solid curve is the formula (34), the dash-dotted curves are the power series representation (29) for different values of the number of summands \(N\) in the sum. The circles show the position of the threshold coordinate \(x_{\varepsilon}^{N}\) for the corresponding values of \(N\) and the specified level of accuracy \(\varepsilon\). (b) Graph of the absolute error of calculating the distribution function \(G(x,\alpha,\theta)\) with the use of a power series (29). The solid curves are the exact value of the absolute error \(|G(x,\alpha,\theta)-G_{N}(x,\alpha,\theta)|\), the dash-dotted curves are the remainder term estimate (6), the dotted line is the specified accuracy level \(\varepsilon\), the circles show the position of the threshold coordinate \(x_{\varepsilon}^{N}\)

Figure 3: a) The distribution function \(G(x,\alpha,\theta)\) for the parameter values shown in the figure. The solid curve is the integral representation (32), the dash-dotted curves are the representation in a power series (29) for different values of the number of summands \(N\) in the sum. The circles show the position of the threshold coordinate \(x_{\varepsilon}^{N}\) for the corresponding values \(N\) and the specified accuracy level \(\varepsilon\). (b) Graph of the absolute error of calculating the distribution function \(G(x,\alpha,\theta)\) using a power series (29). The solid curves are the exact value of the absolute error \(|G(x,\alpha,\theta)-G_{N}(x,\alpha,\theta)|\), the dash-dotted curves are the remainder term estimate (6), the dotted line is the specified accuracy level \(\varepsilon\), the circles show the position of the threshold coordinate \(x_{\varepsilon}^{N}\)

This means that as the number of terms in the formula (29) increases, the accuracy of calculating the distribution function at a fixed point \(x\) increases. In turn, this expands the range of coordinates \(x\) for which the condition (28) is satisfied. Therefore, as \(N\) increases, the value of the threshold coordinate \(x_{\varepsilon}^{N}\) decreases. To calculate the limit value of the coordinate \(x_{\varepsilon}^{N}\) at \(N\to\infty\), we consider the equation (27) and assume that \(N\to\infty\). Taking into account that \(N+1\approx N\) at \(N\to\infty\), this equation takes the form \[2|x_{\varepsilon}^{N}|^{-\alpha N}\Gamma(\alpha N)=\varepsilon\pi\Gamma(N).\]
From here we find \[|x_{\varepsilon}^{N}|=\left(\frac{2\Gamma(\alpha N)}{\pi\varepsilon\Gamma(N)} \right)^{\frac{1}{\alpha N}}.\] We now find the limit of this expression at \(N\to\infty\). Using the Stirling formula (18), we obtain \[\lim_{N\to\infty}|x_{\varepsilon}^{N}|=\lim_{N\to\infty}\left( \frac{2\Gamma(\alpha N)}{\pi\varepsilon\Gamma(N)}\right)^{\frac{1}{\alpha N}} =\lim_{N\to\infty}\left(\frac{2}{\pi\varepsilon}\frac{e^{-\alpha N }(\alpha N)^{\alpha N-\frac{1}{2}}\sqrt{2\pi}}{e^{-N}(N)^{N-\frac{1}{2}}\sqrt {2\pi}}\right)^{\frac{1}{\alpha N}}\] \[=e^{\frac{1}{\alpha}-1}\alpha^{-1}\lim_{N\to\infty}\left(\frac{2} {\pi\varepsilon\alpha^{2}}\right)^{\frac{1}{\alpha N}}N^{1-\frac{1}{\alpha}}= \begin{cases}0,&\alpha<1\\ 1,&\alpha=1\\ \infty,&\alpha>1.\end{cases} \tag{30}\] Thus, in the case \(\alpha<1\) we get \(\lim_{N\to\infty}|x_{\varepsilon}^{N}|=0\). This result is a consequence of the convergence of the series (5) in the case \(\alpha<1\). A similar behavior of the threshold coordinate is also observed in the case \(\alpha=1\). Figure 2 shows that the value of the threshold coordinate decreases as the number of summands \(N\) in the formula (29) increases. However, unlike the previous case, it follows from the expression (30) that now \(\lim_{N\to\infty}|x_{\varepsilon}^{N}|=1\). This behavior of the threshold coordinate is a result of corollary 1, in whose second item it was proved that in the case \(\alpha=1\) the series (15) converges at \(N\to\infty\) in the domain \(|x|>1\). In the case \(\alpha>1\) the behavior of the threshold coordinate changes. Figure 3a shows that as the number of summands \(N\) in the formula (29) increases, the value of the threshold coordinate first decreases: \(x_{\varepsilon}^{3}>x_{\varepsilon}^{10}>x_{\varepsilon}^{30}\). However, a further increase in \(N\) leads to an increase in the threshold coordinate: \(x_{\varepsilon}^{30}<x_{\varepsilon}^{60}<x_{\varepsilon}^{90}\). This behavior of the threshold is in full accordance with corollary 1, in whose third item it was proved that in the case \(\alpha>1\) the series (5) diverges at \(N\to\infty\). The cause of the divergence of this series is the presence of the multiplier \(\Gamma(\alpha n)/\Gamma(n+1)\): at \(\alpha>1\) this multiplier is greater than \(1\), and as \(n\) increases its value only grows. The series also contains the multiplier \(x^{-\alpha n}\), whose value decreases as \(x\) increases. The competition between these two factors leads to the observed behavior of the threshold coordinate \(x_{\varepsilon}^{N}\). Indeed, the threshold coordinate is found as a result of solving the equation \(|G(x,\alpha,\theta)-G_{N}(x,\alpha,\theta)|=\varepsilon\). Therefore, at first an increase in the number of summands \(N\) in the sum (29) leads to a decrease in the coordinate \(x_{\varepsilon}^{N}\); this is reflected in the fact that \(x_{\varepsilon}^{3}>x_{\varepsilon}^{10}>x_{\varepsilon}^{30}\). However, a further increase in \(N\) causes the factor \(\Gamma(\alpha n)/\Gamma(n+1)\) to grow rapidly, and to compensate for this growth it is necessary to increase \(x\) in the multiplier \(x^{-\alpha n}\). This is what shifts the threshold coordinate towards larger values of \(x\) as \(N\) increases.
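The competing trends just described can be checked directly from the closed-form large-\(N\) expression for \(|x_{\varepsilon}^{N}|\) derived above. A minimal sketch follows (an illustrative addition, assuming SciPy is available):

```python
import numpy as np
from scipy.special import gammaln

def x_eps_large_N(alpha, N, eps):
    # |x_eps^N| = (2 Gamma(alpha N) / (pi eps Gamma(N)))**(1/(alpha N)),
    # evaluated in log space to avoid overflow.
    log_val = (np.log(2.0) + gammaln(alpha * N)
               - np.log(np.pi * eps) - gammaln(N)) / (alpha * N)
    return np.exp(log_val)

eps = 1e-5
for alpha in (0.7, 1.0, 1.3):
    vals = [x_eps_large_N(alpha, N, eps) for N in (30, 60, 90, 180)]
    print(alpha, np.round(vals, 3))
# alpha < 1: the threshold decreases toward 0; alpha = 1: it approaches 1;
# alpha > 1: it grows without bound, in agreement with the limit (30).
```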
Figure 3 demonstrates this behavior, from which it is clear that \(x_{\varepsilon}^{30}<x_{\varepsilon}^{60}<x_{\varepsilon}^{90}\). If \(n\to\infty\), then the factor \(\Gamma(\alpha n)/\Gamma(n)\to\infty\), and to compensate for this growth it is necessary that \(x\to\infty\) in the multiplier \(x^{-\alpha n}\). Thus, the obtained conclusion is in full accordance with the expression (30), which shows that \(\lim_{N\to\infty}|x_{\varepsilon}^{N}|=\infty\) if \(\alpha>1\). It should be pointed out that if, in the case considered (\(\alpha>1\)), we fix some arbitrary \(N\), then owing to the factor \(x^{-\alpha N}\) any preset calculation accuracy \(\varepsilon\) can be achieved by increasing the value of \(x\). Consequently, at \(\alpha>1\) the formula (29) is asymptotic at \(|x|\to\infty\), which is the result of corollary 1.

## 3 Calculation of the distribution function at \(x\to\infty\)

We return to the question of calculating the distribution function of a strictly stable law in the case of large values of the coordinate \(x\). The main approach to calculating the distribution function is to use the integral representation (32). In theory, this integral representation is valid for all values of the parameters \(\alpha,\theta\) (except for the value \(\alpha=1\)) and all \(x\). However, in practice it is not always possible to calculate the integral in this representation numerically. Problems arise at small and large values of the coordinate \(x\), and their cause is the behavior of the integrand in the formula (32). Fig. 4 presents a graph of the integrand of the integral representation (32) for the parameters \(\alpha=1.1,\theta=0\) and the specified values of the coordinate \(x\), as a function of the integration variable \(\varphi\). The variable \(\varphi\) ranges from \(-\pi\theta/2\) to \(\pi/2\). It is clear from the figure that at very small and very large values of \(x\) the integrand in (32) rises very sharply from \(0\) to \(1\). In the case \(\alpha<1\) the behavior of the integrand is reversed: the function is decreasing, and it drops sharply from \(1\) to \(0\). With a further decrease or increase in the value of \(x\), the steepness of this increase (in the case \(\alpha>1\)) or decrease (in the case \(\alpha<1\)) grows. As a result, at some \(x\) numerical integration algorithms can no longer resolve the monotonic behavior of the function and begin to produce an incorrect result. The most suitable method for calculating the distribution function in the case of small and large values of \(x\) is to use asymptotic expansions. The problem of calculating the distribution function in the case \(x\to 0\) was considered in the article [15]. In the case \(x\to\infty\) it is expedient to use theorem 2 and, in particular, the formula (29). Figure 5 shows the results of calculating the distribution function \(G(x,\alpha,\theta)\) using the integral representation (32) (solid curves) and the formula (29) (dash-dotted curves) at large values of \(x\). The left figure shows the case \(\alpha<1\), the right one the case \(\alpha>1\). To calculate the integral in the formula (32) the Gauss-Kronrod algorithm was used. One can see from the figures that at large values of \(x\) the numerical integration algorithm is incapable of calculating the integral in (32) and starts giving an incorrect result.
It can also be seen from the figure that the value of the critical coordinate \(x_{\rm cr}\), at which the numerical integration algorithm starts calculating the integral incorrectly, depends on the value of \(\alpha\): for \(\alpha=0.5\), \(x_{\rm cr}\approx 3.2\cdot 10^{13}\); for \(\alpha=0.7\), \(x_{\rm cr}\approx 3\cdot 10^{9}\); for \(\alpha=0.9\), \(x_{\rm cr}\approx 10^{7}\); for \(\alpha=1.1\), \(x_{\rm cr}\approx 4\cdot 10^{5}\); for \(\alpha=1.4\), \(x_{\rm cr}\approx 3.5\cdot 10^{4}\); for \(\alpha=1.7\), \(x_{\rm cr}\approx 6\cdot 10^{3}\). It is clear that as the value of \(\alpha\) decreases, the value of \(x_{\rm cr}\) increases. Thus, at \(|x|>x_{\rm cr}\) other methods of calculating the distribution function should be used. The use of the formula (29) to calculate the distribution function at \(|x|>x_{\rm cr}\) solves the problem completely.

Figure 4: The relationship between the integrand of the integral representation for the distribution function (32) and the value of the integration variable \(\varphi\). The figure shows graphs of the integrand for the values of the parameters \(\alpha=1.1,\theta=0\) and the specified values of the coordinate \(x\)

It is clear that in the domain \(x<x_{\rm cr}\) the calculation results obtained using the integral representation (32) and the formula (29) coincide completely. It should be noted that this coincidence is observed in the domain \(x_{\varepsilon}^{N}\leqslant x\leqslant x_{\rm cr}\). In the domain \(x>x_{\rm cr}\) the numerical integration algorithm no longer makes it possible to obtain the correct value of the distribution function using the integral representation (32), whereas the use of the formula (29) does not lead to any calculation difficulties. It should be noted that the number of summands \(N=30\) was used to calculate the distribution function with the help of the formula (29). The threshold coordinate \(x_{\varepsilon}^{N}\) for the accuracy level \(\varepsilon=10^{-5}\) has the following values for the presented graphs: at \(\alpha=0.5\), \(x_{\varepsilon}^{30}=0.088\); at \(\alpha=0.7\), \(x_{\varepsilon}^{30}=0.402\); at \(\alpha=0.9\), \(x_{\varepsilon}^{30}=1.000\); at \(\alpha=1.1\), \(x_{\varepsilon}^{30}=1.860\); at \(\alpha=1.4\), \(x_{\varepsilon}^{30}=3.552\); and at \(\alpha=1.7\), \(x_{\varepsilon}^{30}=5.612\). As one can see, the values of the threshold coordinate for each of the graphs presented in the figure are significantly smaller than the range of values of \(x\) shown in the figures. Consequently, it can be asserted that in the domain \(|x|\geqslant x_{\varepsilon}^{N}\) one can use the formula (29) to calculate the distribution function. In this case, the absolute error of calculating the distribution function will not exceed the specified accuracy level \(\varepsilon\), and as \(x\) increases the absolute error will only decrease. Thus, the use of theorem 2 and, in particular, the formula (29) completely solves the problem of calculating the distribution function at \(x\to\infty\). It should be noted that the presented results relate to standard strictly stable laws, i.e. to laws with scale parameter \(\lambda=1\). To transform the distribution function of a standard strictly stable law into the distribution function of a strictly stable law with an arbitrary \(\lambda\), one can use remark 7 from the article [8] (see also [4, 17]).
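The steepness of the integrand responsible for this breakdown can be quantified with a short numerical experiment. The sketch below is an illustrative addition, assuming NumPy is available; \(U(\varphi,\alpha,\theta)\) is taken from (33) in Appendix A, and the grid size and sample values of \(x\) are arbitrary choices. It measures what fraction of the integration range the \(0\to 1\) transition of the integrand occupies:

```python
import numpy as np

def U(phi, alpha, theta):
    # The function U(phi, alpha, theta) from (33) in Appendix A.
    s = np.sin(alpha * (phi + np.pi * theta / 2)) / np.cos(phi)
    return (s ** (alpha / (1 - alpha))
            * np.cos(phi * (1 - alpha) - np.pi * alpha * theta / 2) / np.cos(phi))

alpha, theta = 1.1, 0.0
phi = np.linspace(-np.pi * theta / 2 + 1e-9, np.pi / 2 - 1e-9, 1_000_001)
for x in (1e1, 1e2, 1e3):
    f = np.exp(-x ** (alpha / (alpha - 1)) * U(phi, alpha, theta))
    frac = np.mean((f > 0.01) & (f < 0.99))
    print(f"x = {x:.0e}: transition occupies a fraction {frac:.1e} of the range")
# The 0 -> 1 transition region shrinks rapidly as x grows, which is what
# eventually defeats general-purpose quadrature rules.
```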
## 4 Conclusion

The article considers the problem of calculating the distribution function of a strictly stable law with the characteristic function (1) at large values of the coordinate \(x\). The need to solve this problem is dictated by the inability of numerical integration algorithms to calculate the integral in the integral representation (32) correctly at large \(x\). The cause of these difficulties lies in the behavior of the integrand. Consequently, the integral representation can no longer be used to calculate the distribution function at large values of the coordinate, and other approaches are necessary. To this end, it was proposed to use the expansion of the distribution function in a power series at \(x\to\infty\). In the article, such an expansion was obtained, together with an estimate for the remainder term.

Figure 5: Distribution function \(G(x,\alpha,\theta)\) of a strictly stable law. The figure on the left is the case \(\alpha<1\), the figure on the right is the case \(\alpha>1\). The values of the index \(\alpha\) are given in the figures; for all graphs \(\theta=0\). The solid curves are the integral representation (32), the dash-dotted curves are the representation in a power series (29)

The results are formulated in theorem 2. The convergence of this series has been studied, and it has been shown that in the case \(\alpha<1\) the series converges, in the case \(\alpha>1\) it is asymptotic, and in the case \(\alpha=1\) it converges at \(|x|>1\). These results are formulated as corollary 1. It should be noted that the results formulated in this corollary for the cases \(\alpha<1\) and \(\alpha>1\) are not new and generalize the known results related to the convergence of the expansion of the distribution function in a series (see, for example, [4, 5]). Nevertheless, the study of the convergence of the series for the expansion of the distribution function in the case \(\alpha=1\) was carried out here for the first time. The study of this case showed that the obtained series converges at \(|x|>1\). In addition, we managed to show that in this case at \(N\to\infty\) the series converges to the distribution function of the generalized Cauchy distribution (34). The estimate of the remainder term obtained in theorem 2 turned out to be very useful in the problem of calculating the distribution function. Using this estimate, we obtained the equation (27) for the threshold coordinate \(x_{\varepsilon}^{N}\). The threshold coordinate makes it possible to determine the range of coordinates \(x\) in which the absolute calculation error does not exceed the required level of accuracy \(\varepsilon\) at specified values of \(\alpha\) and the number of summands \(N\) in the expansion (5). As a result, the formula (29) is valid for calculating the distribution function. The calculations performed showed that when this formula is used, the absolute error of calculating the distribution function in the domain \(|x|>x_{\varepsilon}^{N}\) does not exceed the required level of accuracy \(\varepsilon\), and in reality is much smaller than this value. As \(|x|\) increases, the absolute calculation error only decreases. This makes it possible to use the formula (29) to calculate the distribution function even for those values of \(x\) for which the use of the integral representation (32) turns out to be impossible.
Indeed, the calculations have shown that for the integral representation (32) there is a critical value of the coordinate \(x_{\rm cr}\) at which numerical integration algorithms can no longer calculate the integral correctly (see Fig. 5). At the same time, the use of the formula (29) does not lead to any calculation difficulties. Thus, using this formula to calculate the distribution function of a strictly stable law in the coordinate range \(|x|>x_{\rm cr}\) solves the problem of calculating the distribution function at large \(x\). As noted in the Introduction, the integral representation of the distribution function (32) has two domains in which numerical methods have difficulties in calculating the integral: the ranges of coordinates at \(x\to 0\) and at \(x\to\infty\). This article shows that in the domain of large values of the coordinate the formula (29) can be used for the calculation. The problem of calculating the distribution function at small values of \(x\) was solved earlier in the paper [15], where the expansion of the distribution function in a power series at \(x\to 0\) was obtained and the domain of applicability of this expansion was determined. Thus, if one uses the formula (29) to calculate the distribution function at large values of \(x\), the results of the article [15] at small values of \(x\), and the integral representation (32) in the intermediate domain, then the distribution function of a strictly stable law with the characteristic function (1) can be calculated correctly on the entire real line. In conclusion, it should be pointed out that similar difficulties arise in calculating the probability density of a strictly stable law: calculation difficulties also appear at small and large values of the coordinate \(x\). The problem of calculating the probability density is solved in the papers [15, 16]. These articles show that if one uses the probability density expansion from the article [15] in the domain of small coordinates \(x\), the expansion from the article [16] in the domain of large coordinates, and the integral representation for the probability density obtained in the article [8] in the intermediate domain, then the probability density of a strictly stable law can be calculated correctly on the entire real line. Thus, the problem of calculating the probability density and the distribution function of a strictly stable law with the characteristic function (1) on the entire real line turns out to be solved.

**Acknowledgements** The author thanks M. Yu. Dudikov for translation of the article into English.

## Appendix A Integral representation of the distribution function

To perform the inverse Fourier transform and obtain the probability density, the following lemma, which establishes the inversion formula, is useful.

**Lemma 1.**_The probability density \(g(x,\alpha,\theta)\) for any admissible set of parameters \((\alpha,\theta)\) and any \(x\) can be obtained using the inverse transform formulas_ \[g(x,\alpha,\theta)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-itx}\hat{g}(t, \alpha,\theta)dt=\left\{\begin{array}{l}\frac{1}{\pi}\Re\int_{0}^{ \infty}e^{itx}\hat{g}(t,\alpha,-\theta)dt,\\ \frac{1}{\pi}\Re\int_{0}^{\infty}e^{-itx}\hat{g}(t,\alpha,\theta)dt. \end{array}\right. \tag{31}\]

The proof of this lemma can be found in the article [8]. There is also an integral representation for the distribution function.
For a strictly stable law with characteristic function (1) it was obtained in the article [8] and is formulated as the following corollary.

**Corollary 2.**_The distribution function of the stable law \(G(x,\alpha,\theta)\) with characteristic function (1) can be represented in the form_ 1. _If_ \(\alpha\neq 1\)_, then for any_ \(|\theta|\leqslant\min(1,2/\alpha-1)\) _and_ \(x\neq 0\)__ \[G(x,\alpha,\theta)=\tfrac{1}{2}(1-\operatorname{sign}(x))+\operatorname{sign} (x)G^{(+)}(|x|,\alpha,\theta^{*}),\] (32) _where_ \(\theta^{*}=\theta\operatorname{sign}(x)\)_,_ \[G^{(+)}(x,\alpha,\theta)=1-\frac{(1+\theta)}{4}(1+\operatorname{ sign}(1-\alpha))\\ +\frac{\operatorname{sign}(1-\alpha)}{\pi}\int_{-\pi\theta/2}^{ \pi/2}\exp\left\{-x^{\alpha/(\alpha-1)}U(\varphi,\alpha,\theta)\right\}d \varphi,\quad x>0,\] (33) _and_ \(U(\varphi,\alpha,\theta)\) _is determined by the expression_ \[U(\varphi,\alpha,\theta)=\left(\frac{\sin\left(\alpha\left(\varphi+\frac{\pi} {2}\theta\right)\right)}{\cos\varphi}\right)^{\alpha/(1-\alpha)}\frac{\cos \left(\varphi(1-\alpha)-\frac{\pi}{2}\alpha\theta\right)}{\cos\varphi}.\] 2. _If_ \(\alpha=1\)_, then for any_ \(-1\leqslant\theta\leqslant 1\) _and any_ \(x\)__ \[G(x,1,\theta)=\frac{1}{2}+\frac{1}{\pi}\arctan\left(\frac{x-\sin(\pi\theta/2)} {\cos(\pi\theta/2)}\right).\] (34) 3. _If_ \(x=0\)_, then for any admissible_ \(\alpha\) _and_ \(\theta\)__ \[G(0,\alpha,\theta)=(1-\theta)/2.\]
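For completeness, here is a minimal Python sketch of Corollary 2 (an illustrative addition, assuming SciPy is available; the numerical caveats of Section 3 apply, so the quadrature is only reliable for moderate \(|x|\), roughly \(x_{\varepsilon}^{N}\lesssim|x|\lesssim x_{\rm cr}\), where it can be cross-checked against the series (29)):

```python
import numpy as np
from scipy.integrate import quad

def U(phi, alpha, theta):
    # The function U(phi, alpha, theta) entering (33).
    s = np.sin(alpha * (phi + np.pi * theta / 2)) / np.cos(phi)
    return (s ** (alpha / (1 - alpha))
            * np.cos(phi * (1 - alpha) - np.pi * alpha * theta / 2) / np.cos(phi))

def G_plus(x, alpha, theta):
    # Formula (33), for x > 0 and alpha != 1.
    sgn = np.sign(1 - alpha)
    integral, _ = quad(
        lambda phi: np.exp(-x ** (alpha / (alpha - 1)) * U(phi, alpha, theta)),
        -np.pi * theta / 2, np.pi / 2)
    return 1 - (1 + theta) / 4 * (1 + sgn) + sgn / np.pi * integral

def G(x, alpha, theta):
    # Corollary 2: distribution function of a standard strictly stable law.
    if alpha == 1:                       # item 2, formula (34)
        return 0.5 + np.arctan((x - np.sin(np.pi * theta / 2))
                               / np.cos(np.pi * theta / 2)) / np.pi
    if x == 0:                           # item 3
        return (1 - theta) / 2
    ts = theta * np.sign(x)              # item 1, formula (32)
    return 0.5 * (1 - np.sign(x)) + np.sign(x) * G_plus(abs(x), alpha, ts)

print(G(0.0, 0.7, 0.0), G(1.0, 0.7, 0.0))   # 0.5, and a value between 0.5 and 1
```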
2305.16560
Energetic cost for speedy synchronization in non-Hermitian quantum dynamics
Quantum synchronization is crucial for understanding complex dynamics and holds potential applications in quantum computing and communication. Therefore, assessing the thermodynamic resources required for finite-time synchronization in continuous-variable systems is a critical challenge. In the present work, we find these resources to be extensive for large systems. We also bound the speed of quantum and classical synchronization in coupled damped oscillators with non-Hermitian anti-PT-symmetric interactions, and show that the speed of synchronization is limited by the interaction strength relative to the damping. Compared to the classical limit, we find that quantum synchronization is slowed by the non-commutativity of the Hermitian and anti-Hermitian terms. Our general results could be tested experimentally and we suggest an implementation in photonic systems.
Maxwell Aifer, Juzar Thingna, Sebastian Deffner
2023-05-26T01:02:10Z
http://arxiv.org/abs/2305.16560v1
# Energetic cost for speedy synchronization in non-Hermitian quantum dynamics

###### Abstract

Quantum synchronization is crucial for understanding complex dynamics and holds potential applications in quantum computing and communication. Therefore, assessing the thermodynamic resources required for finite-time synchronization in continuous-variable systems is a critical challenge. In the present work, we find these resources to be extensive for large systems. We also bound the speed of quantum and classical synchronization in coupled damped oscillators with non-Hermitian anti-\(\mathcal{PT}\)-symmetric interactions, and show that the speed of synchronization is limited by the interaction strength relative to the damping. Compared to the classical limit, we find that quantum synchronization is slowed by the non-commutativity of the Hermitian and anti-Hermitian terms. Our general results could be tested experimentally and we suggest an implementation in photonic systems.

The study of synchronization dates at least to the 17th century, when Huygens noted the gradual build-up of correlations in the motion of coupled pendula [1]. Similar behavior has since been found ubiquitously in nature, such as in many-body physics, biology, and even human activities [2; 3; 4; 5; 6; 7; 8; 9; 10]. As the coordination of multiple objects implies some kind of communication, synchronization is a signature of information flow, and is a key mechanism for establishing order from disorder in complex systems [11; 12; 13; 14; 15; 16]. As such, it is of interest in thermodynamics, where information is a central quantity [17; 18; 19; 20; 21; 22; 23]. That synchronization can be seen at the smallest observable scales makes it relevant also in quantum information theory, where it has become an emerging research focus due to potential applications in quantum computing and communication [24; 25]. In quantum dynamics, the primary focus has been on synchronization in discrete systems [26; 27; 28; 29; 30; 31; 32], whereas continuous-variable models are often treated classically [33; 34; 35]. However, to study the quantum limit of classical models, genuine continuous-variable scenarios are required [36; 37; 38; 39; 40; 5]. Multiple ways of quantifying synchronization have been devised for both discrete and continuous-variable quantum systems [37; 41; 42; 43; 44; 45; 39]; however, there is no clear consensus as to which metric is universally applicable. Moreover, existing work also provides only limited insight into the time and energy scales on which the process occurs. Most of our current understanding of quantum synchronization is limited to the long-time, steady-state behavior. While this simplifies the description, there are some questions which can only be answered using a finite-time approach, in particular regarding the required resources for synchronization. To this end, some progress has been made using quantum speed limits [46; 47; 48; 49; 50; 51; 52; 53; 54; 55], which are now understood to also constrain the rate of synchronization [56]. Similar results have been proposed using the Lieb-Robinson bound [57]. Yet, the finite-time analysis of synchronization is still in an early developmental stage. In this letter, we apply quantum speed limits and quantum thermodynamics to a general model of continuous-variable quantum systems. A measure of complete synchronization in continuous-variable systems is defined, which is scale-invariant and is sufficient for phase synchronization.
We obtain bounds which relate the degree of synchronization to the distance from thermodynamic equilibrium, resulting in an extensive expression for the minimal work necessary to achieve synchronization. Specifically, we study a quantum master equation that includes both non-Hermitian dynamics and a dissipative term in the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) form, resulting in a non-linear dynamical semigroup. We find that the rate of synchronization is determined by a competition between the irreversible entropy production caused by damping, which slows synchronization, and the strength of the anti-Hermitian coupling, which speeds up synchronization. The resulting upper bound on the synchronization rate has terms of the form of the Mandelstam-Tamm inequality [58], where speed scales with the uncertainty of the energy, except in this case even the uncertainties of the Hermitian and anti-Hermitian parts of the Hamiltonian are crucial. As an example, we consider a dissipatively coupled photonic dimer. The classical counterparts of all our results are obtained, and in the case of the dimer we find that the model reduces to the celebrated coupled Stuart-Landau oscillators [59; 60]. For the dimer model we find that the quantum system synchronizes in a parameter regime wherein it is impossible for the classical model to synchronize, thereby displaying a quantum advantage.

_Measure of synchronization._ We consider \(N\) quantum oscillators with annihilation operators \(\hat{a}_{1}\ldots\hat{a}_{N}\). The corresponding dimensionless quadrature operators \(\hat{\mathbf{r}}=(\hat{x}_{1},\hat{p}_{1},\ldots\hat{x}_{N},\hat{p}_{N})^{T}\) read [61; 62; 63] \[\hat{x}_{j}=\frac{\hat{a}_{j}+\hat{a}_{j}^{\dagger}}{\sqrt{2}},\ \ \hat{p}_{j}=\frac{\hat{a}_{j}-\hat{a}_{j}^{\dagger}}{i\sqrt{2}}. \tag{1}\] For the system to synchronize, the phase space coordinates of the different oscillators need to converge. There can be two distinct types of synchronization: (i) complete synchronization (amplitude and phase synchronization), where the phase space trajectories of multiple subsystems converge, and (ii) phase synchronization, for which the phase angles of multiple subsystems converge [16]. An intuitive measure to characterize complete synchronization, but not phase synchronization, of a quantum bipartite system is \(\mathcal{S}_{c}=2\left<(\hat{\mathbf{r}}_{2}-\hat{\mathbf{r}}_{1})^{2}\right>^{-1}\) [37]. The growth of \(\mathcal{S}_{c}\) does not require phase synchronization, e.g., in the case of amplitude death \(\mathcal{S}_{c}\) always grows but there is no phase synchronization [38]. Therefore, we define a new measure of _complete_ synchronization by looking at the distance between the oscillators _relative_ to the total radius in phase space. For a bipartite system, we have \[D^{2}\equiv\frac{\langle(\hat{\mathbf{r}}_{2}-\hat{\mathbf{r}}_{1})^{2}\rangle }{\langle\hat{\mathbf{r}}^{2}\rangle}, \tag{2}\] and we note that the so-defined \(D\) is scale-invariant with respect to \(\hat{\mathbf{r}}\). Hence, the degree of synchronization does not change when the system is viewed at different magnifications.
The bipartite distance measure can be expressed in terms of angular and radial measures of similarity, \[D^{2}=1-\mathcal{S}_{r}\mathcal{S}_{\theta}, \tag{3}\] where \[\mathcal{S}_{\theta}=\frac{\langle\hat{\mathbf{r}}_{1}\cdot\hat{\mathbf{r}}_{2 }\rangle}{\left|\langle\hat{\mathbf{r}}_{1}\rangle\right|\left|\langle\hat{ \mathbf{r}}_{2}\rangle\right|}\equiv\cos\theta, \tag{4}\] and \[\mathcal{S}_{r}=2\sqrt{\frac{\langle\hat{\mathbf{r}}_{1}^{2}\rangle}{\langle \hat{\mathbf{r}}^{2}\rangle}\left(1-\frac{\langle\hat{\mathbf{r}}_{1}^{2} \rangle}{\langle\hat{\mathbf{r}}^{2}\rangle}\right)}. \tag{5}\] The quantity \(\mathcal{S}_{r}\) defined above is similar to the binary entropy function [64], and is maximized when \(\langle\hat{\mathbf{r}}_{1}^{2}\rangle=\langle\hat{\mathbf{r}}_{2}^{2}\rangle\), where \(\mathcal{S}_{r}=1\). Equations (4) and (5) reveal that \(D^{2}\) lies between \(0\) and \(2\), with values less than \(1\) indicating synchronization and values greater than \(1\) indicating anti-synchronization. It is also clear from the form of Eq. (3) that for \(D^{2}\) to become small, both \(\mathcal{S}_{r}\) and \(\mathcal{S}_{\theta}\) must approach their maximal values of \(1\), implying that we capture complete and phase synchronization by requiring a decay of \(D^{2}\) in time. For a system of \(N\) oscillators, Eq. (2) can be generalized as \[D^{2}\equiv 2\left(1-\frac{\langle\bar{\mathbf{r}}^{2}\rangle}{\langle \overline{\mathbf{r}^{2}}\rangle}\right), \tag{6}\] where \(\bar{\mathbf{r}}=\left(\sum_{j=1}^{N}\hat{x}_{j},\sum_{j=1}^{N}\hat{p}_{j} \right)^{T}/N\) and \(\langle\overline{\mathbf{r}^{2}}\rangle=\langle\mathbf{r}^{2}\rangle\left/N\right.\). Equation (6) reduces to Eq. (2) for two oscillators, and intuitively captures the notion of synchronization in phase space. Moreover, \(D^{2}\) is non-negative, which follows from Jensen's inequality. Throughout this work, we consider scenarios in which the \(N\) oscillators are initially uncoupled and separately in contact with a thermal bath at inverse temperature \(\beta\). Initially, the oscillators are allowed to come to their respective equilibrium states \(\hat{\rho}_{i}^{\text{eq}}\), and then a coupling between them is turned on. Hence, \(\hat{\rho}_{0}=\bigotimes_{j=1}^{N}\hat{\rho}_{j}^{\text{eq}}\) is our initial state. To quantify whether our system has synchronized, we require that the distance \(D\) becomes small and then _stays_ small. In other words, given a \(D_{s}\), the system synchronizes to within the distance \(D_{s}\) if there exists a time \(\tau\) such that for all \(t\geq\tau\), \(D\leq D_{s}\). The smallest \(\tau\) for which this holds will be called the synchronization time \(\tau_{s}\).

_Dynamics._ Further, the \(N\) oscillators under consideration have natural frequencies \(\omega_{1},\cdots,\omega_{N}\) and an arbitrary anti-Hermitian coupling. The corresponding Hamiltonian reads \[\hat{H}=\hat{H}_{0}+i\hat{H}_{c}=\sum_{j=1}^{N}\omega_{j}\left(\hat{n}_{j}+ \frac{1}{2}\right)+i\hat{H}_{c}, \tag{7}\] where \(\hat{H}_{c}\) is Hermitian, \(\hat{n}_{j}\) is the number operator \(\hat{a}_{j}^{\dagger}\hat{a}_{j}\), and we set \(\hbar=1\). Non-Hermitian Hamiltonians such as the one in Eq. (7) are effective descriptions of _controlled_ dissipation in open quantum systems [65, 66, 67, 68, 69, 70, 71, 72].
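All moments entering Eqs. (2)-(6) are first and second quadrature moments, so \(D^{2}\) is directly computable for Gaussian states. Before introducing the environment, we note a minimal NumPy sketch (an illustrative addition, not from the paper; the mean vector and covariance matrix below are arbitrary stand-ins):

```python
import numpy as np

def D_squared(mu, sigma):
    """Synchronization distance D^2 of Eq. (6) for a Gaussian state.

    mu    -- mean vector (x1, p1, ..., xN, pN)
    sigma -- symmetrized covariance matrix of the same quadratures
    """
    N = len(mu) // 2
    x_idx = np.arange(0, 2 * N, 2)
    p_idx = np.arange(1, 2 * N, 2)
    second = sigma + np.outer(mu, mu)          # <r_i r_j> (symmetrized)
    rbar2 = (second[np.ix_(x_idx, x_idx)].sum()
             + second[np.ix_(p_idx, p_idx)].sum()) / N ** 2
    r2_bar = np.trace(second) / N              # <overline{r^2}> = <r^2>/N
    return 2 * (1 - rbar2 / r2_bar)

sigma = 0.5 * np.eye(4)                        # vacuum-level quadrature noise
for amp in (1.0, 5.0):
    mu = np.array([amp, 0.0, amp, 0.0])        # two identical coherent states
    print(amp, D_squared(mu, sigma))
# Even identical coherent states give D^2 > 0 because of vacuum noise;
# this residual value decreases as the amplitude grows.
```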
In addition, our system interacts with a thermal environment [73, 74], yielding the following quantum master equation (QME) \[\frac{d\hat{\rho}}{dt}=-i[\hat{H}_{0},\hat{\rho}]+\{\hat{H}_{c},\hat{\rho}\}-2 \left<\hat{H}_{c}\right>\hat{\rho}+\mathcal{D}[\hat{\rho}], \tag{8}\] where \(\langle\hat{O}\rangle=\text{tr}\{\hat{O}\hat{\rho}\}\), and the term \(-2\left<\hat{H}_{c}\right>\hat{\rho}\) is included to preserve normalization. This _nonlinear_ equation satisfies the convex quasi-linearity condition and the semigroup property, making it a valid quantum evolution [75]. The dissipator \(\mathcal{D}\) takes the GKSL form \(\mathcal{D}[\hat{\rho}]=\sum_{i}\hat{F}_{i}\hat{\rho}\hat{F}_{i}^{\dagger}- \left\{\hat{F}_{i}^{\dagger}\hat{F}_{i},\hat{\rho}\right\}/2\). We also split the Liouvillian into non-interacting \(\mathcal{L}_{0}[\hat{\rho}]\) and interacting \(\mathcal{L}_{c}[\hat{\rho}]\) parts, given by \[\mathcal{L}_{0}[\hat{\rho}] =-i[\hat{H}_{0},\hat{\rho}]+\mathcal{D}[\hat{\rho}] \tag{9}\] \[\mathcal{L}_{c}[\hat{\rho}] =\{\hat{H}_{c},\hat{\rho}\}-2\left<\hat{H}_{c}\right>\hat{\rho},\] such that \(d\hat{\rho}/dt=\mathcal{L}(\hat{\rho})=\mathcal{L}_{0}(\hat{\rho})+\mathcal{L}_{c }(\hat{\rho})\). The state \(\hat{\rho}_{0}\) is a stationary state of the Hermitian \(\hat{H}_{0}\) and Lindblad terms, i.e., \(\mathcal{L}_{0}(\hat{\rho}_{0})=0\). Note that Eq. (8) is nonlinear in \(\hat{\rho}\) only because of the term \(-2\left<\hat{H}_{c}\right>\hat{\rho}\), as \(\langle\hat{H}_{c}\rangle\) itself depends linearly on \(\hat{\rho}\). We will first examine the case of general \(\hat{H}_{c}\), and later specialize to a specific dimer model which results in a _quantum_ Stuart-Landau equation.

_Quantum synchronization far from equilibrium._ As the system evolves, it will depart from the initial state \(\hat{\rho}_{0}\) due to the anti-Hermitian coupling. Let \(\hat{\rho}_{G;E}\) denote a Gibbs state of the uncoupled system with energy \(E\), \(\hat{\rho}_{G;E}\propto\exp(-\beta_{E}\hat{H}_{0})\), and \(\text{tr}\{\hat{\rho}_{G;E}\hat{H}_{0}\}=E\). We introduce a measure \(\chi\), an _ergotropy of synchronization_ [76, 77], to quantify the departure of the reduced state \(\hat{\rho}\) from the set of Gibbs states of the uncoupled system, \[\chi\equiv\min_{U}S(\hat{\rho}\|\hat{\rho}_{G;U}), \tag{10}\] in terms of the quantum relative entropy \(S(\hat{\rho}_{1}\|\hat{\rho}_{2})=\text{tr}\{\hat{\rho}_{1}(\ln\hat{\rho}_{1}-\ln \hat{\rho}_{2})\}\) [78]. The minimization over \(U\) in Eq. (10) means that \(\chi\) is a property of the state \(\hat{\rho}\) and does not depend on the environment parameters (i.e., the bath temperature). This is preferable, as the degree of synchronization itself should be independent of the bath parameters, so \(\chi\) should be as well to obtain a meaningful relation between them. In the supplemental material [79], we show that for the distance measure \(D\) to become small, \(\chi\) must become large, and we generally have \[\chi\geq-2(N-1)\ln\left(\frac{1}{2}e^{1/(N-1)}\sqrt{\frac{(N/\kappa)^{N/(N-1)} }{2(N-1)}}D\right), \tag{11}\] where \(\kappa=\omega_{\text{min}}/\omega_{\text{max}}\), and for \(N=2\) the above general expression reduces to \[\chi\geq-2\ln\left(\frac{eD}{\sqrt{2}\kappa}\right). \tag{12}\] The analogous bound for classical bipartite systems (see supplemental material [79]) reads \[\chi^{\text{(cl)}}\geq-2\ln\left(\frac{\sqrt{2}D}{\kappa}\right), \tag{13}\] which is defined in terms of the classical relative entropy [80; 81].
We note that the results (12) and (13) have been proved for the case of coupled oscillators, but the proof relies only on certain mild assumptions, including the positivity of the heat capacity, which is generally valid with some exceptions [82]. A sample of random two-mode Gaussian states is shown in Fig. 1 and compared to Eqs. (12) and (13). Evidently there is a region between the bounds where there may exist states exhibiting a _quantum advantage_, although such states are not present in our random sample. For example, we see that for \(D=0.5\), the classical bound requires \(\chi\) to be almost unity, whereas the more permissive quantum bound requires \(\chi\) to be only slightly greater than zero. If such states exist with \(\chi\) much less than unity for \(D=0.5\), the quantum analysis yields a lower cost to achieve the same degree of synchronization.

Figure 1: Quantum (\(\chi_{\text{min}}\)) and classical (\(\chi_{\text{min}}^{\text{(cl)}}\)) lower bounds on \(\chi\), and convex hull (\(\tilde{\chi}_{\text{min}}\)) of \(10^{6}\) random Gaussian states (1000 states plotted as colored circles). The convex hull is the same for the quantum and classical sample states.

As stated earlier, we assume the system to begin in the uncoupled equilibrium state \(\hat{\rho}_{0}\), and we quantify the departure from this _particular_ equilibrium state by introducing a parameter \[L=S(\hat{\rho}\|\hat{\rho}_{0}). \tag{14}\] Unlike \(\chi\), which quantifies the distance of the reduced state \(\hat{\rho}\) from all possible Gibbs states and _minimizes_ over the energy, the parameter \(L\) measures the distance with respect to only one specific Gibbs state, given by the initial condition. Given the distance measures \(\chi\) and \(L\), the following chain of inequalities holds, \[L\geq\chi\geq\Lambda. \tag{15}\] Above, \(\Lambda=\min_{\hat{\sigma}\in\Omega}S(\hat{\rho}\|\hat{\sigma})\) is the relative entropy of entanglement, with \(\Omega\) being the set of all separable states of the system [83]. By assumption, \(L=\chi=0\) at time \(t=0\). Therefore, as a consequence of Eq. (11), for the system to synchronize to distance \(D_{s}\) at time \(\tau_{s}\) we must have \[L\geq\chi_{\text{min}}(N,\kappa,D_{s}). \tag{16}\] Also note that [79] \(\dot{L}=\beta\dot{E}-\dot{S}\), so if we write the first law of thermodynamics as \(\dot{E}=\dot{W}-\dot{Q}\), and the second law as \(\dot{S}+\beta\dot{Q}\geq 0\) [84; 85; 86; 87; 88], we have \[\dot{W}\geq\frac{1}{\beta}\dot{L}. \tag{17}\] Above, \(S=-\text{tr}\left\{\hat{\rho}\ln\hat{\rho}\right\}\) is the von Neumann entropy, not to be confused with the relative entropy, which will always appear with arguments \(S(\cdot\|\cdot)\). Moreover, since \(W(0)=L(0)=0\) it follows that \(W\geq L\), so we have a lower bound on the amount of work required for synchronization, which is our first main result, \[W\geq\frac{1}{\beta}\,\chi_{\text{min}}(N,\kappa,D), \tag{18}\] where \(\chi_{\text{min}}(N,\kappa,D)\) is the right-hand side of Eq. (11). Interestingly, the quantity \(\chi_{\text{min}}\) is asymptotically linear in \(N\), indicating that the work requirement for synchronization is extensive. The minimum asymptotic work cost is \[\chi_{\text{min}}^{\infty}=-\ln\left(\frac{D^{2}}{8\kappa}\right)N, \tag{19}\] such that \[\lim_{N\rightarrow\infty}\frac{\chi_{\text{min}}}{\chi_{\text{min}}^{\infty}}=1. 
\tag{20}\]

Classically, such an asymptotic cost, \[\chi_{\text{min}}^{\text{(cl)}\infty}=-\ln\left(\frac{D^{2}}{2\kappa}\right)N, \tag{21}\] is lower than in the quantum case, indicating that in the limit of many oscillators, the thermodynamic cost of synchronizing classical systems will always be lower than that of synchronizing equivalent quantum systems. However, for small values of \(N\) the asymptotic expressions are invalid and the ordering of the classical and quantum costs may differ; we leave the full investigation of such cases to future work.

_Rate of quantum synchronization._ The rate of evolution of \(L\) can be found directly from the master equation (8) [79], and is given by \[\dot{L}=2\,\text{tr}\left\{(\hat{H}_{c}-\langle\hat{H}_{c}\rangle)\hat{\rho}\ln\hat{\rho}\right\}+2\beta_{0}\mathcal{C}_{CE}-\sigma_{0}, \tag{22}\] where \(\mathcal{C}_{CE}=\frac{1}{2}\left\langle\hat{H}_{c}\hat{H}_{0}\right\rangle-\langle\hat{H}_{c}\rangle\,\langle\hat{H}_{0}\rangle\) is the covariance of \(\hat{H}_{0}\) and \(\hat{H}_{c}\), and \(\sigma_{0}=-\,\text{tr}\left\{\mathcal{D}[\hat{\rho}](\ln\hat{\rho}-\ln\hat{\rho}_{0})\right\}\) is the irreversible entropy production [89] of the uncoupled system, which is always non-negative [66, 90]. In the case that \(\hat{H}_{c}\) is an unbounded operator, \(\dot{L}\) can be bounded from above, \[\begin{split}\dot{L}&\leq 2\Delta_{C}\sqrt{\mathcal{E}+S_{G;E}^{2}}\\ &\quad+2\beta_{0}\sqrt{\Delta_{E}^{2}\Delta_{C}^{2}-\frac{1}{2}\left|\langle[\hat{H}_{0},\hat{H}_{c}]\rangle\right|^{2}}-\sigma_{0}.\end{split} \tag{23}\] Here, we use the von Neumann entropy of the Gibbs state, \(S_{G;E}=-\text{tr}\{\hat{\rho}_{G;E}\ln\hat{\rho}_{G;E}\}\), as well as the capacity of entanglement \(\mathcal{E}=\,\text{tr}\,\{\hat{\rho}(\ln\hat{\rho})^{2}\}-S^{2}\), which is the second moment of surprisal [91]. In the special case where \(\hat{H}_{c}\) is a bounded operator, we may instead use the alternative bound given in the supplemental material [79], in which the capacity of entanglement does not appear. A classical limit of the master equation (8) can be derived by identifying a quantum phase space distribution with a classical probability density [92, 93, 94], and the corresponding classical bound (see supplemental material [79]) reads \[\begin{split}\dot{L}^{\text{(cl)}}&\leq\beta_{0}\left\langle\nabla H_{c}\cdot\nabla H_{0}\right\rangle-\left\langle\nabla^{2}H_{c}\right\rangle\\ &\quad+2\Delta_{C}\left\langle\ln(f)^{2}\right\rangle+2\beta_{0}\Delta_{C}\Delta_{E}-\sigma_{0}.\end{split} \tag{24}\] Equation (24) differs from Eq. (23) because of the presence of the geometric terms associated with phase space flow [95], as well as the absence of the commutator.

The interpretation of Eq. (23) is as follows. The final term, \(\sigma_{0}\), is the rate of irreversible entropy production [90] which one would obtain in the absence of coupling, and this contribution is always negative. Therefore the other terms must have a net positive effect larger than this entropy production rate in order for synchronization to occur. The first term has the form of a quantum speed limit with an additional factor of entropy; unsurprisingly, it implies that stronger non-Hermitian coupling leads to faster synchronization. The second term is reminiscent of the Mandelstam-Tamm quantum speed limit [58], and involves the second moments of both the Hermitian and anti-Hermitian parts of the Hamiltonian; however, there is a penalty that scales with the square of their commutator. 
This term arises from the uncertainty relation [96], and can be explained by the fact that synchronization is most effective when there is a large correlation between the observables corresponding to the Hermitian and anti-Hermitian parts of the Hamiltonian.

_Coupled waveguide model._ Yang et al. [97] use an adiabatic elimination procedure to reduce a system of three coupled waveguides to a system of two waveguides with an effective anti-\(\mathcal{PT}\)-symmetric Hamiltonian with the non-Hermitian coupling of the form \[\hat{H}_{c}=\frac{k}{2}(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^{\dagger}\hat{a}_{1}). \tag{25}\] In addition to this controlled dissipative interaction, we consider the two modes in contact with heat reservoirs, resulting in a master equation of the form of Eq. (8) with four jump operators \(\hat{L}_{j+}=(\hat{a}_{j}^{\dagger})^{2}\), \(\hat{L}_{j-}=\hat{a}_{j}^{2}\).

Results of the simulated quantum master equation (8) with the coupling (25) are shown in Figs. 2 and 3. Interestingly, in Fig. 2(a) we see that the trajectories in the \(\chi\)-\(D\) plane are generally confined within the convex hull of randomly generated Gaussian states. However, this is not surprising given that the coupling is bilinear, and the initial state is Gaussian, so the evolved state should be Gaussian as well [61]. In Fig. 2(b) we see the time evolution of \(D\) and \(\chi\); in particular, it is notable that \(\chi\) is almost always monotonically increasing, whereas \(D\) has more significant reversals in direction. It can also be clearly seen from Fig. 2(a) that \(\chi\) is generally increasing even when \(D\) does not change appreciably, meaning that work will be wasted in such cases where \(k\) is not large enough for synchronization to occur.

Figure 2: (a) Trajectories in the \(\chi\)-\(D\) plane with \(k\) values (from left to right) \(k=5\), \(k=3\), \(k=1\), \(k=0.5\), \(k=-0.5\), \(k=-1\), \(k=-3\); classical (dotted line) and quantum (dashed line) lower bounds on \(\chi\), and convex hull of \(10^{6}\) random Gaussian states (solid line). (b) Time-dependence of \(D\) (solid lines, bottom to top) and \(\chi\) (dashed lines, top to bottom) with \(k\) values \(k=5\), \(k=3\), \(k=1\), \(k=0.5\). The frequencies of the two oscillators are \(\omega_{1}=2\pi\), \(\omega_{2}=3\pi\). Both bath temperatures are set to \(T=20\). The coupling strength \(\gamma_{+}=0.001\), and \(\gamma_{-}\) is determined via \(\beta\) and the local detailed balance condition, as described in the supplemental material [79].

In the classical limit, this dynamics reduces to \[\dot{z}_{1}=\left[\frac{k}{2}-i\omega_{1}-\gamma_{1}\left|z_{1}\right|^{2}\right]z_{1}+\frac{k}{2}(z_{2}-z_{1})-i2\sqrt{\gamma_{1}}\xi(t), \tag{26}\] \[\dot{z}_{2}=\left[\frac{k}{2}-i\omega_{2}-\gamma_{2}\left|z_{2}\right|^{2}\right]z_{2}+\frac{k}{2}(z_{1}-z_{2})-i2\sqrt{\gamma_{2}}\xi(t), \tag{27}\] where we use a complex phase space representation \(z_{j}=(x_{j}+ip_{j})/\sqrt{2}\) and \(\xi(t)\) denotes an idealized delta-correlated noise process [79]. Note that Eqs. (26) and (27) describe a pair of coupled Stuart-Landau oscillators with amplitude-dependent noise [98; 99]. We find that the classical Stuart-Landau system displays synchronization in the regime \(k\geq|\omega_{2}-\omega_{1}|\) (see Fig. 3 and supplemental material [79]). The corresponding quantum system is also synchronous for \(k\geq|\omega_{2}-\omega_{1}|\), and in fact synchronizes for smaller values of \(k\) than the classical system [79]. 
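A minimal Euler–Maruyama sketch of the classical dynamics (26)–(27) is given below. It assumes the standard Stuart–Landau form in which the bracketed term multiplies \(z_{j}\), and the damping coefficients, initial amplitudes, and step sizes are illustrative rather than taken from the paper's figures:

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2 = 2 * np.pi, 3 * np.pi        # natural frequencies, as in Fig. 2
g1 = g2 = 0.001                      # damping coefficients (illustrative)
k = 5.0                              # coupling; locking expected for k >= |w2 - w1|
dt, steps = 1e-4, 200_000

z = np.array([1.0 + 0.3j, 0.8 + 0.5j])   # illustrative initial amplitudes
D2 = np.empty(steps)
for n in range(steps):
    # Deterministic part of Eqs. (26)-(27).
    drift = np.array([
        (k / 2 - 1j * w1 - g1 * abs(z[0]) ** 2) * z[0] + (k / 2) * (z[1] - z[0]),
        (k / 2 - 1j * w2 - g2 * abs(z[1]) ** 2) * z[1] + (k / 2) * (z[0] - z[1]),
    ])
    # Euler-Maruyama step; dW approximates the delta-correlated noise xi(t).
    dW = rng.normal(size=2) * np.sqrt(dt)
    z = z + drift * dt - 2j * np.sqrt([g1, g2]) * dW
    # Trajectory-level proxy for D^2 of Eq. (6), with r_j = (Re z_j, Im z_j).
    D2[n] = 2 * (1 - abs(z.mean()) ** 2 / (abs(z) ** 2).mean())

print(D2[-1000:].mean())   # small when k >= |w2 - w1|, order one otherwise
```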
This wider regime of synchronization, extending well beyond the classical case, presents clear evidence of a quantum advantage.

_Concluding remarks._ The main result of this work is that we have found the minimal amount of work required for the synchronization of an arbitrary number of oscillators, and have bounded the speed at which synchronization may occur. Our numerical results for the model of an anti-\(\mathcal{PT}\)-symmetric Hamiltonian serve to illustrate an experimentally realizable system where this process may occur. Our analysis is formulated mostly in terms of informational quantities, and therefore allows for an information-theoretic interpretation of the synchronization process as a form of communication. This connection could be made more explicit in future work by relating our measure of synchronization to mutual information. It remains to be seen whether there are states that display a quantum advantage in the sense of Fig. 1, and a related goal for future work is to understand the apparent quantum advantage displayed in the synchronization of the dimer model (see Fig. 3).

J.T. acknowledges support from the Institute for Basic Science in South Korea (IBS-R024-Y2). The authors would like to thank M. Rohith for useful discussions. S.D. acknowledges support from the U.S. National Science Foundation under Grant No. DMR-2010127 and the John Templeton Foundation under Grant No. 62422.
2301.04493
The SeaLiT Ontology -- An Extension of CIDOC-CRM for the Modeling and Integration of Maritime History Information
We describe the construction and use of the SeaLiT Ontology, an extension of the ISO standard CIDOC-CRM for the modelling and integration of maritime history information. The ontology has been developed gradually, following a bottom-up approach that required the analysis of large amounts of real primary data (archival material) as well as knowledge and validation by domain experts (maritime historians). We present the specification of the ontology, RDFS and OWL implementations, as well as knowledge graphs that make use of this data model for integrating information originating from a large and diverse set of archival documents, such as crew lists, sailors registers, naval ship registers and payrolls. We also describe an application that operates over these knowledge graphs and which supports historians in exploring and quantitatively analysing the integrated data through a user-friendly interface. Finally, we discuss aspects related to the use, evolution and sustainability of the ontology.
Pavlos Fafalios, Athina Kritsotaki, Martin Doerr
2023-01-11T14:37:32Z
http://arxiv.org/abs/2301.04493v1
The SeaLiT Ontology - An Extension of CIDOC-CRM for the Modeling and Integration of Maritime History Information ###### Abstract. We describe the construction and use of the SeaLiT Ontology, an extension of the ISO standard CIDOC-CRM for the modelling and integration of maritime history information. The ontology has been developed gradually, following a bottom-up approach that required the analysis of large amounts of real primary data (archival material) as well as knowledge and validation by domain experts (maritime historians). We present the specification of the ontology, RDFS and OWL implementations, as well as knowledge graphs that make use of this data model for integrating information originating from a large and diverse set of archival documents, such as crew lists, sailors registers, naval ship registers and payrolls. We also describe an application that operates over these knowledge graphs and which supports historians in exploring and quantitatively analysing the integrated data through a user-friendly interface. Finally, we discuss aspects related to the use, evolution and sustainability of the ontology. Ontologies, Maritime History, CIDOC-CRM, Data Integration, Semantic Interoperability + Footnote †: Authors’ address: Pavlos Fafalios, [email protected]; Athina Kritsotaki, [email protected]; Martin Doerr, [email protected], Centre for Cultural Informatics and Information Systems Laboratory, FORTH-ICS, N. Plastira 100, Heraklion, Greece, GR-70013. ## 1. Introduction Maritime history is the study of human activity at sea. It covers a broad thematic element of history, focusing on understanding humankind's various relationships to the oceans, seas, and major waterways of the globe (MARTIN, 2017). A large area of research in this field requires the collection and integration of data coming from multiple and diverse historical sources, in order to perform qualitative and quantitative analysis of empirical facts and draw conclusions on possible impact factors (Falaldi, 2017; D'Alessio et al., 2018). Consider, for instance, the real use case of the SeaLiT project (ERC Starting Grant in the field of maritime history)1, which studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea between the 1850s and the 1920s (Bartin et al., 2018). Historians in this project have collected a large number of archival documents of different types and languages, including crew lists, payrolls, sailor registers, naval ship register lists, and employment records, gathered from multiple authorities in different countries (more about this project in Sect. 2.1). Complementary information about the same entity of interest, such as a ship, a port, or a captain, may exist in different archival documents. For example, for the same ship, one source may provide information about its owners, another source may provide construction details and characteristics of the ship (length, width, tonnage, horsepower, etc.), while other sources may provide information about the ship's voyages and crew. Footnote 1: [https://sealitproject.eu/](https://sealitproject.eu/) Information integration is crucial in this context for performing valid data analysis and drawing safe conclusions, such as finding answers to questions that require combining and aggregating information, like _"finding the number of sailors per residence location that arrived at a specific port and who were crew members in ships of a specific type, e.g. Brig"_. 
Moreover, information integration under a common data model can produce data of high value and long-term validity that can be reused beyond a particular research activity or project, as well as integrated with other datasets by the wider (historical science) community. To this end, this paper describes the construction and use of the _SeaLiT Ontology_. The ontology aims at facilitating a shared understanding of maritime history information by providing a common and extensible semantic framework for information modeling and integration. It uses and extends the CIDOC Conceptual Reference Model (CRM) (ISO 21127:2014)2 as a formal ontology of human activity, things and events happening in space and time [3]. Footnote 2: [https://cidoc-crm.org/](https://cidoc-crm.org/) The ontology was designed considering requirements and knowledge of domain experts (a large group of maritime historians), expressed through research needs, inference processes they follow, and exceptions they make. It was developed in a bottom-up manner by analysing large and heterogeneous amounts of primary data, in particular archival documents of different types and languages gathered from authorities in several countries, including crew lists, payrolls, civil registers, sailor registers, naval ship registers, employments records, censuses, and others. All modeling decisions were validated by the domain experts and, in practice, by transforming their data (transcripts) to a rich semantic network based on the SeaLiT Ontology, which enables them (through a user-friendly interface) to find answers to information needs that require combining information of different sources. We describe the methodology and the steps we followed for designing the ontology, and provide its specification, RDFS and OWL implementations, as well as knowledge graphs that make use of the ontology for integrating data transcribed from a large and diverse set of archival documents. We also describe a data exploration application that operates over these knowledge graphs and which currently supports maritime historians in exploring and analysing the integrated data. Table 1 provides the key access links to the SeaLiT Ontology as well as related resources and information. The rest of this paper is organised as follows: Section 2 describes the context of this work, provides the required background, and discusses related work. Section 3 details the methodology and principles we have followed for building the ontology. Section 4 presents the ontology, describes an example on how a part of the model was revised several times to incorporate new historical knowledge, and provides its specification as well as an RDFS and an OWL implementation. Section 5 describes the application of the ontology in a real context. Section 6 discusses its usage and sustainability. Finally, Section 7 concludes the paper and outlines future work. 
\begin{table} \begin{tabular}{l l} \hline \hline SeaLiT Ontology Specification & [https://zenodo.org/record/6797750](https://zenodo.org/record/6797750) \\ DOI of the SeaLiT Ontology & 10.5281/zenodo.6797750 \\ Namespace of the SeaLiT Ontology & [http://www.sealitproject.eu/ontology/](http://www.sealitproject.eu/ontology/) \\ SeaLiT Ontology RDFS (Turtle) & [https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.ttl](https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.ttl) \\ SeaLiT Ontology RDFS (RDF/XML) & [https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.rdf](https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1_RDFS.rdf) \\ SeaLiT Ontology OWL (RDF/XML) & [https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl](https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl) \\ \hline SeaLiT Knowledge Graphs (KGs) & [https://zenodo.org/record/6460841](https://zenodo.org/record/6460841) \\ DOI of SeaLiT KGs & 10.5281/zenodo.6460841 \\ ResearchSpace application over the KGs & [http://rs.sealitproject.eu/](http://rs.sealitproject.eu/) \\ \hline License of SeaLiT Ontology \& KGs & Creative Commons Attribution 4.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Key access links and information of the SeaLiT Ontology.

## 2. Context, Background and Related Work

### The SeaLiT Project

The ontology has been developed in the context of the SeaLiT project3, a European project in the field of maritime history (ERC Starting Grant, No 714437). The project studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea between the 1850s and the 1920s. Historians in SeaLiT investigate the maritime labour market, the evolving relations among ship-owners, captain, crew, and local societies, and the development of new business strategies, trade routes, and navigation patterns, during the transitional period from sail to steam. The main concepts on which the scientific research focuses are the ships (including various information such as type, usage, dimensions, technology), the people related to the ships (sailors, ship owners, students, relatives) and the historical events/activities related to these (such as voyages, recruitments, payments). Footnote 3: [https://sealitproject.eu/](https://sealitproject.eu/)

The archival sources considered and studied in SeaLiT range from handwritten ship log books, crew lists, payrolls and employment records, to registers of different types such as civil, sailors, students and naval ship registers. These archival sources have been gathered from different authorities in countries of the Mediterranean and the Black Sea, and are written in different languages, including Spanish, Italian, French, Russian, and Greek. The full archival corpus studied in SeaLiT is described on the project's website.4 Footnote 4: [https://sealitproject.eu/archival-corpus](https://sealitproject.eu/archival-corpus)

### The ISO standard CIDOC-CRM

The SeaLiT Ontology uses and extends CIDOC-CRM (Conceptual Reference Model)5, in particular its stable version 7.1.1, which means that each class of the SeaLiT Ontology is a direct subclass or a descendant of a CIDOC-CRM class. 
Footnote 5: [http://www.cidoc-crm.org/](http://www.cidoc-crm.org/) CIDOC-CRM is a high-level, event-centric ontology of human activity, things and events happening in spacetime, providing definitions and a formal structure for describing the implicit and explicit concepts and relationships used in cultural heritage documentation [3]. It is the international standard (ISO 21127:2014)6 for the controlled exchange of cultural heritage information, intended to be used as a common language for domain experts and implementers to formulate requirements for information systems, providing a way to integrate cultural heritage information from different sources. Footnote 6: [https://www.iso.org/standard/57832.html](https://www.iso.org/standard/57832.html)

The current stable release of CIDOC-CRM (version 7.1.1) consists of 81 classes and 160 unique properties. The highest-level distinction in CIDOC-CRM is represented by the top-level concepts of E77 Persistent Item (equivalent to the philosophical notion of endurant), E2 Temporal Entity (equivalent to the philosophical notion of perdurant) and, further, the concept of E92 Spacetime Volume, which describes the entities whose substance has or is an identifiable, confined geometrical extent in the material world that may vary over time. Fig. 1 depicts how the high-level classes of CIDOC-CRM are connected.

### Related Work

Over the last years, methods and technologies of the Semantic Web have started playing a significant and ever increasing role in historical research. The survey in [13] reviews the state of the art in the application of semantic technologies to historical research, in particular works related to i) knowledge modeling (ontologies, data linking), ii) text processing and mining, iii) search and retrieval, and iv) semantic interoperability (data integration, classification systems).

As regards ontologies for the modeling of _maritime history_ information, the most relevant work is an ongoing project on the ontology management environment OntoME [1] that aims to provide a data model for the field of maritime/nautical history.7 The project is a cooperation between the Huygens Institute for the History of the Netherlands, LARHRA and the Data for History consortium. The current (draft) model consists of 13 classes and 12 properties, while it makes use of CIDOC-CRM as well as extensions of CIDOC-CRM. The ontology is unfinished and not for use yet (as of December 15, 2022). Footnote 7: [https://ontome.net/namespace/66](https://ontome.net/namespace/66)

_Conflict8_ is an ontology developed in the context of the SAILS project (2010-2013)9 that models concepts useful for describing the First World War. The provided ontology version (0.1) is actually a _taxonomy_ consisting of 175 classes, some of which allow modeling information related to maritime history, like the classes Ship, Ship_journey, Ship_type, and Ownership. Similarly, there are ontologies that could be used for modeling other _parts_ of the model, such as _GoodRelations_ [8], a lightweight ontology for exchanging e-commerce information, for the part that concerns payments for products. Footnote 8: [http://ontologies.michelepasin.org/docs/conflict/index.html](http://ontologies.michelepasin.org/docs/conflict/index.html) Footnote 9: [http://sailsproject.cerch.kcl.ac.uk/](http://sailsproject.cerch.kcl.ac.uk/)

We chose to use CIDOC-CRM because it is the standard ontology for cultural heritage documentation, extensively used in the fields of cultural heritage, history and archaeology. 
It is directly related to the domain of discourse of history, as a discipline that studies the life of humans and societies in the past. This scope, studied from the point of view of maritime historical research, can be represented by the abstraction of reality offered by CIDOC-CRM. As an example, we can directly take advantage of the (direct or inherited) properties of the CIDOC-CRM class E7 Activity, such as _'P14 carried out by'_, _'P4 has time-span'_, _'P7 took place at'_, etc., and use them for describing instances of classes of the SeaLiT Ontology that are subclasses of E7 Activity (e.g. Voyage, Arrival, Recruitment, etc.). Therefore, using CIDOC-CRM facilitates data integration with relevant (existing or future) datasets that also make use of CIDOC-CRM, and it also enables data sustainability, because CIDOC-CRM is a living standard with a very active community that constantly works on it and improves it. Finally, there is a plethora of ontologies which have been developed as extensions of CIDOC-CRM, e.g. CRMas [14] for documenting archaeological science, CRMgeo [9] for geospatial information, CRMdig [18] for provenance of digital objects, IAM [4] for factual argumentation, and others.

Figure 1: High level properties and classes of CIDOC-CRM.

## 3. Design Methodology and Principles

### Overall Methodology

The ontology has been created gradually, following a bottom-up strategy (Beng et al., 2019), working with real empirical data and information needs, in particular digitised historical records (transcripts) and corresponding data structures in various forms, as well as research questions provided by a large group of historians. The archival material together with the research questions define the modeling requirements. The main characteristics of our strategy are summarised as follows:

* Study and analysis of a large and diverse set of archival sources related to maritime history. This material provides historical information about ships, persons (such as sailors, captains, ship owners, students), and relevant activities and events (such as voyages, recruitments, payments, teaching activities).
* Gathering of research questions and corresponding information needs (_competency questions_) for which the considered archival sources can provide answers or important relevant information.
* Lengthy discussions with a large group of maritime historians from different institutions and countries (Spain, Italy, France, Croatia, Greece), for consultation as well as for understanding the inference processes and exceptions they make.

In more detail, our approach focused on studying and analysing the historical sources from the historians' perspective, following their respective research questions and practices of documentation. In order to achieve that, we consulted all the data providers (coming from different research teams and countries) over a long period and did extensive research on their documentation practices and the historical data, for the development and the validation of the model. As a result, the model was designed from actual data values, from existing (and used) structured information sources (such as spreadsheets) and historical records (transcripts) that include the original information. The model's concepts were refined several times during the span of the project to accommodate new information coming from new kinds of sources. 
Table 2 provides the considered archival sources as well as an overview of the recorded information and an example record (transcript) for each source.10 Footnote 10: A web application that allows exploring the data in the transcripts of these archival sources is available at: [https://catalogues.sealitproject.eu/](https://catalogues.sealitproject.eu/)

As regards the research questions and information needs provided by the historians, the majority concern aggregated information, such as _number of sailors per origin location that arrived at a specific port, average tonnage of ships, wage level per country, percentages of immigration in relation to the sailors' profession_, etc. Other information needs concern the retrieval of a specific list of entities (e.g. _ship construction places during a specific time period_), comparative information (e.g. _time of sailors' service in relation to the time on land, number of women/men in ships_, etc.), or the retrieval of a specific value (e.g. _total number of officers employed by the company in a specific year or span of years_).11 Footnote 11: The full list of information needs is available at [https://users.ics.forth.gr/~fafafalios/SeaLiT_Competency_Questions_InfoNeeds.pdf](https://users.ics.forth.gr/~fafafalios/SeaLiT_Competency_Questions_InfoNeeds.pdf)

For creating the ontology, we followed a custom engineering methodology (Gathering and Gathering, 2018) which, though, maintains most of the features supported by existing methodologies, such as HCOME (Gathering and Gathering, 2018) and DILIGENT (Gathering and Gathering, 2018). In particular:

* Data-driven / bottom-up processing (our strategy for the development of the ontology)
* Involvement of domain experts (maritime historians in our case)
* Iterative processing (gradual, highly-iterative ontology development)
* Collaborative engineering processing (within a small team of conceptual modeling experts)
* Validation and exploitation (validation by domain experts and application in a real context)
* Detailed versioning (multiple intermediate versions, currently in stable version 1.1)

\begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}} \hline \hline Archival source & Overview of recorded information and example transcript \\ \hline Crew and displacement list (Roll) & ships (name, type, construction location, construction year, registry location, owners), ports of provenance, arrival ports, destination ports, crew members (name, father’s name, birth place, residence location, profession, age), embarkation ports, discharge ports. **[example transcript: [https://tinyurl.com/ukexzefle](https://tinyurl.com/ukexzefle)]** \\ \hline Crew List (Ruoli di Equipaggio) & ships (name, type, construction location, construction year, registry number, registry port, owners), voyages (date from/to, duration, total crew number), destinations, departure ports, arrival ports, crew members (name, residence location, birth year, serial number, profession), embarkation ports, discharge ports. **[example transcript: [https://tinyurl.com/zu35frya](https://tinyurl.com/zu35frya)]** \\ \hline General Spanish Crew List & ships (name, type, tonnage, registry port), ship owners, crew members (name, age, residence location), voyages (date from/to, total crew number), embarkation ports, destinations. 
**[example transcript: [https://tinyurl.com/3axscfret](https://tinyurl.com/3axscfret)]** \\ \hline Sailors Register (Libro de registro de marineros) & seafarers (name, father’s name, mother’s name, birth date, birth place, profession, military service organisation locations). **[example transcript: [https://tinyurl.com/2p8kzmon](https://tinyurl.com/2p8kzmon)]** \\ \hline Register of Maritime Personnel & persons (name, father’s name, mother’s name, birth place, birth date, residence location, marital status, previous profession, military service organisation location). **[example transcript: [https://tinyurl.com/4v6hnwjj](https://tinyurl.com/4v6hnwjj)]** \\ \hline Seagoing Personnel & persons (name, father’s name, marital status, birth date, profession, end of service reason, work status type), ships (name), destinations. **[example transcript: [https://tinyurl.com/2x5cu37n](https://tinyurl.com/2x5cu37n)]** \\ \hline Naval Ship Register List & ships (name, type, tonnage, length, construction location, registration location, owner). **[example transcript: [https://tinyurl.com/bdnx87tr](https://tinyurl.com/bdnx87tr)]** \\ \hline List of Ships & ships (name, previous name, type, registry port, registry year, construction place, construction year, tonnage, engine construction place, engine manufacturer, nominal power, indicated power, owners). **[example transcript: [https://tinyurl.com/2cphfpe](https://tinyurl.com/2cphfpe)]** \\ \hline Civil Register & persons (name, profession, origin location, age, sex, marital status, death location, death reason, related persons). **[example transcript: [https://tinyurl.com/bdzeja8n](https://tinyurl.com/bdzeja8n)]** \\ \hline Maritime Register, La Ciotat & persons (name, birth date, birth place, residence location, profession, service sector), embarkation locations, disembarkation locations, ships (name, type, navigation type), captains, patrons. **[example transcript: [https://tinyurl.com/fkhyp4a](https://tinyurl.com/fkhyp4a)]** \\ \hline Students Register & students (origin location, profession, employment company, religion, related persons), courses (title, subject, date from/to, semester, total number of students). **[example transcript: [https://tinyurl.com/mrypechb](https://tinyurl.com/mrypechb)]** \\ \hline Census La Ciotat & occupants (name, age, birth year, birth place, nationality, marital status, religion, profession, working organisation, household role, address). **[example transcript: [https://tinyurl.com/4drfcbtt](https://tinyurl.com/4drfcbtt)]** \\ \hline Census of the Russian Empire & occupants (name, patronymic, sex, age, marital status, estate, religion, native language, household role, occupation, address). **[example transcript: [https://tinyurl.com/43xcrvux](https://tinyurl.com/43xcrvux)]** \\ \hline Payroll (of Greek Ships) & ships (name, type, owners), captains, voyages (date from/to, total days, days at sea, days at port, overall total wages, overall pension fund, overall net wage), persons (name, adult/child, literacy, origin location, profession/rank), employments (recruitment date, discharge date, recruitment location, monthly wage, total wage, pension fund, net wage). **[example transcript: [https://tinyurl.com/ztijkjw7](https://tinyurl.com/ztijkjw7)]** \\ \hline Payroll (of Russian Steam Navigation and Trading Company) & ships (name, owners), persons (name, patronymic, adult/child, sex, birth date, estate, registration place), recruitments (port, type of document, rank/specialisation, salary per month). 
**[example transcript: [https://tinyurl.com/y5urjhc9](https://tinyurl.com/y5urjhc9)]** \\ \hline Employment records (Shipyards of Messageries Maritimes, La Ciotat) & workers (name, sex, birth year, birth place, residence location, marital status, profession, status of service in company, workshop manager). **[example transcript: [https://tinyurl.com/yc3havck](https://tinyurl.com/yc3havck)]** \\ \hline Logbook & ships (name, type, telegraphic code, tonnage, registry port, owners), captains, departure ports, destination ports, route movements, calendar event types. **[example transcript: [https://tinyurl.com/mxx2re9k](https://tinyurl.com/mxx2re9k)]** \\ \hline Accounts Book & ships (name, type, owners), voyages, captains, departure ports, destination ports, ports of call, transactions (type, recording location, supplier, mediator, receiver). **[example transcript: [https://tinyurl.com/4uNyb8](https://tinyurl.com/4uNyb8)]** \\ \hline \hline \end{tabular} \end{table} Table 2. Considered archival sources and type of recorded information.

### Design Steps and Principles

The basis for the model was CIDOC-CRM, since it is a standard suitable for recording historical information relating who, when, where, and what. From an ontological point of view, we followed the steps below:

1. We extended CIDOC-CRM by creating new classes as subclasses of CIDOC-CRM classes and defining properties accordingly (with some of them being subproperties of CIDOC-CRM properties). After extending or revising the model for a given type of archival source and corresponding information needs, we created mappings for transforming the data from the source schema to a semantic network (RDF triples) based on the designed (target) model. This conceptual alignment was an important step in the ontology development process, contributing to redesigning concepts and finalising the model.
2. We distinguished the entities included in the existing schemata into those that directly or indirectly imply an _event_ and those that imply _objects_, mobile or immobile, and classified them in abstraction levels according to whether they represent individuals or sets of individuals. We realised that most binary relationships acquire substance as temporal entities (e.g. _has met, has created_, etc.). This principle helped us to detect hidden events in the data structures.
3. We classified the existing relations between the entities according to the abstraction level which their domain and range entity belong to, and created class and property hierarchies accordingly. We did not define the same property twice for different classes, but found the most general (super)class that the property applies to. The discovery of repeating properties for different classes suggested that they rely on a common, more general concept, causal to the ability to have such a relation in the first place. Finding the single most general concept to describe this common generalization allowed the creation of a general class to which the properties can be applied and from which these relations can be inherited, by assigning the originally modelled classes as subclasses of the newly created generalization (as in the case of the classes _Money for Service_ and _Legal Object Relationship_).
4. We found classes for the relevant properties, and not properties for relevant classes (e.g. _Voyage_ for the property 'voyages', _Ship Construction_ for 'constructed', etc.). We detected the general classes of which each property is characteristic. 
In other terms, we found the one most specific class that generalizes over all classes for which the property applies as domain or range.
5. We defined concepts by finding their identity criteria, by distinguishing what is and what is not an instance of these concepts. We identified classes that exist independently of the property, and not "anything that has this property" (e.g. the case of the _Service_ concept).
6. The classes and relationships developed can answer queries of a _global_ nature. By global queries we mean those that users would address to more than one database (source) at the same time in order to get a comprehensive answer, in particular including joins across databases.

It should also be emphasised that the goal was not to model 'everything' but rather to model the necessary and well understood concepts for this specific domain. The ontology was built following these principles. Its design and development was an iterative process with several repetitions of the steps described above.

## 4. The SeaLiT Ontology

We first provide an overview of the ontology (Sect. 4.1), then we describe an ontology evolution example (Sect. 4.2), and finally we present the specification of the ontology as well as RDFS and OWL implementations (Sect. 4.3).

### Ontology Overview

The ontology currently (version 1.1) contains 46 classes, 79 properties and 4 properties of properties, allowing the description of information about _ships_, _ship voyages_, _seafaring people_, _employments_ and _payments_, _teaching activities_, as well as a plethora of other related activities and characteristics. Appendices A and B provide the full class and property hierarchy, respectively.

Fig. 2 shows how information about a _ship_ is modelled.12 A Ship (subclass of E22 Human-Made Object) is the result of a Ship Construction activity (subclass of E12 Production) which gave the Ship Name (subclass of E41 Appellation) to the ship. A ship also has some characteristics, like Horsepower and Tonnage (subclasses of E54 Dimension; this allows providing, apart from the value, the corresponding measurement unit, a note, etc.), and is registered through a Ship Registration (subclass of E7 Activity) by a Port of Registry (subclass of E74 Group), with a ship flag of a particular Country (subclass of E53 Place) and with a particular Ship ID (subclass of E42 Identifier). Modeling the ship ID as a class allows including additional information about the identifier, such as which authority provided the identifier, when, etc. (by connecting it to the CIDOC-CRM class E15 Identifier Assignment). Finally, a ship has one or more Ship Ownership Phases (subclass of Legal Object Relationship), each one initialized by a Ship Registration and terminated by a De-flagging activity. Note here that all classes related to activities (like Ship Construction, Ship Repair, De-flagging, etc.) can make use of the CIDOC-CRM property _'P4 has time-span'_ for providing temporal information. Footnote 12: The classes whose names start with the letter 'E' followed by a number are CIDOC-CRM classes (these are in green boxes in the figures). All others are classes of the SeaLiT Ontology (in blue boxes). Accordingly, all properties whose names start with the letter 'P' followed by a number are properties of CIDOC-CRM, while all others are properties of the SeaLiT Ontology.

Fig. 3 shows how information about a _ship voyage_ is modelled in the ontology. 
First, a Voyage (subclass of E7 Activity) concerns a particular Ship, navigated by one or more captains (E39 Actor), and has a _starting from_ place, a _destination_ place, and a _finally arriving at_ place (E53 Place). Then, the main activities during a ship voyage include Loading things, Leaving from a place, Passing by or through a place, Arrival at a place, and Unloading things. All these activities are linked to an E52 Time-Span through the CIDOC-CRM property _'P4 has time-span'_.

Figure 2. Modelling information about a ship.

Fig. 4 shows how the ontology allows describing information about _employments and payments_. Money for Service (subclass of E7 Activity) is given to an E39 Actor for a particular Service (subclass of E7 Activity).13 The class Money for Service has two specialisations (subclasses): Money for Things and Money for Labour, while the class Employment is a specialisation of the class Service. A Crew Payment concerns a particular Voyage and is a specialisation of Money for Labour. In this context, a Labour Contract (subclass of E29 Design or Procedure) specifies the conditions of Money for Labour. An Employment starts with a Recruitment (subclass of E7 Activity) and ends with a Discharge (subclass of E7 Activity). Footnote 13: We use the term ‘money’ instead of ‘payment’, because we want to indicate that there was a money transaction, e.g. using lira, franc, etc. (in older times, a payment could be conducted without the use of money, e.g. using things).

Fig. 5 shows how information about _persons_ (seagoing people, such as captains, crew members, students, etc.) is modelled in the ontology. A person (E21 Person) is registered through a Civil Registration activity and receives an identifier (E42 Identifier). A person has a first name and last name (E62 String), works at an organisation or company (E74 Group), has an age (E60 Number) at a specific time (the time of the information recording), as well as a set of other properties, in particular a Religion Status, a Literacy Status, a Sex Status, a Language Capacity, a Social Status, and a Profession (all subclasses of E55 Type). The use of E55 Type as superclass of these properties/qualities (instead of modeling them as temporal entities) is a good solution when the sources (such as a civil register or a census document) do not provide enough temporal information to infer/observe the corresponding event (this is exactly the case with the archival sources of the SeaLiT project). In addition, a Punishment (subclass of E7 Activity) or a Promotion (subclass of E13 Attribute Assignment) can be given to a person. A Promotion is related either to a Social Status promotion or to a job/career (Profession) promotion.

Figure 3: Modelling information about a ship voyage.

Finally, Fig. 6 shows how the ontology allows describing information about teaching activities related to seafaring. A Teaching Unit is an activity that can be specialised to Course or Section. It is connected to a Subject (subclass of E55 Type), the students (E39 Actor) who participated in the teaching unit, the number of participating students (E60 Number), as well as one or more other teaching units through the CIDOC-CRM property _'P9 consists of'_. The latter allows, in particular, describing the information that a course consists of sections.

Figure 4: Modelling information about employments and payments.

Figure 5: Modelling information about persons.
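As an illustration of how such descriptions look as data, the following sketch (ours) instantiates a ship and a voyage with rdflib. The property names _voyages_ and _finally_arriving_at_ follow the ontology namespace as used in the SPARQL queries of Section 5, while the instance URIs are invented for the example:

```python
from rdflib import Graph, Namespace, RDF

SEALIT = Namespace("http://www.sealitproject.eu/ontology/")
CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
KB = Namespace("http://example.org/kb/")   # hypothetical instance namespace

g = Graph()
g.bind("sealt", SEALIT)
g.bind("crm", CRM)

ship, voyage, marseille, span = KB.ship1, KB.voyage1, KB.Marseille, KB.span1

g.add((ship, RDF.type, SEALIT.Ship))
g.add((voyage, RDF.type, SEALIT.Voyage))
g.add((marseille, RDF.type, CRM.E53_Place))

# A voyage of the ship, finally arriving at Marseille (cf. Fig. 3).
g.add((ship, SEALIT.voyages, voyage))
g.add((voyage, SEALIT.finally_arriving_at, marseille))

# Temporal information through the inherited CIDOC-CRM property P4
# (whose RDFS local name is hyphenated).
g.add((span, RDF.type, CRM["E52_Time-Span"]))
g.add((voyage, CRM["P4_has_time-span"], span))

print(g.serialize(format="turtle"))
```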
Figure 6. Modelling information about teaching activities.

### Ontology Evolution Example

The ontology development process lasted more than two years, including a large number of intermediate versions, before releasing the first _stable_ version (1.0). In particular, the ontology elements (classes and properties) were revised several times based on (a) new evidence coming from newly-considered archival sources, and (b) new requirements (information needs) by the domain experts (maritime historians). Such new evidence and requirements required either the definition of new elements, such as the creation of a new class or property, or the revision of an existing set of elements that concern a part of the model.

Fig. 7 shows how the part of the ontology that concerns _ship ownership_ was revised several times during the ontology development process. A first requirement provided by the historians was the ability to find all ships per owner. The analysed archival material (_crew lists_) only provided the name of the owner, where the value was either the name of a person or the name of a company. Based on this evidence, the property _'has owner'_ was created, connecting an instance of Ship with an instance of the CIDOC-CRM class E39 Actor (v1 in Fig. 7).

Another source (_naval ship register lists_) provided information about ships' previous owners, while a new requirement was the ability to find the number of first owners per ship during a period of time. Based on this, as well as on the fact that the binary relationship _has owner_ implies/hides a temporal entity, we defined the class Ship Ownership Phase, the property _'has phase'_ for connecting a ship to a ship ownership phase, and the property _'in time'_ for connecting the ownership phase to an E52 Time-Span, while the property _'has owner'_ was revised to connect the ship ownership phase with an E39 Actor (v2 in Fig. 7).

A ship can have many names during its lifespan, while an owner can own more than one ship with the same name (as shown in _logbooks_ and _crew and displacement lists_). According to the historians, ownership usually assigns a name to a ship, and a ship changes its name under a new ownership state at a specific time. Based on this historical knowledge, the property _'ownership under name'_ was created, enabling a link from the ship ownership phase to a Ship Name (v3 in Fig. 7).

Evidence shows that the ownership of a ship is a type of information that can be inferred and not directly observed. An ownership phase can be traced by the _ship registration_ activity that initiates it and by the _de-flagging_ activity that terminates it. The documentation of a ship registration in _Austrian Lloyd's fleet lists_, in particular, includes information about the ship's construction place and date, which together with the name given to the ship after construction constitute safe criteria to identify a ship. Based on this, the classes Ship Registration 
(subclass of E7 Activity), De-flagging (subclass of E7 Activity) and Ship Construction (subclass of E12 Production) were defined, together with the properties _'registers'_ (for linking a registration activity to a ship), _'ownership is initialized by'_ (for linking an ownership phase to a registration activity), _'de-flagging of'_ (for linking a de-flagging activity to a ship), _'ownership is terminated by'_ (for linking an ownership phase to a de-flagging activity), _'constructed'_ (for linking a construction activity to a ship), and _'under name'_ (for linking a construction activity to a ship name) (v4 in Fig. 7).

The ownership of a ship is actually a legal agreement in which an owner holds shares. For example, according to Italian sources (_maritime registers_), the ownership of a ship was structured in 24 parts ("carati"). Sometimes only one ship owner possessed all 24 parts. However, much more frequently the 24 parts were distributed among several ship owners. Based on this evidence, a new class Shareholding was created as a specialisation (subclass) of Ship Ownership Phase, together with the property _'of share'_ for assigning the number of shares to a shareholding phase (v5 in Fig. 7).

In the last ontology version (see Fig. 2), Ship Ownership Phase is defined as a specialisation (subclass) of the class Legal Object Relationship, together with the class Legal Document with Temporal Validity, which comprises official documents or legal agreements that are valid for a specific time-span. The more general class Legal Object Relationship represents kinds of relationships whose state and time-span are not documented and thus cannot be directly observed. We can only observe the relationship through the events that initialise or terminate the state (starting and terminating events).

Figure 7: Ontology evolution example for modeling ship ownership information.

### Specification, RDFS and OWL Implementation

The specification of the ontology and its RDFS implementation are available through the Zenodo repository (DOI: 10.5281/zenodo.6797750)14, under a Creative Commons Attribution 4.0 license. The (resolvable) namespace of the ontology pointing to the RDFS implementation is: [http://www.sealitproject.eu/ontology/](http://www.sealitproject.eu/ontology/). Footnote 14: [https://zenodo.org/record/6797750](https://zenodo.org/record/6797750)

The specification document defines the ontology classes and properties. For each class, it provides: i) its superclasses, ii) its subclasses (if any), iii) a scope note (a textual description of the class's intension), iv) one or more examples of instances of this class, and v) its properties (if any), each one represented by its name and the range class that it links to. For each property, the specification provides: i) its domain, ii) its range, iii) its superproperties (if any), iv) its subproperties (if any), v) a scope note, vi) one or more examples of instances of this property, and vii) its properties (if any). If a property has an inverse property, this is provided in parentheses next to the property name. Scope notes are not formal modelling constructs, but are provided to help explain the intended meaning and application of a class or property. They refer to a conceptualisation common to domain experts (maritime historians) and disambiguate between different possible interpretations. The RDFS implementation provides the scope note of each class or property using _'rdfs:comment'_. 
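To make these conventions concrete, a small illustrative Turtle fragment in the style of the RDFS implementation is sketched below, parsed here with rdflib. The class and property names follow the documented naming conventions, but the fragment is ours and the scope-note text is paraphrased, so it should not be read as verbatim content of the released file:

```python
from rdflib import Graph

# Illustrative fragment in the style of the SeaLiT Ontology RDFS file;
# the comment (scope note) text below is paraphrased, not quoted.
fragment = """
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix crm:  <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix sealit: <http://www.sealitproject.eu/ontology/> .

sealit:Ship a rdfs:Class ;
    rdfs:subClassOf crm:E22_Human-Made_Object ;
    rdfs:comment "Scope note (paraphrased): vessels documented in the sources." .

sealit:voyages a rdf:Property ;
    rdfs:domain sealit:Ship ;
    rdfs:range  sealit:Voyage ;
    rdfs:comment "Scope note (paraphrased): links a ship to its voyages." .
"""

g = Graph()
g.parse(data=fragment, format="turtle")
print(f"parsed {len(g)} triples")
```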
For producing the class and property URIs, the space character in the name of a class or property is replaced by the underscore character. Inverse properties are provided using _'owl:inverseOf'_. The version of the ontology is provided through the property _'owl:versionInfo'_ and its license through the Dublin Core term _'dc:license'_. For the properties pointing to classes that are represented as literals in RDF (seven properties in total, pointing to the CIDOC-CRM classes E60 Number or E62 String), we define their range as rdfs:Literal. We also provide an OWL implementation of the ontology, containing 71 object properties, 7 datatype properties and 1 symmetric property (the property _'related to'_).15 Footnote 15: [https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl](https://sealitproject.eu/ontology/SeaLiT_Ontology_v1.1.owl)

Since RDF does not provide a direct way to express properties of properties, we make use of _property classes_ (as suggested and implemented by CIDOC-CRM), as a reification method for encoding the four properties of properties defined in the SeaLiT Ontology. Using this method, a class is created for each property having a property. This property class can then be instantiated and used together with the properties _'P01 has domain'_ and _'P02 has range'_ provided by the RDFS implementation of CIDOC-CRM.16 For example, Fig. 8 depicts how the property _'in the role of'_ of the property _'works at'_ is implemented using the idea of property classes. First, the property class PC works at is provided for representing the property _'works at'_. During data generation/instantiation, an instance of this property class is created pointing to the domain (an instance of E21 Person) and the range (an instance of E74 Group) of the original property _'works at'_ using the properties _'P01 has domain'_ and _'P02 has range'_, respectively. Then, we can provide the property of property _'in the role of'_ by directly linking it to the property class instance. Footnote 16: [https://cidoc-crm.org/rdfs/7.1.1/CIDOC_CRM_v7.1.1_PC.rdf](https://cidoc-crm.org/rdfs/7.1.1/CIDOC_CRM_v7.1.1_PC.rdf)

## 5. Application

### SeaLiT Knowledge Graphs

The SeaLiT Ontology has been used in the context of the SeaLiT project (cf. Section 2.1) for transforming the data transcribed from a set of disparate, localised information sources of maritime history to a rich and coherent semantic network of integrated data (a _knowledge graph_). The objective of this transformation is the ability to run complex questions over the integrated data, like those provided by the historians that require combining information from more than one source. In particular, the original archival documents are collaboratively transcribed and documented by historians in tabular form (similar to spreadsheets) using the FAST CAT system (Corba et al., 2016). In FAST CAT, data from different sources are transcribed as _records_ belonging to specific _templates_. A _record_ organises the data and metadata of an archival document in a set of tables, while a _template_ represents the structure of a single data source, i.e. it defines the data entry tables. Currently, more than 600 records have already been created and filled in FAST CAT by historians of SeaLiT. An example of a record for each different type of source (template) is provided in Table 2. For transforming the transcribed data to RDF based on the SeaLiT Ontology, schema mappings are created for each distinct FAST CAT template. 
These mappings define how the data elements of the FAST CAT records (e.g. the columns of a table) are mapped to ontology classes and properties. To create the schema mappings and run the transformations, we make use of the X3ML mapping definition language and framework [12]. The transformed data (RDF triples) are then ingested into a semantic repository (RDF triplestore) which can be accessed by external applications and services using the SPARQL language and protocol. The ResearchSpace application (described below) operates over such a repository for supporting historians in searching and quantitatively analysing the integrated data. The reader can refer to [5] for more information about the FAST CAT system and the data transcription, curation and transformation processes.

The generated knowledge graphs are available through the Zenodo repository (DOI: 10.5281/zenodo.6460841), under a Creative Commons Attribution 4.0 license. This dataset currently consists of more than 18.5M triples, providing integrated information for about 3,170 ships, 92,240 persons, 935 legal bodies, and 5,530 locations. These numbers might change in a future version, since data curation, including instance matching, is still ongoing and new archival documents are being transcribed in FAST CAT.

### ResearchSpace Application

For supporting historians in exploring the SeaLiT Knowledge Graphs (and thus the integrated data), we make use of ResearchSpace [15], an open source platform that offers a variety of functionalities, including a _query building_ interface that supports users in gradually composing complex queries through an intuitive, user-friendly interface. The results can then be browsed, filtered, or analysed quantitatively through different visualisations, such as bar charts. The application is accessible at: http://rs.sealitproject.eu/.

The query building interface of ResearchSpace has been configured for the case of the SeaLiT Knowledge Graphs. In particular, the following searching categories have been defined: _Ship_, _Person_, _Legal Body_, _Crew Payment_, _Place_, _Voyage_, _Course_, _Record_, _Source_. By selecting a category (e.g. _Ship_) the user is shown a list with its connected categories. By selecting a connected category (e.g. _Place_) the user can then select a property connecting them (e.g. _arrived at_) as well as an instance/value (e.g. _Marseille_; thus the user is searching for ships that arrived at Marseille). Such a property actually corresponds to a path in the knowledge graph that connects instances of the selected categories.

Figure 8: Representing a property of property in RDF using a property class.

Fig. 9 shows a screenshot of the system. In this example, the user has searched for _persons that were crew members at ships that arrived at Marseille_ (ResearchSpace link to the query: https://tinyurl.com/2p8ky96e), and has selected to group the persons by their _residence location_ and visualise the result in a bar chart. From the bar chart we see that the majority of persons had _Camogli_ as their residence location. This query corresponds to a real information need provided by the historians of SeaLiT.

For retrieving the results and creating the chart, ResearchSpace internally translates the user interactions to SPARQL queries that are executed over the SeaLiT Knowledge Graphs.
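The same knowledge graphs can also be queried programmatically, outside ResearchSpace. A minimal Python sketch using SPARQLWrapper is shown below; the endpoint URL is hypothetical, and the query reuses the _'voyages'_ property that appears in the examples that follow.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical SPARQL endpoint URL for the SeaLiT triplestore.
endpoint = SPARQLWrapper("https://rs.sealitproject.eu/sparql")
endpoint.setQuery("""
    PREFIX sealt: <http://www.sealitproject.eu/ontology/>
    SELECT (COUNT(DISTINCT ?ship) AS ?numShips)
    WHERE { ?ship sealt:voyages ?voyage }
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print("ships with recorded voyages:", row["numShips"]["value"])
```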
For instance, the below SPARQL query retrieves the persons that were crew members at ships that had _Marseille_ as their final destination:

```
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealt: <http://www.sealitproject.eu/ontology/>

SELECT DISTINCT ?person WHERE {
  ?ship sealt:voyages ?voyage .
  ?voyage sealt:finally_arriving_at <https://rs.sealitproject.eu/kb/location/Marseille> ;
          crm:P14_carried_out_by ?person }
```

Figure 9: Query building and visualisation of results in the ResearchSpace application.

For grouping the persons by their residence location and showing a chart, the below SPARQL query is executed for retrieving the relevant data:

```
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealt: <http://www.sealitproject.eu/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?location ?locationName (COUNT(?person) AS ?numOfPersons) WHERE {
  ?ship sealt:voyages ?voyage .
  ?voyage sealt:finally_arriving_at <https://rs.sealitproject.eu/kb/location/Marseille> ;
          crm:P14_carried_out_by ?person .
  ?person crm:P74_has_current_or_former_residence ?location .
  ?location rdfs:label ?locationName .
} GROUP BY ?location ?locationName ORDER BY ?locationName
```

Such queries can also utilise the RDFS inference rules, e.g. those based on the _subClassOf_ and _subPropertyOf_ relations. An example is the use of the CIDOC-CRM property _'P9 consists of'_ for getting all voyage-related activities of a particular ship (leaving by a place, arrival at a place, passing by or through a place, loading things, unloading things), as shown in the below SPARQL query:

```
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
PREFIX sealt: <http://www.sealitproject.eu/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?activity ?activityName WHERE {
  <SHIP-URI> sealt:voyages ?voyage .
  ?voyage crm:P9_consists_of ?activity .
  ?activity rdfs:label ?activityName }
```

In this case, we exploit the fact that the property _'P9 consists of'_ is a super-property of the properties _'consists of leaving'_, _'consists of arrival'_, _'consists of passing'_, _'consists of loading'_, and _'consists of unloading'_.

The type of historians' research questions / information needs that can be answered (either directly or indirectly) using the ResearchSpace platform over the integrated data mainly depends on the actual archival material that is transcribed and transformed to RDF based on the SeaLiT Ontology, and less on the ontology itself. Specifically, the ontology was designed considering community requirements and material evidence; therefore, if the data needed to answer an information need (or to find important information related to the information need) exists in the transcripts (and thus in the transformed data), then the question can be answered either fully, or partially through the retrieval of important relevant information.
For example, in the case of SeaLiT, there are transcripts (FAST CAT records) containing tables that are not fully filled, either because some archival documents do not provide the corresponding information, or just because historians did not fill the columns during data transcription (planning to do it at a later stage). In this case, information needs that require this missing information cannot be satisfied. In the future, if new types of information (and corresponding information needs) appear that cannot be modelled by the ontology, the ontology will be extended/revised and a new version will be released.

With respect to incomplete information, missing entity attributes (e.g. an unknown construction location for a particular ship) are in general very common in historical-archival research, but at the same time important for historians to know about, because they can affect the interpretation of quantitative analysis results. Our configuration of ResearchSpace considers missing information by representing it as an 'unknown' value, e.g. by showing an 'unknown' column in a bar chart.

## 6. Usage and Sustainability

As already stated, the ontology has been created and used in the context of the SeaLiT project for transforming data transcribed from archival documents of maritime history to a rich semantic network. The integrated data of the semantic network allows a large group of maritime historians to perform quantitative and qualitative analysis of the transcribed material (through the user-friendly interface provided by the ResearchSpace platform) and find important information relevant to their research needs. A continuation of the relevant activities is expected after the end of the SeaLiT project through the close collaboration of the two involved institutions of the Foundation for Research and Technology - Hellas (FORTH): the Institute of Mediterranean Studies (coordinator of SeaLiT) and the Institute of Computer Science (data engineering partner in SeaLiT). In particular, the ontology will be extended as soon as a new type of archival material needs to be transcribed and integrated into the SeaLiT Knowledge Graphs.

The long-term sustainability of the ontology is assured through our participation in relevant communities, in particular the CIDOC-CRM SIG (https://www.cidoc-crm.org/sig-members) and the Data for History Consortium (http://dataforhistory.org/members), an international consortium aiming at establishing a common method for modelling, curating and managing data in historical research. There is already interest in using (and probably extending) the ontology in the context of other (ongoing) projects in the field of historical/archival research. In addition, the part of the model which is about employers and payments is considered for the creation of a new CIDOC-CRM family model about social transactions and bonds (there are relevant discussions on this in the CIDOC-CRM Special Interest Group; see issues 420 and 557).

## 7. Conclusion

We have presented the construction and use of the SeaLiT Ontology, an extension of CIDOC-CRM for the modeling and integration of data in the field of maritime history. The ontology aims at facilitating a shared understanding of maritime history information, by providing a common and extensible semantic framework (a _common language_) for evidence-based information integration.
We provide the specification of the ontology, an RDFS and an OWL implementation, as well as knowledge graphs that make use of the ontology for integrating a large and diverse set of archival documents into a rich semantic network. We have also presented a real, working application (a ResearchSpace deployment) that operates on top of the knowledge graphs and supports maritime historians in exploring and analysing the integrated data through a user-friendly interface. In the near future, we plan to a) investigate possible extensions of the ontology based on new data modeling requirements, b) improve the scope notes of classes and properties in the specification document and add more examples (and then provide a new ontology version), and c) create and make available a JSON-LD context of the ontology for use in Web-based programming environments.

### Acknowledgements

This work has received funding from the European Union's Horizon 2020 research and innovation programme under i) the Marie Sklodowska-Curie grant agreement No 890861 (Project ReKnow), and ii) the European Research Council (ERC) grant agreement No 714437 (Project SeaLiT).
2305.19135
Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization
Portrait stylization, which translates a real human face image into an artistically stylized image, has attracted considerable interest and many prior works have shown impressive quality in recent years. However, despite their remarkable performances in the image-level translation tasks, prior methods show unsatisfactory results when they are applied to the video domain. To address the issue, we propose a novel two-stage video translation framework with an objective function which enforces a model to generate a temporally coherent stylized video while preserving context in the source video. Furthermore, our model runs in real-time with the latency of 0.011 seconds per frame and requires only 5.6M parameters, and thus is widely applicable to practical real-world applications.
Doyeon Kim, Eunji Ko, Hyunsu Kim, Yunji Kim, Junho Kim, Dongchan Min, Junmo Kim, Sung Ju Hwang
2023-05-30T15:46:25Z
http://arxiv.org/abs/2305.19135v1
# Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization

###### Abstract

Portrait stylization, which translates a real human face image into an artistically stylized image, has attracted considerable interest and many prior works have shown impressive quality in recent years. However, despite their remarkable performances in the image-level translation tasks, prior methods show unsatisfactory results when they are applied to the video domain. To address the issue, we propose a novel two-stage video translation framework with an objective function which enforces a model to generate a temporally coherent stylized video while preserving context in the source video. Furthermore, our model runs in real-time with a latency of 0.011 seconds per frame and requires only 5.6M parameters, and thus is widely applicable to practical real-world applications.

## 1 Introduction

_Portrait stylization_, which aims to transform a real human face image into one with an artistic style, is widely used in various fields such as advertising, animation production, or filmmaking. Yet, this process requires a lot of effort and time even for a skilled artist, which led to the introduction of automatic portrait stylization methods [2, 7, 9, 12] based on deep neural networks that can obtain plausible results without human intervention. In particular, StyleGAN [6], a generative adversarial network (GAN) based on style latents, has greatly improved the performance of human face generation and extended the range of applications, as it allows modifying the style of the generated images with a simple modification of the style vectors which modulate the generator. To name a few, [3, 17, 19] have shown impressive results in _image-level_ translation, in which they utilize GAN inversion techniques [1, 8, 14] to map the source image into the latent space and decode it with a StyleGAN to generate a stylized image.

However, despite their impressive performances in _image-level_ portrait stylization, these methods show limited capability in video-level translation, as can be seen in Fig. 1, where the result of the baseline is generated by frame-by-frame translation. The reasons for such suboptimal performances are as follows: First, the majority of previous image-level methods require geometric constraints such as facial landmark alignment to perform the translation, which leads to the generation of unnatural videos since the landmarks of the human face must remain in fixed positions. Second, the loss of critical information occurs in the GAN inversion process. StyleGAN-based methods heavily rely on GAN inversion techniques to encode source images into the latent feature space, but they might yield images that do not preserve the original content of the source image. For example, previous methods often generate translations whose movements of facial features such as the mouth and eyes, or the pose of the face, differ from the source video; these are important properties that video transfer should preserve. Furthermore, identity loss frequently occurs due to incomplete inversion. Finally, there is no consideration of the temporal dimension in existing methods based on frame-by-frame translation. Image-level translation generates consecutive output frames by observing the current source frame only. Thus, it is challenging to produce smooth temporal motion and detect temporal noise.
In this work, we propose a novel method to create a stylized video from a real human face video as input, which addresses the aforementioned limitations of previous StyleGAN-based approaches. We suggest a two-stage training scheme to decompose the problem into two manageable sub-problems: domain transfer and video generation. In the first stage, we train the mapping network, which not only learns to map the source domain to the target domain but also effectively delivers contextual information from the input data to enhance spatial correlation and avoid the usage of GAN inversion. For the second stage, we naturally expand the image generator to the video domain by designing a sequential refiner which incorporates multiple consecutive frames into the network. Our sequential refiner enables the whole network to consider the given intermediate output frames from the image generator and produce temporally coherent output frames with the suggested objective function. Moreover, our model is suitable for practical real-world applications as it requires only a small number of parameters and runs at real-time speed.

## 2 Methods

Let \(\mathbf{x}=\{x_{1},x_{2},\ldots,x_{T}\}\) be an input portrait video with \(T\) frames, where \(\mathbf{x}\) follows the distribution of the source domain \(X\). The purpose of this work is to train a network \(G\) to translate the input video \(\mathbf{x}\) to \(\hat{\mathbf{y}}=G(\mathbf{x})\), where \(\hat{\mathbf{y}}=\{\hat{y}_{1},\hat{y}_{2},\ldots,\hat{y}_{T}\}\) follows the distribution of the target domain \(Y\). Specifically, we aim to train a model that 1) preserves the identity and facial attributes of the source video \(\mathbf{x}\) and 2) enforces temporal consistency in the output video \(\hat{\mathbf{y}}\).

**Stage-I: Context-Preserving Domain Translation.** In the first stage, we train the network to translate an image \(x\) from the source domain \(X\) to a stylized image \(\hat{y}\) that follows the distribution of the target domain \(Y\). We observe that StyleGAN-based methods work fairly well for the image-level portrait stylization task, but a significant amount of contextual information is lost, as illustrated in Fig. 2, due to the GAN inversion processes that map the source images into the latent space. Hence, to minimize this loss, we employ the network \(G_{X\to Y}\) as a domain translator which does not require mapping an image to the latent space, so as to retain as much contextual information as possible. Specifically, we train two networks: a StyleGAN-based network \(G_{Y}\) that learns to generate images of the target domain \(Y\) from the normal distribution \(\mathcal{N}(0,I)\), and a U-Net-based [15] network \(G_{X\to Y}\) that translates an input image to be close to the output of \(G_{Y}\). We first train the StyleGAN-based generator \(G_{Y}\) that follows the distribution of the target domain \(Y\) by fine-tuning the pre-trained StyleGAN \(G_{X}\) that generates images of the source domain \(X\). Since \(G_{X}\) and \(G_{Y}\) can generate images that are contextually identical when they are fed the same random value \(z\) as input, we can construct a dataset of paired real and stylized images \((\hat{x},\hat{y})\) and use it as a form of pseudo supervision. Then, we train the mapping network \(G_{X\to Y}\) with the previously constructed dataset \((\hat{x},\hat{y})\) to perform domain translation. First, we apply an adversarial loss \(\mathcal{L}_{adv}\) with \(\hat{x}\) and \(\hat{y}\) to learn the translation at the whole domain level \(X\to Y\).
Furthermore, we calibrate each individual data point using \(\mathcal{L}_{content}\) to reduce heavily translated areas and adjust the image so that it does not deviate drastically from the source image. We utilize two terms, a reconstruction loss and a perceptual loss: the reconstruction loss \(\mathcal{L}_{recon}\) is formulated as the L1 distance between \(G_{X\to Y}(\hat{x})\) and \(\hat{y}\) in image space, and the perceptual loss \(\mathcal{L}_{perc}\) is calculated following [10].

Figure 2: Comparison between previous image-level StyleGAN-based portrait stylization and our portrait video style transfer.

**Stage-II: Sequential Video Frame Generation.** In this stage, we aim to maintain the temporal consistency between a source video \(\mathbf{x}\) and a translated video \(\hat{\mathbf{y}}\), keeping the network \(G_{X\to Y}\) trained in Stage-I fixed. Here, we propose a sequential refiner \(R\) that refines the image-level outputs from \(G_{X\to Y}\) by rendering them to be connected more naturally in the temporal dimension. The sequential refiner \(R\) receives multiple inputs to extract the correlation between consecutive frames from the generated samples and source frames. Similar to [18], we assume that an image sequence is generated by a Markov process that factorizes the conditional distribution, which can be formalized as

\[p(\hat{\mathbf{y}}|\mathbf{x})=\prod_{t=1}^{T}p(\hat{y}_{t}|\{x_{i}\}_{i=t-L}^{t},\{\hat{y}_{i}\}_{i=t-L}^{t-1},G_{X\to Y}(x_{t})) \tag{1}\]

which means that our model generates the \(t\)-th stylized final frame \(\hat{y}_{t}\) from the \(L\) consecutive prior source input frames, the \(L-1\) refined outputs ahead, and the intermediate output of the current frame. We set \(L=2\) for our experiments. Building on \(R\), we design the training objective for the refiner to enable the network to revise the observed defects in the intermediate frame \(G_{X\to Y}(x_{t})\) and to consider the temporal dimension. Firstly, we attempt to revise empirical defects in the frame-by-frame inference of \(G_{X\to Y}\). We observe that blurring occurs at the face boundary and the model often generates a distorted background. We believe that this phenomenon is caused by StyleGAN, as similar issues are reported in previous works. This limitation does not pose a major hindrance when it comes to image-level translation, but in the video domain, the blurry areas around the boundary become easily visible. We alleviate the problem by introducing a flow-based loss that applies the estimated optical flow and a parsing map jointly, rather than simply separating object and background at the image level, with the goal of naturally connecting frames while detecting background defects. The objective of our suggested warp loss \(\mathcal{L}_{warp}\) is as follows:

\[\mathcal{L}_{warp}=\big\|M\odot\mathrm{Warp}(x_{t-1},f_{t})+(1-M)\odot G_{X\to Y}(x_{t})-\hat{y}_{t}\big\|_{2} \tag{2}\]

where \(f_{t}\) is the flow estimated from \(x_{t-1}\) and \(x_{t}\), and \(M\) is the continuous probability parsing map for the background area predicted from \(x_{t-1}\). This loss effectively addresses the distortion problem and does not degrade the consistency across the generated frames. We use the flow estimation [5] and parsing [20] networks only in the training phase, thus the size and dependence of the network are not affected by auxiliary networks, which differs from previous flow-based works.

Figure 3: Qualitative comparison results with previous works. Source frames are captured at 20 fps, which means that the time interval between each frame is 0.05 seconds.
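To make Eq. (2) concrete, the following PyTorch-style sketch implements a backward warp via `grid_sample` and the masked composite. This is our reconstruction under stated assumptions, not the authors' code: tensor shapes, all names, and the mean-squared form of the L2 term are assumptions.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` (B,C,H,W) with optical `flow` (B,2,H,W) in pixel units."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)  # pixel grid (2,H,W)
    coords = base.unsqueeze(0) + flow                             # displaced coords (B,2,H,W)
    # grid_sample expects (x, y) coordinates normalized to [-1, 1].
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                          # (B,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)

def warp_loss(x_prev, flow_t, mask_bg, g_xt, y_hat_t):
    """Eq. (2): the background (mask_bg ~ 1) comes from the flow-warped previous
    source frame, the rest from the image-level translation G(x_t); the
    composite is compared against the refined frame y_hat_t (squared-L2 here)."""
    composite = mask_bg * backward_warp(x_prev, flow_t) + (1.0 - mask_bg) * g_xt
    return F.mse_loss(composite, y_hat_t)
```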
Besides, we use an additional temporal loss to suppress temporal flickering artifacts. As we enable our framework to leverage temporal information from consecutive input frames, we can explicitly facilitate temporal connectivity in the generated video. We use a contextual temporal regularization loss \(\mathcal{L}_{temp}\) to train \(R\), calculated as the L1 distance between the outputs of \(\hat{y}_{t-1}\) and \(\hat{y}_{t}\) passed through a pretrained VGG network [16]. The final objective to train the sequential refiner \(R\) is:

\[\min_{R}\lambda_{warp}\mathcal{L}_{warp}+\lambda_{temp}\mathcal{L}_{temp}. \tag{3}\]

## 3 Experiments

We experiment with the model on various style datasets, each consisting of around a few hundred images. For the source video dataset, we use the well-known talking-head video dataset VoxCeleb2 [4]. Please refer to the Appendix for further details about the dataset.

**Baselines.** We compare our results with those from the following related works. CycleGAN [21] and U-GAT-IT [7] are image-to-image translation-based methods, and Toonify [13] and AgileGAN [17] are StyleGAN-based portrait stylization approaches. The image-to-image translation frameworks are trained with VoxCeleb2 and the style image datasets. The results from Toonify and AgileGAN are originally generated in \(1024\times 1024\) resolution, thus we downsample them to \(256\times 256\) to match the resolution.

**Qualitative Results.** Fig. 3 shows qualitative results from our model and the comparison methods. We provide generated samples with adjacent frames in video clips. As they are directly connected frames with an interval of 0.05 seconds, the resulting outputs should be connected smoothly without abrupt changes. However, AgileGAN and Toonify exhibit undesired transformations that are not only inconsistent with the source image but also induce unstably connected video frames. As StyleGAN-based approaches need a mandatory face alignment process, this further causes incoherent output frames in video generation. Also, we can observe the loss of identity or incorrect facial attribute transfer. U-GAT-IT and CycleGAN produce more stable resulting videos, but they tend to create overfitted stylized images due to the deficiency of the style dataset. We find that our method produces stylized videos that are more context-consistent compared to the other baselines.

**Quantitative Results.** We conduct a quantitative evaluation with several metrics, and provide the results in Table 1. We randomly select 20 videos from the test set of VoxCeleb2, and each clip contains at least 100 frames. CSIM measures the structural similarity between the source and target frames; thus, a higher CSIM indicates a better identity preservation ability of the model. Note that CSIM and eye gaze distance are calculated with aligned frames only, which means that we match the constraint on the StyleGAN-based models. As evident in the table, our work outperforms all baselines on CSIM and eye gaze distance. This shows that the proposed model can create a stylized video while maintaining the source identity and transferring the input video well. Moreover, a user study is conducted to measure the user preference over the sample videos from the comparison methods and ours. Similar to [11], we ask the users to perform pairwise comparisons against our method to quantitatively evaluate the following aspects.
The questions are 1) Stylization + Identity Preservation: which video was better stylized while maintaining the human identity of the input video? 2) Temporal Consistency: which video better maintains the temporal consistency of the input video? Given the source and generated frames from ours and a baseline method in a random order, we ask users to choose the better result between the two based on the provided question. Table 2 reports the user score based on 200 answers obtained from 11 users, which shows that they mostly favor our method over the baselines. This further confirms that our method is able to generate more temporally coherent stylized videos than the other baselines.

**Inference speed and Model size.** Our model has a latency of approximately 0.011 seconds per frame, which can be considered real-time inference speed. The model consists of 5.56M parameters, while other StyleGAN-based frameworks have at least 30M parameters for the StyleGAN generator alone, excluding the additional encoder for image-to-image translation.

## 4 Conclusion

This work introduces a novel video portrait style transfer framework that can generate stylized videos from input frames while preserving the identity and successfully transferring the source attributes to a video of a real person. To effectively tackle the video stylization task, we decompose our goal into two sub-problems, portrait stylization and video generation. First, we train the domain mapping network for image-level stylization by indirectly utilizing the representation ability of StyleGAN.

\begin{table} \begin{tabular}{l|c c c c|c} \hline \hline Metrics & AgileGAN & Toonify & U-GAT-IT & CycleGAN & Ours \\ \hline CSIM (\(\uparrow\)) & 0.36 & 0.33 & 0.31 & 0.21 & **0.49** \\ Eye Gaze (\(\downarrow\)) & 16.57 & 16.21 & 25.85 & 21.75 & **13.21** \\ \hline \hline \end{tabular} \end{table} Table 1: Metric evaluation results on ours and baselines.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline Criteria & AgileGAN & Toonify & U-GAT-IT & CycleGAN \\ \hline Style+ID (\(\uparrow\)) & 67.86\% & 61.54\% & 96.43\% & 100.00\% \\ Temp (\(\uparrow\)) & 85.71\% & 76.92\% & 96.43\% & 100.00\% \\ \hline \hline \end{tabular} \end{table} Table 2: User study results on ours and baselines. Numbers in the table indicate the percentage of users that prefer our work in the user study.

Then, we employ a carefully designed sequential refiner which receives multiple input images to revise the image-level intermediate outputs and suppress noise along the time dimension. The experimental results demonstrate that our network generates plausible transferred results with a fast inference time while saving computational resources significantly.
2304.01453
Linear stability of compact shrinking Ricci solitons
In this paper, we continue investigating the second variation of Perelman's $\nu$-entropy for compact shrinking Ricci solitons. In particular, we improve some of our previous work in "H.-D. Cao and M. Zhu, Math. Ann. 353 (2012), No. 3, 747-763", as well as the more recent work in "M. Mehrmohamadi and A. Razavi, arXiv:2104.08343", and obtain a necessary and sufficient condition for a compact shrinking Ricci soliton to be linearly stable. Our work also extends similar results of Hamilton, Ilmanen and the first author in "arXiv:math.DG/0404165" (see also "H.-D. Cao and C. He, J. Reine Angew. Math. 2015 (2015), no. 709, 229-246.") for positive Einstein manifolds to the compact shrinking Ricci soliton case.
Huai-Dong Cao, Meng Zhu
2023-04-04T01:52:57Z
http://arxiv.org/abs/2304.01453v4
# Linear stability of compact shrinking Ricci solitons

###### Abstract.

In this paper, we continue to investigate the second variation of Perelman's \(\nu\)-entropy for compact shrinking Ricci solitons. In particular, we improve some of our previous work in [11] and the more recent work in [33] and obtain a necessary and sufficient condition for a compact shrinking Ricci soliton to be linearly stable. Our work also extends similar results of Hamilton, Ilmanen and the first author in [8] (see also [9]) for positive Einstein manifolds to the compact shrinking Ricci soliton case.

The first author was partially supported by a Simons Foundation Collaboration Grant. The second author was partially supported by NSFC Grant No. 11971168, Shanghai Science and Technology Innovation Program Basic Research Project STCSM 20JC1412900, and the Science and Technology Commission of Shanghai Municipality No. 22DZ229014.

## 1. Introduction

[...] positive Einstein manifolds are unstable. In particular, all product Einstein manifolds and Fano Kahler-Einstein manifolds with Hodge number \(h^{1,1}>1\) are unstable. More recently, a complete description of the linear stability (or instability) of irreducible symmetric spaces of compact type was provided by C. He and the first author [9]. Meanwhile, in [11], we derived the second variation formula of Perelman's \(\nu\)-entropy for compact shrinking Ricci solitons, which we now recall. Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying (1.1) and \(\operatorname{Sym}^{2}(T^{*}M)\) denote the space of symmetric (covariant) 2-tensors on \(M\). For any \(h=h_{ij}\in\operatorname{Sym}^{2}(T^{*}M)\), consider the variation \(g(s)=g+sh\) and let

\[\operatorname{div}_{f}h=e^{f}\operatorname{div}(e^{-f}h)=\operatorname{div}h-h(\nabla f,\cdot), \tag{1.2}\]

\(\operatorname{div}_{f}^{\dagger}\) be the adjoint of \(\operatorname{div}_{f}\) with respect to the weighted \(L^{2}\)-inner product

\[(\cdot,\cdot)_{f}=\int_{M}<\cdot,\cdot>e^{-f}dV, \tag{1.3}\]

\[\Delta_{f}h:=\Delta h-\nabla f\cdot\nabla h, \tag{1.4}\]

and

\[\mathcal{L}_{f}h=\frac{1}{2}\Delta_{f}h+Rm(h,\cdot)=\frac{1}{2}\Delta_{f}h_{ik}+R_{ijkl}h_{jl}. \tag{1.5}\]

Then the second variation \(\delta_{g}^{2}\nu(h,h)\) of the \(\nu\)-entropy is given in [11] by

\[\delta_{g}^{2}\nu(h,h)=\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}\nu(g(s))=\frac{1}{(4\pi\tau)^{n/2}}\int_{M}<N_{f}h,h>e^{-f}dV,\]

where the _Jacobi operator_ (also known as the _stability operator_) \(N_{f}\) is defined by

\[N_{f}h:=\mathcal{L}_{f}h+\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}h+\frac{1}{2}\nabla^{2}\hat{v}_{h}-Rc\ \frac{\int_{M}<Rc,h>e^{-f}\,dV}{\int_{M}Re^{-f}\,dV}, \tag{1.6}\]

and \(\hat{v}_{h}\) is the unique solution of

\[\Delta_{f}\hat{v}_{h}+\frac{\hat{v}_{h}}{2\tau}=\operatorname{div}_{f}\operatorname{div}_{f}h,\qquad\int_{M}\hat{v}_{h}e^{-f}\,dV=0.\]

For more details, we refer the reader to our previous paper [11] or Section 2 below. Note that \(\operatorname{Sym}^{2}(T^{*}M)\) admits the following standard direct sum decomposition:

\[\operatorname{Sym}^{2}(T^{*}M)=\operatorname{Im}(\operatorname{div}_{f}^{\dagger})\oplus\operatorname{Ker}(\operatorname{div}_{f}). \tag{1.7}\]

The first factor

\[\operatorname{Im}(\operatorname{div}_{f}^{\dagger})=\{\operatorname{div}_{f}^{\dagger}(\omega)\ |\ \omega\in\Omega^{1}(M)\}=\{\mathscr{L}_{X}g\ |\ X=\omega^{\sharp}\in\mathscr{X}(M)\}\]

represents deformations \(g(s)\) of \(g\) by diffeomorphisms.
Since the \(\nu\)-entropy is invariant under diffeomorphisms, the second variation vanishes on this factor. In [11], we observed that \(\operatorname{div}_{f}(Rc)=0\) and showed that \(Rc\) is an eigen-tensor of \(\mathcal{L}_{f}\) with eigenvalue \(1/2\tau\), i.e., \(\mathcal{L}_{f}Rc=\frac{1}{2\tau}Rc\). Moreover, for any linearly stable compact shrinking Ricci soliton, we proved that \(1/2\tau\) is the only positive eigenvalue of \(\mathcal{L}_{f}\) on \(\operatorname{Ker}(\operatorname{div}_{f})\), with multiplicity one. Very recently, Mehrmohamadi and Razavi [33] made some new progress. In particular, they showed that \(N_{f}\) vanishes on \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\), extending a similar result in [8, 9] for positive Einstein manifolds to the compact shrinking Ricci soliton case. In addition, in terms of the operator \(\mathcal{L}_{f}\), they showed that (i) if a compact shrinking Ricci soliton \((M^{n},g,f)\) is linearly stable, then the eigenvalues of \(\mathcal{L}_{f}\) on \(\mathrm{Sym}^{2}(T^{*}M)\), other than \(\frac{1}{2\tau}\) with multiplicity one, must be less than or equal to \(\frac{1}{4\tau}\); (ii) if a compact shrinking soliton \((M^{n},g,f)\) has \(\mathcal{L}_{f}\leq 0\) on \(\mathrm{Sym}^{2}(T^{*}M)\), except on scalar multiples of \(Rc\), then \((M^{n},g,f)\) is linearly stable (see Theorem 1.3 and Theorem 1.4 in [33], respectively). Clearly, the nonpositivity of the second variation of \(\nu\), i.e., \(\delta_{g}^{2}\nu(h,h)\leq 0\), is implied by the nonpositivity of the stability operator \(N_{f}\) on the space \(\mathrm{Sym}^{2}(T^{*}M)\) of symmetric \(2\)-tensors. Thus, studying linear stability of compact shrinking Ricci solitons requires a closer look into the eigenvalues and eigenspaces of \(N_{f}\), especially its leading term \(\mathcal{L}_{f}\) defined by (1.5), acting on \(\mathrm{Sym}^{2}(T^{*}M)\). Since \(\mathrm{div}_{f}(Rc)=0\), we can further decompose \(\mathrm{Sym}^{2}(T^{*}M)\) in (1.7) by

\[\mathrm{Sym}^{2}(T^{*}M)=\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\oplus\mathbb{R}\cdot\mathrm{Rc}\oplus\mathrm{Ker}(\mathrm{div}_{f})^{\perp}, \tag{1.8}\]

where \(\mathbb{R}\cdot\mathrm{Rc}=\{\rho Rc\ |\ \rho\in\mathbb{R}\}\) is the one dimensional subspace generated by the Ricci tensor \(Rc\), and

\[\mathrm{Ker}(\mathrm{div}_{f})^{\perp}=\{h\in\mathrm{Ker}(\mathrm{div}_{f})\ |\ \int_{M}<h,Rc>e^{-f}\,dV=0\} \tag{1.9}\]

denotes the orthogonal complement of \(\mathbb{R}\cdot\mathrm{Rc}\) in \(\mathrm{Ker}(\mathrm{div}_{f})\) with respect to the weighted inner product (1.3). In this paper, by exploring decomposition (1.8), we are able to further improve our previous work in [11] and the work of Mehrmohamadi and Razavi [33]. Our main results are as follows.

**Theorem 1.1**.: _Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying equation (1.1).
Then,_

* _the decomposition of_ \(\mathrm{Sym}^{2}(T^{*}M)\) _in (1.8) is both invariant under_ \(\mathcal{L}_{f}\) _and orthogonal with respect to the second variation_ \(\delta_{g}^{2}\nu\) _of the_ \(\nu\)_-entropy;_
* _the eigenvalues of_ \(\mathcal{L}_{f}\) _on_ \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\) _are strictly less than_ \(\frac{1}{4\tau}\)_._

**Theorem 1.2**.: _A compact shrinking Ricci soliton \((M^{n},g,f)\) is linearly stable if and only if \(\mathcal{L}_{f}\leq 0\) on \(\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\)._

_Remark 1.1_.: Theorem 1.1 and Theorem 1.2 above are extensions of similar results by Hamilton, Ilmanen and the first author in [8] (see also Theorem 1.1 in [9]) for positive Einstein manifolds. While there has been a lot of progress in recent years in understanding the geometry of general higher dimensional (\(n\geq 4\)) complete noncompact gradient shrinking Ricci solitons, especially in dimension four, e.g., [10, 13, 14, 16, 31, 34, 35] and [18, 1], very little is known about the geometry of general compact shrinking Ricci solitons in dimension \(n=4\) or higher. On the other hand, for possible applications of the Ricci flow to topology, one is mostly interested in the classification of stable shrinking solitons, since unstable ones could be perturbed away and hence may not represent generic singularities of the Ricci flow. Thus, exploring the variational structure of compact Ricci shrinkers becomes rather significant. We point out that Hall and Murphy [24] have proven that compact shrinking Kahler-Ricci solitons with Hodge number \(h^{1,1}>1\) are unstable, thus extending the result of Cao-Hamilton-Ilmanen [8] for Fano Kahler-Einstein manifolds to the shrinking Kahler-Ricci soliton case. In particular, the Cao-Koiso soliton on \(\mathbb{CP}^{2}\#(-\mathbb{CP}^{2})\) and the Wang-Zhu soliton on \(\mathbb{CP}^{2}\#(-2\mathbb{CP}^{2})\) are unstable. In addition, Hall-Haslhofer-Siepmann [23] and Hall-Murphy [25] have shown that the Page metric [38] on \(\mathbb{CP}^{2}\#(-\mathbb{CP}^{2})\) is unstable. Furthermore, based on the Bunch-Donaldson numerical approximation [3], [23] also provided strong evidence that the Chen-LeBrun-Weber metric [15] on \(\mathbb{CP}^{2}\#(-2\mathbb{CP}^{2})\) may be unstable too. We hope our new results in this paper will play a significant role in the future study of linear stability of shrinking Ricci solitons.

## 2. Preliminaries

In this section, we fix our notation and recall some useful facts that will be used in the proof of Theorem 1.1. First of all, by scaling the metric \(g\), we may assume that \(\tau=1\) in equation (1.1) so that

\[Rc+\nabla^{2}f=\frac{1}{2}g. \tag{2.1}\]

We also normalize \(f\) so that

\[(4\pi)^{-\frac{n}{2}}\int_{M}e^{-f}\,dV=1.\]

From now on, we shall assume that \((M^{n},g,f)\) is a compact shrinking Ricci soliton satisfying (2.1). As in [11], for any symmetric \(2\)-tensor \(h=h_{ij}\) and \(1\)-form \(\omega=\omega_{i}\), we denote

\[\operatorname{div}\omega:=\nabla_{i}\omega_{i},\qquad(\operatorname{div}h)_{i}:=\nabla_{j}h_{ji}.\]

Moreover, as done in [6, 11], we define \(\operatorname{div}_{f}(\cdot):=e^{f}\operatorname{div}(e^{-f}(\cdot))\), or more specifically,

\[\operatorname{div}_{f}\omega=\operatorname{div}\omega-\omega(\nabla f)=\nabla_{i}\omega_{i}-\omega_{i}\nabla_{i}f, \tag{2.2}\]

and

\[\operatorname{div}_{f}h=\operatorname{div}h-h(\nabla f,\cdot)=\nabla_{j}h_{ij}-h_{ij}\nabla_{j}f.
\tag{2.3}\] We also define the operator \(\operatorname{div}_{f}^{\dagger}\) on functions by \[\operatorname{div}_{f}^{\dagger}u=-\nabla u,\qquad u\in C^{\infty}(M) \tag{2.4}\] and on \(1\)-forms by \[(\operatorname{div}_{f}^{\dagger}\omega)_{ij}=-\frac{1}{2}(\nabla_{i}\omega_{ j}+\nabla_{j}\omega_{i})=-\frac{1}{2}\mathscr{L}_{\omega^{\sharp}}g_{ij}, \tag{2.5}\] where \(\omega^{\sharp}\) is the vector field dual to \(\omega\) and \(\mathscr{L}\) denotes the Lie derivative, so that \[\int_{M}e^{-f}<\operatorname{div}_{f}^{\dagger}\omega,h>dV=\int_{M}e^{-f}< \omega,\operatorname{div}_{f}h>dV. \tag{2.6}\] Clearly, \(\operatorname{div}_{f}^{\dagger}\) is just the adjoint of \(\operatorname{div}_{f}\) with respect to the weighted \(L^{2}\)-inner product \[(\cdot,\cdot)_{f}=\int_{M}<\cdot,\cdot>e^{-f}dV. \tag{2.7}\] _Remark 2.1_.: If we denote by \(\operatorname{div}^{*}\) the adjoint of \(\operatorname{div}\) with respect to the usual \(L^{2}\)-inner product \[(\cdot,\cdot)=\int_{M}<\cdot,\cdot>dV, \tag{2.8}\] then, as pointed out in [6], one can easily verify that \[\operatorname{div}^{\dagger}_{f}=\operatorname{div}^{*}. \tag{2.9}\] Finally, we denote \[\Delta_{f}:=e^{f}\operatorname{div}(e^{-f}\nabla)=\Delta-\nabla f\cdot\nabla, \tag{2.10}\] which is self-adjoint with respect to the weighted \(L^{2}\)-inner product (2.7), \[Rm(h,\cdot)_{ik}:=R_{ijkl}h_{jl},\] and define the operator \[\mathcal{L}_{f}h=\frac{1}{2}\Delta_{f}h+Rm(h,\cdot) \tag{2.11}\] on the space of symmetric 2-tensors. It is easy to see that, like \(\Delta_{f}\), \(\mathcal{L}_{f}\) is a self-adjoint operator with respect to the weighted \(L^{2}\)-inner product (2.7). Now we restate the second variation of the \(\nu\)-entropy derived in [11] with \(\tau=1\). **Theorem 2.1**.: _([11]) Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying (2.1). For any symmetric 2-tensor \(h=h_{ij}\), consider the variation \(g(s)=g_{ij}+sh_{ij}\). Then the second variation \(\delta^{2}_{g}\nu(h,h)\) is given by_ \[\delta^{2}_{g}\nu(h,h)=\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}\nu(g(s))=\frac {1}{(4\pi)^{n/2}}\int_{M}<N_{f}h,h>e^{-f}dV, \tag{2.12}\] _where the stability operator \(N_{f}\) is given by_ \[N_{f}h:=\mathcal{L}_{f}h+\operatorname{div}^{\dagger}_{f}\operatorname{div}_ {f}h+\frac{1}{2}\nabla^{2}\hat{v}_{h}-Rc\ \frac{\int_{M}<Rc,h>e^{-f}\,dV}{\int_{M}Re^{-f}\,dV}, \tag{2.13}\] _and the function \(\hat{v}_{h}\) is the unique solution of_ \[\Delta_{f}\hat{v}_{h}+\frac{\hat{v}_{h}}{2}=\operatorname{div}_{f} \operatorname{div}_{f}h,\qquad\int_{M}\hat{v}_{h}e^{-f}\,dV=0. \tag{2.14}\] Next, we recall the following facts (see, e.g., Lemma 3.1 and Lemma 3.2 in [11]). **Lemma 2.1**.: **([11])** _Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying (2.1). Then,_ * \(Rc\in\operatorname{Ker}(\operatorname{div}_{f});\)__ * \(\mathcal{L}_{f}(Rc)=\frac{1}{2}Rc.\)__ We shall also need the following useful identities found by Mehrmohamadi-Razavi [33]; see also Colding and Minicozzi [17], in which they derived more general versions of identities (2.15)-(2.20) that are valid for smooth metric measure spaces. **Lemma 2.2**.: **([33, 17])** _Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying (2.1). 
Then, for any function \(u\), 1-form \(\omega\) and symmetric 2-tensor \(h\), the following identities hold_ \[\nabla\Delta_{f}u=\Delta_{f}\nabla u-\frac{1}{2}\nabla u, \tag{2.15}\] \[\operatorname{div}_{f}\Delta_{f}\omega=\Delta_{f}\operatorname{div}_{f}\omega+ \frac{1}{2}\operatorname{div}_{f}\omega, \tag{2.16}\] \[\operatorname{div}_{f}^{\dagger}\Delta_{f}\omega=2\mathcal{L}_{f} \operatorname{div}_{f}^{\dagger}\omega-\frac{1}{2}\operatorname{div}_{f}^{ \dagger}\omega, \tag{2.17}\] \[2\mathcal{L}_{f}(\mathscr{L}_{\omega^{t}}g)=\mathscr{L}_{(\Delta_{f}\omega)^{t }}g+\frac{1}{2}\mathscr{L}_{\omega^{t}}g, \tag{2.18}\] \[2\operatorname{div}_{f}\mathcal{L}_{f}h=\Delta_{f}\operatorname{div}_{f}h+ \frac{1}{2}\operatorname{div}_{f}h, \tag{2.19}\] \[\operatorname{div}_{f}(\mathscr{L}_{\omega^{t}}g)=-2\operatorname{div}_{f} \operatorname{div}_{f}^{\dagger}\omega=\Delta_{f}\omega+\nabla(\operatorname {div}_{f}\omega)+\frac{1}{2}\omega. \tag{2.20}\] For the reader's convenience and the sake of completeness, we provide a quick proof here. Proof.: The above identities follow from direct computations given below. \(\bullet\) For (2.15): \[\nabla_{i}\Delta_{f}u= \nabla_{i}\nabla_{j}\nabla_{j}u-\nabla_{i}\nabla_{j}f\nabla_{j}u -\nabla_{j}f\nabla_{i}\nabla_{j}u\] \[= \Delta\nabla_{i}u+R_{ijjk}\nabla_{k}u-\frac{1}{2}\nabla_{i}u+R_{ ij}\nabla_{j}u-\nabla_{j}f\nabla_{j}\nabla_{i}u\] \[= \Delta_{f}\nabla_{i}u-\frac{1}{2}\nabla_{i}u.\] \(\bullet\) For (2.16): It follows from (2.15) that \[\int_{M}u\operatorname{div}_{f}(\Delta_{f}\omega)\,e^{-f}dV= \int_{M}-<\Delta_{f}\nabla u,\omega>e^{-f}\,dV\] \[= \int_{M}-<\nabla(\Delta_{f}u)+\frac{1}{2}\nabla u,\omega>\,e^{- f}dV\] \[= \int_{M}u(\Delta_{f}\operatorname{div}_{f}\omega+\frac{1}{2} \operatorname{div}_{f}\omega)\,e^{-f}dV.\] \(\bullet\) For (2.17): \[2\mathcal{L}_{f}\operatorname{div}_{f}^{\dagger}\omega=-\frac{1}{2}\Delta_{f} (\nabla_{i}\omega_{j}+\nabla_{j}\omega_{i})-R_{ikjl}(\nabla_{k}\omega_{l}+ \nabla_{l}\omega_{k}).\] Notice that \[\Delta_{f}\nabla_{i}\omega_{j}= \nabla_{k}\nabla_{k}\nabla_{i}\omega_{j}-\nabla_{k}f\nabla_{k} \nabla_{i}\omega_{j}\] \[= \nabla_{k}(\nabla_{i}\nabla_{k}\omega_{j}+R_{kijl}\omega_{l})- \nabla_{k}f(\nabla_{i}\nabla_{k}\omega_{j}+R_{kijl}\omega_{l})\] \[= \nabla_{i}\Delta\omega_{j}+R_{il}\nabla_{l}\omega_{j}+R_{kijl} \nabla_{k}\omega_{l}+\nabla_{k}R_{kijl}\omega_{l}+R_{kijl}\nabla_{k}\omega_{l}\] \[-\nabla_{i}(\nabla_{k}f\nabla_{k}\omega_{j})+\nabla_{i}\nabla_{k} f\nabla_{k}\omega_{j}+R_{kijl}\nabla_{k}f\omega_{l}\] \[= \nabla_{i}\Delta_{f}\omega_{j}-2R_{ikjl}\nabla_{k}\omega_{l}+ \frac{1}{2}\nabla_{i}\omega_{j}.\] \(\bullet\) For (2.18): According to (2.5), (2.18) is equivalent to (2.17). \(\bullet\) For (2.19): Similar to the proof of (2.16), (2.19) is the adjoint of (2.17) with respect to the inner product (2.7). \(\bullet\) For (2.20): \[\begin{split}\operatorname{div}_{f}(\mathscr{L}_{\omega^{t}}g)_{j}=& \nabla_{i}(\nabla_{i}\omega_{j}+\nabla_{j}\omega_{i})-\nabla_{i}f(\nabla_{i} \omega_{j}+\nabla_{j}\omega_{i})\\ =&\Delta_{f}\omega_{j}+\nabla_{j}\nabla_{i}\omega_ {i}+R_{jk}\omega_{k}-\nabla_{j}(\nabla_{i}f\omega_{i})+\nabla_{j}\nabla_{i}f \omega_{i}\\ =&\Delta_{f}\omega_{j}+\nabla_{j}\operatorname{div} _{f}\omega+\frac{1}{2}\omega_{j}.\end{split}\] _Remark 2.2_.: Some of the identities in Lemma 2.2 were first obtained in [9] for positive Einstein manifolds. For positive Einstein manifolds, C. 
He and the first author also showed in [9] that the restriction of \(N_{f}\) to the subspace \(\operatorname{Im}(\operatorname{div}_{f}^{\dagger})\) is zero, i.e., \(\left.N_{f}\right|_{\operatorname{Im}(\operatorname{div}_{f}^{\dagger})}=0\), a fact first noted in Cao-Hamilton-Ilmanen [8]. By using identities (2.16), (2.18) and (2.20) in Lemma 2.2, Mehrmohamadi and Razavi [33] were able to generalize this to the case of compact shrinking Ricci solitons. **Lemma 2.3**.: **([33])** _Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying (2.1). Then, we have_ \[\left.N_{f}\right|_{\operatorname{Im}(\operatorname{div}_{f}^{\dagger})}=0.\] Proof.: Notice that, according to (2.20) and (2.16), \[\begin{split}\operatorname{div}_{f}\operatorname{div}_{f}( \mathscr{L}_{\omega^{t}}g)=&\operatorname{div}_{f}(\Delta_{f} \omega+\nabla\operatorname{div}_{f}\omega+\frac{1}{2}\omega)\\ =& 2\Delta_{f}(\operatorname{div}_{f}\omega)+ \operatorname{div}_{f}\omega.\end{split} \tag{2.21}\] Thus, if we denote by \(\xi=\mathscr{L}_{\omega^{t}}g\), then according to (2.14) \[\hat{v}_{\xi}=2\operatorname{div}_{f}\omega. \tag{2.22}\] Now, by (2.5), (2.18), (2.20) and (2.22), we obtain \[\begin{split}-2N_{f}(\operatorname{div}_{f}^{\dagger}\omega)=& \text{N}_{f}(\mathscr{L}_{\omega^{t}}g)\\ =&\mathcal{L}_{f}(\mathscr{L}_{\omega^{t}}g)+ \operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}(\mathscr{L}_{\omega^{t }}g)+\nabla^{2}(\operatorname{div}_{f}\omega)\\ =&\frac{1}{2}\mathscr{L}_{(\Delta_{f}\omega)^{t}}g +\frac{1}{4}\mathscr{L}_{\omega^{t}}g+\operatorname{div}_{f}^{\dagger}( \Delta_{f}\omega)+\operatorname{div}_{f}^{\dagger}\left(d(\operatorname{div} _{f}\omega)\right)\\ &+\frac{1}{2}\operatorname{div}_{f}^{\dagger}\omega+\nabla^{2}( \operatorname{div}_{f}\omega)\\ =& 0.\end{split} \tag{2.23}\] ## 3. Proof of the Main Theorems In this section, we prove Theorem 1.1 and Theorem 1.2 stated in the introduction. Once again, by scaling the metric \(g\), we normalize \(\tau=1\) and assume that \((M^{n},g,f)\) is a compact shrinking Ricci soliton satisfying \[Rc+\nabla^{2}f=\frac{1}{2}g. \tag{3.1}\] First of all, recall that we have the following direct sum decomposition \[\operatorname{Sym}^{2}(T^{*}M)=\operatorname{Im}(\operatorname{div}_{f}^{ \dagger})\oplus\mathbb{R}\cdot\operatorname{Rc}\oplus\operatorname{Ker}( \operatorname{div}_{f})^{\perp}, \tag{3.2}\] where \(\mathbb{R}\cdot\mathrm{Rc}\) is the one dimensional subspace generated by the Ricci tensor \(Rc\) and \(\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\), as defined in (1.9), denotes the orthogonal complement of \(\mathbb{R}\cdot\mathrm{Rc}\) in \(\mathrm{Ker}(\mathrm{div}_{f})\) with respect to the weighted inner product \(\int_{M}<\cdot,\cdot>e^{-f}\,dV\). We divide the proof of Theorem 1.1 into two propositions. **Proposition 3.1**.: _The subspaces \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\), \(\mathbb{R}\cdot\mathrm{Rc}\), \(\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\) are invariant subspaces of the linear operator \(\mathcal{L}_{f}\). Moreover, (3.2) is an orthogonal decomposition with respect to the quadratic form \(\delta_{g}^{2}\nu(h,h)\) of the second variation in Theorem 2.1._ Proof.: Firstly, by (2.17), \[\mathcal{L}_{f}(\mathrm{div}_{f}^{\dagger}\,\omega)=\frac{1}{2}\,\mathrm{div}_ {f}^{\dagger}(\Delta_{f}\omega+\frac{1}{2}\omega)\in\mathrm{Im}(\mathrm{div}_ {f}^{\dagger}).\] This shows that \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\) is invariant under \(\mathcal{L}_{f}\). 
Next, from Lemma 2.1 (ii), we have \[\mathcal{L}_{f}Rc=\frac{1}{2}Rc.\] Hence, \(\mathbb{R}\cdot Rc\) is an invariant subspace of \(\mathcal{L}_{f}\). Finally, for any \(h\in\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\), it follows from (2.19) that \[\mathrm{div}_{f}(\mathcal{L}_{f}h)=\frac{1}{2}\left(\Delta_{f}\,\mathrm{div}_{ f}\,h+\frac{1}{2}\,\mathrm{div}_{f}\,h\right)=0.\] Moreover, since \(\mathcal{L}_{f}Rc=\frac{1}{2}Rc\), it follows that \[\int_{M}<\mathcal{L}_{f}h,Rc>\,e^{-f}dV= \int_{M}<h,\mathcal{L}_{f}Rc>\,e^{-f}dV\] \[= \frac{1}{2}\int_{M}<h,Rc>\,e^{-f}dV=0,\] i.e., \(\mathcal{L}_{f}h\in\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\). Therefore, \(\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\) is also invariant under \(\mathcal{L}_{f}\). Moreover, the invariant subspace property just demonstrated together with the fact that \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\), \(\mathbb{R}\cdot\mathrm{Rc}\), and \(\mathrm{Ker}(\mathrm{div}_{f})^{\perp}\) are mutually orthogonal to each other (with respect to the weighted inner product) immediately imply that the decomposition (1.8) of \(\mathrm{Sym}^{2}(T^{*}M)\) is also orthogonal with respect to the second variation \(\delta_{g}^{2}\nu(h,h)\) of the \(\nu\)-entropy. **Proposition 3.2**.: _Let \((M^{n},g,f)\) be a compact shrinking Ricci soliton satisfying (3.1). Then the eigenvalues of \(\mathcal{L}_{f}\) on \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\) are strictly less than \(\frac{1}{4}\)._ Proof.: Suppose that \(\lambda\) is an eigenvalue of \(\mathcal{L}_{f}\) on \(\mathrm{Im}(\mathrm{div}_{f}^{\dagger})\) and \[\mathcal{L}_{f}(\mathscr{L}_{\omega^{\sharp}}g)=\lambda\mathscr{L}_{\omega^{ \sharp}}g\] for some \(\mathscr{L}_{\omega^{s}}g\equiv-2\operatorname{div}_{f}^{\dagger}\omega\in\operatorname {Im}(\operatorname{div}_{f}^{\dagger})\) with \(\mathscr{L}_{\omega^{s}}g\neq 0\). Since \(N_{f}=0\) on \(\operatorname{Im}(\operatorname{div}_{f}^{\dagger})\) by Lemma 2.3, from (2.23), we have \[\begin{split} 0=& N_{f}(\mathscr{L}_{\omega^{s}}g)\\ =&\mathcal{L}_{f}(\mathscr{L}_{\omega^{s}}g)+ \operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\mathscr{L}_{\omega^{s}} g+\nabla^{2}\operatorname{div}_{f}w\\ =&\lambda\mathscr{L}_{\omega^{s}}g+\operatorname{ div}_{f}^{\dagger}\operatorname{div}_{f}\mathscr{L}_{\omega^{s}}g+\nabla^{2} \operatorname{div}_{f}\omega\\ =&-2\lambda\operatorname{div}_{f}^{\dagger}\omega-2 \operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\operatorname{div}_{f}^{ \dagger}\omega-\operatorname{div}_{f}^{\dagger}\nabla\operatorname{div}_{f} \omega\\ =&-\operatorname{div}_{f}^{\dagger}(2\lambda\omega+2 \operatorname{div}_{f}\operatorname{div}_{f}^{\dagger}\omega+\nabla \operatorname{div}_{f}\omega).\end{split} \tag{3.3}\] Let \[\eta=-(2\lambda\omega+2\operatorname{div}_{f}\operatorname{div}_{f}^{\dagger} \omega+\nabla\operatorname{div}_{f}\omega).\] Then (3.3) says that \[\operatorname{div}_{f}^{\dagger}\eta=0. \tag{3.4}\] On the other hand, it follows from (2.20) that \[\eta=-2\lambda\omega+\Delta_{f}\omega+\frac{1}{2}\omega=\Delta_{f}\omega+( \frac{1}{2}-2\lambda)\omega.\] **Claim 1**.: The following identity holds, \[\Delta_{f}\operatorname{div}_{f}\omega=(2\lambda-1)\operatorname{div}_{f}\omega. 
\tag{3.5}\] Indeed, it follows from (2.18) that \[2\mathcal{L}_{f}(\mathscr{L}_{\omega^{s}}g)=\mathscr{L}_{(\Delta_{f}\omega+ \frac{1}{2}\omega)^{s}}g.\] From (2.21), we know that \[\hat{v}_{\mathscr{L}_{\omega^{s}}g}=2\operatorname{div}_{f}\omega.\] Hence, \[2\hat{v}_{\mathscr{L}_{f}(\mathscr{L}_{\omega^{s}}g)}= \hat{v}_{\mathscr{L}_{(\Delta_{f}\omega+\frac{1}{2}\omega)^{s}}g}\] \[= 2\operatorname{div}_{f}(\Delta_{f}\omega+\frac{1}{2}\omega)\] \[= 2(\Delta_{f}\operatorname{div}_{f}\omega+\operatorname{div}_{f} \omega),\] where in the last step above, we have used (2.16). Since \(\mathcal{L}_{f}(\mathscr{L}_{\omega^{s}}g)=\lambda\mathscr{L}_{\omega^{s}}g\), we get \[\Delta_{f}\operatorname{div}_{f}\omega+\operatorname{div}_{f}\omega= \hat{v}_{\mathscr{L}_{f}(\mathscr{L}_{\omega^{s}}g)}\] \[= \hat{v}_{\lambda\mathscr{L}_{\omega^{s}}g}\] \[= \lambda\hat{v}_{\mathscr{L}_{\omega^{s}}g}=2\lambda\operatorname{ div}_{f}\omega,\] i.e., \[\Delta_{f}\operatorname{div}_{f}\omega=(2\lambda-1)\operatorname{div}_{f}\omega.\] This proves Claim 1. Thus, by using (2.16) and (3.5), we obtain \[\begin{split}\operatorname{div}_{f}\eta&= \operatorname{div}_{f}\Delta_{f}\omega+(\frac{1}{2}-2\lambda)\operatorname{div}_ {f}\omega\\ =&\Delta_{f}\operatorname{div}_{f}\omega+(1-2 \lambda)\operatorname{div}_{f}\omega=0.\end{split} \tag{3.6}\] **Claim 2**.: \(\eta=0\) Indeed, since \[\int_{M}<\eta,\Delta_{f}\eta>\,e^{-f}dV= \int_{M}<\eta,\operatorname{div}_{f}\nabla\eta>\,e^{-f}dV\] \[= \int_{M}<\operatorname{div}_{f}^{\dagger}\eta,\nabla\eta>\,e^{-f}dV\] \[= \int_{M}\frac{1}{2}(\operatorname{div}_{f}^{\dagger}\eta)_{ij}( \nabla_{i}\eta_{j}+\nabla_{j}\eta_{i})\,e^{-f}dV\] \[= -\int_{M}|\operatorname{div}_{f}^{\dagger}\eta|^{2}\,e^{-f}dV,\] by (2.20), we have \[\int_{M}|\operatorname{div}_{f}^{\dagger}\eta|^{2}\,e^{-f}dV= \int_{M}<\eta,\operatorname{div}_{f}\operatorname{div}_{f}^{ \dagger}\eta>\,e^{-f}dV\] \[= -\frac{1}{2}\int_{M}<\eta,\Delta_{f}\eta+\nabla\operatorname{ div}_{f}\eta+\frac{1}{2}\eta>\,e^{-f}dV\] \[= \int_{M}\left[\frac{1}{2}|\operatorname{div}_{f}^{\dagger}\eta|^ {2}+\frac{1}{2}|\operatorname{div}_{f}\eta|^{2}-\frac{1}{4}|\eta|^{2}\right] \,e^{-f}dV,\] that is \[\int_{M}|\eta|^{2}\,e^{-f}dV=2\int_{M}(|\operatorname{div}_{f}\eta|^{2}-| \operatorname{div}_{f}^{\dagger}\eta|^{2})\,e^{-f}dV. \tag{3.7}\] Hence, it follows from (3.4) and (3.6) that \(\eta=0\), proving Claim 2. Notice that \(\eta=0\) means \[\Delta_{f}\omega=(2\lambda-\frac{1}{2})\omega.\] Therefore, to prove \(\lambda<\frac{1}{4}\), it suffices to show that the eigenvalues of \(\Delta_{f}\) on the space of \(1\)-forms are negative. We argue by contradiction. Suppose that \(\sigma\) is a nonzero \(1\)-form, and \(\Delta_{f}\sigma=\mu\sigma\) for some \(\mu\geq 0\). Then \[\frac{1}{2}\Delta_{f}|\sigma|^{2}=<\Delta_{f}\sigma,\sigma>+|\nabla\sigma|^{2} =\mu|\sigma|^{2}+|\nabla\sigma|^{2}.\] Integrating both sides with respect to the measure \(e^{-f}dV\) implies \[\int_{M}(\mu|\sigma|^{2}+|\nabla\sigma|^{2})\,e^{-f}dV=0.\] Hence \(\mu=0\) and \(\nabla\sigma=0\), which imply that \(\Delta_{f}\sigma=0\) and \(\operatorname{div}_{f}^{\dagger}\sigma=0\). On the other hand, from (2.16), \[\Delta_{f}(\operatorname{div}_{f}\sigma)=\operatorname{div}_{f}(\Delta_{f} \sigma)-\frac{1}{2}\operatorname{div}_{f}\sigma=-\frac{1}{2}\operatorname{div }_{f}\sigma.\] But the first eigenvalue \(\lambda_{1}\) of \(\Delta_{f}\) on functions is greater than \(\frac{1}{2}\) (see page 759 in [11] for a proof). Thus, we conclude that \(\operatorname{div}_{f}\sigma=0\). 
Therefore, it follows from (3.7) that \(\sigma=0\), a contradiction. This shows that \(\lambda<1/4\) and concludes the proof of Proposition 3.2, as well as Theorem 1.1. Finally, we are ready to prove Theorem 1.2. _Proof._ By Theorem 2.1, a compact shrinking Ricci soliton \((M^{n},g,f)\) is linearly stable if and only if \[\delta_{g}^{2}\nu(h,h):=\frac{1}{(4\pi)^{n/2}}\int_{M}<N_{f}h,h>e^{-f}dV\leq 0\] for every \(h\in\operatorname{Sym}^{2}(T^{*}M)=\ \operatorname{Im}(\operatorname{div}_{f}^{ \dagger})\oplus\mathbb{R}\cdot\operatorname{Rc}\oplus\operatorname{Ker}( \operatorname{div}_{f})^{\perp}\). However, by Theorem 1.1(i) (i.e., Proposition 3.1), we have \[\int_{M}<N_{f}h,h>e^{-f}dV= \int_{M}<N_{f}h_{1},h_{1}>e^{-f}dV+\int_{M}<N_{f}h_{2},h_{2}>e^{- f}dV\] \[+\int_{M}<N_{f}h^{\perp},h^{\perp}>e^{-f}dV\] \[= \int_{M}<N_{f}h_{2},h_{2}>e^{-f}dV+\int_{M}<N_{f}h^{\perp},h^{ \perp}>e^{-f}dV,\] where \[h=h_{1}+h_{2}+h^{\perp},\quad\text{with }h_{1}\in\operatorname{Im}( \operatorname{div}_{f}^{\dagger}),\ h_{2}\in\mathbb{R}\cdot\operatorname{Rc}, \ h^{\perp}\in\operatorname{Ker}(\operatorname{div}_{f})^{\perp},\] and, in the last equality, we have used the fact that \(\delta_{g}^{2}\nu(h_{1},h_{1})=0\) for \(h_{1}\in\operatorname{Im}(\operatorname{div}_{f}^{\dagger})\) due to the diffeomorphism invariance of the \(\nu\)-entropy. On the other hand, since \(\operatorname{div}_{f}Rc=0\) and \(\mathcal{L}_{f}Rc=\frac{1}{2}Rc\) by Lemma 2.1, we obtain \[N_{f}(Rc)= \mathcal{L}_{f}Rc-\frac{\int_{M}|Rc|^{2}e^{-f}dV}{\int_{M}R\,e^{- f}dV}Rc\] \[= \mathcal{L}_{f}Rc-\frac{1}{2}Rc=0,\] where we have used the fact that \[\int_{M}|Rc|^{2}e^{-f}dV=\frac{1}{2}\int_{M}R\,e^{-f}dV,\] because the scalar curvature \(R\) satisfies the well-known equation \(\Delta_{f}R=R-2|Rc|^{2}\). Hence, \(N_{f}=0\) on \(\mathbb{R}\cdot\operatorname{Rc}\), and it follows that \[\int_{M}<N_{f}h_{2},h_{2}>e^{-f}dV=0.\] Also, as \(N_{f}=\mathcal{L}_{f}\) on \(\operatorname{Ker}(\operatorname{div}_{f})^{\perp}\), we immediately conclude that \[\int_{M}<N_{f}h,h>e^{-f}dV =\int_{M}<N_{f}h^{\perp},h^{\perp}>e^{-f}dV\] \[=\int_{M}<\mathcal{L}_{f}h^{\perp},h^{\perp}>e^{-f}dV.\] Therefore, \(\delta_{g}^{2}\nu(h,h)\leq 0\) if and only if \[\int_{M}<\mathcal{L}_{f}h^{\perp},h^{\perp}>e^{-f}dV\leq 0.\] This finishes the proof of Theorem 1.2. \(\square\) _Remark 3.1_.: In the proof of Theorem 1.2, if we use Lemma 2.3 instead of Theorem 1.1 (i) then we would get the following more explicit information about the Jacobi operator \(N_{f}\). **Proposition 3.3**.: (3.8) \[N_{f}=\begin{cases}0,&\text{on }\operatorname{Im}(\operatorname{div}_{f}^{ \dagger});\\ 0,&\text{on }\mathbb{R}\cdot\operatorname{Rc};\\ \mathcal{L}_{f}&\text{on }\operatorname{Ker}(\operatorname{div}_{f})^{\perp}. 
\end{cases}\] _In particular, \(N_{f}\leq 0\) on \(\operatorname{Sym}^{2}(T^{*}M)\) if and only if \(\mathcal{L}_{f}\leq 0\) on \(\operatorname{Ker}(\operatorname{div}_{f})^{\perp}\)._ _Remark 3.2_.: Suppose \(\xi=\mathscr{L}_{\omega^{\sharp}}g\) is an eigen-tensor of \(\mathcal{L}_{f}\) for some 1-form \(\omega\), with \[\mathcal{L}_{f}\xi=\lambda\xi.\] Then one can show that \(\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\xi\) and \(\nabla^{2}\operatorname{div}_{f}\omega\) are also eigen-tensors of \(\mathcal{L}_{f}\) with the same eigenvalue, i.e., \[\mathcal{L}_{f}(\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\xi)=\lambda(\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\xi),\] and \[\mathcal{L}_{f}(\nabla^{2}\operatorname{div}_{f}\omega)=\lambda(\nabla^{2}\operatorname{div}_{f}\omega).\] Indeed, if \(\mathcal{L}_{f}(\xi)=\lambda\xi\), then, by using the identity \[\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}(\mathcal{L}_{f}h)=\mathcal{L}_{f}(\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}h) \tag{3.9}\] shown in [33], we have \[\mathcal{L}_{f}(\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\xi)= \operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}(\mathcal{L}_{f}\xi)\] \[= \lambda(\operatorname{div}_{f}^{\dagger}\operatorname{div}_{f}\xi).\] On the other hand, by setting \(u=\operatorname{div}_{f}\omega\) and combining (3.5) with (2.18) and (2.15), we get \[2\mathcal{L}_{f}(\nabla^{2}u)= \mathcal{L}_{f}(\mathscr{L}_{\nabla u}g)\] \[= \frac{1}{2}\mathscr{L}_{(\Delta_{f}(du))^{\sharp}}g+\frac{1}{2}\mathscr{L}_{\frac{1}{2}\nabla u}g\] \[= \frac{1}{2}\mathscr{L}_{\nabla(\Delta_{f}u+u)}g\] \[= \frac{1}{2}\mathscr{L}_{2\lambda\nabla u}g\] \[= 2\lambda\nabla^{2}u.\] To conclude our paper, we mention two open problems. **Conjecture 1 (Hamilton; 2004 [5, 6])** \(\mathbb{S}^{4}\) and \(\mathbb{CP}^{2}\) are the only \(\nu\)-stable four dimensional positive Einstein manifolds. **Conjecture 2 (Cao; 2006 [5, 6])** A \(\nu\)-stable compact shrinking Ricci soliton is necessarily Einstein. _Remark 3.3_.: Besides \(\mathbb{S}^{4}\) and \(\mathbb{CP}^{2}\), the other known positive Einstein 4-manifolds are the Kähler-Einstein manifolds \(\mathbb{CP}^{1}\times\mathbb{CP}^{1}\), \(\mathbb{CP}^{2}\#(-k\mathbb{CP}^{2})\) (\(3\leq k\leq 8\)), and the (non-Kähler Einstein but conformally Kähler) Page metric [38] on \(\mathbb{CP}^{2}\#(-\mathbb{CP}^{2})\) and Chen-LeBrun-Weber metric [15] on \(\mathbb{CP}^{2}\#(-2\mathbb{CP}^{2})\). Note that, for \(n>4\), C. He and the first author [9] have found a strictly stable positive Einstein manifold, other than the round sphere \(\mathbb{S}^{n}\), in dimension 8.
2305.16206
Large Reconfigurable Quantum Circuits with SPAD Arrays and Multimode Fibers
Reprogrammable linear optical circuits are essential elements of photonic quantum technology implementations. Integrated optics provides a natural platform for tunable photonic circuits, but faces challenges when high dimensions and high connectivity are involved. Here, we implement high-dimensional linear transformations on spatial modes of photons using wavefront shaping together with mode mixing in a multimode fiber, and measure photon correlations using a time-tagging single-photon avalanche diode (SPAD) array. In order to prove the suitability of our approach for quantum technologies, we demonstrate two-photon interference in a tunable complex linear network -- a generalization of Hong-Ou-Mandel interference to 22 output ports. We study the scalability of our approach by quantifying the similarity between the ideal photon correlations and the correlations obtained experimentally for various linear transformations. Our results demonstrate the potential of wavefront shaping in complex media in conjunction with SPAD arrays for implementing high-dimensional reconfigurable quantum circuits. Specifically, we achieved $(80.5 \pm 6.8)\%$ similarity for indistinguishable photon pairs and $(84.9 \pm 7.0)\%$ similarity for distinguishable photon pairs using 22 detectors and random circuits. These results emphasize the scalability and reprogrammable nature of our approach.
Adrian Makowski, Michał Dąbrowski, Ivan Michel Antolovic, Claudio Bruschini, Hugo Defienne, Edoardo Charbon, Radek Lapkiewicz, Sylvain Gigan
2023-05-25T16:07:38Z
http://arxiv.org/abs/2305.16206v1
# Large Reconfigurable Quantum Circuits with SPAD Arrays and Multimode Fibers ###### Abstract Reprogrammable linear optical circuits are essential elements of photonic quantum technology implementations. Integrated optics provides a natural platform for tunable photonic circuits, but faces challenges when high dimensions and high connectivity are involved. Here, we implement high-dimensional linear transformations on spatial modes of photons using wavefront shaping together with mode mixing in a multimode fiber, and measure photon correlations using a time-tagging single-photon avalanche diode (SPAD) array. In order to prove the suitability of our approach for quantum technologies, we demonstrate two-photon interference in a tunable complex linear network -- a generalization of Hong-Ou-Mandel interference to 22 output ports. We study the scalability of our approach by quantifying the similarity between the ideal photon correlations and the correlations obtained experimentally for various linear transformations. Our results demonstrate the potential of wavefront shaping in complex media in conjunction with SPAD arrays for implementing high-dimensional reconfigurable quantum circuits. Specifically, we achieved \((80.5\pm 6.8)\%\) similarity for indistinguishable photon pairs and \((84.9\pm 7.0)\%\) similarity for distinguishable photon pairs using 22 detectors and random circuits. These results emphasize the scalability and reprogrammable nature of our approach. Indistinguishable photons have emerged as a key resource in advancing scientific research, particularly in the fields of quantum information processing[1], communication[2; 3], and metrology[4; 5; 6], owing to their unique properties such as entanglement[7; 8; 9; 10], superposition[11; 12; 13], and non-locality[14; 15]. One area of interest is the study of photonic quantum walks, which explores the behavior of quantum particles in complex environments[16; 17; 18]. This phenomenon has potential applications in fields such as quantum algorithms[19], simulations[20], and metrology[21; 22], as well as quantum computing, communication, and sensing[23]. Several research groups have made remarkable strides in the development of quantum walks, including the first experimental realization of two-dimensional quantum walks on a lattice using single photons[24], a quantum walk in a 21-waveguide array[25], and an application of a quantum walk to the study of bound states between systems with different topological properties[26]. However, these experimental setups have stringent limitations regarding reprogrammability and scalability, which are crucial for scaling the system to a higher number of modes for practical implementation on near-term quantum devices[27]. In this letter, we present a reprogrammable and scalable platform for implementing the quantum walk of a two-photon state using a multimode fiber (MMF) as a quantum state mixer[28; 29; 30]. Our platform can generate arbitrary N-output x 2-input quantum state operations that can be reprogrammed on demand at a 10 Hz rate (see Fig. 1). This provides significant advantages over existing experimental setups[31; 32] and makes it a promising candidate for future realizations of highly-multimode quantum walk experiments[33; 34; 35; 36]. The wavefronts of the photons are shaped using a spatial light modulator (SLM) and then coupled into an MMF.
The MMF is a complex medium that supports around 400 modes at wavelength \(\lambda\) = 800 nm and has low losses[29], which are essential features for performing multidimensional unitary operations on the single photons efficiently [28; 37; 38; 39]. Previous implementations faced a significant limitation in achieving scalability due to the integration of detection technology[29]. This required a large number of separate avalanche photodiodes, rendering the solution of problems like boson sampling [40] very impractical. Several experiments have employed single-outcome projective measurements for sequential analysis of the output state [18; 41]. However, these approaches suffer from inherent limitations, including substantial losses (as only two outputs can be detected simultaneously out of all possibilities) and time-intensive procedures (since detecting \(i\) output modes demands \(i^{K}\) measurements, with \(K\) representing the number of photons involved). Consequently, these methods become impractical for large-scale systems. To overcome this issue, here we use a 23-detector single-photon avalanche diode (SPAD23)[42] array to measure the number of counts and coincidences between photons at the linear network output. The ability to modify the phase pattern on the SLM allows the MMF to perform a specific linear operation on a two-photon state. This operation can be easily and reliably adjusted on demand. Degenerate photon pairs at 810 nm are produced by a type-II spontaneous parametric down conversion (SPDC) process in a ppKTP crystal pumped by a 405 nm continuous wave laser. To split the photons with orthogonal polarizations, we use a polarization beam splitter (PBS). Our SPDC setup allows us to adjust the time delay between the photons, and then observe and control Hong-Ou-Mandel (HOM) interference[43] by changing the photons' distinguishability. The measured HOM visibility of photons from our SPDC source is approx. 95%. See _Supplemental Material_ for more details on the source. In our experiment, we utilized a detector array consisting of Single-Photon Avalanche Diodes (SPADs) using CMOS technology[44; 45], specifically the SPAD23 model from Pi Imaging Technology[42]. This detector offers sub-ns temporal resolution (120 ps jitter FWHM and 20 ps for the least-significant bit when using time-to-digital converters as time-taggers), exhibits low dark noise (less than 100 counts per second at 20 °C), has a high pixel fill factor (80%), and a "dead time" of approximately 50 ns[46]. However, like all SPAD arrays, SPAD23 is prone to cross-talk, which occurs when a photon detected by one of the array's detectors is simultaneously counted by a neighboring detector[44; 47]. While the probability of cross-talk is low (approximately 0.1%), it affects the number of coincidences measured in our experiment but not the number of single photon counts[44]. To account for cross-talk, we employed a calibration procedure, which is detailed in the _Supplemental Material_. Figure 2 depicts the experimental setup used in our study. Two separate parts of the spatial light modulator (SLM), labeled as H and V for orthogonal light polarizations, were illuminated with two photons created in our SPDC platform. The SLM shaped the wavefronts of the photons, which were then focused onto the MMF with a 50 \(\mu\)m core. The MMF with the SLM induces a specific quantum operation on a 2-photon state, and modifying the phase pattern on the SLM allows for easy adjustment of this operation.
The resulting speckle image of the light emerging from the MMF was either imaged on the SPAD23 or the CCD camera after passing through a polarizer to select one particular polarization (for which the TM was measured). To calibrate the relative position of the MMF and SPAD23, we used a CCD camera, as depicted in Fig. 2(b). It shows the speckle image captured by the CCD camera, with the positions of the SPAD23 detectors marked with red circles. The CCD camera was used only for calibration purposes (see _Supplemental Material_ for details). The magnification was chosen to map one speckle mode of the MMF onto a single SPAD detector. The photon arrival time was measured using the SPAD23 detector and then used to calculate the number of counts \(n_{i}\) for each detector \(i\) and coincidences \(C_{ij}=\langle n_{i}n_{j}\rangle_{t}\) for each pair \((i,j)\). Figure 1: Reprogrammable and scalable platform for the implementation of quantum operations on a 2-photon state, where wavefronts of two photons, generated using spontaneous parametric down-conversion (SPDC), are shaped using two separate parts of the spatial light modulator (SLM) before being coupled into the multimode fiber (MMF) used as a quantum state mixer[28; 29; 30]. Usage of a 23-detector single-photon avalanche diode array (SPAD23)[42] enables arbitrary 23-output x 2-input quantum state operations \(\mathcal{L}_{2}^{23}\), a subset of the general unitary transformation \(\mathcal{U}_{23}^{23}\) performed by the MMF. Figure 2: (a) Schematic of the experimental setup. A photon pair of two orthogonal polarizations (H and V) is passed through single-mode fibers. The photons are collimated, and their wavefronts are shaped by the spatial light modulator (SLM). The shaped photons are then coupled into a multimode fiber (MMF). The MMF output is imaged on the SPAD23 or CCD by changing the flip-mirror position to measure the number of counts and coincidences. By knowing the MMF's transmission matrix, we can select a phase pattern on the SLM to perform an arbitrary operation. (b) Speckle image of the light coming out of the fiber imaged on the camera, and the localization of the SPAD23 detectors. Our experimental platform establishes a connection between input modes displayed on the SLM and corresponding output modes measured using 23 detectors, described by the unitary operator \(\mathcal{U}_{23}^{23}\), using the well-established technique of a transmission matrix (TM) measurement of the optical system[48]. The measured TM is stable for a few days in normal laboratory conditions. We measure the TM in a Fourier mode basis by displaying phase ramps on the SLM with a varying inclination and orientation[49]. This allows us to scan the different spatial positions at the entrance of the MMF and, as a result, to address particular output modes of the MMF after TM calibration. If the addressed mode is not an eigenvector of the TM, the light becomes scrambled as it propagates through the MMF. The electric field amplitude at SPAD23 is linearly dependent on the electric field at the SLM and can be represented as: \[E_{out}^{(k)}=T_{k}^{(1)}E_{in}^{(k)},\,\text{for}\,\,k=H,V, \tag{1}\] where \(E_{out}^{(k)}\) is the electric field at SPAD23, \(T_{k}^{(1)}\) is a one-photon transmission matrix for SLM part \(k\), and \(E_{in}^{(k)}\) is the electric field on the SLM part \(k\), corresponding to the SLM part shaping the polarization \(k=H,V\).
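The complex fields entering Eq. (1) are not measured directly; as described below, they are retrieved by phase-stepping interferometry. A minimal sketch of the standard four-step retrieval is given here (the co-propagating reference field and the overall normalization are assumptions of the sketch, not details taken from this work):

```python
# Four-step phase-stepping retrieval of a complex field (illustrative sketch).
# For each probed input mode, intensities I(phi) are recorded while a reference
# phase phi = 0, pi/2, pi, 3pi/2 is added on the SLM. Writing the detected
# intensity as I(phi) = |E + E_ref * exp(1j*phi)|^2, the interference term
# yields E * conj(E_ref), i.e., the field E up to a constant reference factor.
def field_from_phase_steps(I0, I90, I180, I270):
    return (I0 - I180) + 1j * (I90 - I270)  # proportional to 4 * E * conj(E_ref)
```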
During the TM measurement, the SPAD acquires the number of photon counts per 10 ns time-window for each SLM mode, and in order to obtain the electric field on SPAD23 we perform phase-stepping interferometry[48]. We perform this operation separately for both light polarizations (SLM parts) and then calculate the transmission matrix for the two-photon state[28]: \[T_{H}^{(1)},T_{V}^{(1)}\to T^{(2)}. \tag{2}\] With this information, we can readily calculate the SLM pattern that gives us the required quantum operation on the two-photon state \(\mathcal{L}\in\mathbb{M}_{2\times N}\), where \(N\) is the number of detectors. The computation of the SLM pattern for a given \(\mathcal{L}\) takes only a few seconds. The electric field on the SLM can be calculated using the complex conjugate of the transmission matrix for the two-photon state: \[[E_{in}^{(H)},E_{in}^{(V)}]=T^{\dagger(2)}\mathcal{L}. \tag{3}\] Knowing the TM of the MMF, one can modify the phase pattern on the SLM at a rate of 10 Hz and obtain different N-output x 2-input quantum linear network operations \(\mathcal{L}_{2}^{N}\). As an example of an all-to-all operator, we emulate a 10x10 Sylvester operation[50] on the two-photon state generated by the experimental platform described above. We measured the TM of the MMF in our setup to calculate the appropriate SLM pattern for performing the 10-dimensional Sylvester operation: \[\mathcal{L}_{S}=\left\{\begin{array}{rl}1&\text{for}\,\,\,k=V\\ (-1)^{i}&\text{for}\,\,\,k=H,\end{array}\right. \tag{4}\] where \(i\) denotes the detector index. We measured the number of coincidence counts for distinguishable and indistinguishable photon pairs. The results of this experiment are presented in Fig. 3. Figure 3(a) shows a HOM interference scan for a 4-dimensional Sylvester operator, which corresponds to the number of coincidences as a function of the relative delay between the two photons. The HOM dip in the scan indicates the presence of interference between the two photons, which is essential for quantum operations with indistinguishable particles[43]. Figure 3: (a) Hong-Ou-Mandel (HOM) interference scan for a 4x4 operation, displaying the number of coincidences as a function of the relative delay between the two photons. (b)-(e) Example results of a 10x10 operation on the 2-photon state. (b) and (c) show the experimental and theoretical coincidence counts for indistinguishable photon pairs, respectively. (d) and (e) show the corresponding experimental and theoretical coincidence counts for distinguishable photon pairs, acquired over a 100-second measurement period. The SPAD23 detectors used in the experiment are marked in yellow in Fig. 4. The HOM visibility \(V\) for different coincidence distributions \(C_{i,j}\), ranging from \(V=0.74\) to \(V=0.92\), deviates from ideality (the 95% indistinguishability of the source) mainly because of cross-talk between different SPAD23 detectors (see _Supplemental Material_ for details), as well as the photons' spectral dispersion when propagating through the MMF and the imperfect fidelity of the linear network operator. Figures 3(b) and 3(d) show the experimental coincidence counts for indistinguishable and distinguishable photon pairs, respectively, over 10 output modes. These counts were acquired for 100 s by measuring the number of coincidences between the SPAD23 detectors when the photons were either indistinguishable or distinguishable. The theoretical coincidence counts for indistinguishable and distinguishable photon pairs[51, 52] are presented in Figs. 3(c) and 3(e), respectively.
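For reference, such theoretical distributions follow from the standard two-photon transfer rule: with one photon in each input of a linear network \(\mathcal{L}\), the coincidence amplitude between outputs \(i\) and \(j\) is the permanent of the corresponding 2x2 submatrix. A minimal sketch (unnormalized, and restricted to pairs of distinct detectors, as measured with the SPAD array):

```python
import numpy as np

def coincidences(L, indistinguishable=True):
    """Unnormalized two-photon coincidence distribution C_ij for an
    N-output x 2-input linear network L (columns = the two input modes)."""
    N = L.shape[0]
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue  # only coincidences between distinct detectors
            if indistinguishable:
                # two-photon amplitudes interfere: permanent of the 2x2 submatrix
                C[i, j] = abs(L[i, 0] * L[j, 1] + L[j, 0] * L[i, 1])**2
            else:
                # distinguishable photons: classical sum of probabilities
                C[i, j] = abs(L[i, 0] * L[j, 1])**2 + abs(L[j, 0] * L[i, 1])**2
    return C
```

For instance, a Sylvester-type operation in the spirit of Eq. (4) can be encoded as an N x 2 array with columns \((-1)^{i}\) and 1 and passed directly as L.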
Comparing the experimental results presented in Figs. 3(b) and 3(d) with the theoretical predictions in Figs. 3(c) and 3(e), we can see that they agree well[51, 52]. The experimental results also demonstrate that the experimental platform can successfully generate two-photon states for quantum operations, as well as measure and characterize their properties through coincidence counting. We finally conducted an investigation into the scalability and reprogrammable nature of our experimental platform by performing random matrix quantum operations with varying numbers of detectors. Figure 4(a) presents coincidence distributions, with the first column showcasing the theoretical coincidence counts for indistinguishable photon pairs utilizing the Sylvester operation[50] across a range of detectors. Meanwhile, the second column demonstrates the corresponding experimental coincidence counts, which we determined by measuring the number of coincidences between the SPAD detectors. Because of a high number of dark counts in one of the detectors, we excluded it from the experiments. Figure 4(b) shows the similarity trends \(\langle\mathcal{S}_{ET}\rangle_{\mathcal{L}_{R}}\) for distinguishable and indistinguishable photon pairs when performing 100 random \(\mathcal{L}_{R}\) operations (random complex numbers drawn from a uniform distribution) for different numbers of SPAD detectors. The similarity \(\mathcal{S}_{ET}\) of two coincidence distributions \(C_{i,j}^{(E)}\) and \(C_{i,j}^{(T)}\) (corresponding to a particular \(\mathcal{L}_{R}\) operator), representing a generalized fidelity for 2-fold coincidences[25], is defined as: \[\mathcal{S}_{ET}=\frac{\left(\sum_{i,j}\sqrt{C_{i,j}^{(E)}C_{i,j}^{(T)}}\right)^{2}}{\sum_{i,j}C_{i,j}^{(E)}\sum_{i,j}C_{i,j}^{(T)}}. \tag{5}\] In other words, the similarity quantifies the extent to which the experimental results align with theoretical predictions, with higher values indicating stronger agreement. Figure 4: (a) Sylvester transformation for different numbers of detectors. The first column exhibits the SPAD23 detector array used in the experiment, with the detectors used for the measurement with a given number of detectors marked in yellow. The second column shows the experimental coincidence counts for various numbers of detectors and indistinguishable photon pairs. The third column shows the theoretical coincidence counts for the corresponding number of detectors. (b) A random operation similarity trend for indistinguishable and distinguishable photon pairs. We see that the similarity decreases as we increase the number of detectors from 4 to 22, from \(98.3\pm 1.23\%\) (\(95.3\pm 5.5\%\)) to \(84.9\pm 7.0\%\) (\(80.5\pm 6.8\%\)) for distinguishable (resp. indistinguishable) photons.
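The similarity values quoted above follow directly from Eq. (5); a minimal sketch:

```python
import numpy as np

def similarity(C_exp, C_th):
    """Similarity of two coincidence distributions, Eq. (5)."""
    return np.sqrt(C_exp * C_th).sum()**2 / (C_exp.sum() * C_th.sum())
```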
It offers advantages in terms of scalability and flexibility compared to other approaches[54, 53, 32]. We demonstrate the feasibility of the presented approach by implementing a complex linear network for two-photon interference. We measure the two-photon correlations between all the output pairs using a time-tagging SPAD array[55, 47], thus advancing towards scalable detection schemes beyond previously proposed solutions[28, 29]. The similarity between the ideal photon correlations and the correlations has been obtained experimentally for up to 22 output modes for various randomly chosen linear transformations for both indistinguishable and distinguishable photon pairs. The current limitation of the setup is the number of available photons preventing us from studying high dimensional linear networks applied to multi-photon states \(\mathcal{L}\)[56, 57], as well as detector cross-talk[44] that affect coincidences between detectors located close to each other thus reducing the measured similarity of the states. Future work in this area could focus on further optimizing the wavefront shaping and mode mixing techniques - along with sources delivering more than two photons enabling the achieving of even higher dimensional transformations[56]. Additionally, the use of more advanced detectors, either superconducting nanowire single-photon detectors with near-unity quantum efficiency[58] or SPAD cameras with more pixels[59, 47] could further improve the measurement capabilities. The scalability and programmable nature of the presented approach make it promising for applications in quantum information processing, such as quantum communication[2, 3] and quantum computing[33, 34, 35, 36], especially in the perspective of using more detectors to test different boson sampling protocols which can overcome the capabilities of existing classical information processing schemes[60, 61, 27, 56]. _Acknowledgments_ We would like to thank Saroch Leedumrongwatthanakun for fruitful discussions and initial guidance with the setup. S.G. acknowledges funding from the European Research Council ERC Consolidator Grant (SMARTIES-724473). H.D. acknowledges funding from the European Research Council ERC Starting Grant (SQIIMIC-101039375). A.M acknowledge Scholarship of French Government - Ph.D. Coutletelle/Codirection and "IV.4.1. A Complex Programme of Support For UW PhD Students". A.M. and R.L acknowledge the support by the Foundation for Polish Science under the FIRST TEAM project 'Spatiotemporal photon correlation measurements for quantum metrology and super-resolution microscopy' co-financed by the European Union under the European Regional Development Fund (POIR.04.04.00-00-3004/17-00). The International Centre for Translational Eye Research (MAB/2019/12) project is carried out within the International Research Agendas Program of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.
2305.09326
Conservation Laws for a Thermal Reservoir Model in Open Quantum Systems
We construct Lie point symmetries, a closed-form solution, and conservation laws using a non-Noetherian approach for a specific case of the Gorini-Kossakowski-Sudarshan-Lindblad equation that has been recast for the study of non-relativistic free particles in a thermal reservoir environment. Conservation laws are subsequently constructed using the Ibragimov method, via a solution to the adjoint form of the equation of motion obtained through its corresponding scaling symmetry. A general computational framework for obtaining all conserved vectors is exhibited, and some triplets of conserved quantities are calculated in full.
Muhammad Al-Zafar Khan, Mervlyn Moodley, Francesco Petruccione
2023-05-16T10:01:34Z
http://arxiv.org/abs/2305.09326v1
# Conservation Laws for Free Particles Interacting with a Thermal Reservoir in Open Quantum Systems ###### Abstract We construct Lie point symmetries, a closed-form solution, and conservation laws using a non-Noetherian approach for a specific case of the Gorini-Kossakowski-Sudarshan-Lindblad equation that has been recast for the study of non-relativistic free particles in a thermal reservoir environment. Conservation laws are subsequently constructed using the Ibragimov method, via a solution to the adjoint form of the equation of motion obtained through its corresponding scaling symmetry. A general computational framework for obtaining all conserved vectors is exhibited, and some triplets of conserved quantities are calculated in full. _Keywords_: Open quantum systems; Quantum Brownian motion; Lie symmetries; Conservation laws
2306.10327
Capillary nanowaves on surfactant-laden liquid films with surface viscosity and elasticity
Thermal motions of molecules can generate nanowaves on the free surface of a liquid film. As nanofilms are susceptible to contamination by surfactants, this work investigates the effects of surfactants on the dynamics of nanowaves on a bounded film with a finite depth, using both molecular dynamics simulations and analytical theories. In molecular simulations, a bead-spring model is adopted to simulate surfactants, where beads are connected by finite extensible nonlinear elastic potentials. Fourier transforms of the film surface profiles $h(x,t)$ extracted from molecular simulations are performed to obtain the static spectrum $|h_q|_{\mathrm{rms}}$ and temporal correlations of surface modes $<h_q(0)h_q^*(t)>$. It is shown that the spectral amplitude is increased for the contaminated liquid surface compared to the clean surface because surfactants can decrease surface tension. A higher concentration of surfactants on the surface not only decreases the surface tension but also imparts elastic energy to the free surface, as the scaling of spectral amplitude with wavenumbers changes from $|h_q|_{\mathrm{rms}}\sim q^{-1}$ to $|h_q|_{\mathrm{rms}}\sim q^{-2}$ for modes with large wavenumbers. Regarding the temporal correlations of surface modes, it is observed that the presence of surfactants leads to a slower decay, which, however, cannot be predicted by only considering the decreased surface tension. Based on the Boussinesq-Scriven model for surface viscosity, a linear stability analysis of Stokes flow for films with arbitrary depth is conducted and the obtained dispersion relation considering surface viscosity can justify the simulation results.
Yixin Zhang, Zijing Ding
2023-06-17T12:06:56Z
http://arxiv.org/abs/2306.10327v1
# Capillary nanowaves on surfactant-laden liquid films with surface viscosity and elasticity ###### Abstract Thermal motions of molecules can generate nanowaves on the free surface of a liquid film. As nanofilms are susceptible to contamination by surfactants, this work investigates the effects of surfactants on the dynamics of nanowaves on a bounded film with a finite depth, using both molecular dynamics simulations and analytical theories. In molecular simulations, a bead-spring model is adopted to simulate surfactants, where beads are connected by finite extensible nonlinear elastic potentials. Fourier transforms of the film surface profiles \(h(x,t)\) extracted from molecular simulations are performed to obtain the static spectrum \(|h_{q}|_{\text{rms}}\) and temporal correlations of surface modes \(<h_{q}(0)h_{q}^{*}(t)>\). It is shown that the spectral amplitude is increased for the contaminated liquid surface compared to the clean surface because surfactants can decrease surface tension. A higher concentration of surfactants on the surface not only decreases the surface tension but also imparts elastic energy to the free surface, as the scaling of spectral amplitude with wavenumbers changes from \(|h_{q}|_{\text{rms}}\sim q^{-1}\) to \(|h_{q}|_{\text{rms}}\sim q^{-2}\) for modes with large wavenumbers. Regarding the temporal correlations of surface modes, it is observed that the presence of surfactants leads to a slower decay, which, however, cannot be predicted by only considering the decreased surface tension. Based on the Boussinesq-Scriven model for surface viscosity, a linear stability analysis of Stokes flow for films with arbitrary depth is conducted and the obtained dispersion relation considering surface viscosity can justify the simulation results. ## I Introduction Surfactants are widely used in a number of industrial applications such as foams and emulsions [1], detergents, inks [2], and oil recovery [3]. The presence of surfactants can significantly alter the behaviors of liquid surfaces. For example, surfactants in water may lead to a Marangoni stress on the bubble surface that can slow the rising of bubbles [4]. Thus, it is of great interest and importance to study the effects of surfactants on various kinds of interfacial fluid dynamics in detail. Thermal capillary waves (TCW) on a free liquid surface are waves spontaneously excited by thermal motions of molecules. These waves have been used in experiments to measure fluid properties such as surface tension [5] and viscoelasticity [6]. These waves are also critical to the instability of nanofilms [7; 8; 9; 10], coalescence of nanodroplets [11] and bubbles, and rupture of nanojets [12]. The roughness on a liquid surface created by TCW is usually on the scale of nanometres, but micrometer roughness can also be obtained and thus observed optically using ultra-low surface tension mixtures (\(\gamma\sim 10^{-6}\) N/m) [13], since thermal roughness is usually proportional to the thermal length \(\sqrt{k_{B}T/\gamma}\) (\(k_{B}\), \(T\), and \(\gamma\) are the Boltzmann constant, temperature, and surface tension, respectively). It is worth mentioning that the amplitude of TCW will diverge with increasing system size if one considers only surface tension on the free surface. The introduction of gravity or a certain binding potential, however, can remedy this problem [14]. At thermal equilibrium, the amplitude of TCW, namely the static spectrum, can be described by the famous capillary wave theory [15; 16].
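To put the scales mentioned above in perspective, a quick evaluation of the thermal length \(\sqrt{k_{B}T/\gamma}\) (room temperature assumed for illustration) shows why thermal roughness is sub-nanometric for ordinary liquids yet can approach optically observable scales for ultra-low-tension mixtures:

```python
import numpy as np

kB, T = 1.380649e-23, 300.0          # J/K and K (room temperature assumed)
for gamma in (7.2e-2, 1e-6):         # N/m: water vs. an ultra-low-tension mixture
    lT = np.sqrt(kB * T / gamma)     # thermal length setting the roughness scale
    print(f"gamma = {gamma:.0e} N/m -> sqrt(kB*T/gamma) = {lT:.1e} m")
# ~2.4e-10 m for water and ~6.4e-08 m for gamma ~ 1e-6 N/m; the actual rms
# roughness also grows with system size through the sum over modes.
```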
Recently, the capillary wave theory has been extended using a Langevin equation to describe the transient dynamics of non-equilibrium TCW and their approach to thermal equilibrium [17]. When in thermal equilibrium, it is well known that the temporal correlations of overdamped surface modes of TCW show an exponential decay [18; 19], with a decay rate given by the dispersion relation of the system, which provides the basis for measuring the system's properties using TCW [5; 6]. This technique has the advantage of being noninvasive since no external forces are applied. As the surface of a nanometric liquid film can often be contaminated by surface-active agents such as surfactants [20; 21], it is crucial to understand how surfactants modify the properties of TCW. Perhaps the best-known effect of surfactants is their ability to decrease surface tension. Thus, adding surfactants to a liquid surface may enhance the amplitude of thermal waves and the surface roughness. Apart from decreasing surface tension, surfactants may form a monolayer on the liquid surface and endow the liquid surface with elastic properties [22; 23]. For example, the stability of emulsions, found in mixtures of water, oil, and amphiphilic surfactants, may be influenced by the bending rigidity (\(\kappa\)) of the surfactant monolayers [24; 25]. The bending rigidity of emulsions is on the order of \(\kappa\sim k_{B}T\) and may overwhelm the effects of surface tension [25]. It is thus interesting to see how the bending rigidity affects thermal capillary waves. In fact, a recent experiment used the static spectra of TCW in an oil-water-surfactant system to measure the bending rigidity of the water-oil interface of droplets [26]. The adsorption of surfactants on a fluid surface may also increase that surface's resistance to deformations and lead to a new surface property called surface viscosity [27]. The importance of surface viscosity on free surface flows is now generally acknowledged. For example, Joye _et al._[28] experimentally showed that high surface viscosity can prevent asymmetric drainage of thin films in foams. Ponce-Torres _et al._[29] found surface viscosity is responsible for the accumulation of surfactants in the satellite droplets formed by the breakup of a large surfactant-laden droplet. Surface viscosity can also change the instability and pinch-off profile of viscous threads [30; 31]. For thin liquid films destabilized by the disjoining pressure, surface viscosity is able to decrease the growth of perturbations [32; 33]. Shen _et al._[34] theoretically showed surface viscosity contributes an overall damping effect to the amplitude of the capillary waves on films with infinite depth. However, there are very few studies about the effects of surface viscosity on TCW. Given the importance of surface viscosity on free surface flows, seeking accurate methods to measure surface viscosity is necessary. Though much progress has been made, it is still debatable how to obtain reliable values of surface viscosity, since Marangoni effects and surface viscosity can often coexist during measurements [35]. Experimental measurements of surface viscosity have reported values that are orders of magnitude apart and are controversial [36]. As experiments at the nanoscale are difficult to perform due to the small temporal and spatial scales, molecular dynamics (MD) simulations become a valuable tool to investigate nanoflows and can act as virtual experiments to validate newly developed theories.
In terms of nanoflows involving free surfaces, there is a growing number of molecular studies of nanodroplets, nanobubbles [37], nanofilms [8] and nanojets [12]. MD studies of TCW have been carried out for a variety of problems, including TCW for free liquid films [38], and TCW for films on no-slip [39] and anisotropic-slip [19] substrates, to name a few. The introduction of surfactants to TCW in MD has been implemented in several studies [25; 40], but previous works only considered the effects of surfactants on the surface tension and bending rigidity of the liquid surface by examining the static spectrum. The dynamics of TCW, such as the relaxation of correlations associated with surface viscosity and elasticity, have not been studied. In this work, MD simulations are used to study the effects of surfactants on the overdamped thermal capillary waves for liquid films bounded on substrates. A bead-spring model is adopted to simulate surfactants, where beads are connected by finite extensible nonlinear elastic potentials. The surface concentrations of surfactants are varied to see how they change the behaviors of the liquid interface. We obtain surface modes of surface waves from MD simulations and calculate their static spectra \(|h_{q}|_{\rm rms}\) and temporal correlations \(<h_{q}(0)h_{q}^{*}(t)>\). The static spectrum is used to infer bending rigidity and surface tension. The Boussinesq-Scriven model for the surface viscosity is adopted, and we perform a linear stability analysis of a liquid film with arbitrary depth in the Stokes limit and obtain the corresponding dispersion relation for the first time, which is validated against the MD temporal correlations. This paper is organized as follows. In Sec. II, we formulate the problem that we are going to solve and derive the dispersion relation for films with any depth and surface viscosity. In Sec. III, the MD model of nanoscale liquid films with surfactants is introduced. Sec. IV shows the comparison between MD and analytical theories. We conclude our findings and outline future directions of research in Sec. V. ## II Mathematical modeling As shown in Fig. 1, we consider a Newtonian liquid film on a solid surface. The free surface of the liquid film is covered with insoluble surfactants. Initially, the film has a size \((L_{x},L_{y},h_{0})\) and is quasi-two dimensional (2D) by letting \(L_{x}\gg L_{y}\). Due to thermal motions of molecules, the film surface fluctuates weakly around \(h_{0}\) and the instantaneous surface profile is given by \(h(x,t)\). ### Static spectrum and temporal correlations for surfactant-laden liquid films The static spectrum of capillary waves on a clean film can be determined by the equipartition theorem; this forms the basis of classical capillary wave theory [16]. For a film contaminated with surfactants, the static spectrum has to be modified to consider the contributions of the elastic energy arising from the formed monolayer of surfactants. In terms of a quasi-2D film under small perturbations in Fig. 1, the extra free energy \(f\) due to the change of surface area is [24; 25] \[f\approx\frac{L_{y}}{2}\gamma\int\left(\frac{\partial h}{\partial x}\right)^{2}\!dx+\frac{L_{y}}{2}\kappa\int\left(\frac{\partial^{2}h}{\partial x^{2}}\right)^{2}\!dx. \tag{1}\] Note that the effect of disjoining pressure is ignored as we are going to study a stable film where the effect of disjoining pressure is weak.
As will be shown, in our MD simulations the thickness of the film is much larger than the cut-off distance, so the effect of disjoining pressure is safely neglected. Let us define the Fourier transform of \(\delta h=h(x)-h_{0}\) as \(h_{q}=\int\delta he^{-iqx}dx\). From Parseval's theorem, we can express Eq. (1) in terms of Fourier modes: \[f=\frac{1}{2}\frac{L_{y}}{L_{x}}\gamma\sum q^{2}|h_{q}|^{2}+\frac{1}{2}\frac{L_{y}}{L_{x}}\kappa\sum q^{4}|h_{q}|^{2}. \tag{2}\] As each summand appears quadratically, it has the same energy \(\frac{1}{2}k_{B}T\), from the equipartition theorem, so that \[\frac{1}{2}k_{B}T=\frac{L_{y}}{L_{x}}\left(\frac{1}{2}\gamma q^{2}+\frac{1}{2}\kappa q^{4}\right)|h_{q}|^{2}. \tag{3}\] Figure 1: Sketch of a (quasi-two dimensional) surfactant-laden liquid film on a plate. Here \(h_{0}\) is the initial film thickness and \(h=h(x,t)\) is the film thickness under spontaneous perturbations due to thermal fluctuations. The film has a small depth \(L_{y}\) in the \(y\) direction (into the page). Thus, the static spectrum for a surfactant-laden film is derived: \[S_{s}=\sqrt{\left\langle\left|h_{q}\right|^{2}\right\rangle}=\sqrt{\frac{L_{x}}{L_{y}}\frac{k_{B}T}{\gamma q^{2}+\kappa q^{4}}}. \tag{4}\] Here \(\left\langle...\right\rangle\) represents an ensemble average. In thermal equilibrium, the temporal correlations of overdamped capillary waves usually decay exponentially to zero [5; 39] \[C_{h_{q}h_{q}^{*}}=\frac{\left\langle h_{q}\left(q,t\right){h_{q}}^{*}\left(q,t^{\prime}\right)\right\rangle}{S_{s}^{2}}=e^{\Omega\left(q\right)\left|t-t^{\prime}\right|}, \tag{5}\] where the asterisk denotes complex conjugation. The decay rate \(\Omega\) (\(\Omega<0\)) is given by the dispersion relation of the system (the temporal growth rate of a surface mode), which is derived by a linear stability analysis (LSA) in the next subsection. ### LSA of Stokes flow for films with arbitrary depth and surface viscosity The liquid is assumed to be incompressible, \[\nabla\cdot\mathbf{u}=0, \tag{6}\] where \(\mathbf{u}=\left(u_{x},u_{z}\right)\), and \(u_{x},u_{z}\) are the velocities in the \(x\) and \(z\) directions, respectively. The momentum equation, with the assumption of Stokes flow (the Reynolds number is small), is as follows: \[\mu\nabla^{2}\mathbf{u}=\nabla p, \tag{7}\] where \(\mu\) and \(p\) are the viscosity and pressure of the liquid. Note that transient inertia may come into play depending on the frequency of the perturbations, which leads to a critical wavenumber below which the correlations are underdamped instead of being overdamped [38]. However, as one can see from our following molecular simulations, all measured correlations are overdamped. For the dynamic boundary condition at the liquid-vapour interface \(z=h(x,t)\), we have the leading-order Boussinesq-Scriven model for a one-dimensional surface [34] \[\mathbf{n}\cdot\mathbf{\tau}=\nabla_{s}\tilde{\gamma}-\tilde{\gamma}\left(\nabla_{s}\cdot\mathbf{n}\right)\mathbf{n}+\mathbf{f}, \tag{8}\] where \(\mathbf{\tau}\) is the hydrodynamic stress tensor, \(\tau_{ij}=-p\delta_{ij}+\mu\left(\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x_{i}\right)\). Here \(\tilde{\gamma}\) is the surface tension modified by the surface viscosity, \(\tilde{\gamma}=\gamma+\left(\mu_{d}+\mu_{s}\right)\nabla_{s}\cdot\mathbf{u}\), and \(\nabla_{s}\) is the surface gradient.
The dilatational (\(\mu_{d}\)) and shear (\(\mu_{s}\)) components of surface viscosity occur in additive pairs in a 2D system and will hereafter be written in terms of a single parameter, \(\eta=\mu_{d}+\mu_{s}\) [32; 33]. The term \(\mathbf{f}=\kappa\frac{\partial^{4}h}{\partial x^{4}}\mathbf{n}\) is the elastic pressure [41]. Finally, \(\mathbf{n}\) is the outward normal to the free surface: \[\mathbf{n}=\frac{\left(-\partial h/\partial x,1\right)}{\sqrt{1+\left(\partial h/\partial x\right)^{2}}}. \tag{9}\] Under the assumption of small perturbations (\(\partial h/\partial x\ll 1\)), the dynamic boundary condition is reduced to (in the normal direction): \[-p+2\mu\frac{\partial u_{z}}{\partial z}=\gamma\frac{\partial^{2}h}{\partial x^{2}}+\kappa\frac{\partial^{4}h}{\partial x^{4}}, \tag{10}\] and in the tangential direction to the surface \[\mu\left(\frac{\partial u_{x}}{\partial z}+\frac{\partial u_{z}}{\partial x}\right)=\eta\frac{\partial^{2}u_{x}}{\partial x^{2}}, \tag{11}\] where we have assumed that surface tension (the concentration of surfactants) is uniform along the surface so that the \(\partial\gamma/\partial x\) term is negligible. This is because, at the nanoscale, surface diffusion can counter advective effects and result in a uniform surfactant distribution [42]. This is also directly found in our MD simulations shown in the appendix. The kinematic condition at the free surface is given by \[u_{z}=\frac{\partial h}{\partial t}+u_{x}\frac{\partial h}{\partial x}. \tag{12}\] At \(z=0\), the no-penetration condition and Navier's slip condition are: \[u_{z} =0, \tag{13}\] \[u_{x} =\ell\frac{\partial u_{x}}{\partial z}. \tag{14}\] Equations (6-14) are linearised using normal modes \[\left(u_{x},u_{z},p,h-h_{0}\right)=\left(\tilde{u}_{x},\tilde{u}_{z},\tilde{p},\tilde{h}\right)e^{\Omega t+iqx}. \tag{15}\] With those, the expression for the single variable \(\tilde{u}_{z}\) can be obtained from Eq. (6) and Eq. (7) as: \[\frac{d^{4}\tilde{u}_{z}}{dz^{4}}-2q^{2}\frac{d^{2}\tilde{u}_{z}}{dz^{2}}+q^{4}\tilde{u}_{z}=0. \tag{16}\] The general solution for \(\tilde{u}_{z}\) is thus [18] \[\tilde{u}_{z}=C_{1}\cosh\left(qz\right)+C_{2}\sinh\left(qz\right)+C_{3}qz\cosh\left(qz\right)+C_{4}qz\sinh\left(qz\right), \tag{17}\] and one can also obtain the solutions for \(\tilde{u}_{x}\) and \(\tilde{p}\): \[\tilde{u}_{x}=\frac{i}{q}\frac{d\tilde{u}_{z}}{dz}, \tag{18}\] \[\tilde{p}=2\mu q\left[C_{3}\cosh\left(qz\right)+C_{4}\sinh\left(qz\right)\right]. \tag{19}\] Here, \(C_{1}-C_{4}\) are four coefficients to be determined by the boundary conditions Eqs. (10-14), whose linearised forms are \[-\tilde{p}+2\mu\frac{d\tilde{u}_{z}}{dz}=-\gamma q^{2}\tilde{h}-\kappa q^{4}\tilde{h},\quad\text{at}\quad\ z=h \tag{20a}\] \[\mu\left(\frac{d\tilde{u}_{x}}{dz}+iq\tilde{u}_{z}\right)=-\eta q^{2}\tilde{u}_{x},\quad\text{at}\quad\ z=h\] (20b) \[\Omega=\frac{\tilde{u}_{z}}{\tilde{h}},\quad\text{at}\quad\ z=h\] (20c) \[\tilde{u}_{z}=0,\quad\text{at}\quad\ z=0\] (20d) \[\tilde{u}_{x}=\ell\frac{d\tilde{u}_{x}}{dz},\quad\text{at}\quad\ z=0. \tag{20e}\] Substituting Eqs. (17-19) into Eqs.
(20) leads to the required dispersion relation \[\Omega=-\frac{\gamma q^{2}+\kappa q^{4}}{\mu q}\frac{B_{1}}{B_{2}},\] \[B_{1}= \mu\left[\sinh\left(2qh_{0}\right)-2qh_{0}+4q\ell\sinh^{2}\left(qh_{0}\right)\right]\] \[+\left\{\sinh^{2}\left(qh_{0}\right)-q^{2}h_{0}^{2}+q\ell\left[\sinh\left(2qh_{0}\right)-2qh_{0}\right]\right\}\eta q,\] \[B_{2}= \mu\left\{4q^{2}h_{0}^{2}+4\cosh^{2}\left(qh_{0}\right)+4q\ell\left[2qh_{0}+\sinh\left(2qh_{0}\right)\right]\right\}\] \[+\left[2qh_{0}+\sinh\left(2qh_{0}\right)+4q\ell\cosh^{2}\left(qh_{0}\right)\right]\eta q. \tag{21}\] ## III Molecular dynamics simulations Molecular dynamics simulations are used to simulate surfactant-laden liquid films on substrates. These simulations are performed using the open-source code LAMMPS [43]. The molecular system contains fluid atoms (liquid and its vapor), surfactant molecules, and solid atoms, as shown in Fig. 2(a). A surfactant molecule \(H_{m}T_{n}\) consists of \(m+n\) atoms and is amphiphilic, with \(m\) hydrophilic atoms (the head group) and \(n\) hydrophobic atoms (the tail group). The non-bonded intermolecular potentials \(U\) between \(i\)-type atoms and \(j\)-type atoms are simulated with the standard Lennard-Jones (LJ) 12-6 potential: \[U(r_{ij})=\begin{cases}4\varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]&\text{if }\,r_{ij}\leq r_{c,ij},\\ 0&\text{if }\,r_{ij}>r_{c,ij},\end{cases} \tag{22}\] where \(r_{ij},\varepsilon_{ij},\sigma_{ij}\) and \(r_{c,ij}\) are the pairwise distance, energy parameter, length parameter, and cut-off distance, respectively. The complete lists of parameters among the liquid (L), solid (S), head group of surfactants (H), and tail group of surfactants (T) are given in Table 1. The liquid is simulated as argon, whose \(\varepsilon_{ll}\), \(\sigma_{ll}\), and atomic mass are \(1.67\times 10^{-21}\) J, 0.34 nm, and \(6.63\times 10^{-26}\) kg, respectively. The temperature of this system is kept at \(T=85\) K (\(T^{*}=0.7\) in reduced units of \(\varepsilon_{ll}/k_{B}\), where * henceforth denotes LJ units and \(k_{B}\) is the Boltzmann constant) using the Nosé-Hoover thermostat. At this temperature, the mass density of liquid argon is \(1.40\times 10^{3}\) kg/m\({}^{3}\) and the number density is \(n_{l}^{*}=0.83/\sigma_{ll}^{3}\). Figure 2: Snapshots of MD simulations of a surfactant-laden liquid film on a substrate. (a) Initial setting. The fluid atoms are colored in orange and the solid atoms are navy blue. The surfactant is made of chained beads with a hydrophobic tail (in bronze) and a hydrophilic head (in red). (b) Equilibrium configurations of a surfactant-laden interface. The cut-off distance for liquid-liquid interactions, beyond which the intermolecular interactions are omitted, is chosen as \({r_{c,ll}}^{*}=5.5\sigma_{ll}\). Under the above parameters and conditions, the surface tension of the clean liquid is \(\gamma_{0}=1.52\times 10^{-2}\) N/m, obtained by the mechanical route [44]. The dynamic viscosity of the liquid is \(\mu=2.87\times 10^{-4}\) kg/(m s), calculated by the Green-Kubo relation. The substrate is platinum with a face-centred cubic (fcc) structure, and we use its isotropic \(\langle 100\rangle\) surface to contact the fluid [19]. The platinum mass density is \(21.45\times 10^{3}\) kg/m\({}^{3}\) with an atomic mass of \(3.24\times 10^{-25}\) kg. The solid substrate is rigid in MD simulations.
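As a concrete illustration of Eq. (22), the truncated pair potential with the reduced-unit parameters of Table 1 can be evaluated directly (a minimal sketch in reduced LJ units, not the production LAMMPS input):

```python
def lj_potential(r, eps, sigma, rc):
    """Truncated 12-6 Lennard-Jones pair potential, Eq. (22) (reduced LJ units)."""
    if r > rc:
        return 0.0  # interactions beyond the cut-off are omitted
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Example: the liquid-liquid entry of Table 1 (eps = 1, sigma = 1, r_c = 5.5)
u_ll = lj_potential(1.5, eps=1.0, sigma=1.0, rc=5.5)
```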
The liquid-solid interactions are modelled by the same 12-6 LJ potential with \(\varepsilon_{ls}=C\varepsilon_{ll}\) and \(\sigma_{ls}=0.8\sigma_{ll}\). One may vary \(C\) to obtain different amounts of slip. Here we choose \(C=0.65\) so that there is nearly no slip between liquid and solid, since the effects of slip on capillary wave dynamics have been examined in detail in our previous works [17; 19]. To simulate surfactants, a coarse-grained model called the 'bead-spring' model is adopted [45; 46]. The surfactant molecule \(H_{m}T_{n}\) consists of \(m+n\) atoms connected by the finite extensible nonlinear elastic (FENE) potential [45] \[U=-0.5KR_{0}^{2}{\rm ln}\left[1-\left(\frac{r}{R_{0}}\right)^{2}\right]. \tag{23}\] Here \(K^{*}=12\varepsilon/\sigma^{2}\) and \(R_{0}^{*}=1.4\sigma\) are used. The tail group of a surfactant molecule is hydrophobic, and this is achieved by setting \(r_{c,lt}^{*}=2^{1/6}\sigma_{ll}\) so that interactions between the liquid and the tail group are purely repulsive. \begin{table} \begin{tabular}{c c c c c} \hline Atom type & Atom type & \(\varepsilon_{ij}/\varepsilon_{ll}\) & \(\sigma_{ij}/\sigma_{ll}\) & \(r_{c,ij}/\sigma_{ll}\) \\ \hline L & L & 1 & 1 & 5.5 \\ \hline L & S & 0.65 & 0.8 & 5.5 \\ \hline L & H & 0.80 & 1 & 5.5 \\ \hline L & T & 1 & 1 & \(2^{1/6}\) \\ \hline H & H & 0.5 & 1 & 5.5 \\ \hline H & T & 1 & 1 & \(2^{1/6}\) \\ \hline H & S & 1 & 1 & \(2^{1/6}\) \\ \hline T & T & 0.35 & 1 & \(2^{1/6}\) \\ \hline T & S & 1 & 1 & \(2^{1/6}\) \\ \hline \end{tabular} \end{table} Table 1: Interaction parameters among the liquid (L), solid (S), head group of surfactants (H), and tail group of surfactants (T). In our simulations, the surfactant type \(H_{4}T_{4}\) is used. The initial dimensions of the liquid film \((L_{x},L_{y},h_{0})\) in Fig. 1 are chosen as \(L_{x}=31.4\) nm, \(L_{y}=3.14\) nm, and \(h_{0}=3.14\) nm. The solid substrate has the same lateral size as the liquid film and has a thickness \(h_{s}=0.78\) nm. As \(L_{y}\ll L_{x}\), the system is quasi-2D. Periodic boundary conditions are applied in both the \(x\) and \(y\) directions of the system, whilst vapour particles are reflected specularly in the \(z\) direction at the top boundary of the system. In a single simulation of the surfactant-laden film, the system with the initial setting shown in Fig. 2(a) is run for 20 ns with a timestep of 8.57 fs to reach its equilibrium state; see Fig. 2(b). After the surface has reached equilibrium, the simulation is run to output the positions of atoms every 2000 steps. The free surface position is defined as the usual equimolar surface, and the way to extract the surface profile \(h(x,t)\) from positions of atoms in MD simulations is detailed in our previous work [8]. After obtaining \(h(x,t)\), Fourier transforms are performed to obtain the amplitude of surface modes \(h(q,t)\). ## IV Results and Discussions In this section, we present and discuss the results of MD simulations and their comparison to analytical solutions. ### Enhanced static spectrum, surface roughness and elasticity The symbols in Fig. 3 represent the static spectra \(|h_{q}|_{rms}\) of surface waves obtained from simulations, which are the root mean square (rms) of the surface modes \(h_{q}(q,t)\) (averaged over 20000 samples). One can see from the MD results that the wave amplitude is enhanced by increasing the number of surfactants on the liquid surface, since surfactants can decrease surface tension.
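A minimal sketch of this post-processing step, assuming the surface profiles \(h(x,t)\) have already been extracted onto a uniform grid (the array names and the least-squares route for fitting Eq. (4), used below to infer \(\gamma\) and \(\kappa\), are our own illustration):

```python
import numpy as np

def static_spectrum(h_xt, Lx):
    """rms mode amplitudes |h_q|_rms from MD surface profiles.

    h_xt: array of shape (n_snapshots, n_bins) with surface heights on [0, Lx).
    Returns wavenumbers q > 0 and |h_q|_rms, with h_q = int dh exp(-iqx) dx.
    """
    n_bins = h_xt.shape[1]
    dx = Lx / n_bins
    dh = h_xt - h_xt.mean(axis=1, keepdims=True)
    hq = np.fft.rfft(dh, axis=1) * dx               # discrete approximation of h_q
    q = 2.0 * np.pi * np.fft.rfftfreq(n_bins, d=dx)
    return q[1:], np.sqrt(np.mean(np.abs(hq[:, 1:])**2, axis=0))  # drop q = 0

def fit_gamma_kappa(q, hq_rms, Lx, Ly, kBT):
    """Least-squares fit of Eq. (4): (Lx/Ly) kB T / |h_q|^2 = gamma q^2 + kappa q^4."""
    y = (Lx / Ly) * kBT / hq_rms**2
    A = np.stack([q**2, q**4], axis=1)
    (gamma, kappa), *_ = np.linalg.lstsq(A, y, rcond=None)
    return gamma, kappa
```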
The values of surface tension for the film surface contaminated with different numbers (concentrations) of surfactants can be calculated from independent MD simulations through the mechanical route [44] and are given in Fig. 4 (see the black squares). The linear relationship between surface tension and surfactant concentration is found to be valid only for low surfactant concentrations, while surface tension decreases more rapidly at high surfactant concentrations. In Fig. 3, for a clean surface (\(N=0\), green squares) and a surface with a low concentration of surfactants (e.g., \(N=54\), blue triangles), their spectra show the conventional scaling \(|h_{q}|_{rms}\sim q^{-1}\) for all wavenumbers, which is the scaling when only surface tension is considered on the surface. Using the independently measured values of surface tension as shown in Fig. 4, the MD spectra of \(N=0\) and \(N=54\) agree well with the theory, i.e., Eq. (4) without the elastic term. However, when a surface is deposited with a high concentration of surfactants (e.g., \(N=250\), blue triangles), the scaling changes from \(-1\) to \(-2\) for waves with large wavenumbers. Figure 3: Static spectrum for capillary waves laden with different numbers (\(N\)) of surfactants. Figure 4: Effects of the number of surfactants on surface tension and bending rigidity. The values of surface tension are obtained by independent MD simulations while the values of bending rigidity are obtained by fitting the static spectra with predictions. The new scaling can be justified by the elasticity of the surfactant monolayer formed on the liquid surface, which gives an energy contribution \(\sim\kappa q^{4}|h_{q}|^{2}\) as described by Eq. (4). By fitting the MD results with Eq. (4), the value of bending rigidity is obtained, and its relation with surfactant concentration is given in Fig. 4, which shows that the bending rigidity increases with the number of surfactants. We note that the measurements of TCW spectra at large wavenumbers from MD simulations can depend on how the free surface is defined [38], which makes it more difficult to measure the precise value of bending rigidity if it is weak. The competition between elasticity and surface tension defines a length scale \(\lambda_{\kappa}=2\pi\sqrt{\kappa/\gamma}\), and elasticity is dominant at lengths smaller than \(\lambda_{\kappa}\). This can be seen from the case of \(N=250\) in Fig. 3, where the scaling \(-2\) is dominant at large wavenumbers. Using the prediction for the strength of bending rigidity \(\kappa\approx k_{B}T\) based on thermodynamic arguments [24], one finds \(\lambda_{\kappa}=2\pi\sqrt{k_{B}T/\gamma}\), which is on the order of the thermal length and indicates that elasticity plays a more important role at the nanoscale. The rms roughness of a liquid surface is calculated as (in discrete form): \[W=\sqrt{\left\langle\frac{1}{N_{b}}\sum_{i=1}^{i=N_{b}}\left(h_{i}-\bar{h}\right)^{2}\right\rangle}=\sqrt{\frac{1}{L_{x}^{2}}\sum_{i=1}^{i=N_{b}}\left|h_{q}\right|_{rms}^{2}}, \tag{24}\] where \(N_{b}\) is the number of bins used to extract the surface profile from MD simulations. Note that this sum over modes can usually be converted to an integral by letting the smallest wavenumber \(q=2\pi/L_{x}\) approach infinitely small values [17; 25]. However, the film simulated here is short, making the integration meaningless. Figure 5: The relation of surface roughness with (a) number of surfactants and (b) thermal length.
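Given the spectrum, the discrete mode sum of Eq. (24) is immediate; a one-line sketch reusing the \(|h_{q}|_{rms}\) array computed above:

```python
import numpy as np

def rms_roughness(hq_rms, Lx):
    """rms roughness W from the discrete sum over modes, Eq. (24)."""
    return np.sqrt(np.sum(hq_rms**2)) / Lx
```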
The obtained \(W\) for different numbers of surfactants is shown in Fig. 5(a), and it is found that surface roughness is enhanced by adding surfactants. In Fig. 5(b), for \(N<176\), the surface roughness is linearly proportional to the thermal length \(\sqrt{k_{B}T/\gamma}\), as predicted [17]. However, due to the strong elasticity of the liquid surface for \(N>176\), this linear relationship breaks down, and elasticity has the potential to reduce surface roughness, in competition with the decreased surface tension. ### Surface viscosity of surfactant-laden liquid surface Fig. 6 shows the temporal correlations \(C_{h_{q}h_{q}^{*}}\) of surface modes at two different wavenumbers (\(q=2\pi/L_{x}\) and \(q=4\pi/L_{x}\)) for the case \(N=54\) (see Fig. 6(a)) and the case \(N=176\) (see Fig. 6(b)). As expected, the temporal correlations decay to zero with time, and the correlations of waves with a larger wavenumber decay much faster [19]. For the case \(N=54\), the MD results (represented by symbols) can be predicted nicely using Eq. (5) with \(\gamma=0.9\gamma_{0}\) and \(\kappa=0\), values independently obtained above (see Fig. 4), which suggests vanishing effects of surface viscosity. However, when the number of surfactants is increased to the case \(N=176\), the relaxation of correlations is significantly slowed, as evidenced by comparing the black squares in Fig. 6(b) to the black squares in Fig. 6(a). Such a slow relaxation cannot be explained by only considering the decrease of surface tension, since the predictions (see the dashed lines in Fig. 6(b)) using \(\gamma=0.57\gamma_{0}\) and \(\kappa=0.4k_{B}T\) (obtained above) still underestimate the MD results considerably. Figure 6: Decay of temporal correlations of surface modes at two different wavenumbers (\(q=2\pi/L_{x}\) and \(q=4\pi/L_{x}\)) for case (a) \(N=54\) and (b) \(N=176\). We attribute this inconsistency to surface viscosity: the MD results can be fitted by Eq. (5) and the newly derived dispersion relation, Eq. (21), with a surface viscosity of \(\eta=2.25\times 10^{-8}\) kg/s. The inferred surface viscosity is a reasonable value compared to experimentally measured values for sodium dodecyl sulphate (SDS) solutions [36], which range from \(10^{-8}\) to \(10^{-6}\) kg/s across different measurement techniques. By fitting temporal correlations with Eq. (5) and Eq. (21) for other surfactant concentrations, we obtain the relation between surface viscosity and the number of surfactants shown in Fig. 7(a), which shows the same trend as experimental results [33; 47]. Fig. 7(b) shows the reduction of the dispersion relation (for \(q=2\pi/L_{x}\)) by increasing the number of surfactants at the free surface. The reduction is due to both the decreased surface tension (blue line) and the increased surface viscosity (black line). The ability of surface viscosity to slow the relaxation of temporal correlations saturates, as plotted in Fig. 8(a): a further increase of surface viscosity beyond \(\eta=1\times 10^{-6}\) kg/s leads to very small changes in the correlations. The newly derived dispersion relation can be simplified in the two asymptotic limits \(qh_{0}\ll 1\) and \(qh_{0}\gg 1\) (notably, \(qh_{0}=0.63\) in our work for \(q=2\pi/L_{x}\)), as shown in Fig. 8(b). Figure 7: (a) The effects of the varying number of surfactants on the values of surface viscosity measured in MD. (b) The effects of the varying number of surfactants on the dispersion relation of the wave with \(q=2\pi/L_{x}\).
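Fitting \(\eta\) as done above amounts to matching \(e^{\Omega(q)|t-t'|}\), Eq. (5), against the measured correlations, with \(\Omega\) evaluated from Eq. (21). A minimal sketch in SI units (the no-slip choice \(\ell=0\) and the parameter values quoted for \(N=176\) are the assumptions of this example):

```python
import numpy as np

def decay_rate(q, gamma, kappa, mu, eta, h0, ell=0.0):
    """Decay rate Omega(q) < 0 of an overdamped surface mode, Eq. (21)."""
    s, c = np.sinh(q * h0), np.cosh(q * h0)
    s2 = np.sinh(2.0 * q * h0)
    B1 = (mu * (s2 - 2*q*h0 + 4*q*ell*s**2)
          + eta * q * (s**2 - (q*h0)**2 + q*ell*(s2 - 2*q*h0)))
    B2 = (mu * (4*(q*h0)**2 + 4*c**2 + 4*q*ell*(2*q*h0 + s2))
          + eta * q * (2*q*h0 + s2 + 4*q*ell*c**2))
    return -(gamma * q**2 + kappa * q**4) / (mu * q) * B1 / B2

# First mode of the Lx = 31.4 nm film with the N = 176 values quoted above
kBT = 1.380649e-23 * 85.0                 # J, at T = 85 K
q = 2.0 * np.pi / 31.4e-9
Omega = decay_rate(q, gamma=0.57 * 1.52e-2, kappa=0.4 * kBT,
                   mu=2.87e-4, eta=2.25e-8, h0=3.14e-9)
# the correlation then decays as C(t) = exp(Omega * |t|), Eq. (5)
```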
no slip, the dispersion relation is simplified to \[\Omega=-\frac{\left(\gamma q^{2}+\kappa q^{4}\right)q^{2}h_{0}^{3}}{12\mu}\frac{4\mu+h_{0}\eta q^{2}}{\mu+h_{0}\eta q^{2}}, \tag{25}\] which is the dispersion relation for thin liquid films obtained earlier by Edwards and Oron [32], though their result did not contain the elastic term of Eq. (25). In the limit of zero surface viscosity, the constant prefactor in Eq. (25) is \(1/3\) (corresponding to a no-shear boundary condition at the free surface), while it is \(1/12\) (corresponding to a tangentially immobile condition at the free surface) in the limit of infinite surface viscosity. Thus, surface viscosity can slow the relaxation of waves with \(qh_{0}\ll 1\) by at most a factor of four, as demonstrated by the dispersion relation in Fig. 8(b). In our simulations, due to the reduced surface tension, the decrease of the dispersion relation upon increasing the number of surfactants is larger than fourfold (see Fig. 7(b) and the inset of Fig. 8(b) for the case \(N=250\)). However, in the opposite limit \(qh_{0}\gg 1\), Eq. (21) is simplified to \[\Omega=-\frac{\gamma+\kappa q^{2}}{2\mu}q, \tag{26}\] which is independent of surface viscosity. This can be seen from Fig. 8(b), where all solid lines with different surface viscosities overlap each other for wavenumbers \(qh_{0}\gg 1\).

Figure 8: (a) Damping effects of surface viscosity on the relaxation of correlations. The other parameters used are \(\gamma=0.57\gamma_{0}\) and \(\kappa=0.4k_{B}T\). (b) Effects of surface viscosity on the dispersion relation with \(\gamma=\gamma_{0}\) and \(\kappa=0\). The inset shows the dispersion relation with a different number of surfactants.

Notably, this counter-intuitive result is due to the assumption of negligible inertia terms in the momentum equations, which is usually valid at the nanoscale. For capillary waves on macroscopic films with infinite depth (\(qh_{0}\gg 1\)), Shen _et al._ [34] considered both the effects of surface viscosity and inertia and derived a dispersion relation that does depend on surface viscosity. Equation (26) in our work can be recovered by neglecting the inertia terms of the dispersion relation in Ref. [34]. Finally, we remark that the bending rigidity has the potential to speed up the relaxation of correlations, but in our simulations its values are too weak to have noticeable effects at the wavenumber \(q=2\pi/L_{x}\) (see the red line with \(\kappa=0\) and the blue line with \(\kappa=1.4k_{B}T\) in Fig. 9). However, much larger bending rigidities can be obtained in experiments [48], so that the effects of elasticity can be significant (e.g., the black line with \(\kappa=100k_{B}T\) in Fig. 9).

Figure 9: The effects of elasticity on the relaxation of temporal correlations. The parameters used are \(\gamma=0.57\gamma_{0}\) and \(\eta=0\).

## V Conclusions

In this work, the effects of surfactants on the statics and dynamics of thermal capillary waves are studied both theoretically and numerically. Molecular simulations are used to simulate surfactant-laden free-surface flows. The obtained static spectra show that surfactants can enhance the elasticity of the liquid surface apart from reducing surface tension. The slower decay of temporal correlations of surface modes is observed in simulations upon adding surfactants and can be predicted by a newly derived dispersion relation that accounts for surface viscosity. We believe the new theories developed here and the molecular simulations performed can motivate new experiments using thermal capillary waves to measure surface viscosity.
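As an illustration of the static-spectrum analysis summarised above, the following minimal Python sketch (ours, not the paper's code) evaluates a spectrum of the assumed form \(|h_{q}|_{rms}^{2}\propto k_{B}T/(\gamma q^{2}+\kappa q^{4})\) — the overall normalisation and all parameter values here are illustrative assumptions — and verifies the crossover of \(|h_{q}|_{rms}\) from the \(q^{-1}\) to the \(q^{-2}\) scaling at the elastic length \(\lambda_{\kappa}=2\pi\sqrt{\kappa/\gamma}\). Summing \(|h_{q}|_{rms}^{2}\) over the discrete modes as in Eq. (24) would then give the roughness \(W\).

```python
import numpy as np

# Illustrative reduced units (assumed values, not fitted to the MD data):
kB_T, gamma, kappa = 1.0, 0.57, 0.4   # e.g. gamma = 0.57*gamma_0, kappa = 0.4*kB*T

q = np.logspace(-1, 1, 200)           # wavenumber range (illustrative)
# Assumed capillary-wave form, up to an arbitrary constant prefactor:
h_rms = np.sqrt(kB_T / (gamma * q**2 + kappa * q**4))

lambda_kappa = 2 * np.pi * np.sqrt(kappa / gamma)  # elasticity dominates below this length
q_cross = 2 * np.pi / lambda_kappa                 # ... i.e. above this wavenumber

# Local log-log slope of |h_q|_rms: tends to -1 for q << q_cross (tension
# dominated) and to -2 for q >> q_cross (elasticity dominated).
slope = np.gradient(np.log(h_rms), np.log(q))
print(f"q_cross = {q_cross:.2f}; slope at small q = {slope[5]:.2f}, at large q = {slope[-5]:.2f}")
```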
It is worth mentioning that our current molecular system is idealized; in the future, it would be interesting to use more realistic molecular models or to map our LJ system to water or other simple globular solvents in colloidal science. We also remark that the choice of the model for surfactants, and of the parameters in the FENE model adopted here, is not unique. For example, Laradji and Mouritsen [25] use a different model for the surfactant, and the original FENE model [45] uses the same cutoff for all beads, unlike ours. In the future, a systematic test of models and their parameters is needed to see how they change the elasticity and surface viscosity. Thermal fluctuations of fluid surfaces have been shown to increase the instability of nanofilms and nanojets [8; 12]. As surfactants can decrease surface tension and increase thermal roughness, the presence of surfactants may enhance the instability and rupture of nanofilms. For nanobubbles, one may intuitively think that surfactants can stabilize surface nanobubbles by reducing surface tension and the pressure-driven diffusion of gas out of bubbles. However, experiments have shown the opposite effect of surfactants [20; 49], and this may be due to the instability of the bubble surface caused by extremely small surface tension and large surface fluctuations.

## VI Acknowledgments

Zhang wishes to thank Detlef Lohse, Yibo Chen and Hongguang Zhang for helpful discussions. Ding wishes to acknowledge financial support from the National Natural Science Foundation of China (Grant No. 12102109).

## Appendix: Distribution of surfactants

Fig. A1(a) shows the distribution of surfactants along the surface at three consecutive times, while Fig. A1(b) shows the change of the number of surfactants in the middle region of the surface. Though the number of surfactants fluctuates with time due to the molecular nature of the system, it can be seen that the distribution of surfactants is uniform along the surface.
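As a final illustration of the results above, here is a minimal Python sketch (ours, not the paper's code; all parameter values are arbitrary reduced units chosen for illustration) of the dispersion relations (25) and (26). It reproduces the at-most-fourfold slowdown between the zero and infinite surface-viscosity limits discussed in the text.

```python
def omega_thin_film(q, gamma, kappa, mu, h0, eta):
    """Relaxation rate of Eq. (25), valid for q*h0 << 1 with no slip."""
    return (-(gamma * q**2 + kappa * q**4) * q**2 * h0**3 / (12 * mu)
            * (4 * mu + h0 * eta * q**2) / (mu + h0 * eta * q**2))

def omega_deep(q, gamma, kappa, mu):
    """Relaxation rate of Eq. (26) for q*h0 >> 1: independent of surface viscosity."""
    return -(gamma + kappa * q**2) * q / (2 * mu)

q, gamma, kappa, mu, h0 = 0.1, 1.0, 0.0, 1.0, 1.0   # arbitrary reduced units
clean = omega_thin_film(q, gamma, kappa, mu, h0, eta=0.0)   # prefactor 1/3 (no-shear surface)
rigid = omega_thin_film(q, gamma, kappa, mu, h0, eta=1e9)   # prefactor 1/12 (immobile surface)
print(clean / rigid)   # ~4.0: surface viscosity slows these waves at most fourfold
```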
2308.00498
Slow graph bootstrap percolation I: Cycles
Given a fixed graph $H$ and an $n$-vertex graph $G$, the $H$-bootstrap percolation process on $G$ is defined to be the sequence of graphs $G_i$, $i\geq 0$ which starts with $G_0:=G$ and in which $G_{i+1}$ is obtained from $G_i$ by adding every edge that completes a copy of $H$. We are interested in the maximum number of steps, over all $n$-vertex graphs $G$, that this process takes to stabilise. In the first of a series of papers exploring the behaviour of this function, denoted $M_H(n)$, and its dependence on certain properties of $H$, we investigate the case when $H$ is a cycle. We determine the running time precisely, giving the first infinite family of graphs $H$ for which an exact solution is known. The maximum running time of the $C_k$-bootstrap process is of the order $\log_{k-1}(n)$ for all $3\leq k\in \mathbb{N}$. Interestingly though, the function exhibits different behaviour depending on the parity of $k$ and the exact location of the values of $n$ for which $M_H(n)$ increases is determined by the Frobenius number of a certain numerical semigroup depending on $k$.
David Fabian, Patrick Morris, Tibor Szabó
2023-08-01T12:34:26Z
http://arxiv.org/abs/2308.00498v1
# Slow graph bootstrap percolation I: cycles

###### Abstract.

Given a fixed graph \(H\) and an \(n\)-vertex graph \(G\), the \(H\)_-bootstrap percolation process_ on \(G\) is defined to be the sequence of graphs \(G_{i}\), \(i\geq 0\), which starts with \(G_{0}:=G\) and in which \(G_{i+1}\) is obtained from \(G_{i}\) by adding every edge that completes a copy of \(H\). We are interested in the maximum number of steps, over all \(n\)-vertex graphs \(G\), that this process takes to stabilise. In the first of a series of papers exploring the behaviour of this function, denoted \(M_{H}(n)\), and its dependence on certain properties of \(H\), we investigate the case when \(H\) is a cycle. We determine the running time precisely, giving the first infinite family of graphs \(H\) for which an exact solution is known. The maximum running time of the \(C_{k}\)-bootstrap process is of the order \(\log_{k-1}(n)\) for all \(3\leq k\in\mathbb{N}\). Interestingly though, the function exhibits different behaviour depending on the parity of \(k\), and the exact location of the values of \(n\) for which \(M_{H}(n)\) increases is determined by the Frobenius number of a certain numerical semigroup depending on \(k\).

\({}^{1}\) Institut für Quantenphysik, Universität Ulm, Germany \({}^{2}\) Departament de Matemàtiques, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain. \({}^{3}\) Institute of Mathematics, Freie Universität Berlin, Germany _E-mail addresses_: [email protected], [email protected], [email protected]. \({}^{*}\) Research supported by the Deutsche Forschungsgemeinschaft (DFG) Graduiertenkolleg "Facets of Complexity" (GRK 2434). \({}^{\dagger}\) Research supported by the DFG Walter Benjamin program - project number 504502205. \({}^{\ddagger}\) Research supported by the DFG under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).

process. Note also that a graph \(G\) is weakly \(H\)-saturated if and only if the \(H\)-bootstrap process with initial graph \(G\) has final graph \(K_{n}\), in which case we say the process _percolates_. Viewing weak saturation in this way links the notion to the study of _cellular automata_, a deep topic introduced by von Neumann (see [21]) following a suggestion of Ulam [20]. Indeed, the general setup of a cellular automaton is to study the spread of a virus through a (hyper-)graph where some vertices are initially infected and the virus is passed on to other vertices at each time step according to some local homogeneous rule. By considering the hypergraph whose vertex set is \(E(K_{n})\) and whose edge set encodes copies of \(H\), one can view the \(H\)-bootstrap process as a cellular automaton. The literature on cellular automata is vast and the topic has been studied from many different perspectives, as the concept can describe important processes occurring in physics, sociology and computer science (see for example [1]). In recent years, this study has become prominent in extremal and probabilistic combinatorics, with techniques from these areas being successfully applied to address key problems from other areas (see for example the very nice survey of Morris [17]) as well as new lines of research in combinatorics being motivated by this connection.
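To make the process concrete, the following brute-force Python sketch (ours, not from the paper; the search over all injections of \(V(H)\) makes it viable only for very small \(n\)) runs the \(H\)-bootstrap process until it stabilises and returns its running time.

```python
from itertools import combinations, permutations

def h_bootstrap(n, g_edges, h_edges, h):
    """Run the H-bootstrap process on an n-vertex graph until it stabilises.

    g_edges: edges of the starting graph G as 2-tuples on {0, ..., n-1};
    h_edges: edges of H on {0, ..., h-1}.  Returns (final edge set, running time).
    """
    G = {frozenset(e) for e in g_edges}
    t = 0
    while True:
        new = set()
        for pair in combinations(range(n), 2):
            e = frozenset(pair)
            if e in G:
                continue
            # e is added at this step iff G + e contains a copy of H using e;
            # brute force over all injective maps V(H) -> V(G).
            for img in permutations(range(n), h):
                copy = {frozenset((img[a], img[b])) for a, b in h_edges}
                if e in copy and copy - {e} <= G:
                    new.add(e)
                    break
        if not new:
            return G, t          # G_t = G_{t+1}: the process has stabilised
        G |= new
        t += 1

# Example: the K_3-process on the path P_8 stabilises (at K_8) in 3 steps,
# reflecting the logarithmic behaviour discussed below.
triangle = [(0, 1), (1, 2), (0, 2)]
path = [(i, i + 1) for i in range(7)]
_, t = h_bootstrap(8, path, triangle, 3)
print(t)  # 3
```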
In particular, inspired by analogous questions for similar automata studied in physics [9], Balogh, Bollobás and Morris [3] initiated the study of the \(H\)-bootstrap process (and coined this terminology) when the starting graph \(G\) is the random graph \(G_{n,p}\) and asked for the threshold probability at which the process with initial graph \(G_{n,p}\) percolates.

### The running time of bootstrap processes

As discussed above, most of the research on the graph bootstrap percolation process has focused on whether or not the process _percolates_. Adopting the cellular-automaton view of a spreading virus, this translates to asking whether or not the virus will reach the whole population, which is certainly a natural line of investigation. Here, we will rather be interested in _how long_ the virus will spread for, a question which one could also imagine being important in applications. This perspective, however, has been considerably less explored until recently. We mention results of Benevides and Przykucki [6, 19] studying the running time of graph neighbourhood percolation, a cellular automaton closely related to the \(H\)-bootstrap process, and work of Gunderson, Koch and Przykucki [14] studying how long the \(H\)-bootstrap process lasts when the initial graph is random. Here, we define the _running time_ of the \(H\)-bootstrap process \((G_{i})_{i\geq 0}\) with initial graph \(G\) to be \(\tau_{H}(G):=\min\{t\in\mathbb{N}:G_{t}=G_{t+1}\}\), the time at which the process stabilises. Recently, Bollobás posed the natural extremal question of determining the _maximum_ running time of the \(H\)-bootstrap process.

**Definition 1.1**.: For \(n\in\mathbb{N}\), we define \(M_{H}(n)\) to be \[M_{H}(n):=\max_{|V(G)|=n}\tau_{H}(G),\] the maximum running time of the \(H\)-bootstrap process over all choices of starting graph \(G\) with \(n\) vertices.

The initial focus of research into maximum running times has been the case when \(H\) is a clique. When \(H=K_{3}\) and \(G\) is a path with \(n\) vertices, one can see that \(\tau_{H}(G)=\lceil\log_{2}(n)\rceil\) as the distance between any pair of non-adjacent vertices halves at each step. Moreover, this happens for any pair of vertices in each connected component of _any_ initial graph, and as the \(n\)-vertex path maximises the diameter of an \(n\)-vertex graph, we have \(\tau_{H}(G)\leq\lceil\log_{2}(n)\rceil\) for all \(n\)-vertex \(G\) and hence \(M_{H}(n)=\lceil\log_{2}(n)\rceil\). For \(K_{4}\), the maximum running time is much larger. Indeed, Bollobás, Przykucki, Riordan and Sahasrabudhe [7] and, independently, Matzke [16] showed that \(M_{K_{4}}(n)=n-3\), for all \(n\geq 3\). Bollobás, Przykucki, Riordan and Sahasrabudhe [7] also realised that the running times could be even longer for \(K_{r}\)-processes as \(r\) grows and they gave constructions showing that \(M_{K_{r}}(n)\geq n^{2-\lambda_{r}-o(1)}\) for \(r\geq 5\), where \(\lambda_{r}\) is some explicit constant such that \(\lambda_{r}\to 0\) as \(r\to\infty\). However, the same authors believed that there was a limit to how long the \(K_{r}\)-bootstrap process could last and conjectured that for all \(r\geq 5\), \(M_{K_{r}}(n)=o(n^{2})\). It turns out that this conjecture was in fact false. Indeed, Balogh, Kronenberg, Pokrovskiy and the third author of the current paper [4] proved that \(M_{K_{r}}(n)=\Omega(n^{2})\) for all \(r\geq 6\). In contrast to the construction of Bollobás et al., which was probabilistic in nature, the authors of [4] used an explicit construction.
Interestingly, the construction could not be pushed to give quadratic time for \(K_{5}\), but they could show that \(M_{K_{5}}(n)\geq n^{2-o(1)}\). Intriguingly, this lower bound comes from a connection to additive combinatorics and in fact they show that \(M_{K_{5}}(n)=\Omega(nr_{3}(n))\), where \(r_{3}(n)\) denotes the size of the largest set in \([n]\) which contains no 3-term arithmetic progressions (configurations of the form \(a,a+b,a+2b\) for \(a,b\in\mathbb{N}\setminus\{0\}\)). A famous construction of Behrend [5] gives that \(r_{3}(n)\geq n^{1-O(1/\sqrt{\log n})}\), whilst it is well-known that \(r_{3}(n)=o(n)\), as was originally shown by Roth in 1953. Determining the asymptotics of \(M_{K_{5}}(n)\), and in particular whether it can be quadratic or not, remains a very interesting open problem. Recently, Noel and Ranganathan [18], Hartarsky and Lichev [15], and Espuny-Díaz, Janzer, Kronenberg and Lada [11] extended the study of \(M_{H}(n)\) to hypergraphs, their main focus being the case in which \(H\) is a clique.

### The cycle bootstrap percolation process

In a series of papers, we will initiate a systematic study of the maximum running time of \(H\)-bootstrap percolation processes as \(H\) varies. Whilst our focus will predominantly be to study the asymptotics of the function \(M_{H}(n)\) and its dependence on different properties of \(H\), we begin our explorations by studying one family of graphs \(H\) in detail, namely cycles. Indeed, in this paper we determine the precise value of \(M_{C_{k}}(n)\) for all \(k\geq 3\) and all \(n\) sufficiently large, providing the first infinite family of graphs \(H\) for which an exact running time \(M_{H}(n)\) is known. In fact, before this work, the only nontrivial functions \(M_{H}(n)\) that had been determined were for \(H=K_{3}\) and \(H=K_{4}\) [16, 19], as discussed above, and for the 3-uniform clique on 4 vertices minus an edge in the hypergraph setting [11].

**Theorem 1.2**.: _Let \(k\geq 3\). For sufficiently large \(n\in\mathbb{N}\) we have_ \[M_{C_{k}}(n)=\begin{cases}\left\lceil\log_{k-1}(n+k^{2}-4k+2)\right\rceil&\text{if $k$ is odd};\\ \left\lceil\log_{k-1}\left(2n+k^{2}-5k\right)\right\rceil&\text{if $k$ is even}.\end{cases} \tag{1.1}\]

**Remark 1.3**.: In both the odd and the even case, (1.1) only holds when \(n\) is larger than roughly \(k^{k/2}\). For smaller \(n\) the behaviour is different, as a single \(k\)-cycle with a well-placed chord achieves a longer running time than the general constructions.

As discussed above, the bootstrap process of the triangle on an \(n\)-vertex graph has a running time of at most \(\left\lceil\log_{2}(n-1)\right\rceil\). The key observation is that in the \(K_{3}\)-bootstrap process the distance between two vertices is roughly halved in every step. With this in mind it should not be surprising that the maximum running time of the bootstrap process of \(C_{k}\) for \(k\geq 3\) is asymptotically of the order \(\log_{k-1}n\). What is perhaps unexpected is that the jumps of the monotone increasing function \(M_{C_{k}}(n)\), i.e. those \(n\in\mathbb{N}\) with \(M_{C_{k}}(n+1)=M_{C_{k}}(n)+1\), behave differently in terms of \(k\) depending on the parity of \(k\). For odd \(k\) the jumps are close to powers of \(k-1\), while for even \(k\) the jumps are near one half times powers of \(k-1\). Moreover, the precise location of the jumps is determined by a quadratic function of \(k\) in both the even and odd cases.
This function turns out to be controlled by the _Frobenius number_ of the numerical semigroup generated by \(k-2\) and \(k\), that is, the largest natural number that cannot be expressed as an integral linear combination of \(k-2\) and \(k\) with non-negative coefficients. This link to an arithmetic problem of determining the largest gap in a numerical semigroup presents itself naturally in the analysis of the \(C_{k}\)-process, as the last edge to be added before stabilising will correspond to this gap. As shown in [13] (see also [12]), examples of \(H\) such that \(M_{H}(n)\) is asymptotically sublinear must have strong restrictions on their degree sequence. Indeed, we show there that if every component of a graph \(H\) has minimum degree at least \(2\) and maximum degree at least \(3\), then \(M_{H}(n)=\Omega(n)\). In the case where both the maximum and minimum degrees are \(2\), that is, when \(H\) is a disjoint union of cycles, our second result shows that the maximum running time is controlled by the largest cycle in \(H\).

**Theorem 1.4**.: _If \(s\geq 2\) and \(H:=C_{k_{1}}\sqcup\ldots\sqcup C_{k_{s}}\) is the disjoint union of cycles of lengths \(k_{1}\geq\ldots\geq k_{s}\), then for sufficiently large \(n\) we have that_ \[\log_{k_{1}-1}(n)-1\leq M_{H}(n)<\log_{k_{1}-1}(n)+k_{1}^{3}s^{4}.\]

### Organisation of the paper

Necessary notation is introduced in Section 2. We prove Theorem 1.2 in Sections 4 and 5 after giving an outline of the proof in Section 3. The individual roles of the three sections in the proof of Theorem 1.2 are described at the beginning of Section 3. Finally, in Section 6, we establish Theorem 1.4.

## 2. Notation and Preliminaries

If \(H\) is a graph and \(v\) is a vertex of \(H\) then \(H-v\) denotes the graph obtained from \(H\) by removing \(v\) and all edges incident to it, i.e. \[V(H-v)=V(H)\setminus\{v\}\qquad,\qquad E(H-v)=E(H)\setminus\{e\in E(H):v\in e\}.\] For an edge \(e\in E(H)\) we define \(H-e\) as the graph obtained by removing \(e\) from the edge set. Given graphs \(G=(V,E)\), \(G^{\prime}=(V^{\prime},E^{\prime})\), their _union_ is \(G\cup G^{\prime}:=(V\cup V^{\prime},E\cup E^{\prime})\) and their _intersection_ is \(G\cap G^{\prime}:=(V\cap V^{\prime},E\cap E^{\prime})\). We denote their (external) _disjoint union_ by \(G\sqcup G^{\prime}\), that is, \[G\sqcup G^{\prime}:=\left((V\times\{1\})\cup(V^{\prime}\times\{2\})\;,\;\{(x,1)(y,1):xy\in E(G)\}\cup\{(x,2)(y,2):xy\in E(G^{\prime})\}\right).\] If \(G\subseteq G^{\prime}\) and \(X,Y\) are disjoint subsets of \(V(G^{\prime})\) we write \[E_{G}(X,Y)=\{xy\in E(G):x\in X,y\in Y\}\,.\] We write \(N_{G}(v)\) for the set of neighbours of \(v\) in \(G\). When dealing with nested graphs \(H_{0}\subseteq G\) we sometimes write \(G\)-neighbour or \(H_{0}\)-neighbour to emphasise the set of neighbours we are referring to. We say that \(v\) is _universal_ in \(G\) if \(N_{G}(v)=V(G)\setminus\{v\}\). Whenever the process \((G_{i})_{i\geq 0}\) is clear from context, we say that a property of a graph holds _at time_ \(i\) if \(G_{i}\) has that property.

### Stable graphs

All graphs of an \(H\)-process on a given \(n\)-vertex graph \(G\) will be considered as subgraphs of \(K_{n}\). We say that a graph \(G\) is _\(H\)-stable_ if \(n_{H}(G+e)=n_{H}(G)\) for every \(e\in\binom{V(G)}{2}\), where \(n_{H}(G)\) denotes the number of copies of \(H\) in \(G\). For any graph \(G\) we define \(\langle G\rangle_{H}\) to be the final graph of the \(H\)-process on \(G\).
A short induction shows that every \(H\)-stable graph containing \(G\) must also contain every graph of the \(H\)-process on \(G\). Therefore \(\langle G\rangle_{H}\) is the smallest \(H\)-stable graph in which \(G\) appears as a subgraph.

**Graph homomorphisms.** Given two graphs \(G,G^{\prime}\), a map \(\phi:V(G)\to V(G^{\prime})\) is a _graph homomorphism_ if for any \(e=uv\in E(G)\), we have that \(\phi(e)=\phi(u)\phi(v)\in E(G^{\prime})\). In order to signify the added condition that edges should map to edges we will write graph homomorphisms as \(\phi:G\to G^{\prime}\). We say that a graph homomorphism \(\phi\) is injective if \(\phi\) is injective on \(V(G)\), and we say a graph homomorphism \(\phi:G\to G^{\prime}\) is a graph _automorphism_ if \(G=G^{\prime}\) and \(\phi\) is injective (and hence bijective). Finally, we let \(\operatorname{Hom}(G,G^{\prime})\) denote the set of homomorphisms from \(G\) to \(G^{\prime}\) and let \(\operatorname{Aut}(G)\) denote the set of automorphisms of \(G\). The following observation shows that injective graph homomorphisms are preserved through the graph bootstrap process.

**Observation 2.1**.: Let \(\varphi:G\to G^{\prime}\) be an injective graph homomorphism, and let \((G_{i})_{i\geq 0}\), \((G^{\prime}_{i})_{i\geq 0}\) be the respective \(H\)-processes on \(G\) and \(G^{\prime}\). Then \(\varphi\in\operatorname{Hom}(G_{i},G^{\prime}_{i})\) for every \(i\geq 0\).

Proof.: The claim holds for \(i=0\) because \(G_{0}=G\), \(G^{\prime}_{0}=G^{\prime}\). Let \(i\geq 1\) and suppose that \(\varphi\in\operatorname{Hom}(G_{i-1},G^{\prime}_{i-1})\). Let \(e\in E(G_{i})\setminus E(G_{i-1})\). Therefore there exists a copy \(H_{i}\subseteq G_{i}\) of \(H\) in \(G_{i}\) such that \(H_{i}-e\subseteq G_{i-1}\). We have \(\varphi(H_{i})-\varphi(e)=\varphi(H_{i}-e)\) because \(\varphi\) is injective, and \(\varphi(H_{i}-e)\subseteq G^{\prime}_{i-1}\) since \(\varphi\in\operatorname{Hom}(G_{i-1},G^{\prime}_{i-1})\). Thus, \(\varphi(e)\in E(G^{\prime}_{i})\) by definition of the \(H\)-process on \(G^{\prime}\).

Two immediate consequences of Observation 2.1 are that \(G\subseteq G^{\prime}\) implies \(G_{i}\subseteq G^{\prime}_{i}\) for all \(i\geq 0\), and that \(\operatorname{Aut}(G_{i})\subseteq\operatorname{Aut}(G_{i+1})\) for all \(i\geq 0\).

### Paths

We denote the path on \(n\) vertices by \(P_{n}\), i.e. \[V(P_{n})=\{0,\ldots,n-1\}\qquad,\qquad E(P_{n})=\left\{\{i,i+1\}:0\leq i<n-1\right\}.\] The _length_ of a path is its number of edges.

### Frobenius numbers

The _Frobenius number_ \(F(x,y)\) of two positive, coprime integers \(x\), \(y\) is the largest natural number that cannot be expressed as an integral linear combination of \(x\) and \(y\) with non-negative coefficients, i.e. \[F(x,y):=\max\left(\mathbb{Z}\setminus\{\alpha x+\beta y:\alpha,\beta\in\mathbb{N}_{0}\}\right)\] where \(\mathbb{N}_{0}\) denotes the set of non-negative integers. The precise formula \(F(x,y)=xy-x-y\) is well-known. A thorough treatment of Frobenius numbers and their generalisations can be found in [2]. We are interested in \(F(k-2,k)\) for odd integers \(k\geq 3\), in which case the above formula gives \[F(k-2,k)=k^{2}-4k+2. \tag{2.1}\] If \(k\) is even we set \(F^{\prime}(k-2,k)\) to be the largest multiple of \(\gcd(k-2,k)=2\) that cannot be written as an integral linear combination of \(k-2\) and \(k\) with non-negative coefficients, i.e. \[F^{\prime}(k-2,k):=2\cdot F\left(\frac{k-2}{2},\frac{k}{2}\right)=\frac{k^{2}}{2}-3k+2.
\tag{2.2}\]

### Sumsets and Dilates

Given \(h\in\mathbb{N}\) and a set \(A\) of integers, \(hA\) denotes the \(h\)-fold sumset \[hA:=\{a_{1}+\ldots+a_{h}:a_{1},\ldots,a_{h}\in A\}\] and \(h\cdot A\) denotes the dilate \(\{h\cdot a:a\in A\}\).

## 3. Proof outline

We split the proof of Theorem 1.2 into an upper bound part (Theorem 3.1), where we show that for any graph \(G\) the \(C_{k}\)-process on \(G\) stabilises after at most the number of steps given in (1.1), and a lower bound part (Theorem 3.2) where we give starting graphs whose running times attain the values in (1.1). The proofs of the lower bounds are given in Section 4 whilst the upper bounds are shown in Section 5. In this section we also provide a few auxiliary statements that are necessary to prove Theorems 3.1 and 3.2 and to combine them into a proof of Theorem 1.2.

**Theorem 3.1** (Upper bound part).: _Let \(k\geq 3\), and let \(G\) be a connected graph on at least \(k+1\) and at most \(n\) vertices with \(C_{k}\)-process \((G_{i})_{i\geq 0}\) such that \(\langle G\rangle_{C_{k}}\neq G\). Define_ \[r=r(n,k):=\begin{cases}\left\lceil\log_{k-1}(n+k^{2}-4k+2)\right\rceil&\text{if $k$ is odd};\\ \left\lceil\log_{k-1}\left(2n+k^{2}-5k\right)\right\rceil&\text{if $k$ is even}.\end{cases} \tag{3.1}\] _If \(n\) is sufficiently large the following hold:_

1. _If_ \(k\) _is odd, then_ \(xy\in E(G_{r})\) _for every distinct_ \(x,y\in V(G)\)_._
2. _If_ \(k\) _is even and_ \(G\) _is bipartite with parts_ \(X,Y\subset V(G)\) _then_ \(xy\in E(G_{r})\) _for each_ \(x\in X\)_,_ \(y\in Y\)_._
3. _For even_ \(k\) _and non-bipartite_ \(G\)_, we have_ \(xy\in E(G_{r})\) _for any distinct_ \(x,y\in V(G)\)_._

By the definition of \(r\), (2.1) and (2.2), \(r\) is the unique natural number satisfying \[(k-1)^{r-1}-F(k-2,k)\ \leq\ n-1\ <\ (k-1)^{r}-F(k-2,k), \tag{3.2}\] when \(k\) is odd. Likewise \[\frac{(k-1)^{r-1}-(k-1)}{2}-F^{\prime}(k-2,k)+2\leq n<\frac{(k-1)^{r}-(k-1)}{2}-F^{\prime}(k-2,k)+2, \tag{3.3}\] when \(k\) is even. To obtain a lower bound of the form \(M_{C_{k}}(n)\geq r\) we need to specify a starting graph \(G\) and an edge \(e\in\binom{V(G)}{2}\) such that \(e\) is present at time \(r\) but not at time \(r-1\). In view of Theorem 3.1 it suffices to give a pair of vertices (from different partite sets if \(G\) is bipartite and \(k\) is even) that are not adjacent at time \(r-1\).

**Theorem 3.2** (Lower bound part).: _Let \(k\geq 3\), and let \(G\) be a graph with \(C_{k}\)-process \((G_{i})_{i\geq 0}\). Define \(r\) as in (3.1), and set_ \[\ell=\ell(n,k):=\frac{(k-1)^{r-1}-(k-1)}{2}-F^{\prime}(k-2,k)-1 \tag{3.4}\] _when \(k\) is even. Then the following hold for \(n\) sufficiently large:_

1. _If \(k\) is odd and \(G=P_{n}\), then \(\{0,(k-1)^{r-1}-F(k-2,k)\}\notin E(G_{i})\) for \(i<r\)._
2. _If \(k\) is even and \(G=P^{\Delta}\) (see Figure 1) on \(\ell+3\leq n\) vertices, then for the vertices \(v_{\ell},w_{\ell}\in V(P^{\Delta})\) we have that \(\{v_{\ell},w_{\ell}\}\notin E(G_{i})\) for \(i<r\)._

Figure 1. A visualisation of \(P^{\Delta}\).

The upper bound part requires the starting graph \(G\) to be connected. However, in general \(G\) might be disconnected. The following observation reduces the running times on disconnected \(G\) to running times on connected starting graphs.

**Observation 3.3**.: Let \(G\) be a graph with connected components \(G^{(1)},\ldots,G^{(s)}\), and let \((G_{i})_{i\geq 0}\) be its \(C_{k}\)-process.
Then \(G_{i}=G_{i}^{(1)}\cup\ldots\cup G_{i}^{(s)}\), and hence \[\langle G\rangle_{C_{k}}=\langle G^{(1)}\rangle_{C_{k}}\cup\ldots\cup\langle G^{(s)}\rangle_{C_{k}}\] and \[\tau_{C_{k}}(G)=\max\left\{\tau_{C_{k}}(G^{(1)}),\ldots,\tau_{C_{k}}(G^{(s)})\right\}.\]

Proof.: Suppose that at some step in the process the number of components decreases. Take the smallest \(i\) for which there exists an edge \(e\in E(G_{i})\) whose endpoints lie in distinct components of \(G\). At time \(i-1\) there must be a path of length \(k-1\) between the endpoints of \(e\), a contradiction.

Any component with fewer than \(k\) vertices is \(C_{k}\)-stable and thus does not affect the process. Therefore, \[M_{C_{k}}(n)=\max\{\tau_{C_{k}}(G):G\text{ connected},k\leq v(G)\leq n\}. \tag{3.5}\] For even \(k\), another graph property that is preserved throughout the process is bipartiteness.

**Lemma 3.4**.: _Let \(k\geq 4\) be even. If \(G\) is a bipartite graph with partite sets \(X,Y\subset V(G)\), so is \(\langle G\rangle_{C_{k}}\)._

Proof.: Let \((G_{i})_{i\geq 0}\) be the \(C_{k}\)-process on \(G\), and suppose for a contradiction that the final graph was not bipartite. Pick the smallest \(i\) for which \(G_{i}\) contains an edge \(e\) whose endpoints lie in the same part. Then there exists a path of length \(k-1\) between the endpoints of \(e\) at time \(i-1\), a contradiction as \(k-1\) is odd.

Given Theorems 3.1 and 3.2, Observation 3.3, and Lemma 3.4 we deduce Theorem 1.2.

Proof of Theorem 1.2.: We begin by proving the upper bound and so we assume that \(n\) is sufficiently large so that Theorem 3.1 holds and so that \(r(n,k)\geq\binom{k}{2}\). Therefore we certainly have that any \(k\)-vertex graph \(G\) stabilises in at most \(r\) steps and so (3.5) tells us that we can restrict ourselves to connected starting graphs on at most \(n\) and at least \(k+1\) vertices. When \(k\) is odd or the starting graph is non-bipartite, then the desired upper bound follows from parts (i) and (iii) of Theorem 3.1, which state that by round \(r\) our process reaches the complete graph, which is \(C_{k}\)-stable. If \(k\) is even and the starting graph is bipartite with parts \(X\) and \(Y\), then part (ii) tells us that at time \(r\) there is a complete bipartite graph between \(X\) and \(Y\), which by Lemma 3.4 must be the final graph of the process. To obtain the lower bounds, observe that \(\ell\leq n-3\) by definition of \(\ell\) and \(r\), and that the edges specified in parts (1) and (2) of Theorem 3.2 are not present at time \(r-1\), but will be added eventually by Theorem 3.1 (in fact in the next step). So the process is not finished after \(r-1\) steps.

It remains to prove Theorem 3.2 (which we do in Section 4) and Theorem 3.1 (which we do in Section 5). Before embarking on these proofs, we give a crucial ingredient of both the lower and the upper bound part, which is the aforementioned decrease of the diameter by a factor of \(k-1\) in each step.

**Lemma 3.5**.: _Let \((G_{i})_{i\geq 0}\) be the \(C_{k}\)-process on a graph \(G\), and let \(x,y\in V(G)\)._
_For each \(i\geq 1\), the distance \(\operatorname{dist}_{G_{i}}(x,y)\) satisfies_ \[\operatorname{dist}_{G_{0}}(x,y)\leq(k-1)^{i}\operatorname{dist}_{G_{i}}(x,y),\] _and_ \[\operatorname{dist}_{G_{i}}(x,y)\leq\left\lfloor\frac{\operatorname{dist}_{G_{0}}(x,y)}{(k-1)^{i}}\right\rfloor+k-2.\] _When \(\operatorname{dist}_{G_{0}}(x,y)\) is a multiple of \((k-1)^{i}\) the above can be improved to_ \[\operatorname{dist}_{G_{i}}(x,y)\leq\frac{\operatorname{dist}_{G_{0}}(x,y)}{(k-1)^{i}}. \tag{3.6}\]

Proof.: Observe that for any edge \(e\in E(G_{i})\setminus E(G_{i-1})\) one can find a path of length \(k-1\) between its endpoints in \(G_{i-1}\). Given a shortest \(xy\)-path in \(G_{i}\), replacing every edge on the path which is not present at time \(i-1\) by a suitable path of length \(k-1\) yields an \(xy\)-walk of length at most \((k-1)\cdot\operatorname{dist}_{G_{i}}(x,y)\) in \(G_{i-1}\). From this we deduce \[\operatorname{dist}_{G_{i-1}}(x,y)\leq(k-1)\operatorname{dist}_{G_{i}}(x,y)\] and thus \[\operatorname{dist}_{G_{0}}(x,y)\leq(k-1)^{i}\operatorname{dist}_{G_{i}}(x,y).\] To obtain the upper bound on \(\operatorname{dist}_{G_{i}}(x,y)\), write \(\operatorname{dist}_{G_{i-1}}(x,y)=q\cdot(k-1)+r\) for suitable \(q,r\in\mathbb{N}_{0}\), \(0\leq r\leq k-2\), and choose a path \(u_{0}\ldots u_{q(k-1)+r}\) from \(x\) to \(y\) in \(G_{i-1}\). In \(G_{i}\), \(u_{0}u_{k-1}\ldots u_{q(k-1)}u_{q(k-1)+1}\ldots u_{q(k-1)+r}\) is a path of length \(q+r\) from \(x\) to \(y\). Since \(r\leq k-2\), we obtain \[\operatorname{dist}_{G_{i}}(x,y)\leq q+r\leq\frac{q\cdot(k-1)+r-(k-2)}{k-1}+k-2=\frac{\operatorname{dist}_{G_{i-1}}(x,y)-(k-2)}{k-1}+k-2. \tag{3.7}\] We can bound the left hand side by just \(q\) whenever \(\operatorname{dist}_{G_{i-1}}(x,y)\) is divisible by \(k-1\). An inductive application of (3.7) yields the claim.

## 4. Lower bounds

In this section we prove Theorem 3.2, treating each part individually.

Proof of part (1).: We begin by considering the \(C_{k}\)-process on paths. Let \((P^{i})_{i\geq 0}\) be the \(C_{k}\)-process on \(P_{n}\). The following set will be convenient to get a handle on when a pair \(xy\) is an edge of the \(i\)th graph \(P^{i}\): \[A_{i}:=\left\{(k-1)^{i}-\alpha\cdot(k-2)-\beta\cdot k:\alpha,\beta\in\mathbb{N}_{0}\right\} \tag{4.1}\] Note that when \(k\) is even, \(A_{i}\) consists of odd numbers, while for odd \(k\) there is no restriction on the parity. The \(A_{i}\) form an increasing sequence because for any \(\alpha,\beta\in\mathbb{N}_{0}\), \[(k-1)^{i}-\alpha(k-2)-\beta k=(k-1)^{i+1}-(\alpha+(k-1)^{i})\cdot(k-2)-\beta k\in A_{i+1}.\]

**Lemma 4.1**.: _If \(xy\in E(P^{i})\) for some \(x,y\in V(P_{n}),i\geq 1\) then \(y-x\in A_{i}\)._

Proof.: We prove the claim by induction on \(i\geq 1\). \(i=1\): All edges in \(P^{1}\) are of the form \(\{x,x+1\}\) or \(\{x,x+k-1\}\) and we can write \[1=(k-1)^{1}-(k-2)-0\cdot k,\qquad\qquad-1=(k-1)^{1}-0\cdot(k-2)-k.\] \[k-1=(k-1)^{1}-0\cdot(k-2)-0\cdot k,\qquad\qquad-(k-1)=(k-1)^{1}-(k-2)-k.\] \(i\geq 2\): Let \(xy\in E(P^{i})\). If \(xy\) was already present at time \(i-1\), the induction hypothesis and the inclusion \(A_{i-1}\subset A_{i}\) give \(y-x\in A_{i}\). Suppose \(xy\notin E(P^{i-1})\). Let \(v_{0},\ldots,v_{k-1}\) be a path from \(v_{0}:=x\) to \(v_{k-1}:=y\) in \(P^{i-1}\). By the induction hypothesis there exist \(\alpha_{1},\ldots,\alpha_{k-1},\beta_{1},\ldots,\beta_{k-1}\) such that \(v_{j}-v_{j-1}=(k-1)^{i-1}-\alpha_{j}\cdot(k-2)-\beta_{j}\cdot k\) for \(j\in[k-1]\).
Then \[y-x=\sum_{j=1}^{k-1}v_{j}-v_{j-1}=\sum_{j=1}^{k-1}\left((k-1)^{i-1}-\alpha_{j}\cdot(k-2)-\beta_{j}\cdot k\right)=\left(k-1\right)^{i}-\sum_{j=1}^{k-1}\alpha_{j}\cdot(k-2)-\sum_{j=1}^{k-1}\beta_{j}\cdot k,\] completing the inductive step.

Lemma 4.1 assures that whenever \(d\in\mathbb{N}\) is an integer that cannot be expressed as \(d=\alpha(k-2)+\beta k\) for suitable \(\alpha,\beta\in\mathbb{N}_{0}\), then \((k-1)^{i}-d\) does not lie in \(A_{i}\) and hence any edge \(xy\) with \(y-x=(k-1)^{i}-d\) cannot be present at time \(i\). Therefore the edge \(\{0,(k-1)^{r-1}-F(k-2,k)\}\) cannot be present in \(P^{r-1}\) by the definition of the Frobenius number \(F(k-2,k)\) (see (2.1)). This shows part (1) of Theorem 3.2.

### Proof of part (2)

To show part (2) we recall the graph \(P^{\Delta}\) defined by Figure 1 and assume that \(n\), and thus \(\ell\), is sufficiently large so that we do not run into degenerate cases, say \(\ell\geq 3\). An important feature of the graph \(P^{\Delta}\) for us is that it is a non-bipartite graph that maximises the length of a shortest odd walk between two vertices for fixed \(n\). Let \((P^{\Delta,i})_{i\geq 0}\) be the \(C_{k}\)-process on \(P^{\Delta}\). Recall that our goal is to show \(v_{\ell}w_{\ell}\notin E(P^{\Delta,r-1})\). To do so we will set up an analogue of Lemma 4.1 for \(P^{\Delta}\). Call an edge \(v_{j}v_{j^{\prime}}\), \(w_{j}w_{j^{\prime}}\) or \(v_{j}w_{j^{\prime}}\) _even_ if \(j-j^{\prime}\) is even, and _odd_ if \(j-j^{\prime}\) is odd. Lemma 4.3 below is the analogue of Lemma 4.1 dealing with odd edges, while Lemma 4.4 deals with the even edges. Both rely on the following auxiliary statement:

**Lemma 4.2**.: _For every \(i\geq 0\), the largest \(j\in[\ell]\) such that \(v_{j}\) or \(w_{j}\) is an endpoint of an even edge in \(P^{\Delta,i}\) is at most \((k-1)^{i}-1\)._

Proof.: The only even edge in \(P^{\Delta,0}\) is \(v_{0}w_{0}\), so the claim holds for \(i=0\). Let \(i\geq 1\) and suppose the claim holds for \(i-1\). Since \((k-1)^{i}-1>(k-1)^{i-1}-1\) it suffices to show that whenever \(v_{j}\) or \(w_{j}\) is the endpoint of an even edge in \(E(P^{\Delta,i})\setminus E(P^{\Delta,i-1})\) one has \(j\leq(k-1)^{i}-1\). Let \(u_{j_{0}}u_{j_{k-1}}\) be an even edge in \(E(P^{\Delta,i})\setminus E(P^{\Delta,i-1})\) and let \(u_{j_{0}}\ldots u_{j_{k-1}}\) be a path in \(P^{\Delta,i-1}\) such that \(u_{j_{t}}\in\{v_{j_{t}},w_{j_{t}}\}\) for \(0\leq t\leq k-1\). For parity reasons there exists at least one even edge on that path. Let \(s\in[k-1]\) be such that \(j_{s}-j_{s-1}\equiv 0\mod 2\). The first part of Lemma 3.5 gives \[\operatorname{dist}_{P^{\Delta}}(u_{j_{t}},u_{j_{t-1}})\leq(k-1)^{i-1}\operatorname{dist}_{P^{\Delta,i-1}}(u_{j_{t}},u_{j_{t-1}})=(k-1)^{i-1}\] for \(s+1\leq t\leq k-1\). In \(P^{\Delta}\) we have \(\operatorname{dist}_{P^{\Delta}}(u_{j_{t}},u_{j_{t-1}})=|j_{t}-j_{t-1}|\) whenever \(j_{t}\neq j_{t-1}\). Now the inductive hypothesis implies \[j_{k-1}=j_{s}+\sum_{t=s+1}^{k-1}j_{t}-j_{t-1}\leq(k-1)^{i-1}-1+(k-1-s)\cdot(k-1)^{i-1}\leq(k-1)^{i}-1.\]

Recall the definition of \(A_{i}\) (4.1).

**Lemma 4.3**.: _Let \(i\geq 1\) and \(j,j^{\prime}\in[\ell]\) with \(j\not\equiv j^{\prime}\mod 2\). If \(u_{j}\in\{v_{j},w_{j}\}\), \(u_{j^{\prime}}\in\{v_{j^{\prime}},w_{j^{\prime}}\}\) and \(u_{j}u_{j^{\prime}}\in E(P^{\Delta,i})\), then \(j-j^{\prime}\in A_{i}\)._

Proof.: We induct on \(i\) with \(i\in\{1,2\}\) being our base cases.
\(i=1\): Any path of length \(k-1\) in \(P^{\Delta,0}\) whose endpoints form an odd edge in \(P^{\Delta,1}\) must not use \(v_{0}w_{0}\) because of parity, and thus misses at least one of \(v_{0}\),\(w_{0}\). This implies \(j-j^{\prime}\in\{-(k-1),-1,1,(k-1)\}\subset A_{1}\) (cf. base case of Lemma 4.1). \(i=2\): If \(u_{j}u_{j^{\prime}}\) is present at time \(1\) we are done because \(A_{1}\subset A_{2}\) and \(j-j^{\prime}\in A_{1}\) by the induction hypothesis. Suppose that the edge does not lie in \(E(P^{\Delta,1})\). Let \(Q=u_{j_{0}}\ldots u_{j_{k-1}}\) be a path in \(P^{\Delta,1}\) with \(j_{0}=j^{\prime}\), \(j_{k-1}=j\) and \(u_{j_{t}}\in\{v_{j_{t}},w_{j_{t}}\}\) for \(1\leq t\leq k-2\). There has to be an even number of even edges in \(Q\) because \(j-j^{\prime}\) is odd and \(Q\) has odd length, and there cannot be more than two because the only even edges of \(P^{\Delta,1}\) are \(v_{0}w_{0}\), \(v_{0}v_{k-2}\) and \(w_{0}v_{k-2}\). If all edges of \(Q\) are odd we proceed as in the inductive step of Lemma 4.1. Otherwise there are precisely two even edges on \(Q\). These two edges must share a common endpoint considering that the even edges in \(P^{\Delta,1}\) form a triangle. Let \(s\in[k-2]\) such that \(u_{j_{s-1}}u_{j_{s}}\) and \(u_{j_{s}}u_{j_{s+1}}\) are the even edges. We have either \(j_{s-1}=j_{s+1}=0\) or \(\{j_{s-1},j_{s+1}\}=\{0,k-2\}\). For \(t\in[k-1]\setminus\{s,s+1\}\), by induction choose \(\alpha_{t},\beta_{t}\in\mathbb{N}_{0}\) such that \(j_{t}-j_{t-1}=(k-1)-\alpha_{t}(k-2)-\beta_{t}k\). This allows us to express \(j_{k-1}-j_{0}\) as follows: \[j_{k-1}-j_{0} =\sum_{t\in[k-1]\setminus\{s,s+1\}}j_{t}-j_{t-1}\;+\;j_{s}-j_{s-1 }+j_{s+1}-j_{s}\] \[=\sum_{t\in[k-1]\setminus\{s,s+1\}}((k-1)-\alpha_{t}(k-2)-\beta_ {t}k)\;+\;j_{s+1}-j_{s-1}\] \[=(k-3)\cdot(k-1)^{1}-\sum_{t\in[k-1]\setminus\{s,s+1\}}\alpha_{t} (k-2)-\sum_{t\in[k-1]\setminus\{s,s+1\}}\beta_{t}k\;+\;j_{s+1}-j_{s-1}\] \[=\begin{cases}(k-1)^{2}-\sum_{t}\alpha_{t}(k-2)-\sum_{t}\beta_{t} k-2(k-2)-k&,\text{ if }j_{s+1}-j_{s-1}=-(k-2);\\ (k-1)^{2}-\sum_{t}\alpha_{t}(k-2)-\sum_{t}\beta_{t}k-(k-2)-k&,\text{ if }j_{s+1}-j_{s-1}=0;\\ (k-1)^{2}-\sum_{t}\alpha_{t}(k-2)-\sum_{t}\beta_{t}k-k&,\text{ if }j_{s+1}-j_{s-1}=k-2.\end{cases}\] Therefore \(j_{k-1}-j_{0}=j-j^{\prime}\in A_{2}\), as required. \(i\geq 3\): We handle the case \(u_{j}u_{j^{\prime}}\in E(P^{\Delta,i-1})\) as before and so assume that \(u_{j}u_{j^{\prime}}\) is an odd edge not in \(P^{\Delta,i-1}\). Let \(u_{j_{0}}\ldots u_{j_{k-1}}\) be a \(u_{j}u_{j^{\prime}}\)-path in \(P^{\Delta,i-1}\) where \(u_{j_{t}}\in\{v_{j_{t}},w_{j_{t}}\}\) for \(1\leq t\leq k-2\), and let \(J:=\{t\in[0,k-2]:j_{t}\equiv j_{t+1}\mod 2\}\). Since \(j-j^{\prime}\) and \(k-1\) are odd, \(|J|\) must be even. If \(J\) is empty, that is, if \(Q\) consists of odd edges we can again proceed as in Lemma 4.1. Suppose that \(|J|\geq 2\) and let \(s:=\min J\). Lemma 4.2 yields \(j_{s}\leq(k-1)^{i-1}-1\) while the induction hypothesis guarantees \(j_{t}-j_{t+1}\leq\max A_{i-1}=(k-1)^{i-1}\) for \(t\notin J\). Therefore, \[j-j^{\prime}\leq j_{0} =j_{s}+\sum_{t=0}^{s-1}j_{t}-j_{t+1}\] \[\leq(k-1)^{i-1}-1+s\cdot(k-1)^{i-1}\] \[\leq(k-1)^{i-1}-1+(k-3)\cdot(k-1)^{i-1}\] \[=(k-1)^{i}-(k-1)^{i-1}-1\] \[<(k-1)^{i}-F^{\prime}(k-2,k).\] The last inequality uses (2.2) and \(i\geq 3\). We now have \(j-j^{\prime}\in A_{i}\) by the definition of Frobenius numbers and (2.2) and because \((k-1)^{i}\) and \(j-j^{\prime}\) are odd. 
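The Frobenius numbers appearing in these arguments are easy to sanity-check computationally. The following brute-force Python sketch (ours, not from the paper) confirms the formula \(F(k-2,k)=k^{2}-4k+2\) of (2.1) for small odd \(k\geq 5\) (for \(k=3\) the formula gives \(F(1,3)=-1\), since every natural number is then representable).

```python
def frobenius(x, y):
    """Largest integer not of the form a*x + b*y with a, b >= 0 (gcd(x, y) = 1).

    Every integer >= (x-1)*(y-1) is representable, so it suffices to search
    below x*y; any representable m < x*y has a < y and b < x.
    """
    representable = {a * x + b * y for a in range(y) for b in range(x)}
    return max(m for m in range(x * y) if m not in representable)

for k in (5, 7, 9, 11):            # odd k, so gcd(k - 2, k) = 1
    assert frobenius(k - 2, k) == k * k - 4 * k + 2
```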
**Lemma 4.4**.: _Let \(1\leq i<r\), and let \(j,j^{\prime}\in\{0,\ldots,\ell\}\) such that \(j\equiv j^{\prime}\mod 2\) and \(j+j^{\prime}\geq(k-1)^{i}-(k-1)-2\cdot F^{\prime}(k-2,k)-2\). If \(u_{j}\in\{v_{j},w_{j}\}\), \(u_{j^{\prime}}\in\{v_{j^{\prime}},w_{j^{\prime}}\}\) and \(u_{j}u_{j^{\prime}}\in E(P^{\Delta,i})\setminus E(P^{\Delta,i-1})\) _then there exist \(\alpha,\gamma\in\mathbb{Z}_{\geq-1}\), \(\beta,\delta,\lambda,\mu\in\mathbb{N}_{0}\) with \(\lambda+\mu=(k-1)^{i-1}-1\) such that_ \[j=\lambda(k-1)-\alpha(k-2)-\beta k\qquad\text{ and }\qquad j^{\prime}=\mu(k-1)- \gamma(k-2)-\delta k.\] Proof.: We induct on \(i\geq 1\). Base case \(i=1\): The only even edges in \(E(P^{\Delta,1})\setminus E(P^{\Delta,0})\) are \(v_{0}v_{k-2}\) and \(w_{0}v_{k-2}\). Both of them satisfy the hypothesis \(j+j^{\prime}\geq(k-1)^{1}-(k-1)-2\cdot F^{\prime}(k-2,k)-2\). The claim now holds with either \(\alpha=-1\) and \(\beta,\gamma,\delta,\lambda,\mu\) equal to zero or \(\gamma=-1\) and \(\alpha,\beta,\delta,\lambda,\mu\) equal to zero. Inductive step: Let \(Q=u_{j_{0}}\dots u_{j_{k-1}}\) be a path in \(P^{\Delta,i-1}\) such that \(j_{0}=j\), \(j_{k-1}=j^{\prime}\), and \(u_{j_{t}}\in\{v_{j_{t}},w_{j_{t}}\}\) for \(1\leq t\leq k-2\). We first show that \(Q\) has exactly one even edge. The number of even edges in \(Q\) is odd for otherwise we have \(j\not\equiv j^{\prime}\mod 2\). If \(i=2\) the only three even edges in \(P^{\Delta,i-1}\) are \(v_{0}v_{k-2}\), \(v_{k-2}w_{0}\) and \(v_{0}w_{0}\). A path cannot contain all three of them so \(Q\) has precisely one even edge. If \(i\geq 3\), suppose there are at least three even edges in \(Q\) and let \(s,s^{\prime}\in[k-1]\) such that \(u_{j_{s}}u_{j_{s+1}}\) is the first and \(u_{j_{s^{\prime}-1}}u_{j_{s^{\prime}}}\) is the last even edge in \(Q\). Then \(s+(k-1-s^{\prime})\leq k-4\). By Lemma 4.2 \[j_{s}\leq(k-1)^{i-1}-1\qquad,\qquad j_{s^{\prime}}\leq(k-1)^{i-1}-1.\] Combining this with Lemma 4.3 and \(\max A_{i-1}=(k-1)^{i-1}\) gives us \[j+j^{\prime}=j_{0}+j_{k-1} =\sum_{t=0}^{s-1}(j_{t}-j_{t+1})+j_{s}+j_{s^{\prime}}+\sum_{t=s^{ \prime}+1}^{k-1}(j_{t}-j_{t-1})\] \[\leq 2(k-1)^{i-1}-2+(s+k-1-s^{\prime})\cdot(k-1)^{i-1}\] \[\leq(k-2)\cdot(k-1)^{i-1}-2\] \[=(k-1)^{i}-(k-1)^{i-1}-2\] \[<(k-1)^{i}-(k-1)-2\cdot F^{\prime}(k-2,k)-2,\] which contradicts the assumption \(j+j^{\prime}\geq(k-1)^{i}-(k-1)-2\cdot F^{\prime}(k-2,k)-2\). Here we used that \(i\geq 3\) and so \((k-1)^{i-1}>2\cdot F^{\prime}(k-2,k)+(k-1)\) by (2.2). We have thus shown that \(Q\) has precisely one even edge. Take the unique \(s^{*}\in[k-1]\) for which \(u_{j_{s^{*}-1}}u_{j_{s^{*}}}\) is an even edge. We claim that there exist \(\alpha^{*},\gamma^{*}\in\mathbb{Z}_{\geq-1}\), \(\beta^{*},\delta^{*},\lambda^{*},\mu^{*}\in\mathbb{N}_{0}\) such that \(\lambda^{*}+\mu^{*}=(k-1)^{i-2}-1\) and \[j_{s^{*}-1}=\lambda^{*}(k-1)-\alpha^{*}(k-2)-\beta^{*}k\qquad,\qquad j_{s^{*}} =\mu^{*}(k-1)-\gamma^{*}(k-2)-\delta^{*}k. \tag{4.2}\] Indeed, this follows if \(i=2\) and \(j_{s^{*}-1}=j_{s^{*}}=0\) by setting all parameters to be \(0\). For all other cases, this follows by the induction hypothesis. In order to appeal to the induction hypothesis, we need to establish the required lower bound on \(j_{s^{*}-1}+j_{s^{*}}\) and show that the edge \(u_{j_{s^{*}-1}}u_{j_{s^{*}}}\in E(P^{\Delta,i-1})\setminus E(P^{\Delta,i-2})\), which we now do. 
We have \[j_{s^{*}-1}+j_{s^{*}} =j+j^{\prime}-\sum_{t=1}^{s^{*}-1}(j_{t}-j_{t-1})-\sum_{t=s^{*}+1 }^{k-1}(j_{t-1}-j_{t})\] \[\geq(k-1)^{i}-(k-1)-2F^{\prime}(k-2,k)-2-(k-2)\cdot(k-1)^{i-1}\] \[=(k-1)^{i-1}-(k-1)-2F^{\prime}(k-2,k)-2,\] as required. Now suppose for a contradiction that \(u_{j_{s^{*}-1}}u_{j_{s^{*}}}\) already appeared at time \(i-2\). If \(i=2\), then the only even edge at time \(0\) is \(v_{0}w_{0}\) and as we are not appealing to the induction hypothesis for this case, we can assume that \(i\geq 3\). Then by Lemma 4.2 \[j_{s^{*}-1},j_{s^{*}}\leq(k-1)^{i-2}-1\] and \[j_{s^{*}-1}+j_{s^{*}}\leq 2(k-1)^{i-2}-2<(k-1)^{i-1}-(k-1)-2F^{\prime}(k-2,k)-2,\] contradicting our lower bound above, using that \(i\geq 3\) and (2.2) here. Therefore \(u_{j_{s^{*}-1}}u_{j_{s^{*}}}\in E(P^{\Delta,i-1})\setminus E(P^{\Delta,i-2})\) and the induction hypothesis gives (4.2). Now by Lemma 4.3 we have that we can find \(\alpha_{t},\beta_{t}\in\mathbb{N}_{0}\) such that \[j_{t}-j_{t-1}=(k-1)^{i-1}-\alpha_{t}(k-2)-\beta_{t}k\] for \(s^{*}<t\leq k-1\) and \[j_{t}-j_{t+1}=(k-1)^{i-1}-\alpha_{t}(k-2)-\beta_{t}k\] for \(0\leq t<s^{*}-1\). Therefore, \[j =\sum_{t=0}^{s^{*}-2}(j_{t}-j_{t+1})+j_{s^{*}-1}=\lambda(k-1)- \alpha(k-2)-\beta k,\] \[j^{\prime} =\sum_{t=s^{*}+1}^{k-1}(j_{t}-j_{t-1})+j_{s^{*}}=\mu(k-1)-\gamma (k-2)-\delta k,\] where \[\lambda :=(s^{*}-1)\cdot(k-1)^{i-2}+\lambda^{*}, \mu :=(k-1-s^{*})\cdot(k-1)^{i-2}+\mu^{*},\] \[\alpha :=\alpha_{0}+\ldots+\alpha_{s^{*}-2}+\alpha^{*}, \beta :=\beta_{0}+\ldots+\beta_{s^{*}-2}+\beta^{*},\] \[\gamma :=\alpha_{s^{*}+1}+\ldots+\alpha_{k-1}+\gamma^{*}, \delta :=\beta_{s^{*}+1}+\ldots+\beta_{k-1}+\delta^{*}.\] Moreover, \[\lambda+\mu=(k-2)(k-1)^{i-2}+\lambda^{*}+\mu^{*}=(k-1)^{i-1}-1,\] which completes the induction. Take the smallest \(i_{0}\in\mathbb{N}\) for which the even edge \(v_{\ell}w_{\ell}\) lies in \(E(P^{\Delta,i_{0}})\) and suppose that \(i_{0}\leq r-1\). Lemma 4.2 and (3.4) yield \[2(k-1)^{i_{0}}-2\geq\ell+\ell=(k-1)^{r-1}-(k-1)-2\cdot F^{\prime}(k-2,k)-2,\] and so \(i_{0}\geq r-1\) when \(n\) and thus \(r\) is sufficiently large. It remains to rule out the case \(i_{0}=r-1\). Suppose that \(i_{0}=r-1\). By Lemma 4.4 there exist \(\alpha,\gamma\in\mathbb{Z}_{\geq-1}\), \(\beta,\delta,\lambda,\mu\in\mathbb{N}_{0}\) with \(\lambda+\mu=(k-1)^{r-2}-1\) such that \[\ell=\lambda(k-1)-\alpha(k-2)-\beta k=\mu(k-1)-\gamma(k-2)-\delta k. \tag{4.3}\] By symmetry we can assume that \(\lambda\leq\mu\). From (4.3) and the definition of \(\ell\) in the statement of Theorem 3.2, we obtain \[F^{\prime}(k-2,k)=\left(\frac{(k-1)^{r-2}-1}{2}-\lambda-1\right)\cdot(k-1)+( \alpha+1)\cdot(k-2)+\beta k. \tag{4.4}\] Using that \(F^{\prime}(k-2,k)\) is even (2.2), if we take (4.4) modulo \(2\) we can see that \[\Lambda:=\frac{(k-1)^{r-2}-1}{2}-\lambda\equiv 1\mod 2. \tag{4.5}\] The condition \(\lambda+\mu=(k-1)^{r-2}-1\) implies \[\lambda\leq\frac{(k-1)^{r-2}-1}{2}\leq\mu. \tag{4.6}\] We cannot have equality in (4.6) because of (4.5). Therefore \(\Lambda\geq 1\). Since \(2(k-1)\) can be written as \((k-2)+k\), by (4.4) we have \[F^{\prime}(k-2,k)=\left(\alpha+1+\frac{1}{2}\left(\Lambda-1\right)\right)\cdot(k -2)+\left(\beta+\frac{1}{2}\left(\Lambda-1\right)\right)\cdot k.\] However, this contradicts the definition of \(F^{\prime}(k-2,k)\) (2.2). Consequently, \(v_{\ell}w_{\ell}\notin E(P^{\Delta,r-1})\). ## 5. 
Upper bounds We start with some general results on \(C_{k}\)-processes in Section 5.1, followed by another investigation of the \(C_{k}\)-process on paths in Section 5.2. We will prove parts (i) and (ii) of Theorem 3.1 in Section 5.3. Part (iii) of Theorem 3.1 will be shown in Section 5.4 ### General results **Lemma 5.1**.: _Let \(\tilde{G}\) be a connected graph with \(\tau_{C_{k}}(\tilde{G})\geq 2\). Then in the \(C_{k}\)-process on \(\tilde{G}\) every vertex is contained in a \(k\)-cycle at time \(2\)._ Proof.: Suppose that \(\tau_{C_{k}}(\tilde{G})\geq 2\), and let \((\tilde{G}_{i})_{i\geq 0}\) be the \(C_{k}\)-process on \(\tilde{G}\). Since \(\tau_{C_{k}}(\tilde{G})\neq 0\), there exists a \(k\)-cycle \(C\) in \(\tilde{G}_{1}\). Let \(x\in V(\tilde{G})\setminus V(C)\), and let \(Q\) be a shortest path from \(x\) to \(V(C)\) in \(\tilde{G}_{1}\). If \(Q\) has length at least \(k-1\) the first \(k\) vertices of \(Q\) starting from \(x\) form a path of length \(k-1\) with endpoint \(x\), hence \(x\) lies in a cycle at time \(2\). If the length of \(Q\) is smaller than \(k-1\) we can extend \(Q\) to a path of length \(k-1\) using vertices along \(C\). The vertices of this extended path, one of which is \(x\), form a \(k\)-cycle in \(\tilde{G}_{2}\). **Lemma 5.2**.: _Let \(k\geq 3\), and let \(z,z^{\prime}\in V(K_{\lfloor k/2\rfloor,\lceil k/2\rceil})\) be vertices from the same partite set of \(K_{\lfloor k/2\rfloor,\lceil k/2\rceil}\). Then \(\tau_{C_{k}}(K_{\lfloor k/2\rfloor,\lceil k/2\rceil}+\{zz^{\prime}\})\leq 2\) and \(\langle K_{\lfloor k/2\rfloor,\lceil k/2\rceil}+\{zz^{\prime}\}\rangle_{C_{k} }=K_{k}\)._ Proof.: Let \(\tilde{G}:=K_{\lfloor k/2\rfloor,\lceil k/2\rceil}+\{zz^{\prime}\}\) and denote the partite sets of \(K_{\lfloor k/2\rfloor,\lceil k/2\rceil}\) by \(X\) and \(Y\) such that \(|X|=\lceil k/2\rceil\) and \(|Y|=\lfloor k/2\rfloor\). If \(k\) is odd, then for any two distinct \(x,x^{\prime}\in X\) we can find a Hamilton path, which has length \(k-1\), from \(x\) to \(x^{\prime}\) in \(K_{\lfloor k/2\rfloor,\lceil k/2\rceil}\). Thus \(X\) is a clique after one step in the \(C_{k}\)-process on \(\tilde{G}\). At time \(1\), \(X\setminus\{x\}\) and \(Y\cup\{x\}\) are partite sets of a complete bipartite graph of size \(\lfloor k/2\rfloor\) and \(\lceil k/2\rceil\), respectively. Therefore \(Y\cup\{x\}\) is a clique at time \(2\). This shows the claim for odd \(k\). Now assume that \(k\) is even, in particular, \(k\geq 4\) so both \(|X|\geq 2\) and \(|Y|\geq 2\). Since \(|X|=|Y|\) we may further assume that \(z,z^{\prime}\in X\). For any distinct \(y,y^{\prime}\in Y\) we can pick a Hamilton path from \(y\) to \(z\) in the complete bipartite graph \(\tilde{G}-y^{\prime}-z^{\prime}\) and extend that path to a \(yy^{\prime}\)-path of length \(k-1\) in \(\tilde{G}\) by \(zz^{\prime}\) and \(z^{\prime}y^{\prime}\). Then \(Y\) must be a clique at time \(1\). Analogous arguments show that \(X\) is a clique after one more step and hence the claim follows. **Lemma 5.3**.: _Let \(\tilde{G}\) be a connected graph of order at least \(k+1\) which contains a copy of \(C_{k}\). The final graph \(\langle\tilde{G}\rangle_{C_{k}}\) is a clique if \(k\) is odd or \(\tilde{G}\) is non-bipartite, and a complete bipartite graph if \(k\) is even and \(\tilde{G}\) is bipartite._ Proof.: In \(\langle\tilde{G}\rangle_{C_{k}}\) the endpoints of any path of length \(k-1\) are adjacent. Therefore the shortest path between any two vertices has length less than \(k-1\). 
Choose vertices \(v_{j}\), \(j\in[0,k-1]\), in \(\tilde{G}\) that form a \(k\)-cycle \(C\) with edges \(v_{j}v_{j+1}\). Here and for the rest of this proof, addition and subtraction in the subscript are always performed modulo \(k\). Every \(x\in V(\tilde{G})\setminus\{v_{0},\ldots,v_{k-1}\}\) has a \(\langle\tilde{G}\rangle_{C_{k}}\)-neighbour on \(C\) because a shortest path from \(x\) to \(C\) in \(\tilde{G}\) can always be extended to a path of length \(k-1\) by vertices of \(C\). If \(xv_{j}\in E(\langle\tilde{G}\rangle_{C_{k}})\) then \(xv_{j}v_{j-1}\ldots v_{j+2}\) is a path of length \(k-1\), so \(xv_{j+2}\in E(\langle\tilde{G}\rangle_{C_{k}})\). In case that \(k\) is odd, the above implies that every vertex of \(C\) is adjacent to every other vertex of \(\tilde{G}\) in \(\langle\tilde{G}\rangle_{C_{k}}\). Thus for any two distinct vertices \(x,y\) we can find a \(k\)-cycle containing \(x\) but not \(y\). Repeating the above argument for such a cycle gives \(xy\in E(\langle\tilde{G}\rangle_{C_{k}})\). Now assume that \(k\) is even. Let \[X:=\{v_{j}:j\equiv 0\mod 2\}\qquad,\qquad Y:=\{v_{j}:j\equiv 1\mod 2\}.\] Then every vertex outside \(C\) is adjacent in \(\langle\tilde{G}\rangle_{C_{k}}\) to all vertices in \(X\) or all vertices in \(Y\). Define \[X^{\prime}:=\left\{z\in V(\tilde{G})\setminus V(C):Y\subseteq N_{\langle\tilde{G}\rangle_{C_{k}}}(z)\right\}\qquad,\qquad Y^{\prime}:=\left\{z\in V(\tilde{G})\setminus V(C):X\subseteq N_{\langle\tilde{G}\rangle_{C_{k}}}(z)\right\}.\] One of these two sets, say \(X^{\prime}\), must be non-empty. For any \(x\in X^{\prime}\), \(y\in Y^{\prime}\), \(yv_{0}v_{1}\ldots v_{k-3}x\) is an \(xy\)-path of length \(k-1\) in \(\langle\tilde{G}\rangle_{C_{k}}\). Furthermore, for any \(j,j^{\prime}\in[0,k-1]\) with \(v_{j}\in X\), \(v_{j^{\prime}}\in Y\setminus\{v_{j-1},v_{j+1}\}\) and any \(x\in X^{\prime}\), \[v_{j^{\prime}}v_{j^{\prime}+1}\ldots v_{j-1}xv_{j^{\prime}-2}\ldots v_{j+1}v_{j}\] is a \(v_{j}v_{j^{\prime}}\)-path of length \(k-1\). Therefore \(\langle\tilde{G}\rangle_{C_{k}}\) contains a complete bipartite graph whose partite sets are \(X\cup X^{\prime}\) and \(Y\cup Y^{\prime}\). If \(\tilde{G}\) is bipartite we are done by Lemma 3.4. Otherwise the claim follows from Lemma 5.2.

We remark that Lemmas 3.5 and 5.3 already suffice to establish an upper bound of the form \(\log_{k-1}(n)+c_{k}\) for some constant \(c_{k}>0\).

### Results on paths

Let \(n^{\prime}\in\mathbb{N}\), and let \((P^{i}_{n^{\prime}})_{i\geq 0}\) be the \(C_{k}\)-process on \(P_{n^{\prime}}\). We write \(P^{i}\) instead of \(P^{i}_{n^{\prime}}\) when \(n^{\prime}\) is clear from context. The sets \[D_{i}=D_{i}(n^{\prime}):=\{\ell\in[n^{\prime}-1]:xy\in E(P^{i})\text{ whenever }y-x=\ell\}\] play a central role in proving upper bounds on \(\tau_{C_{k}}(P_{n^{\prime}})\). Clearly \(D_{i}\subseteq D_{i+1}\). If \(D_{i}=[n^{\prime}-1]\), then the percolation process is over by the \(i^{\text{th}}\) step. When \(k\) is even, the process is already over when \(D_{i}\) contains just the odd integers up to \(n^{\prime}-1\), since then \(P^{i}\) has stabilised at \(K_{\lfloor n^{\prime}/2\rfloor,\lceil n^{\prime}/2\rceil}\). Flipping the vertices of \(P_{n^{\prime}}\), i.e. the map \(\sigma:V(P_{n^{\prime}})\to V(P_{n^{\prime}})\), \(x\mapsto n^{\prime}-1-x\), is an automorphism of \(P_{n^{\prime}}\) and hence of \(P^{i}\) for all \(i\geq 0\) by Observation 2.1. For any \(x,y\in V(P_{n^{\prime}})\) one has \(x+y\leq n^{\prime}-1\) or \(\sigma(x)+\sigma(y)\leq n^{\prime}-1\).
This allows us to write \(D_{i}\) as follows: \[D_{i}=\left\{\ell\in[n^{\prime}-1]:xy\in E(P^{i})\text{ whenever }y-x=\ell\text{ and }x+y\leq n^{\prime}-1\right\} \tag{5.1}\] The next lemmas state further simple properties of how \(D_{i}\) develops during the \(C_{k}\)-process. Recall the definition of a sumset \(hA\) from Section 2.

**Lemma 5.4**.: _For every \(i\geq 0\), \((k-1)D_{i}\cap[n^{\prime}-1]\subseteq D_{i+1}\)._

Proof.: Let \(\ell\in(k-1)D_{i}\cap[n^{\prime}-1]\). Choose \(d_{1},\ldots,d_{k-1}\in D_{i}\) such that \(\ell=d_{1}+\ldots+d_{k-1}\). For any \(x\in V(P_{n^{\prime}})\) with \(x+\ell\in V(P_{n^{\prime}})\), \[x,x+d_{1},\ldots,x+(d_{1}+\ldots+d_{k-1})\] is a path of length \(k-1\) from \(x\) to \(x+\ell\) in \(P^{i}\) since \(d_{1},\ldots,d_{k-1}\in D_{i}\). Thus \(x\) and \(x+\ell\) are adjacent in \(P^{i+1}\) and the claim follows by the definition of \(D_{i+1}\).

**Lemma 5.5**.: _If \(n^{\prime}\geq 3(k-1)\), then \(\{\ell\in[k]:\ell\text{ odd }\}\subseteq D_{2}\)._

Proof.: We have \(D_{0}=\{1\}\) and \(D_{1}=\{1,k-1\}\). For every odd \(3\leq\ell\leq k\) and \(x\in V(P_{n^{\prime}})\) with \(x+(x+\ell)\leq n^{\prime}-1\), \[x,\ldots,x+\frac{\ell-1}{2},x+(k-1)+\frac{\ell-1}{2},x+(k-1)+\frac{\ell-1}{2}-1,\ldots,x+\ell\] is a path of length \(k-1\) from \(x\) to \(x+\ell\) in \(P^{1}\) due to the hypothesis \(3(k-1)\leq n^{\prime}\). Indeed, a quick analysis of the two cases \(x\leq(k-1)\) and \(x>(k-1)\) gives that \(x+(k-1)+(\ell-1)/2\leq n^{\prime}-1\). Therefore (5.1) gives \(\{\ell\in[k]:\ell\text{ odd }\}\subset D_{2}\).

Recall that Lemma 4.1 in the proof of Theorem 3.2 indicated what the differences that occur as edges at time \(i\) look like, and thereby helped us to obtain lower bounds on the running time. For upper bounds we need a converse statement telling us for which parameters \(\alpha,\beta\) the differences \((k-1)^{i}-\alpha\cdot(k-2)-\beta\cdot k\) _do_ appear. To this end we define a subset \[A^{\prime}_{i}:=\left\{(k-1)^{i}-\alpha\cdot(k-2)-\beta\cdot k:\alpha,\beta\in\mathbb{N}_{0},\alpha+\beta\leq(k-1)^{i-2}\cdot(k-2)\right\}\] of \(A_{i}=\left\{(k-1)^{i}-\alpha\cdot(k-2)-\beta\cdot k:\alpha,\beta\in\mathbb{N}_{0}\right\}\). This set lies in the intersection of \(A_{i}\) and the interval \([(k-1)^{i-2},(k-1)^{i}]\). The upper end of the interval is attained when \(\alpha=\beta=0\), whereas the lower end is achieved by \(\alpha=0\), \(\beta=(k-1)^{i-2}\cdot(k-2)\). The next lemma states that a slightly smaller interval piece of \(A_{i}\) is fully contained in \(A^{\prime}_{i}\).

**Lemma 5.6**.: _For every \(i\geq 3\),_ \[[(k-1)^{i-2}+2(k-1),(k-1)^{i}]\cap A_{i}\subseteq A^{\prime}_{i}.\]

Proof.: Let \(i\geq 3\) and \(\ell\in[(k-1)^{i-2}+2(k-1),(k-1)^{i}]\cap A_{i}\). Then there exist \(\alpha,\beta\in\mathbb{N}_{0}\) satisfying \(\ell=(k-1)^{i}-\alpha(k-2)-\beta k\). We may assume that \(\alpha\leq k-1\) because \((\alpha-k)\cdot(k-2)+(\beta+k-2)k=\alpha(k-2)+\beta k\). From \[(\alpha+\beta)k-2\alpha=(k-1)^{i}-\ell\leq(k-1)^{i}-(k-1)^{i-2}-2(k-1)=(k-1)^{i-2}\cdot(k-2)k-2(k-1)\] we infer \[\alpha+\beta\leq(k-1)^{i-2}\cdot(k-2)+\frac{2\alpha}{k}-\frac{2(k-1)}{k}\leq(k-1)^{i-2}\cdot(k-2),\] hence \(\ell\in A^{\prime}_{i}\).

The next lemma ensures that the relevant piece of \(A^{\prime}_{i}\) is contained in \(D_{i}\). This fact will play a crucial role in showing that a bootstrap process has ended. In its proof, the advantage of the somewhat technical choice of the upper bound on \(\alpha+\beta\) becomes visible.
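Before stating that lemma, we note that for tiny parameters the evolution of the sets \(D_{i}\) can be checked directly by computer. The following brute-force Python sketch (ours, not from the paper, and viable only for very small \(n^{\prime}\)) simulates the \(C_{k}\)-process on \(P_{n^{\prime}}\) and records the \(D_{i}\); for \(k=4\) and \(n^{\prime}=10\), for instance, one sees the odd differences of Lemma 5.5 appear by time \(2\), at which point the process has already stabilised at \(K_{5,5}\).

```python
from itertools import combinations

def ck_step(n, E, k):
    """One step of the C_k-process: x and y become adjacent once some simple
    path of length k-1 joins them (completing a k-cycle with the new edge)."""
    def has_path(x, y, length):
        def dfs(v, seen, rem):
            if rem == 0:
                return v == y
            return any(dfs(w, seen | {w}, rem - 1)
                       for w in range(n)
                       if frozenset((v, w)) in E
                       and w not in seen and (w != y or rem == 1))
        return dfs(x, {x}, length)

    return E | {frozenset(p) for p in combinations(range(n), 2)
                if frozenset(p) not in E and has_path(p[0], p[1], k - 1)}

def difference_sets(n, k, steps):
    """The sets D_i for the C_k-process on the path P_n (cf. their definition)."""
    E = {frozenset((i, i + 1)) for i in range(n - 1)}
    Ds = []
    for _ in range(steps):
        E = ck_step(n, E, k)
        Ds.append({l for l in range(1, n)
                   if all(frozenset((x, x + l)) in E for x in range(n - l))})
    return Ds

for i, D in enumerate(difference_sets(10, 4, 3), start=1):
    print(i, sorted(D))   # D_1 = {1, 3}; D_2 = D_3 = {1, 3, 5, 7, 9}
```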
**Lemma 5.7**.: _Given \(n^{\prime}\geq 3(k-1)\) we have that_ \[A^{\prime}_{i}\cap[n^{\prime}-1]\subseteq D_{i}\] _for every \(i\geq 0\)._

Proof.: We induct on \(i\geq 0\). We have that \(A^{\prime}_{0}=\{1\}=D_{0}\) and \(A^{\prime}_{1}=\{k-1\}\subset D_{1}\). Let \(i=2\), and let \(x,y\in V(P_{n^{\prime}})\) with \(y-x=(k-1)^{2}-\alpha(k-2)-\beta k\) for some \(\alpha,\beta\in\mathbb{N}_{0}\) with \(\alpha+\beta\leq k-2\). Write \(s:=k-1-\alpha-\beta\) and note that \(s\geq 1\). If \(x\geq\beta\) then \[x,x-1,\ldots,x-\beta,x-\beta+(k-1),\ldots,x-\beta+s(k-1),x-\beta+s(k-1)+1, \ldots,y\] is a path of length \(k-1\) from \(x\) to \(y=x-\beta+s(k-1)+\alpha\) in \(P^{1}\). If \(x\leq\beta\) consider the \(xy\)-path \[x,\ldots,x+\alpha,x+\alpha+(k-1),x+\alpha+(k-1)-1,\ldots,x+\alpha+(k-1)-\beta,x+\alpha+2(k-1)-\beta,\ldots,y.\] This path is well-defined because \(x+\alpha+(k-1)\leq\beta+\alpha+(k-1)\leq 2k-3\leq n^{\prime}-1\) and \(s\geq 1\). Thus, \(xy\in E(P^{2})\). Since \(x\) and \(y\) were arbitrary, \(A^{\prime}_{2}\cap[n^{\prime}-1]\subseteq D_{2}\). For every \(i\geq 3\), the induction hypothesis and Lemma 5.4 imply \[A^{\prime}_{i}\cap[n^{\prime}-1]\subseteq(k-1)A^{\prime}_{i-1}\cap[n^{\prime} -1]\subseteq(k-1)D_{i-1}\cap[n^{\prime}-1]\subseteq D_{i},\] where the inclusion \(A^{\prime}_{i}\subseteq(k-1)A^{\prime}_{i-1}\) follows from the fact that for any \(\alpha,\beta\in\mathbb{N}_{0}\) with \(\alpha+\beta\leq(k-1)^{i-2}\cdot(k-2)\) we can find \[\alpha_{1},\ldots,\alpha_{k-1}\in\left\{\left\lfloor\frac{\alpha}{k-1}\right \rfloor,\left\lceil\frac{\alpha}{k-1}\right\rceil\right\}\quad,\quad\beta_{1},\ldots,\beta_{k-1}\in\left\{\left\lfloor\frac{\beta}{k-1}\right\rfloor,\left \lceil\frac{\beta}{k-1}\right\rceil\right\}\] such that \(\alpha=\alpha_{1}+\ldots+\alpha_{k-1}\), \(\beta=\beta_{1}+\ldots+\beta_{k-1}\) and \(\alpha_{s}+\beta_{s}\leq(k-1)^{i-3}\cdot(k-2)\), \(1\leq s\leq k-1\).

**Proposition 5.8**.: _If \(k\geq 3\) is odd and \(3(k-1)\leq n^{\prime}\leq(k-1)^{\rho}-F(k-2,k)\) for some integer \(\rho\geq 4\) then \(P^{\rho}_{n^{\prime}}\) is the complete graph on \(n^{\prime}\) vertices._

Proof.: Our goal is to show \(D_{\rho}=[n^{\prime}-1]\). To do so we write \([n^{\prime}-1]\) as the union \[[n^{\prime}-1]=\left([n^{\prime}-1]\cap[3(k-1)]\right)\;\cup\;[3(k-1),n^{ \prime}-1]\] and show \([n^{\prime}-1]\cap[3(k-1)]\subseteq D_{4}\) and \([3(k-1),n^{\prime}-1]\subseteq D_{\rho}\). Lemma 5.5 yields \(k-2\in D_{2}\) because \(k-2\) is odd. Then for each even \(2\leq\ell\leq k-1\) and each vertex \(x\) with \(x+(x+\ell)\leq n^{\prime}-1\), \[x,\ldots,x+\frac{\ell}{2},x+(k-2)+\frac{\ell}{2},x+(k-2)+\frac{\ell}{2}-1, \ldots,x+\ell\] is a path of length \(k-1\) from \(x\) to \(x+\ell\) in \(P^{2}_{n^{\prime}}\). Here we used that \(x+(k-2)+\ell/2\leq n^{\prime}-1\), which follows from a quick case analysis of \(x\leq k-2\) and \(x\geq k-1\), so all vertices of the path indeed belong to \(P_{n^{\prime}}\). Now (5.1) implies \([k]\subset D_{3}\). Applying Lemma 5.4 gives \[D_{4}\supseteq[(k-1)\cdot k]\cap[n^{\prime}-1]\supseteq[3(k-1)]\cap[n^{\prime }-1].\] The inclusion \([3(k-1),n^{\prime}-1]\subset D_{\rho}\) follows from Lemmas 5.6 and 5.7. Indeed, by the definition of the Frobenius number \(F(k-2,k)\) (2.1), and as \(k\) is odd, we have \(A_{i}\supseteq(-\infty,(k-1)^{i}-F(k-2,k)-1]\) for all \(i\geq 0\). 
This allows us to write \[[3(k-1),n^{\prime}-1] =[n^{\prime}-1]\cap[3(k-1),(k-1)^{\rho}-F(k-2,k)-1]\] \[=[n^{\prime}-1]\cap\bigcup_{i=3}^{\rho}[(k-1)^{i-2}+2(k-1),(k-1)^ {i}-F(k-2,k)-1]\] \[\subseteq[n^{\prime}-1]\cap\bigcup_{i=3}^{\rho}[(k-1)^{i-2}+2(k-1 ),(k-1)^{i}]\cap A_{i}\] \[\subseteq[n^{\prime}-1]\cap\bigcup_{i=3}^{\rho}A^{\prime}_{i} \subseteq\bigcup_{i=3}^{\rho}D_{i}=D_{\rho}.\] The second equality holds since \((k-1)^{3-2}+2(k-1)=3(k-1)\) and \((k-1)^{i}-F(k-2,k)\geq(k-1)^{i+1-2}+2(k-1)\) for \(i\geq 3\) by (2.1). We also used Lemma 5.6 followed by Lemma 5.7 in the fourth line. We have shown that \(D_{\rho}=[n^{\prime}-1]\), so \(P^{\rho}_{n^{\prime}}\) is a complete graph.

The bipartite version of Proposition 5.8 reads as follows.

**Proposition 5.9**.: _If \(k\geq 4\) is even and \(3(k-1)\leq n^{\prime}\leq(k-1)^{\rho}\) for some \(\rho\in\mathbb{N}\) then any \(x,y\in V(P_{n^{\prime}})\) with \(|x-y|\in A_{\rho}\) are adjacent in \(P^{\rho}_{n^{\prime}}\). This implies that if \(n^{\prime}\leq(k-1)^{\rho}-F^{\prime}(k-2,k)\), then \(P^{\rho}_{n^{\prime}}\) is a copy of \(K_{\lfloor n^{\prime}/2\rfloor,\lceil n^{\prime}/2\rceil}\)._

Proof.: Since \(n^{\prime}\geq 3(k-1)\) we may invoke Lemma 5.5 to obtain \[\{\ell\in[k]:\ell\text{ odd}\}\subseteq D_{2}.\] Lemma 5.4 then gives \[\{\ell\in[3(k-1)]:\ell\text{ odd }\}\cap[n^{\prime}-1]\ \subseteq\ \{\ell\in[k(k-1)]:\ell\text{ odd}\}\cap[n^{\prime}-1]\subseteq D_{3} \subseteq D_{\rho}.\] This takes care of the case \(|x-y|\leq 3(k-1)\). Suppose that \(|x-y|>3(k-1)\). The intervals \([(k-1)^{i-2}+2(k-1),(k-1)^{i}-F^{\prime}(k-2,k)]\) and \([(k-1)^{i-1}+2(k-1),(k-1)^{i+1}-F^{\prime}(k-2,k)]\) intersect whenever \(i\geq 3\). Therefore by (2.2) \[[3(k-1),(k-1)^{\rho}]=[(k-1)^{\rho}-F^{\prime}(k-2,k),(k-1)^{\rho}]\cup\bigcup _{i=3}^{\rho}[(k-1)^{i-2}+2(k-1),(k-1)^{i}-F^{\prime}(k-2,k)-1].\] If \((k-1)^{\rho}-F^{\prime}(k-2,k)<|x-y|\) we can use Lemma 5.6 because \(|x-y|\in A_{\rho}\) and thereby obtain \(|x-y|\in A_{\rho}^{\prime}\). Lemma 5.7 then tells us that \(|x-y|\in D_{\rho}\). If not, there exists \(3\leq i\leq\rho\) such that \(|x-y|\in[(k-1)^{i-2}+2(k-1),(k-1)^{i}-F^{\prime}(k-2,k)-1]\), so we have \(|x-y|\in A_{i}\) by definition of \(F^{\prime}(k-2,k)\) and we can apply Lemmas 5.6 and 5.7 to conclude \(|x-y|\in D_{i}\subseteq D_{\rho}\). In the case that \(n^{\prime}\leq(k-1)^{\rho}-F^{\prime}(k-2,k)\) a copy of \(K_{\lfloor n^{\prime}/2\rfloor,\lceil n^{\prime}/2\rceil}\) is present at time \(\rho\). Since \(P_{n^{\prime}}\) is bipartite, so is \(\langle P_{n^{\prime}}\rangle_{C_{k}}\) due to Lemma 3.4. Thus the process has stabilised.

This completes our preliminary investigation of the \(C_{k}\)-process on paths.

### Proof of parts (i) and (ii) of Theorem 3.1

Suppose that \(k\) is odd, and let \(x,y\in V(G)\). If there exists an \(xy\)-path \(Q\) of length at least \(3(k-1)-1\), it satisfies the hypotheses of Proposition 5.8 with \(\rho=r\) and \(n^{\prime}\) being the number of vertices of \(Q\). In that case the vertices of \(Q\) must form a clique in \(G_{r}\). In particular \(x\) and \(y\) are adjacent. If every path from \(x\) to \(y\) in \(G\) has length less than \(3(k-1)-1\), invoke Lemma 5.1 to fix a cycle \(C\subset G_{2}\) containing \(x\), a shortest path \(Q\) from \(y\) to \(V(C)\) in \(G\) and an arbitrary vertex \(z\in V(G)\setminus V(C)\) with \(N_{G}(z)\cap V(C)\neq\varnothing\). The latter vertex exists because \(G\) was assumed to be connected, and is needed because we cannot rule out that \(Q\subset C\) and Lemma 5.3 requires a \((k+1)\)-vertex graph. 
Apply Lemma 5.3 to \(G_{2}[V(C)\cup V(Q)\cup\{z\}]\). As \(\tau_{C_{k}}(G_{2}[V(C)\cup V(Q)\cup\{z\}])\) is trivially bounded by \((|V(C)|+|V(Q)|+1)^{2}/2\), we obtain that \(xy\in E(G_{r})\) when \(n\) and hence \(r\) is sufficiently large. We will see that part (ii) is analogous to part (i), with the role of cliques being played by complete bipartite graphs. So suppose now that \(k\) is even and \(G\) is bipartite with partite sets \(X\), \(Y\), and let \(x\in X\), \(y\in Y\). If there is a path of length at least \(3(k-1)-1\) from \(x\) to \(y\), then Proposition 5.9 with \(\rho=r\) and \(n^{\prime}\) the number of vertices of that path implies \(xy\in E(G_{r})\). Should there be no such path, Lemma 5.1 again allows us to choose a cycle \(C\subset G_{2}\) containing \(x\), a shortest path \(Q\) from \(y\) to \(V(C)\) in \(G\), and \(z\in V(G)\setminus V(C)\) such that \(N_{G}(z)\cap V(C)\neq\varnothing\). Lemma 5.3 applied to \(G_{2}[V(C)\cup V(Q)\cup\{z\}]\) tells us that at time \(r\) each vertex of \(X\cap(V(C\cup Q)\cup\{z\})\) neighbours each vertex of \(Y\cap(V(C\cup Q)\cup\{z\})\).

### Proof of part (iii) of Theorem 3.1

Assume in the following that \(k\) is even and \(G\) is not bipartite. When we dealt with the upper bound for odd cycles and wanted to show that an edge \(xy\) from the final graph occurs at a certain time in the process, it was sufficient to restrict ourselves to an \(xy\)-path in the starting graph. In the case of even \(k\) and non-bipartite \(G\) one has to modify the approach, since all \(xy\)-paths in \(G\) could have even length while the final graph of the \(C_{k}\)-process on a path is not a clique but a complete bipartite graph (cf. Proposition 5.9), and thus the restricted \(C_{k}\)-process does not yield the desired edge. To deal with this issue we consider carefully chosen odd walks instead of odd paths. These odd walks will be chosen so that they contain sufficiently long subwalks without repeated vertices. More precisely, we restrict our attention to odd walks which can be expressed as the union of two paths as specified in the next claim:

**Claim 5.10**.: _Let \(e\in E(\langle G\rangle_{C_{k}})\). Then there exist \(\ell,\ell^{\prime}\in\mathbb{N}_{0}\) with \(\ell^{\prime}\leq\ell\leq n-2\), \(\ell^{\prime}\leq n-3\), and vertices \(v_{0},\ldots,v_{\ell},w_{0},\ldots,w_{\ell^{\prime}}\in V(G)\) such that \(w_{0}v_{0}\ldots v_{\ell}\) and \(v_{0}w_{0}\ldots w_{\ell^{\prime}}\) are paths, \(v_{\ell}\ldots v_{0}w_{0}\ldots w_{\ell^{\prime}}\) is a shortest odd walk between the endpoints of \(e\) in \(G\), and \(v_{j}\neq w_{j^{\prime}}\) for \(j\neq j^{\prime}\)._

Proof.: Take a shortest odd walk \(u_{0}\ldots u_{m}\) between the endpoints of \(e\) in \(G\). The claim is clearly satisfied with \(\ell^{\prime}=0\) and \(\ell=m-1\) when the shortest odd walk is already a path. We therefore assume that the walk has at least one repeated vertex. Observe that for any \(0\leq j<j^{\prime}\leq m\) with \(u_{j}=u_{j^{\prime}}\), \(j-j^{\prime}\) must be odd, for otherwise \(u_{0}\ldots u_{j}u_{j^{\prime}+1}\ldots u_{m}\) would be a shorter odd walk. This implies that no vertex occurs more than twice on \(u_{0}\ldots u_{m}\). 
Set \[j_{0}:=\max\{j\in[m]\mid\exists j^{\prime}>j:u_{j}=u_{j^{\prime}}\}\qquad\text { and }\qquad j_{1}:=\min\{j\in[m]\mid\exists j^{\prime}<j:u_{j}=u_{j^{\prime}}\},\] and let \(j^{\prime}_{0},j^{\prime}_{1}\in[m]\) be the unique integers satisfying \(j^{\prime}_{0}>j_{0},u_{j_{0}}=u_{j^{\prime}_{0}}\) and \(j^{\prime}_{1}<j_{1},u_{j_{1}}=u_{j^{\prime}_{1}}\). Then \(j_{0}<j_{1}\), since otherwise \(u_{0}\ldots u_{j^{\prime}_{1}-1}u_{j_{1}}\ldots u_{j_{0}}u_{j^{\prime}_{0}+1} \ldots u_{m}\) would be an odd walk of length less than \(m\). By the extremality of \(j_{0}\) and \(j_{1}\) we have \(j^{\prime}_{1}\leq j_{0}\) and \(j_{1}\leq j^{\prime}_{0}\). In fact, equality is attained in both of the last two inequalities. Indeed, if one of them was strict, the walk \(u_{0}\ldots u_{j^{\prime}_{1}-1}u_{j_{1}}\ldots u_{j_{0}}u_{j^{\prime}_{0}+1} \ldots u_{m}\) would have length \[j^{\prime}_{1}+(j_{1}-j_{0})+m-j^{\prime}_{0}<j^{\prime}_{1}+(j^{\prime}_{0}- j^{\prime}_{1})+m-j^{\prime}_{0}=m\] so \[0\equiv j^{\prime}_{1}+(j_{1}-j_{0})+m-j^{\prime}_{0}\equiv(j_{1}-j^{\prime}_{ 1})+(j^{\prime}_{0}-j_{0})+m\mod 2\] by the minimality of \(m\). But this would contradict the fact that \(j_{1}-j^{\prime}_{1}\), \(j^{\prime}_{0}-j_{0}\) and \(m\) are all odd. Therefore \(j^{\prime}_{1}=j_{0}\) and \(j^{\prime}_{0}=j_{1}\), which implies \(u_{j_{0}}=u_{j_{1}}\). We conclude that \(j_{1}-j_{0}\geq 3\) and that \(j_{1}+j_{0}=j_{1}-j_{0}+2j_{0}\) is odd, because \(j_{1}-j_{0}\) must be odd. By definition of \(j_{0}\) and \(j_{1}\), both \(u_{0}\ldots u_{j_{1}-1}\) and \(u_{j_{0}+1}\ldots u_{m}\) are paths and each of the vertices \(u_{j_{0}+1},\ldots,u_{j_{1}-1}\) occurs precisely once on \(u_{0}\ldots u_{m}\). Define \(v_{j}\), \(0\leq j\leq\ell:=\lfloor(j_{0}+j_{1})/2\rfloor\), and \(w_{j^{\prime}}\), \(0\leq j^{\prime}\leq\ell^{\prime}:=m-\lceil(j_{0}+j_{1})/2\rceil\) by \[v_{j}:=u_{\left\lfloor\frac{j_{0}+j_{1}}{2}\right\rfloor-j}\quad,\quad w_{j^{ \prime}}:=u_{\left\lceil\frac{j_{0}+j_{1}}{2}\right\rceil+j^{\prime}}.\] Then both \(v_{\ell}\ldots v_{0}w_{0}\) and \(v_{0}w_{0}\ldots w_{\ell^{\prime}}\) are paths. Therefore \(\ell,\ell^{\prime}\leq n-2\). Now we check that \(\ell^{\prime}\leq\ell\) and \(v_{j}\neq w_{j^{\prime}}\) whenever \(j\neq j^{\prime}\). Suppose there were \(j\neq j^{\prime}\) with \(v_{j}=w_{j^{\prime}}\). Due to the definition of \(v_{j},w_{j^{\prime}}\) and the minimality of \(m\) we have \[\left(\left\lfloor\frac{j_{0}+j_{1}}{2}\right\rfloor-j\right)-\left(\left\lceil \frac{j_{0}+j_{1}}{2}\right\rceil+j^{\prime}\right)\equiv 1\mod 2,\] and thus, as \(j_{0}-j_{1}\equiv 1\mod 2\), \[j_{0}-\left\lfloor\frac{j_{0}+j_{1}}{2}\right\rfloor+j\equiv\left\lceil\frac{j _{0}+j_{1}}{2}\right\rceil+j^{\prime}-j_{1}\mod 2.\] But then replacing the longer of the two walks \(u_{j_{0}}\cdots u_{\left\lfloor\frac{j_{0}+j_{1}}{2}\right\rfloor-j}\) and \(u_{j_{1}}\ldots u_{\left\lceil\frac{j_{0}+j_{1}}{2}\right\rceil+j^{\prime}}\) by the shorter one creates an odd walk between the endpoints of \(e\) whose length is less than \(m\). Here we used that these two walks do not have the same length, due to the fact that \(j\neq j^{\prime}\). If \(\ell^{\prime}\leq\ell\) we are done. Otherwise we simply relabel the path by interchanging the roles of \(\ell\) and \(\ell^{\prime}\) and turning \(v_{i}\) into \(w_{i}\) and vice versa. 
Finally, we cannot have \(\ell^{\prime}=\ell=n-2\), as in that case \(w_{\ell^{\prime}}\notin\{v_{0},\ldots,v_{\ell}\}\) gives \(|\{v_{0},\ldots,v_{\ell},w_{0},w_{\ell^{\prime}}\}|=n+1\).

**Remark 5.11**.: The property \(v_{j}\neq w_{j^{\prime}}\) for \(j\neq j^{\prime}\) in Claim 5.10 guarantees that \(\{v_{j}:j\in J\}\cap\{w_{j^{\prime}}:j^{\prime}\in J^{\prime}\}=\emptyset\) whenever \(J\subset[\ell]\) and \(J^{\prime}\subset[\ell^{\prime}]\) are disjoint.

Let \(x,y\in V(G)\) be distinct vertices. If the length of a shortest odd walk between them is smaller than \(3(k-1)^{2}\), we can fix a subgraph of \(G_{2}\) on at least \(k+1\) vertices that contains \(x\), \(y\) and a \(k\)-cycle (the existence of such a subgraph is guaranteed by Lemma 5.1) and apply Lemma 5.3. From now on let the length of a shortest odd walk from \(x\) to \(y\) in \(G\) be at least \(3(k-1)^{2}\). We are done once we have shown \(xy\in E(G_{r})\). Let \(v_{\ell}\dots v_{0}w_{0}\dots w_{\ell^{\prime}}\) be a shortest odd walk from \(x\) to \(y\) or \(y\) to \(x\) as given by Claim 5.10. Note that \(\ell\equiv\ell^{\prime}\mod 2\). The remaining proof is divided into the Claims 5.12, 5.13 and 5.14.

**Claim 5.12**.: _If \(\ell+\ell^{\prime}\leq(k-1)^{r}-(k-1)\cdot F^{\prime}(k-2,k)-3(k-1)^{2}\), we have \(xy\in E(G_{r})\)._

Proof.: Write \(\ell^{\prime}=q^{\prime}(k-1)+s^{\prime}\) and \(\ell=q(k-1)+s\), where \(q,q^{\prime},s,s^{\prime}\in\mathbb{N}_{0}\), \(0\leq s,s^{\prime}\leq k-2\). Recall that \(\ell\geq\ell^{\prime}\) and \(\ell+\ell^{\prime}\geq 3(k-1)^{2}-1\), hence \(q\geq q^{\prime}\) and \(q>1\). At time \(1\), \[P:=w_{0}\dots w_{s^{\prime}}w_{s^{\prime}+(k-1)}\dots w_{s^{\prime}+q^{\prime} (k-1)}\] is a path of length \(q^{\prime}+s^{\prime}\) from \(w_{0}\) to \(w_{\ell^{\prime}}\). It does not contain \(v_{\ell}\) as \(\ell^{\prime}\leq\ell\). If \(s^{\prime}=0\), \[Q_{0}:=v_{0}v_{1}v_{1+(k-1)}\dots v_{1+q(k-1)}v_{1+q(k-1)+1}\dots v_{1+q(k-1)+s-1}\] is a path of length \(q+s\) from \(v_{0}\) to \(v_{\ell}\) which is vertex-disjoint from \(P\), as the indices of the vertices on \(P\) are multiples of \(k-1\) whereas the indices of vertices on \(Q_{0}-v_{0}-v_{\ell}\) are not. The union of \(P\), \(Q_{0}\) and \(v_{0}w_{0}\) is an \(xy\)-path in \(G_{1}\) of length \(q^{\prime}+(q+s)+1\). Note that \[q^{\prime}+(q+s)+1\equiv q^{\prime}(k-1)+q(k-1)+s+1\equiv(\ell-\ell^{\prime}+ 1)\equiv 1\mod 2\] and \[q^{\prime}+q+s+1\leq\frac{\ell^{\prime}+\ell}{k-1}+k-1\leq(k-1)^{r-1}-F^{ \prime}(k-2,k)-2(k-1)<(k-1)^{r-1}-F^{\prime}(k-2,k).\] If \(s^{\prime}>0\), recall that \(q>1\) and consider the path \[Q_{s^{\prime}}:=\begin{cases}v_{0}v_{k-1}\dots v_{q(k-1)}v_{q(k-1)-1}\dots v_{ (q-1)(k-1)+s}v_{\ell}&,\text{ if }s>s^{\prime};\\ v_{0}v_{k-1}\dots v_{q(k-1)}v_{q(k-1)+1}\dots v_{\ell}&,\text{ if }s\leq s^{ \prime}.\end{cases}\] This path has length \(q+(k-1)-s+1\) or \(q+s\). It is vertex-disjoint from \(P\) since \(j\equiv s^{\prime}\mod k-1\) for all \(j\geq s^{\prime}\) with \(w_{j}\in V(P)\), whereas all \(j\in[0,\ell^{\prime}]\) with \(v_{j}\in V(Q_{s^{\prime}})\setminus\{v_{0},v_{\ell}\}\) satisfy \(j>s^{\prime}\) and \(j\not\equiv s^{\prime}\mod k-1\). For the case \(\ell=\ell^{\prime}\) it is important that \(v_{\ell}\) never lies on \(P\), because \(\ell\geq\ell^{\prime}\) and \(v_{\ell}\neq w_{\ell^{\prime}}\) by assumption. 
As \(v_{0}w_{0}\in E(G_{1})\), the union of the paths \(P\), \(Q_{s^{\prime}}\) and the edge \(v_{0}w_{0}\) is an \(xy\)-path of length \(q^{\prime}+s^{\prime}+q+k-s+1\) or \(q^{\prime}+s^{\prime}+q+s+1\), where \[q^{\prime}+s^{\prime}+q+k-s+1\;\equiv\;q^{\prime}+s^{\prime}+q+s+1\;\equiv\; \ell+\ell^{\prime}+1\mod 2.\] The length of \(P\cup Q_{s^{\prime}}\cup\{v_{0}w_{0}\}\) is bounded from above by \[\frac{\ell+\ell^{\prime}}{k-1}+2(k-2)+1<(k-1)^{r-1}-F^{\prime}(k-2,k).\] In all cases we have an odd \(xy\)-path of length at least \(q+q^{\prime}+1\) and less than \((k-1)^{r-1}-F^{\prime}(k-2,k)\). Using \[q+q^{\prime}+1=\left\lfloor\frac{\ell}{k-1}\right\rfloor+\left\lfloor\frac{\ell^{\prime}}{k-1}\right\rfloor+1\geq\left\lfloor\frac{\ell+\ell^{\prime}}{k-1}\right\rfloor \geq 3(k-1)-1,\] so that this path has at least \(3(k-1)\) vertices, we can apply Proposition 5.9 with \(\rho=r-1\) and Observation 2.1 to either \(P\cup Q_{0}\cup\{v_{0}w_{0}\}\) or \(P\cup Q_{s^{\prime}}\cup\{v_{0}w_{0}\}\) to deduce \(xy\in E(G_{1+r-1})=E(G_{r})\).

**Claim 5.13**.: _Set \(w_{-1}:=v_{0}\). Then_ \[v_{\frac{(k-1)^{i}-(k-1)}{2}}w_{\frac{(k-1)^{i}-(k-1)}{2}+k-2}\in E(G_{i}),\] _whenever \(i\geq 1\) with \(\frac{(k-1)^{i}-(k-1)}{2}\leq\ell\) and \(\frac{(k-1)^{i}-(k-1)}{2}+k-2\leq\ell^{\prime}\), and_ \[v_{\frac{(k-1)^{i}-(k-1)}{2}+k-1}w_{\frac{(k-1)^{i}-(k-1)}{2}-1}\in E(G_{i})\] _whenever \(i\geq 1\) with \(\frac{(k-1)^{i}-(k-1)}{2}+(k-1)\leq\ell\) and \(\frac{(k-1)^{i}-(k-1)}{2}-1\leq\ell^{\prime}\)._

Proof.: The size constraints are only necessary to guarantee that the vertices occurring in the statement actually exist. We induct on \(i\geq 1\). When \(i=1\) the claim reads \(v_{0}w_{k-2},v_{k-1}w_{-1}\in E(G_{1})\), which holds since \(v_{0}\) and \(w_{k-2}\), and similarly \(v_{k-1}\) and \(w_{-1}=v_{0}\), are endpoints of paths of length \(k-1\) in \(G\). Suppose that \(i\geq 2\) and the above size constraints are satisfied. Set \[j_{s}:=\frac{(k-1)^{i-1}-(k-1)}{2}+s\cdot(k-1)^{i-1}\quad,\quad 0\leq s\leq(k- 2)/2.\] The induction hypothesis gives \[v_{j_{0}}w_{j_{0}+k-2},v_{j_{0}+k-1}w_{j_{0}-1}\in E(G_{i-1})\] and Lemma 3.5 assures that any two vertices of distance \((k-1)^{i-1}\) in \(G\) are adjacent at time \(i-1\). We have \[j_{s}+k-2\equiv k-2\not\equiv 0\equiv j_{s^{\prime}}\mod k-1\] for any \(0\leq s,s^{\prime}\leq(k-2)/2\) and hence \(w_{j_{s}+k-2}\neq v_{j_{s^{\prime}}}\). Similarly, \(w_{j_{s}-1}\neq v_{j_{s^{\prime}}+k-1}\). Therefore \[v_{j_{(k-2)/2}}\ldots v_{j_{1}}v_{j_{0}}w_{j_{0}+k-2}w_{j_{1}+k-2}\ldots w_{j_ {(k-2)/2}+k-2}\] and \[v_{j_{(k-2)/2}+k-1}\ldots v_{j_{1}+k-1}v_{j_{0}+k-1}w_{j_{0}-1}w_{j_{1}-1} \ldots w_{j_{(k-2)/2}-1}\] are paths of length \(k-1\) in \(G_{i-1}\). The claim now follows from the observation that \[j_{(k-2)/2}=\frac{(k-1)^{i}-(k-1)}{2}.\]

**Claim 5.14**.: _Suppose that \(\ell+\ell^{\prime}>(k-1)^{r}-(k-1)\cdot F^{\prime}(k-2,k)-3(k-1)^{2}\). Then \(xy\in E(G_{r})\)._

Proof.: Recall that \(\ell\equiv\ell^{\prime}\mod 2\) since \(v_{\ell}\ldots v_{0}w_{0}\ldots w_{\ell^{\prime}}\) is an odd walk. Our plan is to find an \(xy\)-path of length \(k-1\) in \(G_{r-1}\). 
By the conditions on \(\ell,\ell^{\prime}\) in Claim 5.10 and the upper bound in (3.3), we have \[\begin{split}\ell\geq\ell^{\prime}&\geq\ell^{\prime}+\ell-(n-2)\\ &>(k-1)^{r}-(k-1)\cdot F^{\prime}(k-2,k)-3(k-1)^{2}-\frac{(k-1)^{r}-(k-1)}{2}+F^{\prime}(k-2,k)\\ &=\frac{(k-1)^{r}}{2}-(k-2)\cdot F^{\prime}(k-2,k)-3(k-1)^{2}+\frac{k-1}{2}.\end{split} \tag{5.2}\] Now choose \[j_{0}\in\left\{\frac{(k-1)^{r-1}-(k-1)}{2},\frac{(k-1)^{r-1}-(k-1)}{2}+k-1\right\}\] and \[j_{0}^{\prime}\in\left\{\frac{(k-1)^{r-1}-(k-1)}{2}-1,\frac{(k-1)^{r-1}-(k-1)}{ 2}+k-2\right\}\] such that \(\ell-j_{0}\equiv\ell^{\prime}-j_{0}^{\prime}\equiv(k-2)/2\mod 2\). The congruences \(\ell-j_{0}\equiv\ell^{\prime}-j_{0}^{\prime}\mod 2\) and \(\ell\equiv\ell^{\prime}\mod 2\) together imply \(j_{0}\equiv j_{0}^{\prime}\mod 2\). Thus \(v_{j_{0}}w_{j_{0}^{\prime}}\) is one of the edges whose presence at time \(r-1\) is guaranteed by Claim 5.13 with \(i=r-1\). We note here that \(v_{j_{0}},w_{j_{0}^{\prime}}\) indeed exist, as \(\ell,\ell^{\prime}\geq j_{0},j_{0}^{\prime}\) due to (5.2) and the fact that \(r\) is sufficiently large. We will now construct a \(v_{j_{0}}v_{\ell}\)-path \(Q\subset G_{r-1}\) and a \(w_{j_{0}^{\prime}}w_{\ell^{\prime}}\)-path \(P\subset G_{r-1}\), both of length \((k-2)/2\) and such that \(V(P)\cap V(Q)=\emptyset\). Then the union of these paths along with the edge \(w_{j_{0}^{\prime}}v_{j_{0}}\) gives an \(xy\)-path of length \(k-1\) in \(G_{r-1}\), and hence \(xy\in E(G_{r})\) as required. To this end we define \[j_{s}^{\prime}:=j_{0}^{\prime}+s\cdot(k-1)^{r-1}\qquad\text{ and }\qquad j_{s}:=j_{0}+s\cdot(k-1)^{r-1},\] for \(1\leq s\leq t:=\frac{k-2}{2}-1=\frac{k-4}{2}\), and claim that \(Q:=v_{j_{0}}v_{j_{1}}\dots v_{j_{t}}v_{\ell}\) and \(P:=w_{j_{0}^{\prime}}w_{j_{1}^{\prime}}\dots w_{j_{t}^{\prime}}w_{\ell^{\prime}}\) are the required paths. First let us check that the vertices used in the paths actually exist. That is, we need to check that \(\ell>j_{t}\) and \(\ell^{\prime}>j_{t}^{\prime}\). This follows because \[\begin{split}\ell^{\prime}-j_{t}&=\ell^{\prime}-j_{0}-t(k-1)^{r-1}\\ &\geq\ell^{\prime}-\left(\frac{(k-1)^{r-1}-(k-1)}{2}+k-1\right)-\left(\frac{k-4}{2}\right)(k-1)^{r-1}\\ &=\ell^{\prime}-\left(\frac{k-3}{2}\right)(k-1)^{r-1}-\frac{k-1}{2}\\ &>(k-1)^{r-1}-(k-2)\cdot F^{\prime}(k-2,k)-3(k-1)^{2}>k,\end{split} \tag{5.3}\] where we used (5.2) in the second last inequality and the fact that \(r\) is sufficiently large in the final inequality. This shows that \(\ell>j_{t}\) and \(\ell^{\prime}>j_{t}^{\prime}\), as \(\ell\geq\ell^{\prime}\) and \(j_{t}^{\prime}\leq j_{t}+(k-2)\). Next we show that the paths \(P,Q\) indeed exist in \(G_{r-1}\). The existence of the edges \(v_{j_{s-1}}v_{j_{s}}\in E(G_{r-1})\) for \(s\in[t]\) is guaranteed by Lemma 3.5, and likewise for the edges \(w_{j_{s-1}^{\prime}}w_{j_{s}^{\prime}}\) with \(s\in[t]\). It remains to establish that \(v_{j_{t}}v_{\ell},w_{j_{t}^{\prime}}w_{\ell^{\prime}}\in E(G_{r-1})\). For this we note that \[\ell-j_{t}\equiv(\ell-j_{0})-(j_{t}-j_{0})\equiv\left(\frac{k-2}{2}\right)- \left(\frac{k-4}{2}\right)\equiv 1\mod 2, \tag{5.4}\] from our choice of \(j_{0}\). Similarly, we have that \(\ell^{\prime}-j_{t}^{\prime}\equiv 1\mod 2\). Moreover, \(\ell,\ell^{\prime}+1\leq n-2\) (Claim 5.10) and \(j_{t},j_{t}^{\prime}+1\geq\frac{k-3}{2}(k-1)^{r-1}-\frac{k-1}{2}\). 
Therefore, appealing to the upper bound of (3.3), we get \[\ell-j_{t},\ell^{\prime}-j_{t}^{\prime}<\frac{(k-1)^{r}-(k-1)}{2}-F^{\prime}( k-2,k)-\left(\frac{k-3}{2}(k-1)^{r-1}-\frac{k-1}{2}\right)=(k-1)^{r-1}-F^{ \prime}(k-2,k).\] Hence Proposition 5.9 gives that both \(v_{j_{t}}v_{\ell}\) and \(w_{j_{t}^{\prime}}w_{\ell^{\prime}}\) are present in \(G_{r-1}\). Finally, we need to establish that \(P\) and \(Q\) are disjoint. Recall that we chose \(j_{0},j_{0}^{\prime}\) such that \(j_{0}\equiv j_{0}^{\prime}\mod 2\), and hence we obtain either \(j_{0}=j_{0}^{\prime}-(k-2)\) or \(j_{0}=j_{0}^{\prime}+k\). Therefore \[j_{s}\equiv j_{0}\not\equiv j_{0}^{\prime}\equiv j_{s^{\prime}}^{\prime}\mod k-1\] for \(0\leq s,s^{\prime}\leq t\), and so \(Q\setminus\{v_{\ell}\}\) and \(P\setminus\{w_{\ell^{\prime}}\}\) do not intersect by Remark 5.11. Moreover, \(v_{\ell}\neq w_{\ell^{\prime}}\) as \(\{v_{\ell},w_{\ell^{\prime}}\}=\{x,y\}\), and since \(\ell\geq\ell^{\prime}>j_{t}\) by (5.3), we get that \(v_{\ell}\notin V(P)\) and \(w_{\ell^{\prime}}\notin V(Q)\). This shows that \(P\cup Q\cup\{w_{j_{0}^{\prime}}v_{j_{0}}\}\) is indeed a path of length \(k-1\) in \(G_{r-1}\), and \(xy\in E(G_{r})\) as required.

## 6. Multiple cycles

In this section, we prove Theorem 1.4.

### Lower bound

The lower bound of the first part is obtained from the starting graph which is the disjoint union of cycles of lengths \(k_{2},\ldots,k_{s}\) and a path on \(n-(k_{2}+\ldots+k_{s})\) vertices. That is, let \(G:=C_{k_{2}}\sqcup\ldots\sqcup C_{k_{s}}\sqcup P_{n-(k_{2}+\ldots+k_{s})}\) with \(H\)-process \((G_{i})_{i\geq 0}\), and let \((P^{i})_{i\geq 0}\) be the \(C_{k_{1}}\)-process on \(P_{n-(k_{2}+\ldots+k_{s})}\subset G\). Every copy of \(C_{k_{1}}\) in \(P^{i}\) can be extended to a copy of \(H\) by the cycles of \(G\). Therefore \(P^{i}\subset G_{i}\) for every \(i\geq 0\), so any two vertices of odd distance on \(P^{0}\) will eventually be adjacent in the \(H\)-process on \(G\) by Theorem 3.1 and Observation 2.1. The following is an analogue of the first part of Lemma 3.5 for multiple cycles:

**Claim 6.1**.: _For any \(x,y\in V(P^{0})\) and \(i\geq 0\), the distance \(\operatorname{dist}_{G_{i}}(x,y)\) satisfies_ \[\operatorname{dist}_{G_{0}}(x,y)\leq(k_{1}-1)^{i}\operatorname{dist}_{G_{i}}(x,y).\]

Proof.: Let \(Q\) be a shortest \(xy\)-path in \(G_{i}\). Any edge \(uv\) of \(Q\) that is not present at time \(i-1\) yields a \(uv\)-path \(Q_{uv}\) of length \(k_{j}-1\) in \(G_{i-1}\) for some \(j\in[s]\). We can build an \(xy\)-walk in \(G_{i-1}\) by replacing every \(uv\in E(Q)\cap E(G_{i})\setminus E(G_{i-1})\) by \(Q_{uv}\). That walk has length at most \((k_{1}-1)\cdot\operatorname{dist}_{G_{i}}(x,y)\) because \(k_{1}\geq k_{j}\) for \(j\in[s]\). Therefore, \[\operatorname{dist}_{G_{i-1}}(x,y)\leq(k_{1}-1)\operatorname{dist}_{G_{i}}(x,y)\] and iterating gives the desired claim.

Let \(x\) be an endpoint of \(P^{0}\). Let \(y\) be the other endpoint if the length of \(P^{0}\) is odd, and the unique \(P^{0}\)-neighbour of the other endpoint otherwise. With these choices \(\operatorname{dist}_{P^{0}}(x,y)\) is odd and at least \(n-(k_{2}+\ldots+k_{s})-2\). Recall that \(\operatorname{dist}_{P^{0}}(x,y)=\operatorname{dist}_{G_{0}}(x,y)\). 
Thus, for \(i_{0}:=\lceil\log_{k_{1}-1}(n-(k_{2}+\ldots+k_{s})-2)\rceil-1\), appealing to Claim 6.1 gives \[\operatorname{dist}_{G_{i_{0}}}(x,y)\geq\frac{\operatorname{dist}_{G_{0}}(x,y)}{(k _{1}-1)^{i_{0}}}\geq\frac{n-(k_{2}+\ldots+k_{s})-2}{(k_{1}-1)^{i_{0}}}>\frac{ n-(k_{2}+\ldots+k_{s})-2}{n-(k_{2}+\ldots+k_{s})-2}=1,\] which implies that \(x\) and \(y\) cannot be adjacent at time \(i_{0}\), hence \(\tau_{H}(G)\geq i_{0}+1\). The lower bound now follows from the simple estimate \(\log_{k_{1}-1}(n)\leq\lceil\log_{k_{1}-1}(n-(k_{2}+\ldots+k_{s})-2)\rceil+1\), which holds when \(n\) is sufficiently large.

### Upper bound

To obtain the upper bound, suppose that \(G\) is an arbitrary \(n\)-vertex graph with \(\tau_{H}(G)=M_{H}(n)\) and \(H\)-process \((G_{i})_{i\geq 0}\). At time \(1\) there exist disjoint copies of \(C_{k_{1}},\ldots,C_{k_{s}}\). Fix any such copies and denote them by \(C_{1}^{\prime},\ldots,C_{s}^{\prime}\), where \(C_{j}^{\prime}\) has length \(k_{j}\) for \(j\in[s]\). Note that, as in the proof of Observation 3.3, the vertex sets of components in \(G\) are fixed throughout the process and at no point in the process will an edge between two different connected components be added. We can therefore run our analysis on edges appearing only within connected components of \(G\). Suppose first that \(Z\subset V(G_{1})\) is the vertex set of a component of \(G_{1}\) that does not contain any of the \(C_{j}^{\prime}\), \(j\in[s]\), if such a \(Z\) exists. For each \(j\in[s]\), every copy \(P^{\prime}\) of \(P_{k_{j}}\) at time \(i\) with vertices in \(Z\) can be extended to \(C_{1}^{\prime}\cup\ldots\cup C_{j-1}^{\prime}\cup P^{\prime}\cup C_{j+1}^{\prime}\cup\ldots\cup C_{s}^{\prime}\), so the endpoints of \(P^{\prime}\) are adjacent at time \(i+1\). This implies that if no edge is added inside \(Z\) at time \(i\) then \(\langle G\rangle_{H}[Z]=G_{i}[Z]\). Therefore if \(G[Z]\) is already \(C_{k_{j}}\)-stable for every \(j\in[s]\) at time \(1\) or has at most \(k_{1}\) vertices, it will be \(C_{k_{j}}\)-stable for every \(j\) at time \(k_{1}^{2}+1\). Otherwise we have \(|Z|\geq k_{1}+1\) and one can find a \(k_{j}\)-cycle in \(G_{2}[Z]\) for some \(j\in[s]\). Since \(k_{s}=\min_{j\in[s]}k_{j}\) there must be a \(k_{s}\)-cycle in \(G_{3}[Z]\). Pick \(Z^{\prime}\subseteq Z\) of size \(k_{1}+1\) such that \(G_{3}[Z^{\prime}]\) is connected and contains a \(k_{s}\)-cycle. Let \(i_{0}:=3+M_{C_{k_{s}}}(k_{1}+1)\). By Lemma 5.3 with respect to the \(C_{k_{s}}\)-process on \(G_{3}[Z^{\prime}]\), there is a copy of \(P_{k_{1}}\) in \(G_{i_{0}}[Z^{\prime}]\), and thus a copy of \(C_{k_{1}}\) in \(G_{i_{0}+1}[Z^{\prime}]\). We have that \(\langle G_{3}[Z]\rangle_{C_{k_{s}}}\subseteq G_{i_{1}}[Z]\) where \(i_{1}:=i_{0}+1+M_{C_{k_{1}}}(n)\). By Lemma 5.3 applied to the \(C_{k_{1}}\)-process on \(G_{i_{0}+1}[Z]\), we have that \(G_{i_{1}}[Z]\) is either complete or contains a complete bipartite graph with at least \(\lfloor k_{1}/2\rfloor\) vertices in either part. If \(G_{i_{1}}[Z]\) is a complete bipartite graph with at least \(\lfloor k_{1}/2\rfloor\) vertices in either part, it will be \(C_{k_{j}}\)-stable for all \(j\) after two more steps (see Footnote 1). If not, Lemma 5.2 tells us that \(Z\) will be a clique at time \(i_{2}:=i_{1}+2\). 
This shows that a component not containing any of the \(C^{\prime}_{j}\) will stabilise by time \(i_{2}=M_{C_{k_{1}}}(n)+M_{C_{k_{s}}}(k_{1}+1)+6\leq\log_{k_{1}-1}(n)+k_{1}^{2}+6\). Footnote 1: Any permutation of the vertices inside one of the partite sets defines an automorphism of the complete bipartite graph, so the only way a bootstrap process can add new edges is by turning one of the partite sets into a clique. This can happen at most two times. It remains to analyse components containing the cycles \(C^{\prime}_{j}\). Let \(V^{\prime}:=V(C^{\prime}_{1})\cup\ldots\cup V(C^{\prime}_{s})\) and for \(j\in[s]\), let \(U^{\prime}_{j}\) be the set of vertices in \(V(G)\setminus V^{\prime}\) for which there exists a path to \(C^{\prime}_{j}\) in \(G_{1}\) that does not involve any vertices from \(C^{\prime}_{\ell}\) for each \(\ell\neq j\). Furthermore, let \[U_{j,i}:=\bigcup_{v\in V(C^{\prime}_{j})}N_{G_{i}}(v)\setminus V^{\prime}.\] Any \(k_{j}\)-cycle in \(U^{\prime}_{j}\cup V(C^{\prime}_{j})\) or \(k_{1}\)-cycle in \(U^{\prime}_{j}\) can be extended to a copy of \(H\). We claim that for all \(j\in[s]\) we have \(U^{\prime}_{j}\subseteq U_{j,i_{2}}\). Indeed, if \(u\in U^{\prime}_{j}\) then there is a path \(P_{u}\) in \(G_{1}\) from \(u\) to \(C^{\prime}_{j}\) avoiding the \(C^{\prime}_{\ell}\) with \(\ell\neq j\). If \(P_{u}\) has length at least \(k_{1}+1\) (and hence \(k_{1}+1\) vertices disjoint from \(C^{\prime}_{j}\)), Theorem 3.1 applied with \(k=k_{1}\) to \(P_{u}\) gives that \(u\) has distance at most 3 from \(C^{\prime}_{j}\) in \(G_{i^{\prime}_{1}}\) with \(i^{\prime}_{1}:=M_{C_{k_{1}}}(n)+1\). Therefore at time \(i^{\prime}_{1}\) we can assume that every vertex in \(U^{\prime}_{j}\) is of distance at most \(k_{1}+1\) from \(C^{\prime}_{j}\). By Lemma 5.3 with \(k=k_{j}\) on \(G_{i^{\prime}_{1}}[U^{\prime}_{j}\cup V(C^{\prime}_{j})]\), we have that indeed \(U^{\prime}_{j}\subseteq U_{j,i_{2}}\), using that \(i^{\prime}_{1}+M_{C_{k_{j}}}(k_{j}+k_{1})\leq i_{2}\) because \(n\) is sufficiently large. Thus, the \(U_{j,i_{2}}\), \(j\in[s]\), cover \(\cup_{j}U^{\prime}_{j}\). In the following if \(U\subset V(G)\) we define \(N_{G_{i}}(U):=\{v\in V(G)\setminus U:N_{G_{i}}(v)\cap U\neq\emptyset\}\), that is, \(U\) is disjoint from its neighbourhood \(N_{G_{i}}(U)\) by definition. **Claim 6.2**.: _For any \(j\in[s]\) and \(i\geq i_{2}\) the following hold:_ 1. \(N_{G_{i}}(U_{j,i}\cup V(C^{\prime}_{j}))\setminus V^{\prime}\subset U_{j,i+1}\)_._ 2. _If_ \(U_{j,i}\) _is non-empty, either_ \(G_{i+k_{j}^{2}+2}[U_{j,i}\cup V(C^{\prime}_{j})]\) _is complete or_ \(k_{j}\) _is even and_ \(G_{i+k_{j}^{2}+2}[U_{j,i}\cup V(C^{\prime}_{j})]\) _contains a spanning complete bipartite graph with partite sets of size at least_ \(k_{j}/2\)_._ 3. _If_ \(\ell\in[s]\setminus\{j\}\) _and_ \(|U_{j,i}\cap U_{\ell,i_{2}}|\geq 3\)_, then_ \(U_{\ell,i_{2}}\subseteq U_{j,i+k_{1}^{2}+3}\)_._ Proof.: Let \(j\in[s]\) be fixed. (1) Every \(u\in U_{j,i}\cup V(C^{\prime}_{j})\) has a neighbour on \(C^{\prime}_{j}\), so by going around \(C^{\prime}_{j}\) we can pick \(x_{j}(u)\in V(C^{\prime}_{j})\) such that \(u\) and \(x_{j}(u)\) are the endpoints of a path of length \(k_{j}-2\) in \(V(C^{\prime}_{j})\cup\{u\}\). Therefore, if \(uv\in E(G_{i})\) for some \(v\in V(G)\setminus(U_{j,i}\cup V^{\prime})\), we have \(x_{j}(u)v\in E(G_{i+1})\) and thus \(v\in U_{j,i+1}\). Here it is important that \(v\notin V^{\prime}\) so we may extend the path of length \(k_{j}-2\) to a copy of \(H\) minus an edge. 
(2) Recall that any \(k_{j}\)-cycle with vertices in \(U_{j,i}\cup V(C^{\prime}_{j})\) can be extended to a copy of \(H\) using \(C^{\prime}_{1},\ldots,C^{\prime}_{s}\). Lemma 5.3 implies that for any non-empty \(U\subseteq U_{j,i}\) of size at most two, \(G_{i+k_{j}^{2}+2}[U\cup V(C^{\prime}_{j})]\) is complete or contains a spanning complete bipartite graph with partite sets of size at least \(k_{j}/2\). Here we used the estimate \(M_{C_{k_{j}}}(k_{j}+|U|)\leq\binom{k_{j}+2}{2}\leq k_{j}^{2}+2\). (3) Let \(x\in U_{\ell,i_{2}}\setminus U_{j,i}\). If \(x\) has a \(G_{i+k_{1}^{2}+2}\)-neighbour in \(U_{j,i+k_{1}^{2}+2}\), then \(x\in U_{j,i+k_{1}^{2}+3}\) by part (1). Now suppose that \(x\) does not have such a neighbour. Since \(|U_{j,i}\cap U_{\ell,i_{2}}|\geq 3\), we have that \(G_{i+k_{1}^{2}+2}[U_{\ell,i}\cup V(C^{\prime}_{\ell})]\) contains both \(x\) and a vertex from \(U_{j,i}\) and thus cannot be complete. Then part (2) forces \(k_{\ell}\) to be even (in particular \(k_{\ell}\geq 4\)) and \(G_{i+k_{1}^{2}+2}[U_{\ell,i}\cup V(C^{\prime}_{\ell})]\) to contain a spanning complete bipartite graph. Label the vertices of \(C^{\prime}_{\ell}\) by \(v_{0},\ldots,v_{k_{\ell}-1}\) such that \(E(C^{\prime}_{\ell})=\{v_{0}v_{1},\ldots,v_{k_{\ell}-1}v_{0}\}\) and \(xv_{1}\in E(G_{i_{2}})\). Let \(y,y^{\prime},y^{\prime\prime}\) be three distinct vertices in \(U_{j,i}\cap U_{\ell,i_{2}}\). As \(N_{G_{i+k_{1}^{2}+2}}(x)\cap\{y,y^{\prime},y^{\prime\prime}\}=\emptyset\) we have \(yv_{t},y^{\prime}v_{t},y^{\prime\prime}v_{t}\in E(G_{i+k_{1}^{2}+2})\) for all odd \(t\in[0,k_{\ell}-1]\). Label the vertices of \(C^{\prime}_{j}\) by \(w_{0},\ldots,w_{k_{j}-1}\) such that \(yw_{3}\in E(G_{i+k_{1}^{2}+2})\), and \(r\in\{1,2\}\) such that \(y^{\prime}w_{r},y^{\prime\prime}w_{r}\in E(G_{i+k_{1}^{2}+2})\). Now \[xv_{1}yw_{3}\ldots w_{k_{j}-1}\] is a path of length \(k_{j}-1\) that is vertex-disjoint from the \(k_{\ell}\)-cycle \[v_{3}\ldots v_{k_{\ell}-1}y^{\prime}w_{r}y^{\prime\prime}v_{3}.\] Together with the cycles \(C_{t}^{\prime}\), \(t\in[s]\setminus\{j,\ell\}\) they form a copy of \(H\) minus the edge \(xw_{k_{j}-1}\). Therefore, \(x\in U_{j,i+k_{1}^{2}+3}\). Define \[R_{j,i}:=\{\ell\in[s]:U_{\ell,i_{2}}\subseteq U_{j,i}\}\] and note that \(R_{j,i}\subseteq R_{j,i+1}\) because \(U_{j,i}\subseteq U_{j,i+1}\). Let \(c:=2s^{2}+3s+2k_{1}^{2}+5+\binom{|V^{\prime}|}{2}\). **Claim 6.3**.: _For every \(i\geq i_{2}\) with \(i+c\leq\tau_{H}(G)\) there exists \(j\in[s]\) such that \(|R_{j,i+c}|>|R_{j,i}|\)._ Proof.: Let \(i\geq i_{2}\) such that \(i+c\leq\tau_{H}(G)\). A simple application of the Pigeonhole Principle shows that whenever \(|U_{j,i+c-(k_{1}^{2}+3)}|>|U_{j,i}|+2(s-|R_{j,i}|)\) for some \(j\in[s]\), there exists \(\ell\in[s]\setminus R_{j,i}\) such that \(|U_{j,i+c-(k_{1}^{2}+3)}\cap U_{\ell,i_{2}}|\geq 3\). In that case part (3) of Claim 6.2 tells us that \(U_{\ell,i_{2}}\subseteq U_{j,i+c}\) and hence \(|R_{j,i+c}|>|R_{j,i}|\). It remains to find \(j\in[s]\) satisfying \(|U_{j,i+c-(k_{1}^{2}+3)}|>|U_{j,i}|+2(s-|R_{j,i}|)\). Suppose for a contradiction that no such \(j\) exists. Every edge that is added at time \(i^{\prime}+1\) for some \(i^{\prime}\geq i\) either lies in \(U_{j,i}\cup V(C_{j}^{\prime})\) for some \(j\in[s]\) or lies in \(V^{\prime}\) or has one endpoint in \(U_{j,i}\cup V(C_{j}^{\prime})\) and the other in \(U_{\ell,i_{2}}\) for some \(j\in[s]\), \(\ell\in[s]\setminus R_{j,i}\). 
For any \(j\in[s]\), part (2) of Claim 6.2 and Lemma 5.2 together imply \[|\{i^{\prime}\geq i+k_{j}^{2}+2:G_{i^{\prime}}[U_{j,i}\cup V(C_{j}^{\prime})] \neq G_{i^{\prime}+1}[U_{j,i}\cup V(C_{j}^{\prime})]\}|\leq 3.\] There are at most \(\binom{|V^{\prime}|}{2}\) steps in which a new edge with both endpoints in \(V^{\prime}\) can be added. By averaging we can find \(j\in[s]\) such that \[\left|\left\{i^{\prime}\in[i+k_{1}^{2}+2,i+c-(k_{1}^{2}+3)-1]:N_{G_{i^{\prime}+ 1}}(U_{j,i}\cup V(C_{j}^{\prime}))\setminus V^{\prime}\neq\emptyset\right\} \right|\geq\frac{c-(2k_{1}^{2}+5)-3s-\binom{|V^{\prime}|}{2}}{s},\] and consequently, by part (1) of Claim 6.2 and the definition of \(c\), \[|U_{j,i+c-(k_{1}^{2}+3)}|\;\geq\;|U_{j,i}|+\frac{c-(2k_{1}^{2}+5)-3s-\binom{|V ^{\prime}|}{2}}{s}\;\geq\;|U_{j,i}|+2s\;>\;|U_{j,i}|+2(s-|R_{j,i}|).\]

Consider the sequence \(i_{2},i_{2}+c,\ldots,i_{2}+s(s-1)c\). The hypothesis \(i+c\leq\tau_{H}(G)\) from Claim 6.3 cannot hold for every \(i\) in that sequence, since for every \(j\in[s]\) there can be at most \(s-1\) different \(i\in\{i_{2},i_{2}+c,\ldots,i_{2}+s(s-1)c\}\) such that \(|R_{j,i+c}|>|R_{j,i}|\). Therefore, \[\tau_{H}(G)<i_{2}+s(s-1)\cdot c\leq M_{C_{k_{1}}}(n)+k_{1}^{3}s^{4}-1.\] Finally, we apply Theorem 1.2 to obtain \(\tau_{H}(G)\leq\log_{k_{1}-1}(n)+k_{1}^{3}s^{4}\) whenever \(n\) is sufficiently large, as required.
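As an aside, the \(C_{k}\)-process is straightforward to simulate, and small instances give a convenient sanity check of statements such as Proposition 5.8 or the logarithmic running time on paths. The following minimal Python sketch is not part of the paper: the brute-force search for paths with \(k-1\) edges is an illustrative choice and is only practical for small \(n\).

```python
from itertools import combinations

def has_simple_path(adj, x, y, length):
    """Depth-first search for a simple path with exactly `length` edges from x to y."""
    def dfs(u, remaining, visited):
        if remaining == 0:
            return u == y
        for w in adj[u]:
            if w in visited:
                continue
            if w == y and remaining > 1:
                continue  # y may only appear as the last vertex of a simple x-y path
            if dfs(w, remaining - 1, visited | {w}):
                return True
        return False
    return dfs(x, length, {x})

def ck_process(n, edges, k):
    """Synchronous C_k-bootstrap process on the vertex set {0, ..., n-1}.

    In every round, each non-edge whose endpoints are joined by a path with
    k-1 edges (so adding the edge closes a k-cycle) is added; rounds repeat
    until the graph is stable.  Returns (running time, final adjacency dict).
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    t = 0
    while True:
        new = [(x, y) for x, y in combinations(range(n), 2)
               if y not in adj[x] and has_simple_path(adj, x, y, k - 1)]
        if not new:
            return t, adj
        for x, y in new:
            adj[x].add(y)
            adj[y].add(x)
        t += 1

# Example: the C_4-process on the path P_10; the final graph should be K_{5,5}.
tau, final = ck_process(10, [(i, i + 1) for i in range(9)], 4)
print(tau, sum(len(nbrs) for nbrs in final.values()) // 2)  # running time, edge count
```

For a path \(P_{n^{\prime}}\) one can read the sets \(D_{i}\) off the returned adjacency structure and watch them grow roughly as predicted by Lemmas 5.4 and 5.5.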
2305.08038
First Principles and Machine Learning Identify Key Pairing Strength Factors of Cuprate Superconductors
By using band structure calculations of quantum mechanical theory, some important peaks of the DoS (Density of States) were obtained and classified based on crystal structure laws of cuprate superconductivity. In particular, the orbital interactions of the in-plane and out-of-plane ions of the copper-oxygen plane were investigated. The position, half-width, and height of DoS peak features were collected for all 35 typical cuprate systems which have critical temperature maximum data in the literature. By training and testing 7 common machine learning algorithms, the relationship between the Tc maximum values and these orbital interaction parameters was mined. It was found that the key features of the orbital interaction affecting the Tc maximum were not only the flat band but also a new interaction between core orbitals in a deeper energy band position.
Xinyu He, Ning Chen, Jingpei Chen, Xuezhou Wang, Yang Li
2023-05-14T00:59:41Z
http://arxiv.org/abs/2305.08038v1
## First Principles and Machine Learning Identify Key Pairing Strength Factors of Cuprate Superconductors

### Abstract

By using band structure calculations of quantum mechanical theory, some important peaks of the DoS (Density of States) were obtained and classified based on crystal structure laws of cuprate superconductivity. In particular, the orbital interactions of the in-plane and out-of-plane ions of the copper-oxygen plane were investigated. The position, half-width, and height of DoS peak features were collected for all 35 typical cuprate systems which have critical temperature maximum data in the literature. By training and testing 7 common machine learning algorithms, the relationship between the Tc maximum values and these orbital interaction parameters was mined. It was found that the key features of the orbital interaction affecting the Tc maximum were not only the flat band but also a new interaction between core orbitals in a deeper energy band position.

### Introduction

The electron pairing mechanism of high-temperature superconductivity has been one of the greatest challenges of condensed matter physics for almost four decades[1], as a result of a number of complex and deeply hidden factors influencing the critical temperature (\(T_{\mathrm{c}}\)) of superconductivity, where \(T_{\mathrm{c}}\) represents the stiffness of superconductivity, which combines both electronic pairing factors and carrier concentration or doping factors [2]. By means of experimental studies on \(T_{\mathrm{c}}\) variations with the environment, composition, and structure of materials, these two kinds of influence factors have been found,[3] and some complex relationships were also understood through machine learning (ML) studies on a large data set of 12,000 \(T_{\mathrm{c}}\) values.[4, 5] However, the governing strength of pairing factors remains unknown because of the following two technical problems. The first is that the weight of the raw \(T_{\mathrm{c}}\) data lies mainly on doping factors and much less on pairing ones. For example, nearly 6000 experimental \(T_{\mathrm{c}}\) values cover mainly 35 typical systems with only one \(T_{\mathrm{c}}\) maximum value for each system, i.e., only a 6% data weight for the pairing factor, so that the trained ML model predicts not a new system with a higher \(T_{\mathrm{c}}\) maximum or a stronger pairing factor, but in fact only a new system with suitable doping.[6, 7] As the \(T_{\mathrm{c}}\) maximum is different in each system for a similar optimal doping of copper oxides [8], the pairing strength of one system should be represented by its \(T_{\mathrm{c}}\) maximum value in ML training tests in order to search for pairing factors. On the other hand, the second problem is that these pairing factors were mainly observed from features or attributes of atoms or ions rather than from explicit orbitals or electronic interactions[9]. Although electronic structures can easily be obtained by the first-principles approach of quantum physics, it is hard to tackle both the complex construction of doped models and the computational burden for thousands of doping systems. Even if this problem were solved, the limited data set problem would still remain due to the very low weight ratio on the pairing factor. Therefore, changing the target data set to the \(T_{\mathrm{c}}\) maximum as well as training on key orbital features is the only way to solve the existing technical problems. 
## Method

For cuprates, there are three crystal categories: the layer structure for the Hg, Tl, Pb, and Bi families, the 123 structure, and the 214 structure. But even for one structure, there are also many complex orbital interaction features to be considered. Here we provide a simpler method based on the orbital peaks of interacting bands from Density of States (DoS) diagrams obtained by band structure calculations of first principles, in which one peak represents only one band, ignoring its orbital sub-peaks in detail. In addition, its position, half-width, and height, quite different from those of its atomic orbital, are chosen as the peak's features to discuss the interactions between different orbitals. As a result of the lack of \(T_{\mathrm{c}}\) maximum data values, the number of influencing features for ML training had to be limited by introducing domain knowledge, so the crystal structure laws of cuprate superconductivity are quite important for narrowing down the candidate orbital features. In this paper, we especially analyze a relatively large energy range of the energy bands to cover most of the important \(s\), \(p\), and \(d\) peaks of the DoS. [10] Firstly, the copper-oxygen plane is the main superconducting "highway" for high temperature superconductivity, so the oxygen ion's \(2p\) and copper ion's \(3d\) orbitals need to be focused on, which are thought to be a common feature of cuprate superconductors. Secondly, as the key crystal structure difference of each cuprate system mainly comes from the nearest neighboring cation beside the copper-oxygen plane,[11] the second outer saturated orbitals of these cations had to be taken into account. For a similar reason, the second outer saturated \(2s\) orbital of the oxygen ion is included, since this orbital is similar in energy and space. Therefore, a much deeper range of orbitals, down to \(-30\,eV\) below the Fermi level, was studied in this work, which is quite different from other works (down to \(-10\,eV\)), in order to emphasize all of these important orbital interactions around the copper-oxygen plane. Figure 1 shows a schematic diagram of some important orbitals in the crystal structure, as well as the classification of their orbital peaks in the DoS corresponding to a typical part of the cuprate crystal structure. The calculation procedure and parameters are detailed in reference [7]. In particular, we focused on four types of ions (1#-4#) in the near-neighbor positions outside the plane (1# ions occupy a relatively low \(p\) orbital quite far away from the Fermi level, and 2# ions a relatively high one close to the Fermi level) and in the copper-oxygen plane (3# are copper ions and 4# are oxygen ions). Here, the ionic classification is defined for the purpose of studying the interaction between some important orbitals in two energy bands, in which, we believe, the electron orbital interaction is described in more detail than by the strength and type of a chemical bond. Therefore, in total, four key ions coupled with two different energy bands compose those 8 important peaks selected from the DoS of the first-principles calculations, and in total 21 basic features (position, half-width, and height) of the orbital peaks were collected for ML training. 
Actually, it should be noticed that the most important selection principle comes not only from basic structure laws of cuprate systems but also from the simplification of some important orbital coupling parameters.

Figure 1: Schematic diagram of the key four ions and two energy bands for the copper-oxygen plane of cuprates, and the classification of some important peaks in the DoS: 1#, 2# for neighboring cations with deeper and shallower orbital levels compared to \(E_{\mathrm{F}}\), respectively; 3# for copper ions; and 4# for oxygen ions.

The standardized identification of the position of the orbital peaks is relatively important: it is easy to identify relatively independent peaks, but difficult to identify peaks with orbital coupling phenomena, so some special definitions should be adopted. Here S, P, and D represent orbitals derived from the \(s\), \(p\), and \(d\) orbitals, so that S14 or S24 actually represent the \(2s\) orbitals of the oxygen ion influenced by the 1# or 2# neighboring cations, and P14 or P24 represent the \(p\) orbitals of the 1# or 2# neighboring cations, respectively, as they are influenced by the \(2s\) orbital of the 4# oxygen ions.

## Results

Based on the data in **Appendix A**, hundreds of training tests were done with the 7 commonly used machine learning algorithms, coupled with different training routes and test sample selections, and the predicted model results were obtained from the best results of all the different training tests for the different algorithms. Here, it is quite important for us to discuss the SHAP (SHapley Additive exPlanations) value analysis, by which we can understand the weight distribution of each peak feature on the \(T_{\mathrm{c}}\) maximum value. As shown in Figure 2a, the vertical coordinate represents the 21 different orbital features, the horizontal coordinates are their corresponding SHAP values, and red and blue indicate positive and negative correlations with \(T_{\mathrm{c}}\), respectively. Figure 2a shows the SHAP analysis of an elastic network regression model, which is quite similar to the average over all 7 methods. Among the 21 compared features, the strongest ones are P14_L, S14_L, P24_L, and D43_W; P14_L is negatively correlated with the \(T_{\mathrm{c}}\) maximum, while S14_L, P24_L, and D43_W are positively correlated with it. Figure 2b also shows a schematic diagram of the training test results. It shows that the deeper the P14 orbital position, the larger the \(T_{\mathrm{c}}\) maximum, as well as the ordering of some other peaks with respect to the \(T_{\mathrm{c}}\) maximum. By comparing the best experimental results of these seven models, it was found that the SHAP weight analyses of the 7 algorithms sometimes differ in order, but the features ranked in the top 8 were roughly similar for all 7 methods. Among the three kinds of peak parameters, the position was the most critical influencing factor, only two peak half-width factors (D43_W and P34_W) were obtained, and no peak height features were found to influence the \(T_{\mathrm{c}}\) maximum, as shown in Table 1. As shown in Table 1, among the different algorithms or models, the SHAP analysis histogram of the Bayesian ridge regression model shows a relatively smaller number of features, with the stronger correlations among P14_L, P24_L, D34_W, and D34_L.

Fig. 2: **Elastic network regression model SHAP analysis: (a) bar graph (upper left); (b) a schematic diagram of the DoS (right).** 
From the results of the elastic network regression model, the SHAP values of P14_L, S14_L, and P24_L are relatively large. For the gradient boost regression model SHAP analysis, the SHAP value of P14_L is unusually large and also has a negative correlation with the \(T_{\mathrm{c}}\) maximum. The SHAP values of P14_L and S14_L are relatively large in the SHAP analysis of the K-nearest neighbor regression model results. Furthermore, the Lasso regression model is somewhat special: the SHAP values of D34_W, D34_L, P24_L, and P14_L are relatively larger. For the random forest regression model, the SHAP values of P14_L, D34_W, and P34_L are relatively larger, but with a larger error. Finally, from the support vector machine regression model, the results show that the three features P24_L, P14_L, and S14_L are more strongly correlated with the \(T_{\mathrm{c}}\) maximum, also with a larger error. In summary, despite the differences between the models, the ranges of the main candidate factors are similar, with only some differences in the SHAP value weight ranks. Compared with the others, only the random forest and gradient boosting models have a larger error in the SHAP ranks; it seems these two methods may require a larger amount of data, as they are usually applied to large-sample data sets.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline Model Name & MAE & MSE & Test Index & P14\_L & P24\_L & D34\_W & S14\_L & D34\_L & D5\_L \\ \hline Bayesian ridge regression[12] & 0.057 & 0.003 & 25 & -7.4 & 10.8 & 7.2 & 2.9 & 6.5 & 5.9 \\ Elastic network regression[13] & 0.138 & 0.019 & 19 & -7.6 & 6.1 & 2.2 & 6.1 & 1.9 & 2.2 \\ Gradient boost regression[14] & 0.123 & 0.015 & 21 & -19.5 & 0.2 & 2.8 & 0.2 & 1.5 & 0.4 \\ K-nearest neighbor regression[15] & 0.167 & 0.028 & 13 & -12.9 & 1.8 & 0.4 & 10.4 & 0.6 & 1.6 \\ Lasso regression[16] & 0.052 & 0.003 & 0 & -5.6 & 7.8 & 12.3 & 2.6 & 8.1 & 4.5 \\ Random forest regression[17] & 0.308 & 0.095 & 19 & -8.8 & 1.4 & 6.4 & 0.4 & 2.2 & 0.6 \\ Support vector machine regression[18] & 0.394 & 0.155 & 20 & -5.6 & 11.1 & 2.5 & 0.1 & 1.2 & 3.1 \\ \hline 7 model mean value & - & - & - & -9.6 & 5.6 & 4.8 & 3.3 & 3.1 & 2.6 \\ \hline \end{tabular} \end{table} Table 1: SHAP weight analysis and errors of the seven models' influence factors on the \(T_{\mathrm{c}}\) maximum.

## Discussion

From the governing characteristic parameters of the seven ML training results, it was shown that there are two important energy-band features highly influencing the \(T_{\mathrm{c}}\) maximum. The first one is the outer-layer coupling energy band from P34 and D43, which is the well-known \(pd\) coupling around the Fermi energy level, corresponding to the traditional valence band of the copper-oxygen plane. Actually, it is an important law of the flat-band feature of cuprate superconductors, which was thought to explain the formal doping law that more suitable conductive carriers enhance the \(T_{\mathrm{c}}\) maximum. The other one is a new inner orbital coupling with P14 and S14, which is a special coupling between the \(2s\) core orbital of the O ion and the \(p\) core orbital of the 1# neighboring cation. 
Actually, no further prediction model was discussed in this work, since we have no candidate for a new cuprate superconductor system here. As multiple factors had been obtained by ML, it is obviously hard to do such work with a common statistical method. But if we could find some parameters directly related to the \(T_{\mathrm{c}}\) maximum, rather than orbital features, we might also search for a simpler statistical law of the \(T_{\mathrm{c}}\) maximum; for example, the energy span of the two energy bands was a suitable feature of the band structure, as our previous work showed.[9] As the core orbital coupling found by ML is more important for the \(T_{\mathrm{c}}\) maximum than the formal flat-band character, we think that this new direction is reasonable, since it agrees well with Anderson's "More is Different", which is thought to be a basic law of condensed matter physics. Furthermore, according to the Moriya-Ueda theory, the frequency spread of the wavevector-dependent part of the spin fluctuation is mainly restricted to a bandwidth.[19] Therefore, a bandwidth should be a governing factor to answer the question about the difference of the \(T_{\mathrm{c}}\) maximum for each cuprate system.[20] But it should be noted that the "real" bandwidth could be taken as covering not only the \(pd\) coupling band but also the new \(sp\) interaction, in which these two coupling bands could be connected, through the \(2s\) and \(2p\) orbitals of the O ion, by correlated or mutually entangled orbitals. Therefore, two orbital feature laws should be used to explain the strength of the pairing factor, and this new idea might help us find some new high-\(T_{\mathrm{c}}\) superconductors, rather than only some suitably doped systems.
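The paper does not publish its code, but the workflow it describes, namely seven regressors trained on the 21 peak features of the 35 systems and ranked by mean absolute SHAP value, can be reproduced along the following lines. This is a minimal Python sketch assuming the feature matrix and the \(T_{\mathrm{c}}\)-maximum targets are available as plain arrays; the file names, split, and hyperparameters below are placeholders, not taken from the paper.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import BayesianRidge, ElasticNet, Lasso
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Placeholder inputs: X has shape (35, 21) holding the peak position /
# half-width / height features, and y holds the 35 Tc-maximum values.
X = np.loadtxt("peak_features.csv", delimiter=",")
y = np.loadtxt("tc_maximum.csv", delimiter=",")

models = {
    "Bayesian ridge": BayesianRidge(),
    "Elastic network": ElasticNet(),
    "Gradient boost": GradientBoostingRegressor(),
    "K-nearest neighbor": KNeighborsRegressor(),
    "Lasso": Lasso(),
    "Random forest": RandomForestRegressor(),
    "Support vector machine": SVR(),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Model-agnostic SHAP values; with only 35 samples KernelExplainer is cheap.
    explainer = shap.KernelExplainer(model.predict, X_tr)
    shap_values = explainer.shap_values(X_te)
    top = np.argsort(-np.abs(shap_values).mean(axis=0))[:6]  # features by mean |SHAP|
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f} "
          f"MSE={mean_squared_error(y_te, pred):.3f} top features={top}")
```

With a data set this small, the ranking of the top features is far more stable across the seven models than the error metrics themselves, which matches the observation above that the random forest and gradient boosting models carry larger errors.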
2308.01786
Unique properties of the optical activity in noncentrosymmetric superconductors: sum rule, missing area, and relation with the superconducting Edelstein effect
We present general properties of the optical activity in noncentrosymmetric materials, including superconductors. We derive a sum rule of the optical activity in general electric states and show that the summation of the spectrum is zero, which is independent of the details of electric states. The optical activity has a $\delta$-function singularity that vanishes in normal phases. However, the singularity emerges in superconducting phases, corresponding to the Meissner effect in the optical conductivity. The spectrum decreases by the superconducting gap and has a missing area compared to the normal phase. This area is exactly equivalent to the coefficient of the $\delta$-function singularity due to the universal sum rule. Furthermore, the coefficient is exactly equivalent to the superconducting Edelstein effect, which has not yet been observed in experiments. Thus, this measurement of the missing area offers an alternative way to observe the superconducting Edelstein effect.
Koki Shinada, Robert Peters
2023-08-03T14:35:36Z
http://arxiv.org/abs/2308.01786v2
# Unique properties of the optical activity in noncentrosymmetric superconductors: sum rule, missing area, and relation with the superconducting Edelstein effect

###### Abstract

We present general properties of the optical activity in noncentrosymmetric materials, including superconductors. We derive a sum rule of the optical activity in general electric states and show that the summation of the spectrum is zero, which is independent of the details of electric states. The optical activity has a \(\delta\)-function singularity that vanishes in normal phases. However, the singularity emerges in superconducting phases, corresponding to the Meissner effect in the optical conductivity. The spectrum decreases by the superconducting gap and has a missing area compared to the normal phase. This area is exactly equivalent to the coefficient of the \(\delta\)-function singularity due to the universal sum rule. Furthermore, the coefficient is exactly equivalent to the superconducting Edelstein effect, which has not yet been observed in experiments. Thus, this measurement of the missing area offers an alternative way to observe the superconducting Edelstein effect.

## I Introduction

Optical responses are one of the key research topics in condensed matter physics because they offer valuable insights into diverse material characteristics, such as momentum-resolved electronic spectral functions using angle-resolved photoemission spectroscopy, and symmetry breaking and associated domains using the Kerr effect and second harmonic generation. The wide range of optical frequencies, spanning from microwaves to X-rays, enables the investigation of phenomena across an extensive spectrum of energy scales. Recently, terahertz spectroscopy has also been attracting attention because important energy scales in condensed matter physics exist in this regime, such as the superconducting gap and collective excitations of magnets [1]. Optical responses have also played an essential role in the research on superconductors. It dates back to the observation of the superconducting gap in thin films of Pb using far-infrared light in 1956, which gave the first evidence of the superconducting gap [2]. Furthermore, the optical conductivity has contributed to the identification of the gap symmetry of superconductors and the exact measurement of the superfluid density or the magnetic penetration length through the use of a sum rule. This measurement has mainly been done in high-temperature superconductors [3; 4; 5; 6; 7]. In recent times, the research area of optical responses in superconductors has become more diverse; the third harmonic generation observing the Higgs mode [8] and the optical conductivity in noncentrosymmetric superconductors [9; 10; 11; 12; 13; 14; 15] are actively studied. In this work, we will extend the study of optical responses to the optical activity in superconductors. The optical activity represents one of the optical responses, and it originates from the spatial dispersion of the optical conductivity, exhibiting optical rotation, dichroism, and birefringence depending on material symmetries [16; 17; 18; 19; 20; 21]. It comprises two categories depending on the existence of time-reversal symmetry (\(\mathcal{T}\)). One is the natural optical activity with \(\mathcal{T}\)-symmetry, and the other is the spatially-dispersive magneto-optical effect or the optical magnetoelectric effect without \(\mathcal{T}\)-symmetry. 
Spatial inversion symmetry breaking is necessary for a finite optical activity, which is observed in various systems including chiral molecules as well as noncentrosymmetric crystals. Despite the ubiquity of optical activity, theoretical studies have mainly been carried out for molecular systems [22; 23; 24; 25; 26], and research on solids has not reached the same level of development. The band theory of the optical activity in solids has been developed in several works [27; 28; 29; 30; 31; 32; 33; 34] and has been applied to various systems, including chiral crystals [35; 36; 37; 38], twisted bilayer graphenes [39; 40; 41; 42; 43; 44], and a topological antiferromagnet [45]. Recently, the optical activity has been formulated through the multipole theory in solids, revealing the correspondence with molecular systems [46], and first-principles calculations have been carried out based on this formulation [47]. While there has been gradual progress in theoretical studies of the optical activity in the normal phase, the case of noncentrosymmetric superconductors remains largely unexplored, except for a few works [29; 48]. Recently, noncentrosymmetric superconductors have attracted increasing attention [49] because they host novel superconducting states, such as parity-mixed superconductors, topological superconductors, and helical superconductors with finite-momentum Cooper pairs. They furthermore display unique magnetoelectric responses and nonreciprocal phenomena due to inversion symmetry breaking, including the superconducting Edelstein effect [50], the magnetochiral anisotropy [51; 52; 53], and the superconducting diode effect [54; 55; 56; 57]. The optical activity should give valuable information about these superconductors, as it is peculiar to systems with broken inversion symmetry. In this paper, we show the general properties of the optical activity in noncentrosymmetric systems, including superconductors. First, in Sec. II, we derive a sum rule of the optical activity valid in all systems and reveal that the summation depends neither on material details nor on the electronic state. Second, in Sec. III, we formulate the optical activity using Green's functions and discuss a no-go theorem stating the absence of a \(\delta\)-function singularity, which means that the corresponding equilibrium response is forbidden in the normal phase. Furthermore, we show a typical optical spectrum calculated for a two-dimensional model with Rashba spin-orbit coupling and confirm the sum rule. Third, we discuss the optical activity in noncentrosymmetric superconductors in Sec. IV. In this section, we formulate the optical activity for superconductors and demonstrate that the no-go theorem is violated. For this reason, the singularity appears, and the optical spectrum of the optical activity is reduced compared to the normal phase. This reduction, called the _missing area_, is exactly equivalent to the coefficient of the \(\delta\)-function singularity because of the universal sum rule. We further reveal that the missing area is exactly equivalent to the superconducting Edelstein effect, in which a magnetization is induced by supercurrents and which has not been experimentally observed; the relation established here thus provides an alternative way to observe this effect. We also calculate the optical activity in a two-dimensional noncentrosymmetric superconductor to verify the typical behavior of the missing area. Finally, we conclude in Sec. V. 
## II Sum rule of the optical activity In this section, we discuss a sum rule of the optical activity. The optical activity is one of the responses to light and is related, in particular, to the optical rotation and to the nonreciprocity caused by inversion symmetry breaking. A related effect is the magneto-optical effect, which is not included in this paper because it does not require inversion symmetry breaking. The optical activity is theoretically described by a spatially dispersive optical conductivity. When an electromagnetic wave is applied, the electric current as well as the orbital and spin moments interact with the light. These responses are described by a general current-current correlation function in which the current operator is conjugate to the electromagnetic vector potential and includes the spin moments. First, as a preparation, we discuss an exact symmetry of the current-current correlation function \(\Phi_{\mu\nu}(\mathbf{q},\omega)\), where \(\mathbf{q}\) is the wave number and \(\omega\) is the frequency. The following relationship holds between this correlation function and its complex conjugate: \[\Phi_{\mu\nu}^{*}(\mathbf{q},\omega)=\Phi_{\mu\nu}(-\mathbf{q},-\omega). \tag{1}\] This derivation requires only the Hermiticity of the current operator (see Appendix A for a detailed derivation). Next, we expand this correlation function in the wave number \(\mathbf{q}\) and discuss the symmetry of the zeroth-order term \(\Phi_{\mu\nu}(\omega)=\Phi_{\mu\nu}(\mathbf{0},\omega)\) and the first-order term \(\Phi_{\mu\nu\lambda}(\omega)=\partial_{q_{\lambda}}\Phi_{\mu\nu}(\mathbf{0},\omega)\). Separating the correlation function into real and imaginary parts, we find that the following symmetry relations for the frequency hold: \[\left\{\begin{aligned} &\text{Re}\Phi_{\mu\nu}(-\omega)=+\text{Re}\Phi_{\mu\nu}(\omega)\\ &\text{Im}\Phi_{\mu\nu}(-\omega)=-\text{Im}\Phi_{\mu\nu}(\omega)\end{aligned}\right. \tag{2a}\] \[\left\{\begin{aligned} &\text{Re}\Phi_{\mu\nu\lambda}(-\omega)=-\text{Re}\Phi_{\mu\nu\lambda}(\omega)\\ &\text{Im}\Phi_{\mu\nu\lambda}(-\omega)=+\text{Im}\Phi_{\mu\nu\lambda}(\omega).\end{aligned}\right. \tag{2b}\] The zeroth-order term is even in the real part and odd in the imaginary part, and the opposite relations hold for the first-order term. Next, we derive the sum rule. The spatially dispersive optical conductivity is given by \[\sigma_{\mu\nu}(\mathbf{q},\omega)=\frac{\Phi_{\mu\nu}(\mathbf{q},\omega)-D_{\mu\nu}}{i(\omega+i\delta)}. \tag{3}\] Here, \(D_{\mu\nu}\) is the diamagnetic term, which is real, and \(\delta=+0\) is an adiabatic factor. Using the identity \(\lim_{\delta\to+0}1/(\omega+i\delta)=\mathscr{P}/\omega-i\pi\delta(\omega)\), the optical conductivity is divided into its real and imaginary parts as \[\text{Re}\sigma_{\mu\nu}(\mathbf{q},\omega)=\mathscr{P}\frac{\text{Im}\Phi_{\mu\nu}(\mathbf{q},\omega)}{\omega}-\pi\delta(\omega)(\text{Re}\Phi_{\mu\nu}(\mathbf{q},\omega)-D_{\mu\nu}) \tag{4a}\] \[\text{Im}\sigma_{\mu\nu}(\mathbf{q},\omega)=-\mathscr{P}\frac{\text{Re}\Phi_{\mu\nu}(\mathbf{q},\omega)-D_{\mu\nu}}{\omega}-\pi\delta(\omega)\text{Im}\Phi_{\mu\nu}(\mathbf{q},\omega). \tag{4b}\] Here, \(\delta(\omega)\) is the \(\delta\)-function and \(\mathscr{P}\) denotes the principal value. 
Similarly, expanding the optical conductivity in \(\mathbf{q}\), the real part of the zeroth-order term is an even function of \(\omega\) and the imaginary part is odd, as can be seen from Eq. (2a); the opposite relations are satisfied by the first-order term, as shown in Eq. (2b). The zeroth-order term is the usual optical conductivity, while the first-order term is called the optical activity; the latter is finite only in noncentrosymmetric systems and is the main quantity of this paper. For the even functions, we can find the following sum rules. Using the Kramers-Kronig relation, the sum rule for the usual optical conductivity (the zeroth order) is established [58], \[\int_{0}^{\infty}d\omega\text{Re}\sigma_{\mu\nu}(\omega)=\frac{\pi}{2}D_{\mu\nu}. \tag{5}\] Next, the sum rule for the optical activity reads (see Appendix A for a detailed derivation) \[\int_{0}^{\infty}d\omega\text{Im}\sigma_{\mu\nu\lambda}(\omega)=0. \tag{6}\] This relation is the first main result of this paper: the summation vanishes universally, independent of material details. This property will be important in the following discussion. This sum rule was partially derived for molecular systems [22; 23], which are finite systems, and was later extended to infinite systems, i.e., crystals [27; 28; 46]; however, those derivations are limited to noninteracting band theory. Equation (6), by contrast, generalizes the sum rule without any such assumption and is also valid for, e.g., interacting systems and superconducting states. ## III Optical activity in noncentrosymmetric crystals In this section, we discuss general properties, such as symmetry constraints and a no-go theorem, as well as typical behaviors of the optical activity in noninteracting crystals. ### Symmetry classification for the optical activity: natural optical activity and optical magnetoelectric effect Response functions are, in general, constrained by time-reversal symmetry, and this constraint is given by the Onsager reciprocal theorem. The optical conductivity satisfies the reciprocal relation [18] \[\sigma_{\mu\nu}(\mathbf{q},\omega,\mathbf{M})=\sigma_{\nu\mu}(-\mathbf{q},\omega,-\mathbf{M}). \tag{7}\] Here, \(\mathbf{M}\) is a time-reversal symmetry-breaking field such as an external magnetic field or a magnetization. Thus, the symmetric and antisymmetric parts of the optical activity behave differently under the interchange of the indices \(\mu\leftrightarrow\nu\) as [30; 46] \[\sigma^{(S)}_{\mu\nu\lambda}(\omega,\mathbf{M})=-\sigma^{(S)}_{\mu\nu\lambda}(\omega,-\mathbf{M}) \tag{8a}\] \[\sigma^{(A)}_{\mu\nu\lambda}(\omega,\mathbf{M})=+\sigma^{(A)}_{\mu\nu\lambda}(\omega,-\mathbf{M}). \tag{8b}\] These equations show that the symmetric part \(\sigma^{(S)}_{\mu\nu\lambda}\) is odd and the antisymmetric part \(\sigma^{(A)}_{\mu\nu\lambda}\) is even under the time-reversal operation \(\mathcal{T}\). Thus, the symmetric part requires time-reversal breaking (\(\mathbf{M}\neq 0\)), whereas the antisymmetric part does not. The optical activity is further restricted by spatial inversion symmetry. The optical activity tensors are odd under the spatial-inversion operation \(\mathcal{P}\); therefore, the antisymmetric part vanishes in systems with \(\mathcal{PT}\) symmetry. Because these two parts obey different symmetry constraints, they bear different names: the antisymmetric part is called the natural optical activity (NOA), and the symmetric part the optical magnetoelectric effect. 
The NOA mainly comprises the optical rotation and the circular dichroism and has been studied for a long time. Its history dates back to the first observation by Arago in 1811, who showed that quartz displays optical rotation. The NOA is often used to distinguish chiral molecules, because enantiomers, i.e., mirror-image molecules, exhibit the NOA with a sign opposite to that of the original molecules. Furthermore, the NOA is also active in chiral solids, such as Te and Se [59], and in twisted bilayer graphene, as noted in the introduction. From the symmetry viewpoint, the NOA can appear even in \(\mathcal{T}\)-symmetric systems, where it purely reflects the crystal symmetry. The antisymmetric optical activity behaves as a rank-2 axial tensor \(\alpha_{\xi\lambda}=\varepsilon_{\mu\nu\xi}\sigma^{(A)}_{\mu\nu\lambda}\) (\(\varepsilon_{\mu\nu\xi}\) is the totally antisymmetric tensor), and this tensor is active in gyrotropic point groups (GPGs) [60; 61]. GPGs are divided into strong and weak GPGs; the weak GPGs are \(\mathrm{C_{3v}}\), \(\mathrm{C_{4v}}\), and \(\mathrm{C_{6v}}\). These two classes generate different types of NOA: the optical rotation is active in strong GPGs but does not appear in weak GPGs. Weak GPGs instead display the Voigt-Fedorov dichroism, a specific reflection phenomenon [62; 63; 64], which was observed, for example, in CdS with \(\mathrm{C_{6v}}\) symmetry [65; 66]. Furthermore, in spin-orbit coupled systems, the NOA includes the optical Edelstein effect, in which an AC current induces a dynamical magnetization [67; 68]. The symmetric part can be decomposed into a rank-2 axial tensor \(\beta_{\mu\xi}=\varepsilon_{\nu\lambda\xi}\sigma^{(S)}_{\mu\nu\lambda}\) and a rank-3 totally symmetric tensor \(\gamma_{\mu\nu\lambda}=\sigma^{(S)}_{\mu\nu\lambda}+\sigma^{(S)}_{\nu\lambda\mu}+\sigma^{(S)}_{\lambda\mu\nu}\). \(\beta_{\mu\xi}\) corresponds to the optical magnetoelectric response [20]; it induces, e.g., the directional dichroism and the directional birefringence. This response is observed in the typical magnetoelectric material \(\mathrm{Cr_{2}O_{3}}\) [69; 70], and magnetoelectric optics now allows domain imaging of antiferromagnets [71; 72; 73]. Furthermore, the optical magnetoelectric response is now widely observed [21], and the response caused by magnons in multiferroic magnets has also been reported [74; 75]. \(\gamma_{\mu\nu\lambda}\) is known to be an electric quadrupole response [30; 46], which also induces the directional dichroism [33]. ### Green's function formula of the optical activity for noninteracting systems We derive the Green's function formula of the optical activity for noninteracting systems. The noninteracting Hamiltonian without electromagnetic waves is \[H_{0}=\frac{\mathbf{p}^{2}}{2m}+V(\mathbf{x})+\frac{1}{4m^{2}}\Big{(}\frac{\partial V(\mathbf{x})}{\partial\mathbf{x}}\times\mathbf{p}\Big{)}\cdot\mathbf{\sigma}. \tag{9}\] Here, \(\mathbf{p}\) and \(\mathbf{x}\) are the momentum and position operators, respectively, \(m\) is the mass of an electron, \(V(\mathbf{x})=V(\mathbf{x}+\mathbf{a})\) is a periodic potential, and \(\mathbf{\sigma}\) is the vector of Pauli matrices representing the spin degrees of freedom. This Hamiltonian is diagonalized by the Bloch wave functions \(|\psi_{n\mathbf{k}}\rangle\) (\(\mathbf{k}\) is the Bloch wave number and \(n\) is the band index) as \(H_{0}\left|\psi_{n\mathbf{k}}\right\rangle=\epsilon_{n\mathbf{k}}\left|\psi_{n\mathbf{k}}\right\rangle\). 
For the following discussion, we define the Bloch Hamiltonian \(H_{\mathbf{k}}=e^{-i\mathbf{k}\cdot\mathbf{x}}H_{0}e^{i\mathbf{k}\cdot\mathbf{x}}\) and the periodic part of the Bloch function \(\left|u_{n\mathbf{k}}\right\rangle=e^{-i\mathbf{k}\cdot\mathbf{x}}\left|\psi_{n\mathbf{k}}\right\rangle\). Then, introducing electromagnetic waves through the vector potential \(\mathbf{A}(\mathbf{x},t)\), the momentum changes as \(\mathbf{p}\rightarrow\mathbf{p}+e\mathbf{A}(\mathbf{x},t)\) (\(-e<0\) is the charge of the electron) and a Zeeman term is added. The first-order perturbation Hamiltonian is given by \[H_{A}=\frac{e}{2}\Big{(}\mathbf{v}\cdot\mathbf{A}(\mathbf{x},t)+\mathbf{A}(\mathbf{x},t)\cdot\mathbf{v}\Big{)} \tag{10a}\] \[H_{B}=\frac{g_{S}\mu_{B}}{2}(\partial_{\mathbf{x}}\times\mathbf{A}(\mathbf{x},t))\cdot\mathbf{\sigma}. \tag{10b}\] Here, \(\mathbf{v}=i[H_{0},\mathbf{x}]\) is the velocity operator, \(g_{S}=2.002\cdots\) is the spin \(g\)-factor, and \(\mu_{B}=e/2m\) is the Bohr magneton. The generalized current operator \(\mathbf{J}(\mathbf{r})\), conjugate to the vector potential \(\mathbf{A}(\mathbf{r},t)\) (\(\mathbf{r}\) is a position coordinate, not an operator), is defined as \[\mathbf{J}(\mathbf{r})\equiv-\frac{\delta(H_{A}+H_{B})}{\delta\mathbf{A}(\mathbf{r},t)}=-\frac{e}{2}\{\mathbf{v},\delta(\mathbf{r}-\mathbf{x})\}+\frac{g_{S}\mu_{B}}{2}(\mathbf{\sigma}\times\partial_{\mathbf{r}})\delta(\mathbf{r}-\mathbf{x}). \tag{11}\] This current operator is the quantity induced by the interaction with electromagnetic waves. Following the dynamical linear response theory given by the Kubo formula, the current-current correlation function expressed through Green's functions is \[\Phi_{\mu\nu}(\mathbf{q},\Omega)=\int[d^{4}k]f(\omega)\mathrm{Tr}\Big{[}G^{RA}(\mathbf{k}-,\omega)j^{\mu}_{\mathbf{k},\mathbf{q}}G^{R}(\mathbf{k}+,\omega+\Omega)j^{\nu}_{\mathbf{k},-\mathbf{q}}+G^{A}(\mathbf{k}-,\omega-\Omega)j^{\mu}_{\mathbf{k},\mathbf{q}}G^{RA}(\mathbf{k}+,\omega)j^{\nu}_{\mathbf{k},-\mathbf{q}}\Big{]}. \tag{12}\] Here, \(G^{R/A}(\mathbf{k},\omega)=1/(\omega-H_{\mathbf{k}}+\mu\pm i\Gamma)\) is the retarded/advanced Green's function, and we define \(G^{RA}=G^{R}-G^{A}\), \(\mathbf{k}\pm=\mathbf{k}\pm\mathbf{q}/2\), and \(j^{\mu}_{\mathbf{k},\mathbf{q}}=-ev^{\mu}_{\mathbf{k}}-\frac{g_{S}\mu_{B}}{2}(i\mathbf{q}\times\mathbf{\sigma})_{\mu}\). \(v^{\mu}_{\mathbf{k}}=\partial H_{\mathbf{k}}/\partial k_{\mu}\) is the velocity operator of the Bloch Hamiltonian, \(\mu\) is the chemical potential, and \(f(\omega)=1/(e^{\beta\omega}+1)\) is the Fermi distribution function at temperature \(1/\beta\). The integral symbol is abbreviated as \(\int[d^{4}k]=\int_{-\infty}^{\infty}d\omega/(2\pi i)\int_{\mathrm{BZ}}d^{3}k/(2\pi)^{3}\). We phenomenologically introduce the dissipation effect by assuming a finite \(\Gamma\). In this calculation, we neglect the diamagnetic term \(D_{\mu\nu}\), which is independent of \(\mathbf{q}\) and hence does not contribute to the optical activity. The optical activity arises from the first-order term of this correlation function in \(\mathbf{q}\); thus, two different contributions appear. One comes from the spin part of the current operator \(j^{\mu}_{\mathbf{k},\mathbf{q}}\), and the other is the orbital contribution given by the expansion of the Green's functions in \(\mathbf{q}\). In the following, we focus on the spin contribution for simplicity and only consider crystal symmetries and dimensions for which the orbital contribution is absent. 
The spin term is given by \[\Phi_{\mu\nu\lambda}(\Omega)=\frac{ieg_{S}\mu_{B}}{2}\int[d^{4}k]f(\omega)\mathrm{Tr}\Big{[}\varepsilon_{\mu\lambda\theta}\Big{\{}G^{RA}(\mathbf{k},\omega)\sigma_{\theta}G^{R}(\mathbf{k},\omega+\Omega)v^{\nu}_{\mathbf{k}}+G^{A}(\mathbf{k},\omega-\Omega)\sigma_{\theta}G^{RA}(\mathbf{k},\omega)v^{\nu}_{\mathbf{k}}\Big{\}}-\varepsilon_{\nu\lambda\theta}\Big{\{}G^{RA}(\mathbf{k},\omega)v^{\mu}_{\mathbf{k}}G^{R}(\mathbf{k},\omega+\Omega)\sigma_{\theta}+G^{A}(\mathbf{k},\omega-\Omega)v^{\mu}_{\mathbf{k}}G^{RA}(\mathbf{k},\omega)\sigma_{\theta}\Big{\}}\Big{]}. \tag{13}\] Then, we obtain the formula of the optical activity, \(\sigma_{\mu\nu\lambda}(\Omega)=\Phi_{\mu\nu\lambda}(\Omega)/i(\Omega+i\delta)\). ### No-go theorem and typical behaviors of the optical activity We discuss a general property, namely a no-go theorem, using the obtained formula (Eq. 13). The optical activity appears to have a singularity at \(\Omega=0\) due to the \(\delta\)-function in Eqs. (4a) and (4b). However, we can prove that this singularity vanishes in the normal phase. Indeed, taking the limit \(\Omega\to 0\), the integrand in Eq. (13) can be rewritten as a total derivative with respect to the wave number \(\mathbf{k}\) using the identity \(G^{R/A}v^{\lambda}_{\mathbf{k}}G^{R/A}=\partial_{\lambda}G^{R/A}\); thus \(\Phi_{\mu\nu\lambda}(\Omega=0)=0\) and the singularity vanishes. This result shows that the optical activity vanishes in equilibrium. However, this is not the case in superconductors, as we will discuss later. Next, we calculate the optical activity in a simple model and discuss its typical behaviors. We use a two-dimensional noncentrosymmetric system with Rashba spin-orbit coupling; we note, however, that the optical activity is also expected in three-dimensional systems with other types of spin-orbit coupling. The Hamiltonian reads \[H=\sum_{\mathbf{k}\sigma}\epsilon_{\mathbf{k}}c^{\dagger}_{\sigma\mathbf{k}}c_{\sigma\mathbf{k}}+\alpha\sum_{\mathbf{k}\sigma\sigma^{\prime}}\mathbf{g}_{\mathbf{k}}\cdot\mathbf{\sigma}_{\sigma\sigma^{\prime}}c^{\dagger}_{\sigma\mathbf{k}}c_{\sigma^{\prime}\mathbf{k}}. \tag{14}\] Here, we define \(\epsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})\) and \(\mathbf{g}_{\mathbf{k}}=(\sin k_{y},-\sin k_{x},0)\). In this paper, we set the lattice constant \(a=1\). This model is \(\mathcal{T}\)-symmetric and belongs to the weak gyrotropic point group \(\mathrm{C_{4v}}\). Thus, the symmetric part of the optical activity vanishes, while the antisymmetric part is not forbidden. For this symmetry, there is only one independent component, \(\sigma_{yzy}=-\sigma_{zyy}=-\sigma_{zxx}=\sigma_{xzx}\). Figure 1 (Left) shows the energy dispersion, with a band splitting due to the spin-orbit coupling. Figure 1 (Right) shows the frequency dependence of the optical activity \(\sigma_{zxx}\) at zero temperature. In the numerical calculation, we set \(t=1.0\), \(\alpha=0.3\), \(\mu=-0.1\), and \(\Gamma=0.1\). As seen in Fig. 1 (Left), direct transitions (which conserve the wave number \(\mathbf{k}\)) near the chemical potential \(\mu=-0.1\) cost an energy of about \(0.1\sim 0.3\). At frequencies \(\Omega\) comparable to this gap, the imaginary part of the optical activity changes its sign. The frequency dependence, including the sign change, can be explained in terms of intraband and interband transitions. 
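As a concrete illustration of how Eq. (13) can be evaluated numerically, the following minimal sketch computes the regular part of \(\mathrm{Im}\sigma_{zxx}(\Omega)\) for the model of Eq. (14). It is not the production code behind Fig. 1: the \(\mathbf{k}\)- and \(\omega\)-meshes are kept coarse, the zero-temperature limit \(f(\omega)=\theta(-\omega)\) is taken, and the overall prefactors (\(e\), \(g_{S}\), \(\mu_{B}\)) are set to simple illustrative values, so only the line shape, not the absolute scale, is meaningful.

```python
import numpy as np

# Minimal sketch: spin contribution to sigma_zxx(Omega) for the Rashba model
# (Eq. 14) evaluated from the Green's-function formula (Eq. 13).
# For mu = z, lambda = x only epsilon_{zxy} = +1 survives, so sigma_theta = sigma_y,
# and the second bracket vanishes because epsilon_{xx theta} = 0.
t, alpha, mu, Gamma = 1.0, 0.3, -0.1, 0.1   # parameters used in Fig. 1
e, gS, muB = 1.0, 2.0, 1.0                  # illustrative prefactors (assumption)

k = np.linspace(-np.pi, np.pi, 64, endpoint=False)  # coarse BZ mesh (Fig. 1 uses 800x800)
KX, KY = np.meshgrid(k, k, indexing="ij")
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
I2 = np.eye(2, dtype=complex)

eps = -2*t*(np.cos(KX) + np.cos(KY))
H = (eps[..., None, None]*I2
     + alpha*(np.sin(KY)[..., None, None]*sx - np.sin(KX)[..., None, None]*sy))
vx = (2*t*np.sin(KX))[..., None, None]*I2 - alpha*np.cos(KX)[..., None, None]*sy

def G(w, sign):
    """Retarded (sign=+1) / advanced (sign=-1) Green's function on the k-mesh."""
    return np.linalg.inv((w + mu + sign*1j*Gamma)*I2 - H)

def sigma_zxx(Omega, wgrid):
    """Regular part of sigma_zxx(Omega); f(w) = theta(-w) at T = 0."""
    dw = wgrid[1] - wgrid[0]
    acc = 0.0 + 0.0j
    for w in wgrid:                            # occupied frequencies, w < 0
        GRA = G(w, +1) - G(w, -1)
        tr = (np.einsum('...ab,bc,...cd,...da', GRA, sy, G(w + Omega, +1), vx)
              + np.einsum('...ab,bc,...cd,...da', G(w - Omega, -1), sy, GRA, vx))
        acc += tr.mean()*dw                    # BZ average = int d^2k/(2 pi)^2
    Phi = (1j*e*gS*muB/2)*acc/(2j*np.pi)       # prefactor and int dw/(2 pi i)
    return Phi/(1j*Omega)

wgrid = np.linspace(-6.0, 0.0, 240)            # band bottom lies near -4t
print(sigma_zxx(0.2, wgrid).imag)              # Im sigma_zxx near the resonance
```

Refining the meshes and extending the \(\omega\)-grid should reproduce the two-peak structure of Fig. 1 (Right).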
The band representation of the imaginary part of the optical activity is \[\begin{split}&\text{Im}\sigma_{zxx}^{\text{intra}(A)}(\Omega)=\frac{-eg_{S}\mu_{B}}{2}\frac{\tau}{(\Omega\tau)^{2}+1}\sum_{n\mathbf{k}}\frac{\partial f(\tilde{\epsilon}_{n\mathbf{k}})}{\partial k_{x}}\sigma_{nn}^{y}\\ &\text{Im}\sigma_{zxx}^{\text{inter}(A)}(\Omega)=\frac{eg_{S}\mu_{B}}{2}\sum_{n\neq m,\mathbf{k}}\frac{f_{mn\mathbf{k}}}{\epsilon_{mn\mathbf{k}}}\frac{\tau}{\tau^{2}(\epsilon_{mn\mathbf{k}}-\Omega)^{2}+1}\text{Re}\Big{[}v_{\mathbf{k}mn}^{x}\sigma_{nm}^{y}\Big{]}.\end{split} \tag{15}\] This expression is composed of two parts, the intraband effect and the interband effect. Here, we define the matrix elements \(M_{mn}=\bra{u_{m\mathbf{k}}}M\ket{u_{n\mathbf{k}}}\), \(f_{mn\mathbf{k}}=f(\tilde{\epsilon}_{m\mathbf{k}})-f(\tilde{\epsilon}_{n\mathbf{k}})\), \(\epsilon_{mn\mathbf{k}}=\epsilon_{m\mathbf{k}}-\epsilon_{n\mathbf{k}}\), and \(\tilde{\epsilon}_{n\mathbf{k}}=\epsilon_{n\mathbf{k}}-\mu\). The intraband effect is a Fermi-surface effect and follows the Drude form \(\sim\tau/(1+(\Omega\tau)^{2})\), as seen in Eq. (15); this behavior appears at low frequencies in Fig. 1. The interband effect, on the other hand, is enhanced at frequencies resonant with the band gap (\(\sim 0.2\) in the current model), as seen in Eq. (15), and Fig. 1 shows that the sign of the spectrum changes at the corresponding frequency \(\Omega\sim 0.2\). The universal sum rule explains why the interband term must provide this sign change: the low-frequency peak originating from the intraband effect has a constant sign because of the Drude form, so the interband effect must produce a spectrum of opposite sign that cancels the intraband contribution and fulfills the universal sum rule (the summation is zero in Eq. 6). The interband term therefore generates the high-frequency peak with the opposite sign. Moreover, the numerical result in Fig. 1 (Right) confirms the sum rule: the integrated spectrum is numerically zero (\(\sim-0.0001917\cdots\)). ## IV Optical activity in noncentrosymmetric superconductors As discussed in previous studies [29; 48], a superconducting gyrotropic current changes the frequency dependence of the optical rotation. In this section, we derive the optical activity in superconductors using Green's functions and discuss general properties, including the sum rule and the missing area. In addition, we confirm these properties by a model calculation. ### Green's function formula of the optical activity for superconductors We formulate the optical activity for superconductors with Green's functions. In this paper, the superconducting state is treated in the mean-field approximation and is described by the Bogoliubov-de Gennes (BdG) Hamiltonian in the band representation, \[H_{\text{BdG}}=\frac{1}{2}\sum_{\mathbf{k}nm}\psi_{n\mathbf{k}}^{\dagger}H_{\mathbf{k}nm}^{\text{BdG}}\psi_{m\mathbf{k}}, \tag{16a}\] \[H_{\mathbf{k}}^{\text{BdG}}=\begin{pmatrix}H_{\mathbf{k}}-\mu&-\Delta_{\mathbf{k}}\\ -\Delta_{\mathbf{k}}^{\dagger}&-H_{-\mathbf{k}}^{\mathrm{T}}+\mu\end{pmatrix}. \tag{16b}\] Figure 1: (Left) The energy dispersion of the Rashba spin-orbit coupling model (Eq. 14). (Right) Numerical results for the optical activity \(\sigma_{zxx}\) of the model Hamiltonian (Eq. 14). We set \(t=1.0\), \(\alpha=0.3\), \(\mu=-0.1\), and \(\Gamma=0.1\). The mesh of the wavenumbers in the BZ is \(800\times 800\). 
Energy and \(\Omega\) are given in units of \(t\), and \(\sigma_{zxx}\) in units of \(e^{2}a/\hbar\); in these units, the electron mass is \(m\approx\hbar^{2}/(ta^{2})\). Here, \(\mathbf{\psi}_{\mathbf{k}}^{\dagger}=(c_{1\mathbf{k}}^{\dagger},\cdots,c_{N\mathbf{k}}^{\dagger},c_{1-\mathbf{k}},\cdots,c_{N-\mathbf{k}})\) is the Nambu spinor and \(\Delta_{\mathbf{k}}\) is the pair potential, i.e., the order parameter of the superconductor. We denote the transpose of a matrix \(M\) by \(M^{\mathrm{T}}\) and its Hermitian conjugate by \(M^{\dagger}\). The current-current correlation function for the generalized current operator (Eq. 11) is given by \[\Phi_{\mu\nu}(\mathbf{q},\Omega)=\frac{1}{2}\int[d^{4}k]f(\omega)\mathrm{Tr}\Big{[}G^{RA}_{\rm BdG}(\mathbf{k}-,\omega)\tilde{j}^{\mu}_{\mathbf{k},\mathbf{q}}G^{R}_{\rm BdG}(\mathbf{k}+,\omega+\Omega)\tilde{j}^{\nu}_{\mathbf{k},-\mathbf{q}}+G^{A}_{\rm BdG}(\mathbf{k}-,\omega-\Omega)\tilde{j}^{\mu}_{\mathbf{k},\mathbf{q}}G^{RA}_{\rm BdG}(\mathbf{k}+,\omega)\tilde{j}^{\nu}_{\mathbf{k},-\mathbf{q}}\Big{]}. \tag{17}\] There are some differences from the formula for the normal state (Eq. 12). First, \(G^{R/A}_{\rm BdG}(\mathbf{k},\omega)=1/(\omega-H^{\rm BdG}_{\mathbf{k}}+\Sigma^{R/A}(\omega))\) is the Green's function of the BdG Hamiltonian, where \(\Sigma^{R/A}(\omega)\) represents the self-energy of the dissipation effect; a specific form will be introduced later. Second, the current operator is extended to the particle-hole Hilbert space as \[\tilde{j}^{\mu}_{\mathbf{k},\mathbf{q}}=\begin{pmatrix}j^{\mu}_{\mathbf{k},\mathbf{q}}&0\\ 0&-(j^{\mu}_{-\mathbf{k},\mathbf{q}})^{\mathrm{T}}\end{pmatrix}. \tag{18}\] Third, the prefactor \(1/2\) is introduced to prevent double counting of the particle and hole degrees of freedom. After a Taylor expansion in \(\mathbf{q}\), the first-order coefficient is decomposed into a spin contribution and an orbital contribution. In this paper, we focus on the spin contribution, which is given by \[\Phi_{\mu\nu\lambda}(\Omega)=\frac{ieg_{S}\mu_{B}}{4}\int[d^{4}k]f(\omega)\mathrm{Tr}\Big{[}\varepsilon_{\mu\lambda\theta}\Big{\{}G^{RA}_{\rm BdG}(\mathbf{k},\omega)\tilde{\sigma}_{\theta}G^{R}_{\rm BdG}(\mathbf{k},\omega+\Omega)\tilde{v}^{\nu}_{\mathbf{k}}+G^{A}_{\rm BdG}(\mathbf{k},\omega-\Omega)\tilde{\sigma}_{\theta}G^{RA}_{\rm BdG}(\mathbf{k},\omega)\tilde{v}^{\nu}_{\mathbf{k}}\Big{\}}-\varepsilon_{\nu\lambda\theta}\Big{\{}G^{RA}_{\rm BdG}(\mathbf{k},\omega)\tilde{v}^{\mu}_{\mathbf{k}}G^{R}_{\rm BdG}(\mathbf{k},\omega+\Omega)\tilde{\sigma}_{\theta}+G^{A}_{\rm BdG}(\mathbf{k},\omega-\Omega)\tilde{v}^{\mu}_{\mathbf{k}}G^{RA}_{\rm BdG}(\mathbf{k},\omega)\tilde{\sigma}_{\theta}\Big{\}}\Big{]}. \tag{19}\] Here, the spin operator \(\tilde{\sigma}_{\theta}\) and the velocity operator in the Bloch basis \(\tilde{v}^{\mu}_{\mathbf{k}}\) differ from their normal-state counterparts. They are defined as \[\tilde{\sigma}_{\theta}=\begin{pmatrix}\sigma_{\theta}&0\\ 0&-\sigma_{\theta}^{\mathrm{T}}\end{pmatrix},\quad\tilde{v}^{\mu}_{\mathbf{k}}=\begin{pmatrix}v^{\mu}_{\mathbf{k}}&0\\ 0&-(v^{\mu}_{-\mathbf{k}})^{\mathrm{T}}\end{pmatrix}. \tag{20}\] Then, we obtain the formula of the optical activity, \(\sigma_{\mu\nu\lambda}(\Omega)=\Phi_{\mu\nu\lambda}(\Omega)/i(\Omega+i\delta)\). In this section, we mainly focus on the spin contribution. 
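The structural difference between \(\tilde{v}^{\mu}_{\mathbf{k}}\) in Eq. (20) and \(\partial_{\lambda}H^{\rm BdG}_{\mathbf{k}}\) is what invalidates the normal-state no-go argument, and it is easy to verify numerically. The sketch below builds the BdG Hamiltonian (Eq. 16b) for the Rashba model with \(\Delta_{\mathbf{k}}=i\Delta\sigma_{y}\) and compares \(G\tilde{v}^{x}G\) with a finite-difference derivative \(\partial_{k_{x}}G\); as simplifying assumptions, a constant broadening \(\Gamma\) is used in place of the self-energy \(\Sigma^{R/A}(\omega)\), and the test point \((\mathbf{k},\omega)\) is arbitrary.

```python
import numpy as np

# Numerical check of the identity behind the no-go theorem: in the normal
# state G v^x G = dG/dk_x exactly, while for the BdG Green's function the
# corresponding relation with v~ (Eq. 20) fails.
t, alpha, mu, Delta, Gamma = 1.0, 0.3, -0.1, 0.2, 0.1
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
I2 = np.eye(2, dtype=complex)

def Hk(kx, ky):
    eps = -2*t*(np.cos(kx) + np.cos(ky))
    return eps*I2 + alpha*(np.sin(ky)*sx - np.sin(kx)*sy)

def H_bdg(kx, ky):
    pair = 1j*Delta*sy                       # Delta_k = i Delta sigma_y
    top = np.hstack([Hk(kx, ky) - mu*I2, -pair])
    bot = np.hstack([-pair.conj().T, -(Hk(-kx, -ky)).T + mu*I2])
    return np.vstack([top, bot])

def vx(kx, ky):                              # dH_k/dk_x (ky enters only via H)
    return 2*t*np.sin(kx)*I2 - alpha*np.cos(kx)*sy

def vx_tilde(kx, ky):                        # Eq. (20)
    z = np.zeros((2, 2), complex)
    return np.block([[vx(kx, ky), z], [z, -(vx(-kx, -ky)).T]])

kx, ky, w, h = 0.7, 0.3, 0.15, 1e-6

# normal state: G v G agrees with the finite-difference dG/dk_x
Gn = lambda kx_: np.linalg.inv((w + mu + 1j*Gamma)*I2 - Hk(kx_, ky))
lhs = Gn(kx) @ vx(kx, ky) @ Gn(kx)
rhs = (Gn(kx + h) - Gn(kx - h))/(2*h)
print("normal  |GvG - dG| =", np.abs(lhs - rhs).max())   # ~0

# superconducting state: the analogous identity with v~ is violated
Gs = lambda kx_: np.linalg.inv((w + 1j*Gamma)*np.eye(4) - H_bdg(kx_, ky))
lhs = Gs(kx) @ vx_tilde(kx, ky) @ Gs(kx)
rhs = (Gs(kx + h) - Gs(kx - h))/(2*h)
print("BdG     |GvG - dG| =", np.abs(lhs - rhs).max())   # finite
```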
Of course, the orbital part generally also contributes to the optical activity; it has been studied in previous works on the superconducting orbital Edelstein effect and on the superconducting gyrotropic current, which yields an additional correction to the optical rotation [29; 48; 76]. ### Relation between the singularity and the superconducting Edelstein effect As discussed in Sec. III.3, the singularity due to the \(\delta\)-function vanishes in the normal state. This result is guaranteed by the identity \(G^{R/A}v^{\lambda}_{\mathbf{k}}G^{R/A}=\partial_{\lambda}G^{R/A}\). However, we will see that a similar identity is not valid in superconducting states. The Bloch velocity operator in superconducting states is described by \(\tilde{v}^{\mu}_{\mathbf{k}}\) (Eq. 20), and this operator does not satisfy \(G^{R/A}_{\rm BdG}\tilde{v}^{\lambda}_{\mathbf{k}}G^{R/A}_{\rm BdG}=\partial_{\lambda}G^{R/A}_{\rm BdG}\), even if the pair potential \(\Delta_{\mathbf{k}}\) is independent of the wavenumber \(\mathbf{k}\). Thus, the integrand in Eq. (19) cannot be transformed into a total derivative in \(\mathbf{k}\) in the limit \(\Omega\to 0\), which means that the singularity can in general exist in superconducting states. A similar singularity appears in the optical conductivity, where its coefficient corresponds to the superfluid density [77]; that singularity corresponds to an equilibrium current, namely the Meissner effect. Recently, a similar singularity was also found in the nonlinear conductivity, resulting in anomalous divergences in the low-frequency regime [11], whose origin is the nonreciprocal Meissner effect [78]. The coefficient of the singularity in the optical activity can likewise be given a physical interpretation: it can be rewritten as \[\mathrm{Im}\Phi_{\mu\nu\lambda}(0)=\varepsilon_{\mu\lambda\theta}\mathcal{K}_{\nu\theta}-(\mu\leftrightarrow\nu). \tag{21}\] \(\mathcal{K}_{\nu\theta}\) is the superconducting Edelstein response coefficient, describing how a supercurrent induces a magnetization in noncentrosymmetric superconductors (\(S_{\nu}=\mathcal{K}_{\nu\theta}A_{\theta}\) or \(J_{\nu}=\mathcal{K}_{\nu\theta}B_{\theta}\)). This response was first derived by Edelstein for polar superconductors with Rashba spin-orbit coupling [50], and subsequent works [79; 80; 81; 82] have studied it in more detail. Although the superconducting Edelstein effect is an important response originating in the uniqueness of noncentrosymmetric superconductors, it has not yet been observed in experiments. ### Missing area The singularity due to the \(\delta\)-function is difficult to observe directly in optical responses. However, it can be measured exactly with the help of the sum rules (Eqs. 5 and 6). These sum rules state that the spectral summation of the optical responses is independent of material details and does not change across the superconducting transition. Thus, the regular part in the superconducting state, which is accessible in optical measurements, has a reduced area; this is called the missing area, and the corresponding weight is absorbed into the \(\delta\)-function contribution. For the optical conductivity, this exact relation is used to measure the superfluid density and the penetration length; the corresponding sum rule is called the Ferrell-Glover-Tinkham (FGT) sum rule [3; 4], and such exact measurements have mainly been discussed for high-temperature superconductors [5; 6; 7]. 
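To make the FGT bookkeeping concrete, here is a toy numerical illustration (it is not the model of this paper, and the hard-gap cutoff is a crude caricature of the true gap-edge line shape): a normal-state Drude spectrum is gapped below \(2\Delta\), and the weight removed at finite frequencies is the missing area that the sum rule assigns to the \(\delta\)-function (superfluid) contribution. All parameter values are arbitrary.

```python
import numpy as np

# Toy FGT missing-area bookkeeping for the ordinary optical conductivity.
W_D, Gam, Delta = 1.0, 0.05, 0.1    # Drude weight, scattering rate, gap (toy values)
w = np.linspace(1e-4, 50.0, 400_000)

re_sigma_n = (W_D/np.pi)*Gam/(w**2 + Gam**2)         # normal-state Drude peak
re_sigma_s = np.where(w > 2*Delta, re_sigma_n, 0.0)  # crude hard-gap caricature

area_n = np.trapz(re_sigma_n, w)                     # ~ W_D/2, cf. the sum rule (5)
missing = np.trapz(re_sigma_n - re_sigma_s, w)       # finite-frequency weight lost
print(area_n, missing)   # `missing` reappears as the delta-function weight at w = 0
```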
The discussion of the missing area can be extended to the optical activity. The optical activity also satisfies the sum rule (Eq. 6), which does not change across the phase transition. Thus, the following equation is established: \[\int_{+0}^{\infty}d\Omega\Big{(}\text{Im}\sigma^{(n)}_{\mu\nu\lambda}(\Omega)-\text{Im}\sigma^{(s)}_{\mu\nu\lambda}(\Omega)\Big{)}=-\frac{\pi}{2}\text{Im}\Phi^{(s)}_{\mu\nu\lambda}(0). \tag{22}\] Here, the labels \((n)\) and \((s)\) refer to the normal state and the superconducting state, respectively. The left-hand side of this equation represents the missing area, i.e., the difference between the finite-frequency spectral summations of the normal and superconducting states, which is accessible in optical measurements. The right-hand side is the coefficient of the \(\delta\)-function singularity; as discussed in connection with the no-go theorem, the singularity is finite only in the superconducting state, so the right-hand side involves only the superconducting coefficient. As discussed in Sec. IV.2, this coefficient is equivalent to the superconducting Edelstein response, which has not yet been observed in experiments. Thus, the missing-area measurement gives an alternative way to determine the superconducting Edelstein effect experimentally. Furthermore, we can determine the superconducting Edelstein effect directly, using only the optical spectrum in the superconducting phase: since the sum rule states that the total summation is zero, the following equation holds: \[\int_{+0}^{\infty}d\Omega\text{Im}\sigma^{(s)}_{\mu\nu\lambda}(\Omega)=\frac{\pi}{2}\text{Im}\Phi^{(s)}_{\mu\nu\lambda}(0). \tag{23}\] ### Model calculation for the optical activity in a noncentrosymmetric superconductor We analyze the optical spectrum of the optical activity in a superconductor to verify the above discussion and the missing area. We consider the same normal Hamiltonian \(H_{\mathbf{k}}\) used in Sec. III.3, with uniform singlet superconducting pairing; thus, we set \(\Delta_{\mathbf{k}}=i\Delta\sigma_{y}\), where \(\Delta\) is real. This model describes a noncentrosymmetric superconductor with Rashba spin-orbit coupling. Such superconductors are discussed for atomic-layer superconductors grown on substrates [83], such as monolayer FeSe [84; 85], and for noncentrosymmetric bulk superconductors, including heavy-fermion superconductors with large spin-orbit coupling [49]. We plot the optical spectrum of the optical activity for several magnitudes of the pair potential \(\Delta\) in Fig. 2. In this calculation, we phenomenologically introduce the dissipation effect by multiplying a factor \(\eta_{\omega}\), obtained from the first Born approximation [86], in the retarded Green's Figure 2: (Left) Numerical results for the optical activity in the superconducting phase for several different pair potentials \(\Delta\). We set \(t=1\), \(\alpha=0.3\), \(\mu=-0.1\), and \(\Gamma=0.1\). (Right) Missing area. We plot the difference between the optical activity in the normal phase \(\text{Im}\sigma^{(n)}_{zxx}\) and in the superconducting phase \(\text{Im}\sigma^{(s)}_{zxx}\). The integration mesh of the wavenumber in the BZ is \(800\times 800\). We use \(\Omega\) and \(\Delta\) in units of \(t\), and \(\sigma_{zxx}\) in units of \(e^{2}a/\hbar\). 
function as \[G_{\text{BdG}}^{R}(\mathbf{k},\omega)=\frac{1}{\eta_{\omega}\omega-H_{\mathbf{k}}^{(n)}-\eta_{\omega}\Delta\rho_{y}\sigma_{y}} \tag{24}\] \[\eta_{\omega}=1+\Gamma\bigg{(}\frac{\theta(|\Delta|-|\omega|)}{\sqrt{\Delta^{2}-\omega^{2}}}+\frac{i\text{sign}(\omega)\theta(|\omega|-|\Delta|)}{\sqrt{\omega^{2}-\Delta^{2}}}\bigg{)}. \tag{25}\] Here, \(H_{\mathbf{k}}^{(n)}\) is the normal part of \(H_{\mathbf{k}}^{\text{BdG}}\), \(\mathbf{\rho}\) are the Pauli matrices acting on the particle-hole Hilbert space, \(\theta(x)\) is the step function, and \(\text{sign}(x)\) is the sign function, returning \(+1\) if \(x>0\) and \(-1\) if \(x<0\). This dissipation effect reduces to that of the normal phase, introduced in Sec. III.2, in the limit \(\Delta\to 0\). Figure 2 (Left) shows the optical spectrum; the weight vanishes for \(\Omega<2\Delta\). The curve asymptotically approaches that of the normal phase in the high-frequency regime, where the superconducting hybridization becomes small. Figure 2 (Right) shows the difference between the spectra of the normal state and the superconducting state for better visibility. Figure 2 demonstrates that the missing area originates from the superconducting gap and disappears in the high-frequency regime. Therefore, when measuring this area, it is sufficient to observe it at small frequencies, of the order of the superconducting gap. As discussed in Sec. IV.3, the missing area offers an exact measurement of the superconducting Edelstein effect. Our numerical calculation confirms that this relation is valid, as seen in Fig. 3, where the missing area and a direct calculation of the superconducting Edelstein effect are plotted together and found to coincide. ## V Conclusion We have investigated general properties of the optical activity in noncentrosymmetric systems, including superconductors. We have derived the sum rule of the optical activity as a property of a two-body correlation function, applicable to general electronic states such as interacting systems and superconductors, and have found in Sec. II that the summation is zero, independent of material details. We have discussed the typical behaviors of the optical activity in the normal phase in Sec. III. We have formulated the optical activity using Green's functions and discussed a no-go theorem based on the obtained formula. The no-go theorem states that the \(\delta\)-function singularity at zero frequency \(\Omega=0\) is absent, which means that the equilibrium current is forbidden in normal states. In addition, we have calculated the spectrum of the optical activity in a model with spin-orbit coupling and found two peaks with opposite signs in the spectrum. One peak appears around zero frequency, corresponding to the Drude peak (\(\sim\tau/(1+(\Omega\tau)^{2})\)) with finite relaxation time \(\tau\); this peak originates from the Fermi surface. The other peak appears at high frequencies and is enhanced around the band gap because it originates from the interband effect. The two peaks have opposite signs because the low-frequency intraband peak has a constant sign, owing to its Drude form, while the sum rule (the summation is zero) forces the high-frequency interband contribution to cancel it. 
Next, we have discussed the optical activity in noncentrosymmetric superconductors in Sec. IV. We have formulated the optical activity in the superconducting state using Green's functions and discussed several properties based on this formula. First, we have discussed the no-go theorem: in superconductors, the theorem no longer holds, and the \(\delta\)-function singularity can appear. Second, we have found a characteristic sum rule similar to the FGT sum rule. Owing to the existence of the singularity and the universal sum rule (Eq. 6), the spectrum of the optical activity in the finite-frequency regime is reduced, and this missing area is equivalent to the coefficient of the singularity (Eq. 22). Furthermore, we have shown that the coefficient is equivalent to the superconducting Edelstein effect, which has not been observed experimentally since its first proposal by Edelstein. Our results show that an exact measurement of the missing area offers an alternative route to observing the superconducting Edelstein effect. We have also calculated the optical activity in a specific model of an \(s\)-wave noncentrosymmetric superconductor with spin-orbit coupling to investigate its typical spectrum. We have found that the missing area originates from the superconducting gap and that, in the frequency region beyond the superconducting gap, the spectrum asymptotically approaches that of the normal phase. Thus, it is sufficient to measure the missing area in the low-frequency range around the superconducting gap. Finally, we comment on the orbital contribution to the optical activity. In this paper, we have focused on the spin contribution; however, the orbital part exists in more general cases, as seen in Refs. [29; 76; 48]. The discussion of the no-go theorem and the missing area involving the orbital part remains as future work. ## Acknowledgements K.S. thanks Akira Kofuji, Hiroo Tanaka, and Michiya Chazono for valuable discussions. K.S. acknowledges support as a JSPS research fellow and is supported by JSPS KAKENHI, Grant No.22J23393 and No.22KJ2008. R.P. is supported by JSPS KAKENHI No.23K03300. The computer calculations in this work have been done using the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo. ## Appendix A Derivation of the sum rule In this appendix, we derive the sum rule of the optical activity. First, we show the symmetry relations (Eqs. 2a and 2b). The current-current correlation function can, in general, be written in the Lehmann representation as \[\Phi_{\mu\nu}(\mathbf{q},\omega)=\sum_{lm}\frac{e^{-\beta E_{l}}-e^{-\beta E_{m}}}{Z(\omega+i\delta-E_{lm})}\bra{l}J_{\mathbf{q}}^{\mu}\ket{m}\bra{m}J_{-\mathbf{q}}^{\nu}\ket{l}. \tag{A1}\] Here, \(Z=\mathrm{Tr}[e^{-\beta H}]\) is the partition function and \(E_{lm}=E_{l}-E_{m}\). We use the exact eigenstates \(\ket{n}\) and eigenvalues \(E_{n}\) of the given system Hamiltonian \(H\), and the current operator \(\mathbf{J}_{\mathbf{q}}=\int d\mathbf{r}\mathbf{J}(\mathbf{r})e^{-i\mathbf{q}\cdot\mathbf{r}}\). The current operator is Hermitian, \(\mathbf{J}^{\dagger}(\mathbf{r})=\mathbf{J}(\mathbf{r})\), and thus \(\mathbf{J}_{\mathbf{q}}^{\dagger}=\mathbf{J}_{-\mathbf{q}}\). The complex conjugate of this correlation function is \[\Phi_{\mu\nu}^{*}(\mathbf{q},\omega)=\sum_{lm}\frac{e^{-\beta E_{m}}-e^{-\beta E_{l}}}{Z(\omega-i\delta+E_{lm})}\bra{l}J_{-\mathbf{q}}^{\mu}\ket{m}\bra{m}J_{\mathbf{q}}^{\nu}\ket{l}=\Phi_{\mu\nu}(-\mathbf{q},-\omega). \tag{A2}\] Then, we can derive Eqs. 
(2a) and (2b) by expanding this equation in \(\mathbf{q}\). Next, we move on to the derivation of the sum rule. The imaginary part of the optical activity is \[\mathrm{Im}\sigma_{\mu\nu\lambda}(\omega)=-\mathscr{P}\frac{\mathrm{Re}\Phi_{\mu\nu\lambda}(\omega)}{\omega}-\pi\delta(\omega)\mathrm{Im}\Phi_{\mu\nu\lambda}(\omega). \tag{A3}\] The imaginary part is an even function of \(\omega\), while the real part is odd, as can be seen in Eq. (2b). Therefore, the integral of the imaginary part along the real-\(\omega\) axis is \[\int_{-\infty}^{\infty}d\omega\mathrm{Im}\sigma_{\mu\nu\lambda}(\omega)=2\int_{0}^{\infty}d\omega\mathrm{Im}\sigma_{\mu\nu\lambda}(\omega)=-\mathscr{P}\int_{-\infty}^{\infty}d\omega\frac{\mathrm{Re}\Phi_{\mu\nu\lambda}(\omega)}{\omega}-\pi\mathrm{Im}\Phi_{\mu\nu\lambda}(0)=0. \tag{A4}\] At the third equality, we use the Kramers-Kronig relation, which is valid for retarded functions such as \(\Phi_{\mu\nu}(\mathbf{q},\omega)\) owing to their analyticity in the upper half of the complex \(\omega\)-plane. Then, we obtain the sum rule (Eq. 6).
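The cancellation in Eq. (A4) is easy to reproduce numerically. The following sketch uses an arbitrary toy correlation function with the symmetry of Eq. (2b) (real part odd, imaginary part even) and poles only in the lower half-plane, and checks that the finite-frequency integral of \(-\mathrm{Re}\Phi(\omega)/\omega\) exactly compensates the \(\delta\)-function weight; the pole position and broadening are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

eps, gamma = 0.5, 0.1   # toy pole position and broadening (assumptions)

def Phi(w):
    # toy first-order correlation: Re odd, Im even, analytic in the upper half-plane
    return 1.0/(w - eps + 1j*gamma) + 1.0/(w + eps + 1j*gamma)

# regular part of Im sigma(w) = -Re Phi(w)/w for w != 0, as in Eq. (A3)
reg, _ = quad(lambda w: -Phi(w).real/w, 1e-10, np.inf, limit=800)

# delta-function contribution to the integral over [0, inf): -(pi/2) Im Phi(0)
sing = -0.5*np.pi*Phi(0.0).imag

print(reg + sing)   # ~0: the total summation vanishes, as in Eq. (6)
```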
2304.08311
A Review on Octupolar Tensors
In its most restrictive definition, an octupolar tensor is a fully symmetric traceless third-rank tensor in three space dimensions. So great a body of works has been devoted to this specific class of tensors and their physical applications that a review would perhaps be welcome by a number of students. Here, we endeavour to place octupolar tensors into a broader perspective, considering non-vanishing traces and non-fully symmetric tensors as well. A number of general concepts are recalled and applied to both octupolar and higher-rank tensors. As a tool to navigate the diversity of scenarios we envision, we introduce the octupolar potential, a scalar-valued function which can easily be given an instructive geometrical representation. Physical applications are plentiful; those to liquid crystal science play a major role here, as they were the original motivation for our interest in the topic of this review.
Giuseppe Gaeta, Epifanio G. Virga
2023-04-17T14:27:35Z
http://arxiv.org/abs/2304.08311v2
# A Review on Octupolar Tensors ###### Abstract In its most restrictive definition, an _octupolar tensor_ is a fully symmetric traceless third-rank tensor in three space dimensions. So great a body of works has been devoted to this specific class of tensors and their physical applications that a review would perhaps be welcome by a number of students. Here, we endeavour to place octupolar tensors into a broader perspective, considering non-vanishing traces and non-fully symmetric tensors as well. A number of general concepts are recalled and applied to both octupolar and higher-rank tensors. As a tool to navigate the diversity of scenarios we envision, we introduce the _octupolar potential_, a scalar-valued function which can easily be given an instructive geometrical representation. Physical applications are plentiful; those to liquid crystal science play a major role here, as they were the original motivation for our interest in the topic of this review. ## 1 Introduction An _octupolar tensor_ \(\mathbf{A}\) usually designates a fully symmetric traceless tensor of rank 3, possibly in three space dimensions. One may well wonder why such a specific topic should deserve an extended review. Granted that physical applications of such a class of tensors may indeed be many, the question would remain as to whether one should invest time reading such a review. We offer (what we think are) two good reasons to continue reading. Both concern the perspective adopted here. First, our perspective is broader than the title suggests: we review properties of octupolar tensors as pertaining to general tensors of higher ranks and dimensions. Second, our perspective is open to the many novel results that have been gathered in the last few decades, with an eye to their physical motivation. Here is how our material is organized. Section 2 contains all preliminary definitions and basic results that should make our presentation nearly self-contained, thus sparing the reader the hurdle of consulting respectable, but often opaque, books on tensor algebra. The primary physical motivation behind our interest in the topic of this review rests with liquid crystal science and (especially) the new phases whose description calls for an octupolar tensor. This motivation is also recalled in section 2, but not divorced from those arising from other fields of physics. In section 3, we present our geometric approach to octupolar tensors. It is based on the _octupolar potential_ \(\Phi\), a scalar-valued function on the unit sphere amenable to a geometric representation that we find instructive. The characterization of a generic octupolar tensor \(\mathbf{A}\) afforded in section 3 is backed by a different, fully algebraic approach presented in section 4, where a polynomial of degree 6 in a _single_ variable embodies all properties of \(\mathbf{A}\). Section 4 also contains new results; its development is meticulous, since a few not totally irrelevant details were missed in the original literature. This section is finely articulated in minute computational items, so as to help the reader decide which details to skip and which to dwell on. Section 5 hosts our first extension: we consider the role of non-vanishing traces, mainly phrased in the language of the octupolar potential. Section 6 further widens our scope. We study third-rank non-symmetric tensors, trying to adapt the octupolar-potential formalism to this general context. 
In section 7, we briefly present a number of applications of the theory, ranging from gravitation to liquid crystals, as exemplary fields that could further benefit from the unified approach pursued here. Finally, in section 8, we outline issues that even a cursory glance at the different perspectives evoked in this review would suggest for future research. ## 2 Preliminaries In this section we lay down the basis of our development. We start from a general decomposition of tensors of any rank and in any dimension, with the aim of providing a solid mathematical justification for seeking special cases in our representations with a reduced number of parameters. Our primary interest lies in third-rank tensors in three dimensions. A noticeable subclass of these are properly called the _octupolar tensors_, but our terminology will be more flexible on this account. ### Invariant tensor decomposition The set \(\mathcal{T}(r,\mathsf{V})\) of tensors of rank \(r\) in an \(n\)-dimensional space \(\mathsf{V}\) over the field \(\mathbb{F}\) forms a vector space of dimension \(n^{r}\). If \(G\) is a group of linear transformations in \(\mathsf{V}\), then \(\mathcal{T}(r,\mathsf{V})\) provides a basis for a representation (in general, reducible) \(T\) of \(G\), \(T\subseteq GL(n^{r},\mathbb{F})\). By using, e.g., _Young diagrams_ (which give rise to _Young patterns_, or _Young tableaux_), one can decompose such a representation of \(GL(n,\mathbb{F})\) into irreducible ones. This decomposition is based on the decomposition of representations of the _symmetric_ group \(S_{r}\) (the group of permutations of \(r\) symbols); in turn, this decomposition can also be performed with the technique of _Yamanouchi symbols_. There is a one-to-one correspondence between Young diagrams and Yamanouchi symbols; see, for example, [51, p. 221] (general references on tensor algebra and irreducible representations are the classical books [14, 73, 124]). A tensor \(\boldsymbol{\mathsf{A}}\in\mathcal{T}(r,\mathsf{V})\) transforms (under maps in the base space \(\mathsf{V}\)) as the tensor product of \(r\) vectors, \(\boldsymbol{x}_{1}\otimes\boldsymbol{x}_{2}\otimes\cdots\otimes\boldsymbol{x}_{r}\). In studying the transformation properties of tensors in concrete terms under a given group action \(G\) in \(\mathsf{V}\), it is often convenient to consider the basis in \(\mathcal{T}(r,\mathsf{V})\) built by taking the product of basis vectors in \(\mathsf{V}\), \[\boldsymbol{\mathsf{A}}=A_{i_{1}i_{2}\ldots i_{r}}\boldsymbol{e}_{i_{1}}\otimes\boldsymbol{e}_{i_{2}}\otimes\cdots\otimes\boldsymbol{e}_{i_{r}}, \tag{1}\] where \((\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{n})\) is a basis for \(\mathsf{V}\). In (1), and routinely below, we employ the convention of summing over repeated indices. Moreover, if the space \(\mathsf{V}\) is endowed with an inner product, the basis \((\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{n})\) can be taken to be orthonormal, in which case the corresponding scalar components \(A_{i_{1}i_{2}\ldots i_{r}}\) will also be referred to as Cartesian. Covariance dictates that the matrix elements for the transformations of \(\mathcal{T}(r,\mathsf{V})\) are homogeneous polynomials of degree \(r\) in the matrix elements for the action of the group \(G\) in \(\mathsf{V}\). 
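As a quick illustration of these transformation rules, the following sketch (with arbitrary random test data) realizes the induced action of a linear map \(R\) on a rank-3 tensor, \(A^{\prime}_{ijk}=R_{ia}R_{jb}R_{kc}A_{abc}\), and checks that it is a representation, i.e., that acting with \(R_{1}R_{2}\) agrees with acting first with \(R_{2}\) and then with \(R_{1}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n, n))                            # a generic rank-3 tensor
R1, R2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))  # generic elements of GL(n, R)

def act(R, A):
    """Induced action of R on rank-3 tensors: A'_{ijk} = R_ia R_jb R_kc A_abc."""
    return np.einsum('ia,jb,kc,abc->ijk', R, R, R, A)

# homomorphism (representation) property: act(R1 R2) = act(R1) composed with act(R2)
print(np.allclose(act(R1 @ R2, A), act(R1, act(R2, A))))  # True
```

Each matrix element of the induced action is a degree-3 monomial in the entries of \(R\), in accordance with the covariance statement above.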
For second-rank tensors, any \(\boldsymbol{L}\in\mathcal{T}(2,\mathsf{V})\) can be decomposed as \(\boldsymbol{L}=\boldsymbol{S}+\boldsymbol{W}\), where \(\boldsymbol{S}^{\mathsf{T}}=\boldsymbol{S}\), \(\boldsymbol{W}^{\mathsf{T}}=-\boldsymbol{W}\), and a superscript \({}^{\mathsf{T}}\) denotes transposition.12 A similar decomposition exists for tensors \(\boldsymbol{\mathsf{A}}\) of arbitrary rank and can be described with the aid of Young diagrams. In terms of the scalar components \(A_{i_{1}i_{2}\ldots i_{r}}\), with which we shall also identify \(\boldsymbol{\mathsf{A}}\), these are obtained by arranging \(r\) boxes in all possible ways in an array of rows and columns, with the constraint that each row should not be longer than the preceding one. The boxes represent tensor indices, and the corresponding tensor will be symmetric under permutations exchanging indices on different columns on the same row, and antisymmetric under permutations exchanging indices on different rows on the same column. The latter condition implies that there should be no more than \(n\) rows, or the corresponding representation will be trivial (tensors fully antisymmetric in \(r>n\) indices, having necessarily at least two equal indices, automatically vanish). It should be noted that the representations corresponding to Young diagrams obtained from each other by an exchange of rows and columns are _conjugate_; thus, in particular, the bound \(n\) applies to both rows and columns (see also Sect. 7.4 of [51]). Footnote 12: Here \(\boldsymbol{S}\) stands for “symmetric” and \(\boldsymbol{W}\) for “skew-symmetric”, synonymous with “antisymmetric”. Thus, e.g., for \(r=2\) we have \[\Box\otimes\Box=(\Box\,\Box)\ \oplus\ \left(\begin{array}{c}\Box\\ \Box\end{array}\right), \tag{2}\] while for \(r=3\) we have \[\Box\otimes\Box\otimes\Box=(\Box\,\Box\,\Box)\ \oplus\ \left(\begin{array}{l}\Box\,\Box\\ \Box\end{array}\right)\ \oplus\ \left(\begin{array}{c}\Box\\ \Box\\ \Box\end{array}\right). \tag{3}\] Clearly, in the case \(n=2\), the last diagram will correspond to null tensors. Following [99], Weyl [123] initiated a fully general theory of decomposition of tensors into irreducible symmetry parts, having especially in mind its application to quantum mechanics. An early description of the role of both Young's diagrams and tableaux can be retraced in [119]; here we follow a more recent approach [54]. A Young tableau is obtained by filling the boxes of Young diagrams as in (2) or (3) with indices. Each diagram \(\Lambda\) has a corresponding dimension, given by the following _hook_ formula: \[\dim\Lambda=\frac{r!}{\prod_{(\alpha,\beta)\in\Lambda}\mathrm{hook}(\alpha,\beta)}. \tag{4}\] Here \((\alpha,\beta)\) denotes the position of a cell in the diagram: \(\alpha\) is the row index, while \(\beta\) is the column index. For a cell \((\alpha,\beta)\) in the diagram \(\Lambda\), the _hook length_ \(\mathrm{hook}(\alpha,\beta)\) is the sum of the number of boxes in the same row to the right of the cell and the number of boxes in the same column below it, plus \(1\) (to account for the cell itself). 
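The hook formula (4), together with the companion count of independent tensor components given next, is easy to mechanize. The short sketch below encodes a Young diagram by its row lengths and reproduces the numbers quoted for the case \(r=n=3\) treated in the rest of this section.

```python
from fractions import Fraction
from math import factorial

def hook(rows, a, b):
    """Hook length of cell (a, b): boxes to the right, boxes below, plus one."""
    right = rows[a] - (b + 1)
    below = sum(1 for length in rows[a + 1:] if length > b)
    return right + below + 1

def dim_diagram(rows):
    """Dimension of the symmetric-group representation, the hook formula (4)."""
    r = sum(rows)
    prod = 1
    for a, length in enumerate(rows):
        for b in range(length):
            prod *= hook(rows, a, b)
    return factorial(r) // prod

def dim_component(rows, n):
    """Number of independent parameters of the tensor component, cf. (5)."""
    out = Fraction(1)
    for a, length in enumerate(rows):
        for b in range(length):
            out *= Fraction(n + b - a, hook(rows, a, b))
    return int(out)

# the three diagrams of (3): fully symmetric, mixed symmetry, fully antisymmetric
diagrams = [(3,), (2, 1), (1, 1, 1)]
print([dim_diagram(d) for d in diagrams])       # [1, 2, 1]
print([dim_component(d, 3) for d in diagrams])  # [10, 8, 1]
```

Counting the mixed diagram twice, the component dimensions add up to \(10+8+8+1=27=3^{3}\), as they must.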
Denoting by \(\boldsymbol{\mathsf{A}}^{(p)}\) the tensorial component of \(\boldsymbol{\mathsf{A}}\) corresponding to the tableau generated by a diagram \(\Lambda_{p}\), its dimension \(\dim\boldsymbol{\mathsf{A}}^{(p)}\), that is, the number of independent parameters needed to represent it, is given by \[\dim\boldsymbol{\mathsf{A}}^{(p)}=\prod_{(\alpha,\beta)\in\Lambda_{p}}\frac{ n+\beta-\alpha}{\mathrm{hook}(\alpha,\beta)}. \tag{5}\] #### 2.1.1 Case of interest. In the case where \(r=3\), which will be of special interest to us, letting \(\Lambda_{1}\), \(\Lambda_{2}\), and \(\Lambda_{3}\) denote orderly the Young diagrams on the right-hand side of (3), we easily see that \[\dim\Lambda_{1}=1,\quad\dim\Lambda_{2}=2,\quad\dim\Lambda_{3}=1, \tag{6}\] meaning that \(\boldsymbol{\mathsf{A}}\) can be decomposed into three different types of tensors \(\boldsymbol{\mathsf{A}}^{(p)}\): 1. A single \(\boldsymbol{\mathsf{A}}^{(1)}\), which is fully symmetric; 2. Two independent components of \(\boldsymbol{\mathsf{A}}^{(2)}\), \(\boldsymbol{\mathsf{A}}^{(2,1)}\) and \(\boldsymbol{\mathsf{A}}^{(2,2)}\), which are partly symmetric; 3. A single \(\boldsymbol{\mathsf{A}}^{(3)}\), which is fully antisymmetric. We shall denote by \(\boldsymbol{\mathsf{A}}\) a generic tensor of \(\mathcal{T}(3,\mathsf{V})\) and by \(A_{ijk}\) its scalar components in a basis \((\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{n})\) of \(\mathsf{V}\). We shall reserve the symbol \(\mathbf{A}\) for the special, but important case where \(n=3\). The three types of tensors outlined above correspond to coefficients \(A_{ijk}\) having three types of symmetry under permutations. Specifically, case (i) is characterized by \[A^{(1)}_{\pi(i,j,k)}=A^{(1)}_{ijk}, \tag{7}\] for any permutation \(\pi\in S_{3}\), and case (iii) is characterized by \[A^{(3)}_{\pi(i,j,k)}=\mathrm{sgn}(\pi)A^{(3)}_{ijk}, \tag{8}\] where \(\mathrm{sgn}(\pi)\) is the _sign_ (or _index_) of the permutation. Finally, tensors under case (ii) are characterized by the following mixed symmetry relations (see also [54]) \[A^{(2,1)}_{ijk}=A^{(2,1)}_{jik}\quad\mbox{and}\quad A^{(2,2)}_{ijk}=A^{(2,2)}_{ kji}. \tag{9}\] A general tensor \(\mathbf{\mathsf{A}}\in\mathcal{T}(3,\mathsf{V})\) can thus be written as the following sum of irreducible tensors with respect to \(GL(n^{3},\mathbb{F})\): \[\mathbf{\mathsf{A}}=\mathbf{\mathsf{A}}^{(1)}+\mathbf{ \mathsf{A}}^{(2,1)}+\mathbf{\mathsf{A}}^{(2,2)}+\mathbf{ \mathsf{A}}^{(3)}. \tag{10}\] The Cartesian components of these tensors can be expressed in terms of the components of \(\mathsf{A}\) as follows, \[A^{(1)}_{ijk} = \frac{1}{6}\left(A_{ijk}+A_{jki}+A_{kij}+A_{jik}+A_{kji}+A_{ikj} \right), \tag{11}\] \[A^{(2,1)}_{ijk} = \frac{1}{3}\left(A_{ijk}+A_{jik}-A_{kji}-A_{kij}\right),\] (12) \[A^{(2,2)}_{ijk} = \frac{1}{3}\left(A_{ijk}-A_{jik}+A_{kji}-A_{jki}\right),\] (13) \[A^{(3)}_{ijk} = \frac{1}{6}\left(A_{ijk}+A_{jki}+A_{kij}-A_{jik}-A_{kji}-A_{ikj} \right). \tag{14}\] Moreover, it follows from (5) that \[\dim\mathbf{\mathsf{A}}^{(1)} = \frac{1}{6}n(n+1)(n+2), \tag{15}\] \[\dim\mathbf{\mathsf{A}}^{(2,1)} = \dim\mathbf{\mathsf{A}}^{(2,2)}=\frac{1}{3}n(n+1)(n-1),\] (16) \[\dim\mathbf{\mathsf{A}}^{(3)} = \frac{1}{6}n(n-1)(n-2), \tag{17}\] which together with (10) easily imply that \[\dim\mathbf{\mathsf{A}}=n^{3}. 
While \(\mathbf{\mathsf{A}}^{(1)}\) and \(\mathbf{\mathsf{A}}^{(3)}\) are (uniquely identified) irreducible components of \(\mathsf{A}\), as pointed out in [54], the decomposition \(\mathbf{\mathsf{A}}^{(2)}=\mathbf{\mathsf{A}}^{(2,1)}+\mathbf{\mathsf{A}}^{(2,2)}\) is irreducible, but not unique. It is also worth noting that by (12) and (13) \[A^{(2)}_{ijk}=\frac{1}{3}\left(2A_{ijk}-A_{jki}-A_{kij}\right), \tag{19}\] which shows how both fully symmetric and fully antisymmetric parts of \(\mathbf{\mathsf{A}}^{(2)}\) (defined as in (11) and (14), respectively) vanish, in agreement with (7) and (8). With a tensor \(\mathsf{A}\) we shall also associate the scalar field \(\Phi:\mathsf{V}\to\mathbb{F}\) defined as \[\Phi:=A_{ijk}x_{i}x_{j}x_{k}, \tag{20}\] where \(x_{i}\) are the components of a vector \(\mathbf{x}\in\mathsf{V}\) in the basis \((\mathbf{e}_{1},\mathbf{e}_{2},\ldots,\mathbf{e}_{n})\). \(\Phi\) will also be referred to as the _potential_ associated with \(\mathsf{A}\). It is clear from the foregoing discussion that \(\Phi\) is only determined by the fully symmetric part \(\boldsymbol{\mathsf{A}}^{(1)}\) of \(\mathsf{A}\), \[\Phi=A^{(1)}_{ijk}x_{i}x_{j}x_{k}. \tag{21}\] Footnote 1: A further characterization of \(\Phi\) for a fully symmetric tensor \(\mathsf{A}\) will be given in section 2.3.1.

We shall be especially interested in the case where \(\mathbb{F}=\mathbb{R}\), \(\mathsf{V}\) is endowed with an inner product, and \(r=n=3\); this is the case that identifies a general _octupolar tensor_ \(\mathbf{A}\). Correspondingly, the potential in (20) will be called the _octupolar potential_. In this special case, equations (15), (16), and (17) deliver \[\dim\mathbf{A}^{(1)}=10,\quad\dim\mathbf{A}^{(2,1)}=\dim\mathbf{A}^{(2,2)}=8,\quad\dim\mathbf{A}^{(3)}=1 \tag{22}\] and \(\Phi\) can be explicitly written as \[\Phi =A_{111}x_{1}^{3}+3(A_{112}x_{2}+A_{113}x_{3})x_{1}^{2} \tag{23}\] \[+3\left(A_{122}x_{2}^{2}+2A_{123}x_{2}x_{3}+A_{133}x_{3}^{2}\right)x_{1}+A_{222}x_{2}^{3}+3A_{223}x_{2}^{2}x_{3}\] \[+3A_{233}x_{2}x_{3}^{2}+A_{333}x_{3}^{3},\] which displays the 10 real parameters that represent \(\mathbf{A}^{(1)}\). A recurrent case is that of an octupolar tensor symmetric in all indices and with all vanishing partial traces. Strictly speaking, this is the case which the name _octupolar tensor_ should be reserved for, but here we shall adopt a more flexible terminology, occasionally denoting as _genuine_ the octupolar tensors in their strictest definition. Such tensors feature 7 independent parameters; this is the simplest of all octupolar tensors with a physical relevance and can be fully characterized by a variety of methods, elaborated upon in sections 3 and 4 below. The more general case of a fully symmetric tensor will be analyzed in section 5. The octupolar potential \(\Phi\) is a homogeneous polynomial of degree 3 over \(\mathsf{V}\); its values are thus completely determined by its restriction onto the _unit_ sphere \(\mathbb{S}^{2}\), where \(\Phi\) can be properly defined. Occasionally, to reflect this restriction, we shall pass to spherical coordinates \[x_{1}=r\cos\theta\cos\phi,\quad x_{2}=r\sin\theta\cos\phi,\quad x_{3}=r\sin\phi, \tag{24}\] where \(r\in(0,\infty)\), \(\theta\in(0,2\pi)\), \(\phi\in[-\pi/2,\pi/2]\), or we shall explicitly represent one hemisphere of \(\mathbb{S}^{2}\), writing, for example, \[x_{3}=\pm\sqrt{1-x_{1}^{2}-x_{2}^{2}}. \tag{25}\] This, however, is not the only decomposition of \(\mathbb{S}^{2}\) in halves that shall be considered in the following.
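That only \(\boldsymbol{\mathsf{A}}^{(1)}\) survives in the potential, as claimed in (21), is again easily probed; a minimal sketch (Python with numpy) for \(n=3\):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, 3))    # generic, not symmetric
# fully symmetric part A^(1), as in (11)
perms = ['ijk', 'jki', 'kij', 'jik', 'kji', 'ikj']
A1 = sum(np.einsum(p + '->ijk', A) for p in perms) / 6

x = rng.standard_normal(3)
Phi  = np.einsum('ijk,i,j,k->', A,  x, x, x)   # the potential (20)
Phi1 = np.einsum('ijk,i,j,k->', A1, x, x, x)   # the potential (21)
assert np.isclose(Phi, Phi1)                   # the monomials x_i x_j x_k
                                               # are blind to permutations
```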
### Orthogonal irreducible decomposition

An alternative way to represent a tensor \(\boldsymbol{\mathsf{A}}\in\mathcal{T}(r,\mathsf{V})\) is by decomposing it in orthogonal irreducible tensors of rank \(r\). It is known from the theory of group representations (see, for example, [14]) that \(\boldsymbol{\mathsf{A}}\) can be expanded as a direct sum of traceless symmetric tensors. Following [132], we can formally write \[\boldsymbol{\mathsf{A}}=\boldsymbol{\mathsf{D}}^{(r)}+J_{1}\boldsymbol{\mathsf{D}}^{(r-1)}+J_{2}\boldsymbol{\mathsf{D}}^{(r-2)}+\ldots+J_{r-1}\boldsymbol{\mathsf{D}}^{(1)}+J_{r}\boldsymbol{\mathsf{D}}^{(0)}, \tag{26}\] where each \(\boldsymbol{\mathsf{D}}^{(r-k)}\) is a traceless symmetric tensor of rank \(r-k\) and \(J_{k}\) denotes its embedding into \(\mathcal{T}(r,\mathsf{V})\). For the case \(r=n=3\) of interest here, the ingredients of this decomposition are the scalar \(A:=\epsilon_{ijk}A_{ijk}\), with \(\epsilon_{ijk}\) Ricci's alternator, the vectors \(\boldsymbol{v}^{(1)}\), \(\boldsymbol{v}^{(2)}\), \(\boldsymbol{v}^{(3)}\) of components \(v^{(1)}_{i}:=A_{ijj}\), \(v^{(2)}_{i}:=A_{jij}\), \(v^{(3)}_{i}:=A_{jji}\), which collect the partial traces of \(\boldsymbol{\mathsf{A}}\), and two symmetric traceless second-rank tensors \(\boldsymbol{D}^{(1)}\) and \(\boldsymbol{D}^{(2)}\), extracted from \(\boldsymbol{\mathsf{A}}\) through contractions with \(\epsilon_{ijk}\). In terms of these, and of Kronecker's symbol \(\delta_{ij}\), the decomposition in (26) can then be written as \[\mathbf{A}=\mathbf{D}^{(3)}+\mathbf{D}_{1}^{(2)}+\mathbf{D}_{2}^{(2)}+\mathbf{D}_{1}^{(1)}+\mathbf{D}_{2}^{(1)}+\mathbf{D}_{3}^{(1)}+\mathbf{D}^{(0)}, \tag{33}\] where the Cartesian components of the third-rank tensors \(\mathbf{D}_{i}^{(j)}\) are explicitly given by \[D_{ijk}^{(0)} =\frac{1}{6}A\epsilon_{ijk}, \tag{34}\] \[D_{1,ijk}^{(1)} =\frac{1}{10}\left(4v_{i}^{(1)}\delta_{jk}-\delta_{ik}v_{j}^{(1)}-\delta_{ij}v_{k}^{(1)}\right),\] (35) \[D_{2,ijk}^{(1)} =\frac{1}{10}\left(-v_{i}^{(2)}\delta_{jk}+4\delta_{ik}v_{j}^{(2)}-\delta_{ij}v_{k}^{(2)}\right),\] (36) \[D_{3,ijk}^{(1)} =\frac{1}{10}\left(-v_{i}^{(3)}\delta_{jk}-\delta_{ik}v_{j}^{(3)}+4\delta_{ij}v_{k}^{(3)}\right),\] (37) \[D_{1,ijk}^{(2)} =\frac{1}{3}\left(2\epsilon_{ijl}D_{lk}^{(1)}+D_{il}^{(1)}\epsilon_{ljk}\right),\] (38) \[D_{2,ijk}^{(2)} =\frac{1}{3}\left(2\epsilon_{ijl}D_{lk}^{(2)}+D_{il}^{(2)}\epsilon_{ljk}\right),\] (39) \[D_{ijk}^{(3)} =\overrightarrow{A}_{ijk}. \tag{40}\]

**Remark 2**: It is clear from this explicit representation of the third-rank tensors \(\mathbf{D}_{i}^{(j)}\) how they may fail to be symmetric, although they result from the direct sum of traceless symmetric tensors of lower rank.

**Remark 3**: By letting \[A_{(ijk)}:=\frac{1}{6}\left(A_{ijk}+A_{jki}+A_{kij}+A_{kji}+A_{jik}+A_{ikj}\right), \tag{41}\] \[V_{i}:=\frac{1}{3}\left(v_{i}^{(1)}+v_{i}^{(2)}+v_{i}^{(3)}\right), \tag{42}\] we can easily give \(D_{ijk}^{(3)}\) in (40) the explicit form \[D_{ijk}^{(3)}=A_{(ijk)}-\frac{1}{5}V_{i}t_{ijk}\quad\mbox{(no sum on $i$)}, \tag{43}\] where the symbol \(t_{ijk}\) is defined as follows \[t_{ijk}:=\cases{3&for $i=j=k$,\cr 1&when two indices are equal,\cr 0&otherwise.} \tag{44}\]

**Remark 4**: It is easily seen that the representation of \(\mathbf{A}\) in (33) depends on 27 independent parameters, as it should: one is \(A\), 9 come from the components of the vectors \(\boldsymbol{v}\)'s and 10 from the components of the symmetric traceless second-rank tensors \(\boldsymbol{D}\)'s; finally, only 7 are hidden in \(\mathbf{D}^{(3)}\).
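The terms (34)-(37) are designed to carry precisely the alternator contraction and the partial traces of \(\mathbf{A}\); subtracting them from \(\mathbf{A}\) must therefore leave a remainder with vanishing alternator contraction and vanishing partial traces. A sketch of this check (Python with numpy; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3, 3))
eps = np.zeros((3, 3, 3))                      # Ricci's alternator
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

a  = np.einsum('ijk,ijk->', eps, A)            # the scalar A
v1 = np.einsum('ijj->i', A)                    # v^(1)
v2 = np.einsum('jij->i', A)                    # v^(2)
v3 = np.einsum('jji->i', A)                    # v^(3)

d = np.eye(3)
D0 = a * eps / 6                                                     # (34)
D1 = (4*np.einsum('i,jk->ijk', v1, d) - np.einsum('ik,j->ijk', d, v1)
      - np.einsum('ij,k->ijk', d, v1)) / 10                          # (35)
D2 = (-np.einsum('i,jk->ijk', v2, d) + 4*np.einsum('ik,j->ijk', d, v2)
      - np.einsum('ij,k->ijk', d, v2)) / 10                          # (36)
D3 = (-np.einsum('i,jk->ijk', v3, d) - np.einsum('ik,j->ijk', d, v3)
      + 4*np.einsum('ij,k->ijk', d, v3)) / 10                        # (37)

R = A - D0 - D1 - D2 - D3     # what remains of (33): D^(2)_1 + D^(2)_2 + D^(3)
assert np.isclose(np.einsum('ijk,ijk->', eps, R), 0.0)
for spec in ('ijj->i', 'jij->i', 'jji->i'):    # all partial traces vanish
    assert np.allclose(np.einsum(spec, R), 0.0)
```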
**Remark 5**: Since \(D_{ijk}^{(0)}\) is antisymmetric in the exchange of any two indices, \(D_{ijk}^{(0)}x_{i}x_{j}x_{k}=0\) for all \(\boldsymbol{x}\in\mathbb{S}^{2}\), and similarly vanish both \(D_{1,ijk}^{(2)}x_{i}x_{j}x_{k}\) and \(D_{2,ijk}^{(2)}x_{i}x_{j}x_{k}\). Moreover, \[(D_{1,ijk}^{(1)}+D_{2,ijk}^{(1)}+D_{3,ijk}^{(1)})x_{i}x_{j}x_{k}=\frac{3}{5}V_{i}x_{i}. \tag{45}\] Thus, building the potential \(\Phi\) defined in (20) for the tensor \({\bf A}\) expressed as in (33) results in a function depending only on 10 independent parameters out of the 27 present in \({\bf A}\), in accord with (21): here 7 are needed for \({\bf D}^{(3)}\) and 3 for \(\boldsymbol{V}\).

**Remark 6**: If \(A_{ijk}\) enjoys the partial symmetry \(A_{ijk}=A_{ikj}\), it follows from (29), (30), and (32) that \(A=0\), \(\boldsymbol{v}^{(2)}=\boldsymbol{v}^{(3)}\), and \(\boldsymbol{D}^{(2)}=\boldsymbol{0}\). The number of independent parameters in the decomposition (33) then reduces to 18: 6 are the components of the \(\boldsymbol{v}\)'s, 5 the components of \(\boldsymbol{D}^{(1)}\), and 7 those of \({\bf D}^{(3)}\). We can also write explicitly the Cartesian components of \({\bf A}\) as follows: \[\begin{split} A_{ijk}=&\stackrel{{\longrightarrow}}{{A}}_{ijk}+\frac{1}{3}\left(2\epsilon_{ijl}D_{lk}+D_{il}\epsilon_{ljk}\right)\\ +&\frac{1}{10}\left(4u_{i}\delta_{jk}-u_{j}\delta_{ik}-u_{k}\delta_{ij}\right)+\frac{1}{10}\left(-2v_{i}\delta_{jk}+3v_{j}\delta_{ik}+3v_{k}\delta_{ij}\right),\end{split} \tag{46}\] where \(u_{i}=A_{ikk}\) and \(v_{i}=A_{jij}=A_{jji}\) are the components of \(\boldsymbol{v}^{(1)}\) and \(\boldsymbol{v}^{(2)}=\boldsymbol{v}^{(3)}\), respectively, and \[D_{ij}=\frac{1}{2}\left(\epsilon_{iml}A_{mlj}+\epsilon_{jml}A_{mli}\right) \tag{47}\] are the components of \(\boldsymbol{D}^{(1)}\). Similar expressions for \(A_{ijk}\) in this case can also be found in [131, 132].

**Remark 7**: In the fully symmetric case, where \(A_{ijk}=A_{ikj}=A_{jik}\), \(A=0\), all vectors \(\boldsymbol{v}^{(i)}\) are one and the same \(\boldsymbol{v}\), and both \(\boldsymbol{D}^{(1)}\) and \(\boldsymbol{D}^{(2)}\) vanish, so that (33) reduces to \[{\bf A}={\bf D}^{(3)}+{\bf D}^{(1)}. \tag{48}\] Here \({\bf D}^{(1)}={\bf D}^{(1)}_{1}+{\bf D}^{(1)}_{2}+{\bf D}^{(1)}_{3}\) and, in explicit components, \[A_{ijk}=\stackrel{{\longrightarrow}}{{A}}_{ijk}+\frac{1}{5}\left(v_{i}\delta_{jk}+v_{j}\delta_{ik}+v_{k}\delta_{ij}\right), \tag{49}\] where \(v_{i}=A_{ijj}\) are the components of \(\boldsymbol{v}\), in agreement with the general _detracer_ operator introduced in [3, 4].

### Generalized eigenvectors and eigenvalues

For tensors of rank higher than 2, the very notion of eigenvectors and eigenvalues is _not_ universally accepted and, what is worse for our purposes, for these tensors no analogue is known of the Spectral Theorem, which characterizes symmetric second-rank tensors in terms of their eigenvectors and eigenvalues. Different notions of generalized eigenvectors and eigenvalues have been proposed for not necessarily symmetric tensors of rank \(r>2\) in a general \(n\)-dimensional space \({\sf V}\). The one we adopt below has been put forward and studied in [88, 89, 80]; it has also been enriched by a theorem [19] that estimates the cardinality of eigenvalues. For definiteness, here we shall take \({\sf V}\) endowed with an inner product (denoted by the symbol \(\cdot\)) over the field \(\mathbb{F}=\mathbb{C}\).
Letting \(\boldsymbol{x}^{\otimes r}\) be the member of \(\mathcal{T}(r,{\sf V})\) defined by the multiple tensor product \[\boldsymbol{x}^{\otimes r}:=\underbrace{\boldsymbol{x}\otimes\cdots\otimes \boldsymbol{x}}_{r\text{ times}} \tag{50}\] and following [88, 89], for a tensor \(\boldsymbol{\mathsf{A}}\in\mathcal{T}(r,{\sf V})\), we define \(\boldsymbol{\mathsf{A}}\boldsymbol{x}^{r-1}:=\boldsymbol{\mathsf{A}}\cdot \boldsymbol{x}^{\otimes(r-1)}\), which is the vector in \({\sf V}\) with Cartesian components \[\left(\boldsymbol{\mathsf{A}}\boldsymbol{x}^{r-1}\right)_{i}:=A_{ii_{2}\ldots i _{r}}x_{i_{2}}\ldots x_{i_{r}}, \tag{51}\] where \(A_{i_{1}i_{2}\ldots i_{r}}\) are the Cartesian components of \(\boldsymbol{\mathsf{A}}\) relative to a prescribed, orthonormal basis \((\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{n})\) of \({\sf V}\), as in (1). The solutions \(\boldsymbol{x}\in{\sf V}\) and \(\lambda\in\mathbb{C}\) of the non-linear problem \[\boldsymbol{\mathsf{A}}\boldsymbol{x}^{r-1}=\lambda\boldsymbol{x} \tag{52}\] such that \[\boldsymbol{x}\cdot\boldsymbol{x}=1 \tag{53}\] are a (generalized) _eigenvector_\(\boldsymbol{x}\) of \(\boldsymbol{\mathsf{A}}\) and the associated (generalized) eigenvalue \(\lambda\).1 Footnote 1: In the traditional case, where \(r=2\) and equation (52) becomes linear, the normalization condition (53) is virtually superfluous. It is far from being so in the present non-linear context. \(\boldsymbol{\mathsf{A}}\) is said to be _real_ if all its Cartesian components are real. A solution \((\lambda,\widehat{\boldsymbol{x}})\) of (52) and (53) is also collectively called a (generalized) _eigenpair_ of \(\boldsymbol{\mathsf{A}}\).2 Footnote 2: Often the (generalized) eigenvectors defined above are also said to be _normalized_, as they are required to satisfy the constraint (53). We do not consider here non-normalized eigenvectors, as others do, and so we need not that appellation. Similarly, whenever no ambiguity can arise, we also omit the adjective “generalized” in referring to the solutions of (52) and (53) and we simply call them the eigenvectors and eigenvalues of \(\boldsymbol{\mathsf{A}}\). A number of facts have been established about the eigenvectors of a generic tensor \(\boldsymbol{\mathsf{A}}\). Below, we recall from [19] those which are more relevant to our pursuit. 1. It should be noted that eigenpairs \((\lambda,\widehat{\boldsymbol{x}})\) come in equivalence classes. Letting \(\boldsymbol{x}^{\prime}=t\widehat{\boldsymbol{x}}\) and \(\lambda^{\prime}=t^{r-2}\lambda\) with \(t^{2}=1\), it is readily seen that \((\lambda^{\prime},\boldsymbol{x}^{\prime})\) is an eigenpair whenever \((\lambda,\widehat{\boldsymbol{x}})\) is so. We shall consider both \((\lambda,\widehat{\boldsymbol{x}})\) and \((\lambda^{\prime},\boldsymbol{x}^{\prime})\) as members of one and the same equivalence class. 2. The _spectrum_\(\mathrm{sp}(\boldsymbol{\mathsf{A}})\) of all eigenvalues of \(\boldsymbol{\mathsf{A}}\) is either finite or it consists of all complex numbers in the complement of a finite set. If \(\mathrm{sp}(\boldsymbol{\mathsf{A}})\) is finite and \(r>2\), then the number \(d\) of equivalence classes of eigenvalues in \(\mathrm{sp}(\boldsymbol{\mathsf{A}})\) (counted with their multiplicity) is given by \[d=\frac{(r-1)^{n}-1}{r-2}.\] (54) 3. If \(\boldsymbol{\mathsf{A}}\) is _real_ and either \(r\) or \(n\) is odd, then \(\boldsymbol{\mathsf{A}}\) has at least one real eigenpair. 4. 
Every fully _symmetric_ tensor \(\boldsymbol{\mathsf{A}}\) (as under case (i) above) has _at most_\(d\) distinct (equivalence classes of) eigenvalues. Moreover, this bound is indeed attained for _generic_ fully symmetric tensors \(\boldsymbol{\mathsf{A}}\).3 #### 2.3.1 Generalized potential. A potential \(\Phi\) that generalizes (21) can be defined for a fully symmetric tensor \(\mathbf{\mathsf{A}}\in\mathcal{T}(r,\mathsf{V})\) as \[\Phi(\mathbf{x}):=\mathbf{\mathsf{A}}\cdot\mbox{\boldmath$x$ }^{\otimes r}=A_{i_{1}i_{2}\ldots i_{r}}x_{i_{1}}x_{i_{2}}\ldots x_{i_{r}}, \tag{55}\] which is a (complex) homogeneous polynomial of degree \(r\). Differentiating \(\Phi\) with respect to \(x\), we easily see from (51) that \[\nabla\Phi(\mathbf{x})=r\mathbf{\mathsf{A}}\mbox{\boldmath $x$}^{r-1}. \tag{56}\] If \(\widehat{\mathbf{x}}\) is a generalized eigenvector of \(\mathsf{A}\) with eigenvalue \(\lambda\), then it follows from (56) that \[\nabla\Phi(\widehat{\mathbf{x}})=r\lambda\widehat{\mathbf{x }}, \tag{57}\] which, by (53) and Euler's theorem on homogeneous functions, implies that \[\Phi(\widehat{\mathbf{x}})=\lambda. \tag{58}\] Thus, the eigenvalues of \(\mathsf{A}\) are the values taken by the potential \(\Phi\) on the corresponding eigenvectors in the unit sphere \(\mathbb{S}^{n-1}\) of \(\mathsf{V}\). Conversely, the critical points of \(\Phi\) in \(\mathbb{S}^{n-1}\) satisfy the parallelism condition \[\nabla\Phi\parallel\mathbf{x}, \tag{59}\] which by (56) is equivalent to (52). Thus, all the eigenvectors of \(\mathsf{A}\) are characterized as critical points of \(\Phi\) in \(\mathbb{S}^{n-1}\) and the corresponding eigenvalues are given by the values attained there by \(\Phi\). **Remark 8**: For \(\mathbb{F}=\mathbb{R}\), if \(\mathsf{A}\) is real and symmetric, then \(\Phi\) is a real-valued polynomial. Its critical values and critical points in \(\mathbb{S}^{n-1}\) are all the generalized real eigenpairs of \(\mathsf{A}\), whose number can be far less than \(d\) given in (54). **Remark 9**: Although a potential \(\Phi\) can also be associated with a partly symmetric tensor \(\mathsf{A}\) as in (55), its critical points in \(\mathbb{S}^{n-1}\) can no longer be interpreted as generalized eigenvectors of \(\mathsf{A}\) according to definition (52), but just as those of the fully symmetric part \(\mathsf{A}^{(1)}\) of \(\mathsf{A}\) defined by extending (11). In the rest of this review, we shall lay special emphasis on fully symmetric tensors, so that their eigenvectors can be identified with the critical points of \(\Phi\) (and their generalized eigenvalues with the corresponding critical values). #### 2.3.2 Case of interest. We shall tackle in detail the case where \(r=3\) and \(n=3\), so that by (54) \(d=7\). If a tensor \(\mathbf{A}\in\mathcal{T}(3,\mathsf{V})\) with \(\dim\mathsf{V}=3\) is both real and symmetric, we are assured that it possesses at most \(7\) distinct (equivalence classes of) complex eigenvalues, of which at least \(1\) is real. The analysis performed in sections 3 and 4 will actually reveal more than the general facts recalled above would lead us to expect. For example, we shall see that the distinct real eigenvalues of \(\mathbf{A}\) are never less than \(5\), but they can be less than \(7\) in a generic fashion. In the following section, we pause briefly to illustrate the physical meanings that a general octupolar tensor \(\mathbf{A}\) can have, both in the symmetric and non-symmetric cases. 
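Before doing so, it is worth noting that real eigenpairs can actually be computed by ascending \(\Phi\) on the unit sphere, as the characterization above suggests. A minimal sketch (Python with numpy; the shifted iteration and its shift value are our own choices, in the spirit of shifted power methods for tensors):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3, 3))
A = sum(np.einsum(p + '->ijk', T)
        for p in ('ijk', 'jki', 'kij', 'jik', 'kji', 'ikj')) / 6  # fully symmetric

def Ax2(x):
    """The vector A x^{r-1} of (51), here with r = 3."""
    return np.einsum('ijk,j,k->i', A, x, x)

def ascend(x, shift=8.0, tol=1e-13, itmax=200000):
    """Shifted fixed-point iteration x <- normalize(A x^2 + shift x); for a
    shift dominating A it increases Phi monotonically on the unit sphere and
    stops at a critical point, i.e. at a solution of (52) and (53). Only
    maxima of Phi are reached this way; minima are their parity conjugates,
    and saddle-type eigenpairs are not found by this simple scheme."""
    for _ in range(itmax):
        y = Ax2(x) + shift * x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return float(x @ Ax2(x)), x       # the eigenvalue is Phi(x), by (58)

classes = []
for x0 in rng.standard_normal((60, 3)):
    lam, x = ascend(x0 / np.linalg.norm(x0))
    assert np.linalg.norm(Ax2(x) - lam * x) < 1e-8      # (52) holds
    if all(np.linalg.norm(x - sgn * y) > 1e-5 for _, y in classes for sgn in (1, -1)):
        classes.append((lam, x))

print(len(classes), "classes of real eigenpairs found; at most 7, by (54)")
```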
### Physical motivation _Octupolar order_ in soft matter physics is not just an exotic mathematical curiosity. Our main physical motivation for this review lies in the theory of liquid crystals, especially in connection with the recently discovered _polar_ nematic phases [60, 72, 76, 100]. This is why we start from liquid crystals to illustrate the physical background of the mathematical theory. #### 2.4.1 Generalized nematic phases. Liquid crystals provide a noticeable case of soft ordered materials for which a _quadrupolar order_ tensor may not suffice to capture the complexity of the condensed phases they can exhibit. After some earlier theoretical attempts to describe _tetrahedratic_ nematic phases [40, 41], it was established [17, 70, 92] that the phases observed experimentally in liquid crystals composed of bent-core molecules [65, 81] could be described by means of an additional fully symmetric, completely traceless, third-rank order tensor \(\mathbf{A}\). 2 Intuition was rooted in representing \(\mathbf{A}\) as the following ensemble average, Footnote 2: An excluded-volume theory to this effect is presented in [12]. For the role played by octupolar tensors in representing steric interactions and “shape polarity”, the reader could also consult the works [83, 84, 85]. \[\mathbf{A}=\left\langle\sum_{\alpha=1}^{4}\boldsymbol{n}_{\alpha}\otimes \boldsymbol{n}_{\alpha}\otimes\boldsymbol{n}_{\alpha}\right\rangle, \tag{60}\] where the _tetrahedral vectors_\(\boldsymbol{n}_{\alpha}\) are the unit vectors directed from the centre of a (microscopic) tetrahedron to its vertices, as shown in figure 1[86, 87], \[\left\{\begin{aligned} \boldsymbol{n}_{1}&=-\frac{1}{ \sqrt{3}}(\boldsymbol{e}_{1}+\boldsymbol{e}_{2}+\boldsymbol{e}_{3}),\quad \boldsymbol{n}_{2}&=\frac{1}{\sqrt{3}}(\boldsymbol{e}_{1}- \boldsymbol{e}_{2}+\boldsymbol{e}_{3}),\\ \boldsymbol{n}_{3}&=\frac{1}{\sqrt{3}}(-\boldsymbol {e}_{1}+\boldsymbol{e}_{2}+\boldsymbol{e}_{3}),\quad\boldsymbol{n}_{4}& =\frac{1}{\sqrt{3}}(\boldsymbol{e}_{1}+\boldsymbol{e}_{2}- \boldsymbol{e}_{3}),\end{aligned}\right. \tag{61}\] where \((\mathrm{e}_{1},\mathrm{e}_{2},\mathrm{e}_{3})\) is a Cartesian frame. **Remark 10**: Since \(\sum_{\alpha=1}^{4}\boldsymbol{n}_{\alpha}=\mathbf{0}\), it is easy to see that \(\mathbf{A}\) in (60) is a symmetric traceless octupolar tensor. **Remark 11**: Alternatively, in a series of papers [67, 68, 69, 110, 111] on generalized nematic phases (both achiral and chiral) the octupolar order tensor \(\mathbf{A}\) was defined as \[\mathbf{A}=\frac{1}{\sqrt{6}}\left\langle\sum_{\pi\in S_{3}}\boldsymbol{e}_{ \pi(1)}\otimes\boldsymbol{e}_{\pi(2)}\otimes\boldsymbol{e}_{\pi(3)}\right\rangle, \tag{62}\] where the sum is extended to all permutations in \(S_{3}\). It is a simple exercise to show that, despite appearances, the tensors in (62) and (60) are proportional to one another. This would suggest that \(\mathbf{A}\) should partly preserve the parent tetrahedral symmetry and be somehow associated with _four_ directions in space. Such a supposition would also be supported by the analysis in [117], which showed that in two space dimensions \(\mathbf{A}\) is indeed geometrically fully described by an equilateral triangle. We shall show in the following sections how this expectation is indeed illusory. An octupolar tensor arises as an _order tensor_ in the description of the orientational distribution of a microscopic polar axis \(\bi{p}\). 
This is especially relevant to the study of generalized liquid crystals, including polar nematic phases. A probability density \(\varrho\) over the unit sphere \(\mathbb{S}^{2}\) can be represented by Buckingham's formula [18] as \[\varrho(\bi{p})=\frac{1}{4\pi}\left(1+\sum_{k=1}^{\infty}\frac{(2k+1)!!}{k!}\left\langle\,\overline{\bi{p}^{\otimes k}}\,\right\rangle_{\varrho}\cdot\bi{p}^{\otimes k}\right), \tag{63}\] where, much in the spirit of [125], \(\left\langle\,\overline{\bi{p}^{\otimes k}}\,\right\rangle_{\varrho}\) is the _multipole average_ corresponding to the multiple tensor product \(\bi{p}^{\otimes k}\) (see [114]). A combinatoric proof of (63) can be found in [44]. Collectively, the multipole averages are _order tensors_ of increasing rank that decompose \(\varrho\). In (63), \(\otimes\) denotes (as above) tensor product, and \(\left\langle\cdots\right\rangle_{\varrho}\) is the _ensemble_ average associated with \(\varrho\), \[\left\langle\cdots\right\rangle_{\varrho}:=\frac{1}{4\pi}\int_{\mathbb{S}^{2}}(\cdots)\varrho(\bi{p})\mathrm{d}a(\bi{p}). \tag{64}\] Especially, the first three multipole averages play a role in resolving the characteristic features of \(\varrho\): they are the _dipolar_, _quadrupolar_, and _octupolar_ order tensors defined by \[\bi{d} := \left\langle\bi{p}\right\rangle_{\varrho},\quad\mathbf{Q}:=\left\langle\,\overline{\bi{p}\otimes\bi{p}}\,\right\rangle_{\varrho}, \tag{65}\] \[\mathbf{A} := \left\langle\,\overline{\bi{p}\otimes\bi{p}\otimes\bi{p}}\,\right\rangle_{\varrho}, \tag{66}\] respectively.4 Footnote 4: Other computational definitions of scalar order parameters for both tetrahedral and cubatic symmetries can also be found in [96, 97].

Here, we shall focus on the octupolar order tensor \(\mathbf{A}\). In accordance with (1), in a Cartesian frame \((\bi{e}_{1},\bi{e}_{2},\bi{e}_{3})\), the tensor \(\mathbf{A}\) is represented as \[\mathbf{A}=A_{ijk}\,\bi{e}_{i}\otimes\bi{e}_{j}\otimes\bi{e}_{k}, \tag{67}\] where by (66) the coefficients \(A_{ijk}\) fall under case (i) above and obey the following properties, see (7): \[A_{ijk}=A_{jik}=A_{ikj},\ \forall\ i,j,k,\qquad A_{iik}=A_{iki}=A_{kii}=0,\ \forall\ k. \tag{68}\]

Figure 1: The tetrahedral unit vectors \(\bi{n}_{\alpha}\) defined in (61) and featuring in (60).

As already remarked, combined together, these properties reduce to 7 the number of independent parameters needed to represent in a generic frame all possible octupolar order tensors \(\mathbf{A}\). For definiteness, we shall adopt the following definitions: \[\cases{\alpha_{0}:=A_{123},\cr\alpha_{1}:=A_{111},\quad\alpha_{2}:=A_{222},\quad\alpha_{3}:=A_{333},\cr\beta_{1}:=A_{122},\quad\beta_{2}:=A_{233},\quad\beta_{3}:=A_{311},\cr} \tag{69}\] so that \[A_{133}=-(\alpha_{1}+\beta_{1}),\quad A_{211}=-(\alpha_{2}+\beta_{2}),\quad A_{322}=-(\alpha_{3}+\beta_{3}). \tag{70}\]
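Remark 10, Remark 11, and the bookkeeping in (69) and (70) are easily probed on the perfectly ordered state in which the ensemble average in (60) is omitted; a sketch (Python with numpy; the proportionality constant is specific to this example):

```python
import numpy as np
from itertools import permutations

e = np.eye(3)
n_vecs = np.array([-(e[0] + e[1] + e[2]),      # the tetrahedral vectors (61)
                   e[0] - e[1] + e[2],
                   -e[0] + e[1] + e[2],
                   e[0] + e[1] - e[2]]) / np.sqrt(3)

# equation (60) without the ensemble average (a single, perfectly ordered state)
A = sum(np.einsum('i,j,k->ijk', n, n, n) for n in n_vecs)

# Remark 10: A is fully symmetric and has vanishing partial traces
for p in ('jik', 'ikj'):
    assert np.allclose(A, np.einsum(p + '->ijk', A))
assert np.allclose(np.einsum('ijj->i', A), 0.0)

# the parameters (69): here only alpha_0 = A_123 survives
assert np.isclose(A[0, 1, 2], -4 / (3 * np.sqrt(3)))
others = [A[0,0,0], A[1,1,1], A[2,2,2], A[0,1,1], A[1,2,2], A[2,0,0]]
assert np.allclose(others, 0.0)        # alpha_1..3 and beta_1..3 all vanish
# and (70) is then trivially satisfied: A_133 = A_211 = A_322 = 0
assert np.allclose([A[0,2,2], A[1,0,0], A[2,1,1]], 0.0)

# Remark 11: the tensor (62) is proportional to A (here by -4*sqrt(2)/3)
B = sum(np.einsum('i,j,k->ijk', e[p[0]], e[p[1]], e[p[2]])
        for p in permutations((0, 1, 2))) / np.sqrt(6)
assert np.allclose(A, -4 * np.sqrt(2) / 3 * B)
```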
Given the number of scalar coefficients needed to represent \(\mathbf{A}\) in a generic Cartesian frame, one may think of absorbing three by selecting a convenient _orienting_ frame and letting the remaining four describe scalar order parameters with a direct physical meaning, in complete analogy with what is customary for the second-rank, symmetric and traceless quadrupolar order tensor \(\mathbf{Q}\), which is described by five scalar coefficients in a generic frame and characterized by only two scalar order parameters. For \(\mathbf{Q}\), the reduction of the scalar coefficients to the essential scalar order parameters is performed by representing \(\mathbf{Q}\) in its eigenframe, where only two eigenvalues suffice to characterize it. Now the definition of generalized eigenvectors and eigenvalues for \(\mathbf{A}\) recalled in section 2.3 above comes in handy. Here, we shall take the equivalent route of representing \(\mathbf{A}\) through the critical points of the _octupolar potential_ \(\Phi\), the scalar-valued function defined on the unit sphere \(\mathbb{S}^{2}\) as in (55); in this setting, \(\Phi\) is nothing but the octupolar component of Buckingham's formula (63). Thus, in particular, maxima and minima of \(\Phi\), with their relative values, would designate the directions in space along which a microscopic polar axis \(\boldsymbol{p}\) is more and less likely to be retraced, respectively, according to the octupolar component of \(\varrho\). We shall see in section 2.5.1 how to employ the properties of the octupolar potential to reduce the number of independent parameters that represent \(\mathbf{A}\) in the orienting frame.

**Remark 12**: Such a reduction is meaningful as long as the octupolar component of the probability density \(\varrho\) can be isolated from the quadrupolar component, so as to be treated independently. Allegedly, this is seldom the case for ordinary liquid crystals, where the quadrupolar component is expected to be dominant. If that is the case, the natural frame for \(\mathbf{A}\) would be the eigenframe of \(\mathbf{Q}\), which need not coincide with the orienting frame. In our applications of \(\mathbf{A}\) to liquid crystal science (which are not the only ones considered here), we shall consistently presume that quadrupolar and octupolar effects are separable.

The physical motivation illustrated here will primarily guide our intuition below, to the point that we shall often picture the maxima of the octupolar potential as designating an ordered condensed _phase_ on its own. Other interpretations are also possible, which do not require \(\mathbf{A}\) to be fully symmetric and traceless, and so cannot uniquely rely on the octupolar potential \(\Phi\) as defined in (20). They are briefly recalled for completeness in the following.

#### 2.4.2 Non-linear optics.

The optical properties of crystals are described by the constitutive laws linking electromagnetic fields and induced polarizations. In the linear theory, for example, the induced polarization \(\mathbf{P}\) is related to the electric field \(\mathbf{E}\) through the formula \[\mathbf{P}(\omega)=\mathbf{\chi}^{(1)}\mathbf{E}(\omega), \tag{71}\] where \(\omega\) is the oscillation frequency of the fields and the linear susceptibility \(\mathbf{\chi}^{(1)}\) is in general represented by a symmetric second-rank tensor. The lowest-order optical non-linearity, such as frequency mixing, arises when the polarization \(\mathbf{P}(\omega_{3})\) at frequency \(\omega_{3}=\omega_{1}+\omega_{2}\) is related to the electric fields \(\mathbf{E}(\omega_{1})\) and \(\mathbf{E}(\omega_{2})\) oscillating at frequencies \(\omega_{1}\) and \(\omega_{2}\) through the following quadratic law (see, for example, [55] and Sect. 1.5 of [16]), \[\mathbf{P}(\omega_{1}+\omega_{2})=\mathbf{A}(\omega_{1},\omega_{2})[\mathbf{E}(\omega_{1})\otimes\mathbf{E}(\omega_{2})], \tag{72}\] where the generic third-rank tensor \(\mathbf{A}\) represents a non-linear susceptibility.
In Cartesian components, (72) reads as \[P_{i}(\omega_{1}+\omega_{2})=A_{ijk}(\omega_{1},\omega_{2})E_{j}(\omega_{1})E_{k}(\omega_{2}). \tag{73}\] In general, for \(\omega_{1}\neq\omega_{2}\), \(A_{ijk}\) need not enjoy any symmetry, as \(E_{j}(\omega_{1})\) may differ from \(E_{k}(\omega_{2})\). However, for \(\omega_{1}=\omega_{2}\), which is the case of _second harmonic_ generation, we may take \(A_{ijk}=A_{ikj}\) in (73) with no loss of generality, and 18 parameters suffice to represent \(\mathbf{A}\). Moreover, often non-linear optical interactions involve waves with frequency much smaller than the lowest resonance frequency of the material. If this is the case, the non-linear susceptibility \(\mathbf{A}\) is virtually independent of frequency and we can permute all indices in \(A_{ijk}\) leaving the response of the material unaltered. This is often called the Kleinman symmetry condition for the tensor \(\mathbf{A}\) [59]. When it applies, \(\mathbf{A}\) is represented by 10 independent parameters.

#### 2.4.3 Linear piezoelectricity.

In a crystal, polarization can also arise in response to stresses; this is called the _piezoelectric effect_ and was discovered by the Curie brothers [29, 30]. In a linear constitutive theory, the induced polarization \(\mathbf{P}\) is related to the Cauchy stress tensor \(\mathbf{T}\) by \[\mathbf{P}=\mathbf{A}[\mathbf{T}], \tag{74}\] where \(\mathbf{A}\) is now the piezoelectric tensor. The component form of (74) is \[P_{i}=A_{ijk}T_{jk}. \tag{75}\] In classical elasticity, \(T_{ij}=T_{ji}\), and so \({\bf A}\) enjoys the symmetry \[A_{ijk}=A_{ikj} \tag{76}\] and is represented by 18 independent parameters. The invariant decomposition of the piezoelectric tensor can help to classify piezoelectric crystals; its algebraic properties have recently received a renewed interest (see, for example, [90, Chapt. 7] and [46, 54, 61]). The decomposition of \({\bf A}\) as in (10) is affected by the extra symmetry requirement (76). Clearly, \({\bf A}^{(3)}\) vanishes, but neither \({\bf A}^{(2,1)}\) nor \({\bf A}^{(2,2)}\) does. These two latter do not enjoy the symmetry (76), whereas \({\bf A}^{(2)}={\bf A}^{(2,1)}+{\bf A}^{(2,2)}\) does. Moreover, as shown in [54], \[{\bf A}={\bf A}^{(1)}+{\bf A}^{(2)} \tag{77}\] is the unique irreducible invariant decomposition of the piezoelectric tensor.

#### 2.4.4 Couple-stresses.

Cauchy's stress tensor \(\mathbf{T}\) is symmetric to guarantee the balance of moments, but it has long been known that non-symmetric stress tensors may occur in mechanics [112, Sect. 98]. The symmetry of Cauchy's stress tensor actually amounts to the assumption that all torques come from moments of forces. The presence of internal contact couples was already hypothesized in the early theory of the Cosserat brothers [26, 27], although in the special context of rods and shells. Toupin [109] put forward a non-linear theory of elastic materials with couple-stresses, which was soon found to be equivalent to Grioli's [49]. In Toupin's theory, the contact couple \(\boldsymbol{c}\) is represented by the second-rank skew-symmetric tensor \(\mathbf{C}\) that has \(\boldsymbol{c}\) as its axial vector. The couple stress is then the third-rank tensor \(\mathbf{A}\) that delivers \(\mathbf{C}\) when applied to the outer unit normal \(\boldsymbol{\nu}\) designating the orientation of the contact surface, \[\mathbf{C}=\mathbf{A}[\boldsymbol{\nu}]. \tag{78}\]
In components, (78) reads as \[C_{ij}=A_{ijk}\nu_{k} \tag{79}\] and, since \(C_{ij}=-C_{ji}\), \[A_{ijk}=-A_{jik}, \tag{80}\] which shows that there are only 9 independent components of \({\bf A}\). The reader is referred to [34, 35, 36, 37] for the connection between Toupin's theory and the early mechanical theories of Ericksen for liquid crystals. In a way similar to that enacted in section 2.4.3, the symmetry property (80) also affects the representation of a couple-stress tensor \({\bf A}\) (see [109] and [66], the latter also referring to \({\bf A}\) as the _Hall_ tensor for the role an octupolar tensor with the symmetry (80) plays in describing the Hall effect in crystals). Clearly, in this case \({\bf A}^{(1)}={\bf 0}\), while \({\bf A}^{(2)}\) enjoys the symmetry (80). As shown in [54], \[{\bf A}={\bf A}^{(2)}+{\bf A}^{(3)} \tag{81}\] is a unique irreducible invariant decomposition of \({\bf A}\).

### Octupolar potential

For the octupolar order tensor \(\mathbf{A}\) in (67), the octupolar potential \(\Phi\) is given by (20), which we reproduce here for the reader's ease, \[\Phi(\boldsymbol{x}):=\mathbf{A}\cdot\boldsymbol{x}\otimes\boldsymbol{x}\otimes\boldsymbol{x}=A_{ijk}x_{i}x_{j}x_{k}. \tag{82}\] Given the symmetries enjoyed by \(\mathbf{A}\), the octupolar potential \(\Phi\) identifies it uniquely. The critical points \(\widehat{\boldsymbol{x}}\) of \(\Phi\) constrained to \(\mathbb{S}^{2}\) have Cartesian components \((\widehat{x}_{1},\widehat{x}_{2},\widehat{x}_{3})\) that solve the equations \[A_{ijk}x_{j}x_{k}=\lambda x_{i},\quad i=1,2,3, \tag{83}\] where \(\lambda\) is a Lagrange multiplier associated with the constraint \[x_{i}x_{i}=1. \tag{84}\] Comparing (83) and (52), we readily realize that \((\lambda,\widehat{\boldsymbol{x}})\) is a real eigenpair of \(\mathbf{A}\). Moreover, it follows from (83) and (84) that \[\Phi(\widehat{x}_{1},\widehat{x}_{2},\widehat{x}_{3})=\lambda, \tag{85}\] which is a specialization of (58). Since each real eigenpair \((\lambda,\widehat{\boldsymbol{x}})\) is accompanied by its opposite \((-\lambda,-\widehat{\boldsymbol{x}})\), we see that maxima and minima of \(\Phi\) are conjugated by a parity transformation. As \(\mathbf{A}\) is real and symmetric, we know from the general results recalled in section 2.3 that, _modulo_ the parity conjugation, there are generically 7 distinct eigenvalues of \(\mathbf{A}\) one of which at least is real. However, we have no clue as to whether all other eigenvalues are real or not. We are exclusively interested in the real eigenvalues of \(\mathbf{A}\), as, by (85), they are extrema attained by \(\Phi\) and so they possibly bear a statistical interpretation whenever \(\mathbf{A}\) can be regarded as the collective representation of the third moments of a probability density distribution over \(\mathbb{S}^{2}\). Since \(\Phi\) is a polynomial, real-valued mapping on \(\mathbb{S}^{2}\), its critical points are _singularities_ for the _index_ field \(\boldsymbol{u}_{\Phi}\) defined on \(\mathbb{S}^{2}\) by \[\boldsymbol{u}_{\Phi}:=\frac{\nabla_{\!\mathrm{s}}\Phi}{|\nabla_{\!\mathrm{s}}\Phi|}, \tag{86}\] where \(\nabla_{\!\mathrm{s}}\) denotes the surface gradient on \(\mathbb{S}^{2}\). Each _isolated_ singularity of \(\boldsymbol{u}_{\Phi}\) can be assigned an _index_, which is a signed integer \(\iota\) [107, section VIII.10]. Assuming that \(\boldsymbol{u}_{\Phi}\) possesses a finite number \(N\) of isolated singularities, by a theorem of Poincaré and Hopf [107, pp. 239-247],
the sum of all their indices must equal the Euler characteristic of the sphere, that is, \[\sum_{i=1}^{N}\iota_{i}=2. \tag{87}\] Now, both maxima and minima of \(\Phi\) are critical points with index \(\iota=+1\), whereas its non-degenerate saddle points are critical points with \(\iota=-1\). Thus, were the eigenvalues of a generic, symmetric traceless tensor \({\bf A}\) all real (so that according to (54) they occur in 7 distinct pairs), letting \(M\) be the number of eigenvalues corresponding to the maxima of \(\Phi\) (which equal in number the minima of \(\Phi\)) and \(S\) the number of eigenvalues of \({\bf A}\) corresponding to saddle points of \(\Phi\) (which equal in number the saddles with negative eigenvalues), if the critical points of \(\Phi\) have all either index \(\iota=+1\) or \(\iota=-1\), we easily obtain from (87) that \[M-S=1\quad\mbox{and}\quad S+M=7, \tag{88}\] whence it follows that \(M=4\) and \(S=3\).1 Footnote 1: Under precisely these assumptions, equation (88) had already been established by Maxwell [78] in 1870, elaborating on earlier qualitative considerations of Cayley [22].

We shall see below that the complete picture is indeed far more complicated than this, for two reasons: first, not all eigenvalues of \({\bf A}\) are real; second, not all critical points of \(\Phi\) have index \(\iota=\pm 1\).

#### 2.5.1 Oriented potential.

Making use of (69) and (70) in (82), the octupolar potential can be written in the following explicit form, \[\begin{split}\Phi(x_{1},x_{2},x_{3})&=6\alpha_{0}x_{1}x_{2}x_{3}+\alpha_{1}x_{1}\left(x_{1}^{2}-3x_{3}^{2}\right)\\ &+\alpha_{2}x_{2}\left(x_{2}^{2}-3x_{1}^{2}\right)+\alpha_{3}x_{3}\left(x_{3}^{2}-3x_{2}^{2}\right)\\ &+3\left[\beta_{1}x_{1}\left(x_{2}^{2}-x_{3}^{2}\right)+\beta_{2}x_{2}\left(x_{3}^{2}-x_{1}^{2}\right)+\beta_{3}x_{3}\left(x_{1}^{2}-x_{2}^{2}\right)\right],\end{split} \tag{89}\] which is described by 7 scalar parameters. To reduce these, we choose a special _orienting_ Cartesian frame \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\). The octupolar potential \(\Phi\) cannot be constant on \(\mathbb{S}^{2}\), lest it trivially vanish. It will then have at least a local maximum (accompanied by its antipodal minimum). For now, we choose \(\mathbf{e}_{3}\) such that \(\Phi\) attains a critical point at the North pole \((0,0,1)\) of \(\mathbb{S}^{2}\), which requires, see section 5.2 of [45], \[\alpha_{1}=-\beta_{1},\qquad\beta_{2}=0. \tag{90}\] Later, we shall require \(\Phi\) to attain a local maximum at the North pole of \(\mathbb{S}^{2}\), which will result in an inequality to be obeyed by a conveniently chosen parameter, see (97). We can still choose the orientation of the pair \((\mathbf{e}_{1},\mathbf{e}_{2})\). Since \(\Phi\) is odd on \(\mathbb{S}^{2}\), and so is also on the unit circle \(\mathbb{S}^{1}\) on \(\mathbb{S}^{2}\) orthogonal to \(\mathbf{e}_{3}\), there must be a point on \(\mathbb{S}^{1}\) where \(\Phi\) vanishes. We further orient \(\Phi\) by requiring that \(\Phi(1,0,0)=0\), which implies that \[\beta_{1}=0. \tag{91}\] Finally, the potential can be scaled with no prejudice to its critical points. By requiring that \(\Phi(0,0,1)=1\), we obtain that \[\alpha_{3}=1. \tag{92}\] Combining equations (90)-(92), we then define the _oriented_ octupolar potential as \[\Phi_{\rm o}(x_{1},x_{2},x_{3}):=6\alpha_{0}x_{1}x_{2}x_{3}+\alpha_{2}(x_{2}^{2}-3x_{1}^{2})x_{2}+(x_{3}^{2}-3x_{2}^{2})x_{3}+3\beta_{3}(x_{1}^{2}-x_{2}^{2})x_{3}, \tag{93}\] which features only 3 scalar parameters. In sections 3 and 4, two alternative, concurring methods will be presented that afford a complete characterization of the critical points of \(\Phi_{\rm o}\).
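The orienting conditions just imposed are easily verified on (93); a minimal sketch (Python with numpy; the finite-difference gradient and the sample parameters are ours):

```python
import numpy as np

def Phi_o(x, a0, a2, b3):
    """The oriented octupolar potential (93)."""
    x1, x2, x3 = x
    return (6*a0*x1*x2*x3 + a2*(x2**2 - 3*x1**2)*x2
            + (x3**2 - 3*x2**2)*x3 + 3*b3*(x1**2 - x2**2)*x3)

def grad(f, x, h=1e-6):
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2*h)
    return g

a0, a2, b3 = 0.3, -0.2, 0.7          # arbitrary sample parameters (ours)
f = lambda x: Phi_o(x, a0, a2, b3)

north = np.array([0.0, 0.0, 1.0])
g = grad(f, north)
# the gradient at the North pole is radial: a critical point on S^2, cf. (59)
assert np.allclose(g - (g @ north) * north, 0.0, atol=1e-6)
assert np.isclose(f(np.array([1.0, 0.0, 0.0])), 0.0)   # the choice (91)
assert np.isclose(f(north), 1.0)                       # the scaling (92)
```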
## 3 Geometric Approach

The oriented potential in (93) enjoys a number of symmetries, which are better explained and represented if the parameter space is described by three new variables \((\rho,\chi,K)\) related to \((\alpha_{0},\alpha_{2},\beta_{3})\) through the equations \[\alpha_{0}=\frac{1}{2}\rho\cos\chi,\quad\alpha_{2}=K,\quad\beta_{3}=\frac{1}{2}(\rho\sin\chi-1), \tag{94}\] where \[\rho\geqq 0,\quad\chi\in[-\pi,\pi],\quad K\in\mathbb{R}. \tag{95}\] \(\Phi_{\rm o}\) in (93) is accordingly represented as \[\begin{split}\Phi_{\rm o}&=3\rho\cos\chi x_{1}x_{2}x_{3}+K(x_{2}^{2}-3x_{1}^{2})x_{2}\\ &+(x_{3}^{2}-3x_{2}^{2})x_{3}+\frac{3}{2}(\rho\sin\chi-1)(x_{1}^{2}-x_{2}^{2})x_{3}.\end{split} \tag{96}\] In the new parameters, the North pole of \(\mathbb{S}^{2}\) is guaranteed to be a maximum for \(\Phi_{\rm o}\) if \[0\leqq\rho\leqq 2, \tag{97}\] see [45]. This shows that the parameter space can be effectively reduced to a cylinder \(\mathcal{C}\) with axis along \(K\). The choice of freezing a maximum of \(\Phi_{\rm o}\) along the \(x_{3}\)-axis preempts the action of rotations other than those preserving that axis as possible symmetries of the octupolar potential. However, a number of discrete symmetries survive; they are illustrated in detail in section 5.5 of [45], where it is shown in particular that changing \(\chi\) into \(\chi+2\pi/3\) simply induces a rotation by \(2\pi/3\) about the \(x_{3}\)-axis in the graph of \(\Phi_{\rm o}\) over \(\mathbb{S}^{2}\), a symmetry that establishes a rotation covariance between parameter and physical spaces. Combining all discrete symmetries, one finally learns that the study of \(\Phi_{\rm o}\) can be confined to a half-cylinder with \(K\geqq 0\) and any sector delimited by the inequalities \(\chi_{0}\leqq\chi\leqq\chi_{0}+\pi/3\), with any \(\chi_{0}\in[-\pi,\pi]\). Extending the parameter space outside one such sector would add nothing to the octupolar potential landscape: the graph of \(\Phi_{\rm o}\) over \(\mathbb{S}^{2}\) would only be affected by rotations about the \(x_{3}\)-axis and mirror symmetries across planes through that axis, which leave all critical points unchanged [45]. For definiteness, we choose \(\chi_{0}=-\pi/2\) and proceed to identify the mirror symmetry that involves both physical and parameter spaces. Subjecting \(\Phi_{\rm o}\) in (96) to the change of variables \[x_{1}=\cos\vartheta x_{1}^{\prime}-\sin\vartheta x_{2}^{\prime},\quad x_{2}=-\sin\vartheta x_{1}^{\prime}-\cos\vartheta x_{2}^{\prime},\quad x_{3}=x_{3}^{\prime}, \tag{98}\] which represents a mirror reflection with fixed point \(x_{1}=x_{1}^{\prime}\), \(x_{2}=x_{2}^{\prime}\) along the plane \[x_{2}=-x_{1}\tan\frac{\vartheta}{2}, \tag{99}\] one easily sees that \(\Phi_{\rm o}\) remains formally unchanged in the variables \((x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})\), thus making (98) a mirror symmetry for \(\Phi_{\rm o}\), if \(\vartheta=\pi/3\) and \(\chi\) is changed into \(\chi^{\prime}=-\chi-\pi/3\), which has a fixed point for \(\chi=-\pi/6\). Thus, by (99), a reflection of the sector \(-\pi/2\leqq\chi\leqq-\pi/6\) across the plane \(\chi=-\pi/6\) in parameter space \((\rho,\chi,K)\) induces a reflection of \(\Phi_{\rm o}\) across the plane through the \(x_{3}\)-axis that makes the angle \(-\pi/6\) with the \(x_{1}\)-axis in the physical space \((x_{1},x_{2},x_{3})\). Thus, we shall hereafter confine attention to the sector of \({\cal C}\) that is represented in cylindrical coordinates \((\rho,\chi,K)\) as \[0\leqq\rho\leqq 2,\quad-\frac{\pi}{2}\leqq\chi\leqq-\frac{\pi}{6},\quad K\geqq 0. \tag{100}\]
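The bound (97) can be traced back to the Hessian of \(\Phi_{\rm o}\) at the North pole: a short expansion in the chart \(x_{3}=\sqrt{1-x_{1}^{2}-x_{2}^{2}}\) gives trace \(-12\) and determinant \(9(4-\rho^{2})\), independently of \(\chi\) and \(K\), so that the pole is a non-degenerate maximum precisely for \(\rho<2\). A numerical confirmation (Python with numpy; the chart and step size are ours):

```python
import numpy as np

def Phi_o(x1, x2, rho, chi, K):
    """Equation (96), on the chart x3 = sqrt(1 - x1^2 - x2^2) of S^2."""
    x3 = np.sqrt(1.0 - x1**2 - x2**2)
    return (3*rho*np.cos(chi)*x1*x2*x3 + K*(x2**2 - 3*x1**2)*x2
            + (x3**2 - 3*x2**2)*x3 + 1.5*(rho*np.sin(chi) - 1)*(x1**2 - x2**2)*x3)

def hessian_at_pole(rho, chi, K, h=1e-4):
    f = lambda u, v: Phi_o(u, v, rho, chi, K)
    H = np.empty((2, 2))
    H[0, 0] = (f(h, 0) - 2*f(0, 0) + f(-h, 0)) / h**2
    H[1, 1] = (f(0, h) - 2*f(0, 0) + f(0, -h)) / h**2
    H[0, 1] = H[1, 0] = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4*h**2)
    return H

rng = np.random.default_rng(4)
for _ in range(100):
    rho, chi, K = 3*rng.random(), -np.pi*rng.random(), 2*rng.random() - 1
    H = hessian_at_pole(rho, chi, K)
    assert np.isclose(np.trace(H), -12.0, atol=1e-3)
    assert np.isclose(np.linalg.det(H), 9*(4 - rho**2), atol=1e-2)
    # negative trace and positive determinant, hence a maximum, iff rho < 2
```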
Special subsets in \({\cal C}\) (and their intersection with the relevant sector in (100)) make \(\Phi_{\rm o}\) enjoy special symmetries in physical space. The corresponding symmetry groups (in the Schoenflies notation) are summarised in table 1; the special subsets are the centre \({\mathscr{C}}\) (\(\rho=K=0\)), the disk \({\mathscr{D}}\) (\(K=0\)), the axis \({\mathscr{A}}\) (\(\rho=0\)), and the tetrahedral pair \({\mathscr{T}}\in{\mathscr{A}}\) (\(\rho=0\), \(K=\pm 1/\sqrt{2}\)). To illustrate these special cases, we shall draw the polar plot of \(\Phi_{\rm o}\) and its contour plot in the plane \((x_{1},x_{3})\). The former is the surface in space spanned by the tip of the vector \(\Phi_{\rm o}\mathbf{e}_{r}\), where \(\mathbf{e}_{r}\) is the radial unit vector, \[\mathbf{e}_{r}:=\frac{1}{r}(x_{1}\mathbf{e}_{1}+x_{2}\mathbf{e}_{2}+x_{3}\mathbf{e}_{3}),\quad r:=\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}. \tag{101}\] Since \(\Phi_{\rm o}\) is odd under reversal of the coordinates \((x_{1},x_{2},x_{3})\), antipodal points on \(\mathbb{S}^{2}\) are mapped into the same point on the polar plot of \(\Phi_{\rm o}\), so that minima of \(\Phi_{\rm o}\) are invaginated under its maxima, and the latter are the only ones to be shown by the polar plot of \(\Phi_{\rm o}\). To resolve this ambiguity, we shall often supplement the polar plot of \(\Phi_{\rm o}\) with the contour plot of the function in \((x_{1},x_{3})\) obtained by setting \(x_{2}=\sqrt{1-x_{1}^{2}-x_{3}^{2}}\) in (96). This gives a view of the octupolar potential on a hemisphere based on a great circle passing through both North and South poles and culminating at the point \((0,1,0)\). If polar plots give a quite vivid representation of the maxima (and minima) of \(\Phi_{\rm o}\), the contour plots in the \((x_{1},x_{3})\) plane give a side view of half its critical points. Before showing the illustrations for the symmetric cases in table 1, we must warn the reader that whereas maxima, minima, and genuine saddles (either degenerate or not, but with index \(\iota\neq 0\)) are easily discerned from a contour plot, degenerate saddles with index \(\iota=0\) may easily go unnoticed. We illustrate in the following subsections the special symmetries listed in table 1.

\begin{table} \begin{tabular}{l c l} \hline Group & Parameters & Subset \\ \hline \(D_{\infty h}\) & \(\rho=K=0\) & centre \({\mathscr{C}}\) \\ \hline \(D_{2h}\) & \(K=0\) & disk \({\mathscr{D}}\) \\ \hline \(D_{3h}\) & \(\rho=0\) & axis \({\mathscr{A}}\) \\ \hline \(T_{d}\) & \(\rho=0\), \(K=\pm 1/\sqrt{2}\) & \({\mathscr{T}}\in{\mathscr{A}}\) \\ \hline \end{tabular} \end{table} Table 1: Symmetry groups for the oriented octupolar potential (in the Schoenflies notation) corresponding to special sets in the reduced parameter space.

### \(D_{\infty h}\)

Figure 2 shows the polar plot and the contour plot for \(\Phi_{\rm o}\) in the centre \(\mathscr{C}\) in parameter space.
The former (see figure 2(a)) is symmetric about the \(x_{3}\)-axis, while the latter (see figure 2(b)) exhibits the same \(D_{\infty h}\) symmetry, but seen from a different perspective: the level sets of \(\Phi_{\rm o}\) are parallels and their colour, ranging from green to red, spans the range of values taken by \(\Phi_{\rm o}\), from its minimum (green) to its (opposite) maximum (red). In this specific instance, \(\Phi_{\rm o}\) vanishes on the equator. Alongside the maximum at the North pole (accompanied by its minimum twin at the South pole), a full orbit of maxima (with their twin minima) exist on symmetric parallels. By construction, in our representation the North pole must be red, whereas the South pole must be green, even if our pictures do not always show this very clearly.

Figure 2: The octupolar potential \(\Phi_{\rm o}\) for \(\rho=K=0\).

### \(D_{2h}\)

Figure 3 shows a case exhibiting the \(D_{2h}\) symmetry characteristic for the whole disk \(\mathscr{D}\) in parameter space (see table 1). The octupolar potential \(\Phi_{\rm o}\) has generically three maxima, three minima, and four saddles, with indices \(\iota=+1\), \(\iota=+1\), and \(\iota=-1\), respectively, so that the global constraint (87) is satisfied. Two more maxima accompany in the Southern hemisphere the maximum at the North pole (and so do the conjugated minima in the Northern hemisphere). The four (non-degenerate) saddles are two on each hemisphere, for a total of 10 critical points (see also section 4.2.2).

Figure 3: The octupolar potential \(\Phi_{\mathrm{o}}\) for \(\rho=1/2\), \(\chi=-\pi/3\), \(K=0\).

### \(D_{3h}\)

Figure 4 shows the appearance of the octupolar potential \(\Phi_{\mathrm{o}}\) on the axis \(\mathscr{A}\) in parameter space. It enjoys the \(D_{3h}\) symmetry and possesses four maxima, four minima, and six saddles, for a total of 14 isolated critical points.

Figure 4: The octupolar potential \(\Phi_{\mathrm{o}}\) for \(\rho=0\) and \(K=1/2\), representing the behaviour on the whole axis \(\mathscr{A}\) in parameter space.
Figure 5: The octupolar potential \(\Phi_{\rm o}\) for \(\rho=0\) and \(K=1/\sqrt{2}\), representing one of the two (symmetric) special points \(\mathscr{T}\in\mathscr{A}\) in parameter space. The number of real eigenvalues of \({\bf A}\) depends on the parameters \((\rho,\chi,K)\) in a rather complicated and intriguing way, which is explored and fully documented below. In this pursuit, we found especially expedient a method applied by Walcher [120]; this reduces the critical points of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\) to the roots of an appropriate polynomial in _one_ real variable. Here we adapt Walcher's idea to our formalism and draw all our conclusions from the polynomial he introduced. The fundamental algebraic tool at the basis of Walcher's method is Bezout's theorem in projective spaces, for a full account on which we defer the reader to Chapt. IV of Shafarevich's book [102] (precursors of this method can also be retraced in the works [94, 95]). The outcomes of our previous analysis [45] for the critical points of \(\Phi_{\rm o}\) are confirmed, but an important detail is added. Our first move is writing the equilibrium equations for \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\), whose solutions are the critical points we want to classify. We incorporate in \(\Phi_{\rm o}\) the constraint \(\mathbf{x}\cdot\mathbf{x}=1\) by defining the _extended_ potential \(\Phi_{\lambda}\) as \[\Phi_{\lambda}:=\Phi_{\rm o}+\Phi_{\rm c}, \tag{102}\] where the constraint term \(\Phi_{\rm c}\) is defined by \[\Phi_{\rm c}:=-\frac{3}{2}\lambda(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}), \tag{103}\] and \(\lambda\) is a Lagrange multiplier to be determined by requiring that \(\mathbf{x}\in\mathbb{S}^{2}\). As shown in [45], the scaling of \(\Phi_{\rm c}\) has been chosen so as to ensure that on a critical point \(\lambda\) would equal the corresponding critical value of \(\Phi_{\rm o}\), and hence be a real eigenvalue of \({\bf A}\). In light of this, it should also be recalled that whereas \(\Phi_{\rm o}\) changes sign upon central inversion, \(\Phi_{\rm c}\) (and so \(\Phi_{\lambda}\)) does so only under the simultaneous changes \(\mathbf{x}\mapsto-\mathbf{x}\) and \(\lambda\mapsto-\lambda\). With the aid of (96), the equilibrium equations for \(\Phi_{\lambda}\) are easily obtained, \[\left\{\begin{aligned} &\rho\cos\chi x_{2}x_{3}-2Kx_{1}x_{2}+( \rho\sin\chi-1)x_{1}x_{3}=\lambda x_{1},\\ &\rho\cos\chi x_{1}x_{3}-K(x_{1}^{2}-x_{2}^{2})-(\rho\sin\chi+1)x _{2}x_{3}=\lambda x_{2},\\ &\rho\cos\chi x_{1}x_{2}-(x_{2}^{2}-x_{3}^{2})+\frac{1}{2}(\rho \sin\chi-1)(x_{1}^{2}-x_{2}^{2})=\lambda x_{3},\end{aligned}\right. \tag{104}\] subject to \[x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1, \tag{105}\] where the parameters \((\rho,\chi,K)\) are chosen as specified in (100). Here we split the quest for solutions of (104) and (105) in two steps. First, we seek solutions with \(x_{2}=0\), and then all others. For the role they will play, the former are called the _background_ solutions, for lack of a better name. Clearly, both poles \((0,0,\pm 1)\) are solutions of (104) and (105) by the way the potential has been _oriented_. To avoid double counting, these solutions will be excluded from the background; they should always be added to the ones we are seeking here. 
### Background Solutions By setting \(x_{2}=0\) in (104) and (105) and assuming that \(x_{1}\neq 0\), so as to exclude both poles, we see that these equations reduce to \[(\rho\sin\chi-1)x_{3}=\lambda, \tag{106}\] \[\rho\cos\chi x_{3}=Kx_{2},\] (107) \[x_{3}^{2}+\frac{1}{2}(\rho\sin\chi-1)x_{1}^{2}=\lambda x_{3},\] (108) \[x_{1}^{2}+x_{3}^{2}=1. \tag{109}\] A number of simple cases arise, which are conveniently described separately, for clarity. #### 4.1.1 Case \(\rho=K=0\). In this case, the background solutions are \(x_{1}=\pm 2\sqrt{5}\) and \(x_{3}=-\lambda=\pm 1/\sqrt{5}\), where all choices of sign are possible, so that these roots amount to 4 critical points of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\). #### 4.1.2 Case \(\rho>0\), \(\chi\neq-\pi/2\), \(K=0\). This case is the easiest, as (107) requires \(x_{1}=0\), which is incompatible with (108) in the sector (100) we selected in parameter space. Thus, no background solution exists. As we shall see below, they do exist for \(\chi=-\pi/2\). #### 4.1.3 Case \(\rho=0\), \(K\neq 0\). This is another trivial case, as (107) again implies \(x_{1}=0\), which is disallowed. Thus, once again no background solution exists for this choice of parameters. #### 4.1.4 Case \(\rho>0\), \(\chi=-\pi/2\), \(K>0\). This is another case of non-existence, as (107) implies once more that \(x_{1}=0\). Finally, we see now two cases where background solutions do actually exist. #### 4.1.5 Case \(\rho>0\), \(\chi=-\pi/2\), \(K=0\). For this choice of parameters, equation (107) is identically satisfied, while the remaining equations possess the solutions \[x_{1}=\pm\sqrt{\frac{2(\rho+2)}{5+3\rho}},\quad x_{2}=0,\quad x_{3}=\pm\sqrt{ \frac{\rho+1}{5+3\rho}},\quad\lambda=\mp\sqrt{\frac{(\rho+1)^{3}}{5+3\rho}}, \tag{110}\] where signs can be chosen independently, provided that \(\lambda x_{2}<0\). Thus, these roots correspond to 4 critical points of \(\Phi_{\rm o}\), all lying on a great circle of \(\mathbb{S}^{2}\). #### 4.1.6 Case \(\rho>0\), \(\chi\neq-\pi/2\), \(K>0\). This is the generic case for the existence of background solutions. It easily follows from (106)-(109) that in the selected sector of parameter space (100) the background solutions must satisfy the inequalities \(x_{1}x_{3}>0\) and \(\lambda x_{3}<0\). Elementary calculations deliver \[x_{1}=\mp\sqrt{\frac{2(2-\rho\sin\chi)}{5-3\rho\sin\chi}},\quad x_{2}=0,\quad x _{3}=\mp\sqrt{\frac{1-\rho\sin\chi}{5-3\rho\sin\chi}}, \tag{111}\] \[\lambda=\pm\sqrt{\frac{(1-\rho\sin\chi)^{3}}{5-3\rho\sin\chi}}, \tag{112}\] where signs must be chosen so as to satisfy the above inequalities. However, these solutions do not exist for all values of \(K>0\), but only for \[K=\kappa(\rho,\chi):=\sqrt{\frac{1-\rho\sin\chi}{2(2-\rho\sin\chi)}}\rho\cos\chi. \tag{113}\] Whenever the latter is satisfied, the background solutions correspond to 2 critical points of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\). ### All other solutions We now assume that \(x_{2}\neq 0\) and set \[s:=\frac{x_{1}}{x_{2}},\quad t:=\frac{x_{3}}{x_{2}},\quad\mu:=\frac{\lambda}{x _{2}}. \tag{114}\] With the aid of these definitions, equations (104) become \[\rho\cos\chi t-2Ks+(\rho\sin\chi-1)st=\mu s, \tag{115}\] \[\rho\cos\chi st-K(s^{2}-1)-(\rho\sin\chi+1)t=\mu,\] (116) \[\rho\cos\chi s-(1-t^{2})+\frac{1}{2}(\rho\sin\chi-1)(s^{2}-1)=\mu t. \tag{117}\] While (116) is by itself an explicit expression for \(\mu\) in the new variables \((s,t)\), both (115) and (117) can be made into polynomials in these latter upon insertion of (116). 
These are \[\begin{array}{l}Ks^{2}t-\rho\cos\chi st^{2}+\frac{1}{2}(\rho\sin\chi-1)s^{2}+(\rho\sin\chi+2)t^{2}\\ \qquad\qquad\qquad+\rho\cos\chi s-Kt-\frac{1}{2}(\rho\sin\chi+1)=0,\end{array} \tag{118}\] and \[\rho(\cos\chi s^{2}-2\sin\chi s-\cos\chi)t=Ks(s^{2}-3), \tag{119}\] the latter of which has the remarkable feature of being _linear_ in \(t\). The general strategy here will be to extract \(t\) from (119) and transform (118) into a polynomial of degree 6 in the single variable \(s\). However, in a number of selected cases this strategy is not viable and the solutions to the system (118) and (119) can be obtained from the roots of polynomials of lower degree. These special cases will be treated first, as they are somehow related to the symmetries studied above. Progressing further, we note that once a solution \((s,t)\) of (118) and (119) is known, by (114) we obtain the solutions \((x_{1},x_{2},x_{3})\) of (104) and (105) through the equations \[x_{1}=\pm\frac{s}{\sqrt{1+s^{2}+t^{2}}},\quad x_{2}=\pm\frac{1}{\sqrt{1+s^{2}+t^{2}}},\quad x_{3}=\pm\frac{t}{\sqrt{1+s^{2}+t^{2}}}, \tag{120}\] \[\lambda=\pm\frac{\mu}{\sqrt{1+s^{2}+t^{2}}}, \tag{121}\] where \(\mu\) is given by (116). Thus, as expected, each solution \((s,t)\) of (118) and (119) corresponds to a conjugated pair of critical points of \(\Phi_{\rm o}\).

#### 4.2.1 Case \(\rho=K=0\). This point corresponds to the centre \(\mathscr{C}\) in parameter space. For this choice of parameters, equation (119) is identically satisfied and (118) delivers \(t^{2}=\frac{1}{4}(s^{2}+1)\), which with the aid of (120) readily implies that \[x_{1}=\pm\frac{2s}{\sqrt{5(1+s^{2})}},\quad x_{2}=\pm\frac{2}{\sqrt{5(1+s^{2})}},\quad x_{3}=\pm\frac{1}{\sqrt{5}},\quad\lambda=\mp\frac{1}{\sqrt{5}}, \tag{122}\] which for varying \(s\) represent the two orbits of critical points shown in figure 2. It is perhaps worth noting that for \(\rho=K=0\) solution (122) reproduces the background solution of section 4.1.1 in the limits as \(s\to\pm\infty\), and so no other critical point of \(\Phi_{\rm o}\) is present in this case, besides the poles and the orbits (122).

#### 4.2.2 Case \(\rho>0\), \(\chi\neq-\pi/2\), \(K=0\). This is the plane where the disk \(\mathscr{D}\) lies. The case \(\chi=-\pi/2\) is again somewhat special and will be treated separately below. For this choice of parameters, equation (119) requires that either \(t=0\) or \(s=s_{1,2}=\tan\chi\pm\sqrt{1+\tan^{2}\chi}\). Inserting the former into (118), we readily arrive at \[s=s_{3}=\frac{-\rho\cos\chi\pm\sqrt{\rho^{2}-1}}{\rho\sin\chi-1}, \tag{123}\] which in our admissible sector in parameter space (100) is valid only for \(1\leqq\rho\leqq 2\). Upon insertion of \(s_{1}\) and \(s_{2}\), the roots of the equation (118) transform into \[t_{1}=\pm\sqrt{\frac{1-\rho}{(2-\rho)(1-\sin\chi)}},\quad t_{2}=\pm\sqrt{\frac{\rho+1}{(2+\rho)(1+\sin\chi)}}, \tag{124}\] respectively, the former of which is valid only for \(0<\rho\leqq 1\) while the latter is valid for all admissible values of \(\rho\). It should be noted that \(t_{1}\) vanishes for \(\rho=1\), while \(s_{3}=s_{1}\), so that these two families of solutions have indeed a member in common. Thus, the total number of critical points \(\widehat{\mathbf{x}}\) of \(\Phi_{\rm o}\) (including the poles) reduces to 8 from the 10 shown in figure 3. This latter, special instance is now illustrated in figure 6, which shows that for \(\rho=1\) a saddle with index \(\iota=-2\) lies on the equator of \(\mathbb{S}^{2}\) and it splits into two saddles with \(\iota=-1\) as \(\rho\) is either increased or decreased.

Figure 6: Contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) for \(\chi=-\pi/3\) and \(K=0\). For \(\rho=1\), \(\Phi_{\rm o}\) has a saddle with index \(\iota=-2\) on the equator of \(\mathbb{S}^{2}\). As \(\rho\) is either increased or decreased, this saddle splits into two saddles with \(\iota=-1\) moving along a meridian or sliding on the equator, respectively.

We close this case by recalling that no extra background solution exists for the present choice of parameters, as shown in section 4.1.2.
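The reduction just performed is easily mechanized. The sketch below (Python with numpy and sympy; the organization, tolerances, and test values are ours) eliminates \(t\) between (118) and (119) through a resultant, collects the real roots in \(s\), and counts critical points via (120); on the \(D_{2h}\) example of figure 3 it returns the 10 critical points found above.

```python
import numpy as np
import sympy as sp

s, t = sp.symbols('s t')

def critical_points(rho, chi, K):
    """Count the critical points of Phi_o through the reduction of this
    section: eliminate t between (118) and (119) with a resultant, keep the
    real roots in s, back-substitute, and let (120) turn each pair (s, t)
    into two antipodal critical points; the two poles are added by hand.
    Background solutions (section 4.1) are assumed absent, as below."""
    c, si = sp.cos(chi), sp.sin(chi)
    P118 = (K*s**2*t - rho*c*s*t**2 + (rho*si - 1)*s**2/2
            + (rho*si + 2)*t**2 + rho*c*s - K*t - (rho*si + 1)/2)
    P119 = rho*(c*s**2 - 2*si*s - c)*t - K*s*(s**2 - 3)

    res = sp.Poly(sp.expand(sp.resultant(P118, P119, t)), s)
    roots = np.roots([float(x) for x in res.all_coeffs()])
    s_vals = []
    for r in roots[np.abs(roots.imag) < 1e-6].real:
        if all(abs(r - q) > 1e-4 for q in s_vals):      # drop multiplicities
            s_vals.append(r)

    n_pairs = 0
    for s0 in s_vals:
        a1 = float(rho*(c*s0**2 - 2*si*s0 - c))         # t-coefficient of (119)
        if abs(a1) > 1e-6:
            n_pairs += 1                                # t fixed linearly by (119)
        else:                                           # degenerate branch: use (118)
            q = [float(x) for x in sp.Poly(P118.subs(s, s0), t).all_coeffs()]
            n_pairs += sum(1 for z in np.roots(q) if abs(z.imag) < 1e-6)
    return 2*n_pairs + 2                                # conjugate pairs + poles

# the D_2h example of figure 3 (rho = 1/2, chi = -pi/3, K = 0) has no
# background solutions (section 4.1.2): 10 critical points, as found above
assert critical_points(sp.Rational(1, 2), -sp.pi/3, sp.Integer(0)) == 10
```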
This latter, special instance is now illustrated in figure 6, which shows that for \(\rho=1\) a saddle with index \(\iota=-2\) lies on the equator of \(\mathbb{S}^{2}\) and it splits into two saddles with \(\iota=-1\) as \(\rho\) is either increased or decreased. We close this case by recalling that no extra background solution exists for the present choice of parameters, as shown in section 4.1.2. #### 4.2.3 Case \(\rho=0\), \(K\neq 0\). This is the axis \(\mathscr{A}\) in parameter space (deprived of the centre \(\mathscr{C}\)). For this choice of parameters, (119) requires that either \(s=s_{1}=0\) or \(s=s_{2,3}=\pm\sqrt{3}\). Inserting the former into (118), we obtain the roots \[t_{1,2}=\frac{K\pm\sqrt{K^{2}+4}}{4}, \tag{125}\] resulting in 4 critical points \(\mathbf{x}\). Similarly, the roots of (118) corresponding to \(s_{2,3}\) are \[t_{3,4}=-K\pm\sqrt{K^{2}+1}, \tag{126}\] which together amount to 8 critical points \(\widehat{\mathbf{x}}\). Adding the poles, also in view of section 4.1.3, we get the expected total of 14 critical points for \(\Phi_{\rm o}\) shown in figure 4. To single out the special case of tetrahedral symmetry depicted in figure 5, we require that \(\Phi_{\rm o}=1\) at the critical point associated with the roots \(s=0\) and negative \(t\) in (125) (as the maxima other than the North pole live in the Southern hemisphere of \(\mathbb{S}^{2}\)). Thus, by (121), (116) and (114), we must have \[\lambda=\frac{K-t_{2}}{\sqrt{1+t_{2}^{2}}}=1, \tag{127}\] whose unique root is \(K=1/\sqrt{2}\), as expected. Two more cases deserve a special treatment, as they can be resolved explicitly by finding the roots of lower-degree polynomials. Geometrically, they are related to the special planes delimiting the sector of interest in parameter space (100). We treat these cases below, before addressing the generic, more complicated case. #### 4.2.4 Case \(\rho>0\), \(\chi=-\pi/2\), \(K>0\). For this choice of parameters, (119) has the trivial solution \(s=0\), which inserted in (118) delivers \[t=t_{1,2}=\frac{K\pm\sqrt{K^{2}-2(2-\rho)(\rho-1))}}{2(2-\rho)}. \tag{128}\] These are real for all \(K>0\), if \(0<\rho\leqq 1\), but require \(K\geqq K_{2}:=\sqrt{2(2-\rho)(\rho-1))}\) if \(1\leqq\rho<2\). The corresponding critical points of \(\Phi_{\rm o}\) are in general 4, but for \(K=K_{2}\) and \(1\leqq\rho<2\), where they reduce to 2. The case \(\rho=2\) deserves a special notice, as for \(s=0\) there is a single root \(t=1/2K\) and this branch of solutions only brings in 2 critical points of \(\Phi_{\rm o}\) (instead of 4). For \(s\neq 0\), (119) is also solved by \[t=\frac{K}{2\rho}(s^{2}-3), \tag{129}\] Figure 6: Contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) for \(\chi-\pi/3\) and \(K=0\). For \(\rho=1\), \(\Phi_{\rm o}\) has a saddle with index \(\iota=-2\) on the equator of \(\mathbb{S}^{2}\). As \(\rho\) is either increased or decreased, this saddle splits into two saddles with \(\iota=-1\) moving along a meridian or sliding on the equator, respectively. which transforms (118) into a quadratic equation in \(\sigma:=s^{2}\), \[K^{2}(\rho+2)\sigma^{2}-[K^{2}(6+\rho)+2\rho^{2}(1+\rho)]\sigma+3K^{2}(6-\rho)+2 \rho^{2}(\rho-1)=0, \tag{130}\] whose roots we shall denote \(\sigma_{1}\) and \(\sigma_{2}\). Elementary analysis shows that for \(1\leqq\rho\leqq 2\) both \(\sigma_{1}\) and \(\sigma_{2}\) are positive, and so they correspond to 8 critical points of \(\Phi_{\rm o}\). 
For \(0<\rho\leqq 1\), the picture is more articulated and changes as \(K\) crosses the value \[K_{1}:=\sqrt{\frac{2\rho^{2}(1-\rho)}{3(6-\rho)}}. \tag{131}\] For \(0<K<K_{1}\), \(\sigma_{1}\) is negative whereas \(\sigma_{2}\) is positive; the number of corresponding critical points is 4. For \(K=K_{1}\), \(\sigma_{1}\) vanishes, and so it is not an acceptable root, for \(s\neq 0\) on this branch; it does not bring any extra critical point, whereas the root \(\sigma_{2}>0\) does brings in 4, for a total of 10 (including the poles). Finally, for \(K>K_{1}\), both \(\sigma_{1}\) and \(\sigma_{2}\) are positive and the scene we see is the same as for \(1<\rho<2\), with 14 critical points in total. For \(\rho=2\), putting together all roots, we obtain instead 12 critical points for \(\Phi_{\rm o}\) (see figure 7). The situation is more effectively summarized with the aid of the continuous function \[g(\rho):=\left\{\begin{array}{ll}\sqrt{\frac{2\rho^{2}(1-\rho )}{3(6-\rho)}}&\qquad\mbox{for $0\leq\rho\leq 1$},\\ \sqrt{2(2-\rho)(\rho-1)}&\qquad\mbox{for $1\leq\rho\leq 2$},\end{array}\right. \tag{132}\] whose graph is depicted in figure 7, which also shows the total number of critical points of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\) associated with different regions in the \(\chi=-\pi/2\) plane in parameter space. For its role in separating regions with different numbers of critical points, the curve that Figure 7: The graph of the function \(g\) in (132) against \(\rho\). The numbers in different regions of the plane \(\chi=-\pi/2\) indicate how many critical points \(\Phi_{\rm o}\) possesses there. For this role, the graph of \(g\) is a separating curve, or more shortly, a _separatrix_. represents the graph of \(g\) is called a _separatrix_. We shall see below how it extends to a _surface_ in parameter space. Here we are mainly interested in the algebraic avenue opened by Walcher [120], which may readily deliver the total number of critical points of \(\Phi_{\rm o}\), and hence the real eigenvalues and eigenvectors of \({\bf A}\). The following summary supplements the algebraic approach; it relies on stability and bifurcation analyses expounded in [45], to which the reader is referred for further details. 1. For \(K>g(\rho)\) there are eight generic critical points beside the two at the poles and four on the special great circle \(x_{1}=0\), for a total of _fourteen_ critical points. Four are maxima, four minima, and the remaining six are saddles. 2. For \(K<g(\rho)\), there are a total of _ten_ critical points, of which three are maxima, three minima, and the remaining four are saddles. 3. For \(K=g(\rho)\), two different scenarios present themselves, according to whether \(0<\rho<1\) or \(1<\rho<2\). In the former case, the critical points are ten, whereas in the latter case they are _twelve_. In both cases, the total number of maxima is three, as many as the minima; only the number of saddles differs: there are four for \(0<\rho<1\) and six for \(1<\rho<2\). In the former case, two saddles are degenerate, but all four have index \(\iota=-1\). In the latter case, two out of the six saddles are degenerate and have index \(\iota=0\) (marked by a yellow circle in figure 8), while the remaining four are not degenerate and have the usual index \(\iota=-1\). 4. The degenerate saddles with \(\iota=0\) for \(1<\rho<2\) migrate towards the poles as \(\rho\) approaches 2 along the line \(K=g(\rho)\) and towards the equator as \(\rho\) approaches 1. 
Correspondingly, the North pole becomes a degenerate maximum (while the South pole becomes a degenerate minimum) and the equator hosts two symmetric "monkey saddles" [44]. Figure 8: Contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) illustrating the critical points of \(\Phi_{\rm o}\) along the line \(K=g(\rho)\) on the plane \(\chi=-\pi/2\) in parameter space. The centre of the yellow circle in the last three panels designates the position of the degenerate saddle with index \(\iota=0\). Figure 9: Sections of the graph of \(\Phi_{\rm o}\) for the choice of parameters in figure 8d on two orthogonal planes through the point depicted as a yellow circle in figure 8d. The two planes of section have equations \(x_{1}=0\) and \(x_{3}=1/\sqrt{7}\), respectively. valid for \(0<\rho\leqq 1\), and \[s=\pm\sqrt{\frac{\rho-1}{\rho+1}},\qquad t=0, \tag{134}\] valid for \(1\leqq\rho\leqq 2\). They provide 4 critical points \(\widehat{\mathbf{x}}\) of \(\Phi_{\rm o}\) for \(0<\rho<1\) and \(1<\rho\leqq 2\), to which we must now add the 4 corresponding to the background solutions originated from the case studied in section 4.1.5. Thus, the total number of critical points is generically 10 (once both poles are added). As made clear by comparing (133) and (134), the case \(\rho=1\) is singular, as there the four critical points identified by (133) and (134) collapse to \((0,\pm 1,0)\), and so the total number of critical points reduces to 8. For \(\rho=1\), three maxima and three minima are accompanied by two degenerate saddles, each with index \(\iota=-2\). By contrast, For \(\rho=2\), the same number of maxima and minima are accompanied by four degenerate saddles, each with index \(\iota=-1\), for a total of ten critical points (see figure 11). Figure 11: Contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) in two special cases for \(\chi=-\pi/2\) and \(K=0\), with 8 and 10 critical points, respectively. Figure 10: Contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) for \(\rho=2\), \(\chi=-\pi/2\), and increasing values of \(K\). The yellow circles designate the poles as degenerate saddles with \(\iota=0\). #### 4.2.6 Case \(\rho>0\), \(\chi=-\pi/6\), \(K>0\). As remarked above, by the \(2\pi/3\) covariance enjoyed by \(\Phi_{\rm o}\), the plane in parameter space where \(\chi=-\pi/6\) can be identified with the the plane where \(\chi=\pi/2\). Clearly, the graph of \(\Phi_{\rm o}\) would rotate around the \(x_{3}\)- axis as a consequence of the change in \(\chi\), but neither the number nor the nature of its critical points would change. A glance at equation (107) for \(\chi=\pi/2\) and \(K>0\) suffices to show that there is no background solution in this case. As for the critical points of \(\Phi_{\rm o}\) with \(x_{2}\neq 0\), they are determined by the roots \((s,t)\) of (118) and (119), which now read as \[(2+\rho)t^{2}+K(s^{2}-1)t+\frac{1}{2}(\rho-1)s^{2}-\frac{1}{2}( \rho+1)=0, \tag{135}\] \[2\rho st=Ks(3-s^{2}), \tag{136}\] respectively. The latter is solved for either \(s=0\) or \[t=t_{1}=K\frac{3-s^{2}}{2\rho}\quad\mbox{if}\quad s\neq 0. \tag{137}\] Letting \(s=0\) in (135), we obtain a quadratic equation for \(t\) with roots \[t_{2,3}=\frac{K\pm\sqrt{K^{2}+2(\rho+1)(\rho+2)}}{2(\rho+2)}, \tag{138}\] which amount to 4 critical points \(\widehat{\mathbf{x}}\) of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\). 
Setting \(t=t_{1}\) in (135) reduces the latter to a quadratic equation in \(\sigma:=s^{2}\), \[K^{2}(\rho-2)\sigma^{2}+2[K^{2}(6-\rho)+\rho^{2}(1-\rho)]\sigma-3K^{2}(6+\rho )+2\rho^{2}(1+\rho)=0. \tag{139}\] An elementary analysis shows that for \(0<\rho<2\) the roots \(\sigma_{1,2}\) of this equation are both positives if \(K>f(\rho)\), where \[f:=\sqrt{\frac{2\rho^{2}(1+\rho)}{3(6+\rho)}}, \tag{140}\] whereas \(\sigma_{1}=0\) and \(\sigma_{2}>0\) if \(K=f(\rho)\), and \(\sigma_{1}<0\) and \(\sigma_{2}>0\) if \(K<f(\rho)\). Correspondingly, in complete analogy to our discussion in section 4.2.6, in the interval \(0<\rho<2\), the critical points \(\widehat{\mathbf{x}}\) of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\) are 10 for \(K\leqq f(\rho)\) and 14 for \(K>f(\rho)\) (see figure 12). The case \(\rho=2\) is once again exceptional, as equation (139) reduces to \[(K^{2}-1)(\sigma-3)=0. \tag{141}\] This shows that, for \(K\neq 1\), \(s=\pm 3\) and \(t=0\) are the only solutions in one branch (to be accompanied by \(s=0\) and \(t=t_{2,3}\) in the other branch), which amounts to a total of 10 critical points for \(\Phi_{\rm o}\). Furthermore, if \(K=1\), (141) is identically satisfied, and so (137) delivers a whole orbit of solutions in this branch, to be again supplemented by \(s=0\) and \(t=t_{2,3}\) in the accompanying branch. It is easily seen that here \(t_{2}=-1/2\) and \(t_{3}=3/4\); the latter is subsumed in the orbit of the first branch (for \(s=0\), of course), whereas the former is not. This special case, where \(\Phi_{\rm o}\) has infinitely many critical points, is nothing but the one considered in section 4.2.1 above, corresponding to the centre \(\mathscr{C}\) in parameter space; only, the graph of \(\Phi_{\rm o}\) is rotated in space. Figure 12 shows the graph of \(f\) in (140) marked with the total number of critical points of \(\Phi_{\rm o}\) in different regions of the plane \(\chi=-\pi/6\). The special case \(\rho=2\) is further illuminated in figure 13, which suggests a radical change of scenery in the arrangement of the critical points of \(\Phi_{\rm o}\) as \(K\) crosses the singular value \(K=1\). Having completed the survey of all special cases where the critical points of \(\Phi_{\rm o}\) are Figure 12: The graph of the function \(f\) defined in (140) is plotted against \(\rho\) in the interval \(0\leqq\rho\leqq 2\). It divides the plane \(\chi=-\pi/6\) in a number of regions with a different total number of critical points of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\) (including the poles). Figure 13: Contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) for \(\rho=2\), \(\chi=-\pi/6\), and values of \(K\) on both sides of the singular value \(K=1\). The scenery is quite different in the two adjoining cases, but the total number of critical points is still 10 for both. decided by the roots of a low-degree polynomial, we are in a position to address the generic case, which will require handling a polynomial of degree 6. #### 4.2.7 Generic case. This is the case where \(0<\rho\leqq 2\), \(\pi/2<\chi<-\pi/6\), and \(K>0\). Equation (119) can be solved for \(t\), provided that \(s\neq s_{\pm}\), where \[s_{\pm}:=\tan\chi\pm\sqrt{1+\tan^{2}\chi} \tag{142}\] are the roots of the quadratic polynomial in \(s\) on the left hand side of (119). 
Whit \(t\) thus given by \[t=\frac{Ks(s^{2}-3)}{\rho[(s^{2}-1)\cos\chi-2s\sin\chi]}, \tag{143}\] equation (118) reduces to the polynomial \[W(s):=\sum_{i=0}^{6}S_{i}s^{i}=0, \tag{144}\] whose coefficients \(S_{i}\) are given by \[\left\{\begin{aligned} S_{0}&:=-\rho^{2}\cos^{2} \chi(1+\rho\sin\chi),\\ S_{1}&:=-6K^{2}\rho\cos\chi+2\rho^{2}\cos\chi(3\rho \cos^{2}\chi-2\sin\chi-2\rho),\\ S_{2}&:=6K^{2}(\rho\sin\chi+6)+5\rho^{2}\cos^{2}\chi(3 \rho\sin\chi+1)-4\rho^{2}(1+\rho\sin\chi),\\ S_{3}&:=4\rho\cos\chi[\rho^{2}(4-5\cos^{2}\chi)-K^{2} ],\\ S_{4}&:=4K^{2}(\rho\sin\chi-6)+5\rho^{2}\cos^{2}\chi( 1-\rho\sin\chi)+4\rho^{2}(\rho\sin\chi-1),\\ S_{5}&:=2\rho\cos\chi[K^{2}+\rho(3\rho\cos^{2}\chi+2 \sin\chi-2\rho)],\\ S_{6}&:=2K^{2}(2-\rho\sin\chi)+\rho^{2}\cos^{2}\chi( \rho\sin\chi-1).\end{aligned}\right. \tag{145}\] Every real root \(s\neq s_{\pm}\) of \(W\), once combined with \(t\) as in (143), corresponds to two (antipodal) critical points of \(\Phi_{\rm o}\) on \(\mathbb{S}^{2}\). So, if all roots of \(W\) are real and not coincident with either \(s_{\pm}\), and if the case in section 4.1.6 for the existence of background solutions does not apply, then \(\Phi_{\rm o}\) possesses 14 critical points (two of which are at the poles), thus reaching the allowed maximum number, according to the theorem of [19] applied to tensor \(\mathbf{A}\). We see now that only \(s_{+}\) can be a spurious root of \(W\) (and must be suppressed) in the selected sector of parameter space (100) where our analysis is confined. This follows from a direct inspection, which yields \[W(s_{\pm})=4K^{2}(2\mp\rho)\frac{(\sin\chi\pm 1)(2\sin\chi\mp 1)^{2}}{\cos^{2} \chi(\sin\chi\mp 3)\pm 4(1\mp\sin\chi)}, \tag{146}\] so that, for \(0<\rho\leqq 2\) and \(-\pi/2<\chi<-\pi/6\), only \(W(s_{+})\) vanishes, for \(\rho=2\). This shows that on the lateral boundary of the selected sector one root of \(W\) is inadmissible and two critical points of \(\Phi_{\rm o}\) are lost. In particular, for \(\rho=0\), whenever \(W\) has 6 real roots, \(\Phi_{\rm o}\) possesses only 12 critical points, not 14. Next we prove that \(W\) has indeed 6 real roots asymptotically for large \(K\). It follows from (144) and (145) that for \(K\gg 1\) \[W(s)=-K^{2}s(s^{2}-3)w(s)+O(1), \tag{147}\] where \[w(s):=(\rho\sin\chi-2)s^{3}-\rho\cos\chi s^{2}+(\rho\sin\chi+6)s-\rho\cos\chi. \tag{148}\] The algebraic discriminant \(\Delta(w)\) of \(w\) is readily computed, \[\Delta(w)=4[432-\rho^{4}+16\sin\chi(4\cos^{2}\chi-1)\rho^{3}-72\rho^{2}] \tag{149}\] and can be shown to be positive for \(0<\rho\leqq 2\) and \(\pi/2<\chi<-\pi/6\) (it vanishes only along a line in our selected sector, where \(\rho=2\) and \(\chi=-\pi/6\)). Thus, for \(K\) sufficiently large, all roots of \(W\) are real, and since the function \(\kappa(\rho,\chi)\) in (113) is bounded, we easily conclude that \(\Phi_{\rm o}\) possesses 14 critical points on \(\mathbb{S}^{2}\). It should be noted that \(S_{6}\) vanishes precisely for \(K=\kappa(\rho,\chi)\). This means that when \(W\) becomes a polynomial of degree 5, loosing at least one root (and \(\Phi_{\rm o}\), correspondingly, 2 critical points), the 2 critical points connected with the background solutions studied in section 4.1.6 come into the picture, replacing the lost ones. 
Actually, it follows from (120) that whenever a root of \(W\) flies to \(\pm\infty\), the corresponding critical points of \(\Phi_{\rm o}\) approach the great circle of \(\mathbb{S}^{2}\) where \(x_{2}=0\), so that crossing the surface \(K=\kappa(\rho,\chi)\) in parameter space does not result in a discontinuity of the critical points of \(\Phi_{\rm o}\), neither for their number nor for their position. This suggests that the background solutions classified in section 4.1 could never play a role for \(K>0\). There is, however, one singular instance where they can, if \(W\) loses more than one root. This is the case where \(S_{6}\) vanishes alongside with \(S_{0}\), \(S_{1}\), and \(S_{2}\), along the curve in the space \((\rho,\chi,K)\) parameterized by \[\chi=-\arcsin\left(\frac{1}{\rho}\right),\quad K=h(\rho):=\sqrt{\frac{\rho^{2 }-1}{3}},\quad\mbox{for}\quad 1<\rho\leqq 2, \tag{150}\] where \(W\) reduces to \[W_{0}(s):=\frac{8}{3}(\rho^{2}-4)s^{3}(\sqrt{\rho^{2}-1}s^{2}+s-2\sqrt{\rho^{2 }-1}). \tag{151}\] The polynomial \(W_{0}\) has clearly 3 distinct real roots that generate 6 critical points of \(\Phi_{\rm o}\), to which we must add the 2 associated with the background solutions and the 2 poles (as usual), for a total of 10 critical points. We know from our analysis for \(K=0\) that the total number of real roots of \(W\) must decrease upon decreasing \(K\). Since all coefficients of \(W\) are real, this can only happen through the coalescence of two real roots. To identify the critical values of \(K\), for given \(\rho\) and \(\chi\), where this takes place, we need to find a common root for \(W(s)\) and its derivative \(W^{\prime}(s)\). The conventional way is to compute the algebraic discriminant \(\Delta(W)\) of \(W\) and look for its roots. Unfortunately, \(\Delta(W)\) turns out to possess a very complicated expression (involving a polynomial of degree 20 in \(\rho\)). Our strategy will be different. The system requiring that both \(W\) and \(W^{\prime}\) vanish has the following general structure \[K^{2}a_{11}+a_{12} = 0, \tag{152}\] \[K^{2}a_{21}+a_{22} = 0, \tag{153}\] where \(a_{ij}=a_{ij}(\rho,\chi)\) are the entries of a matrix \(A\). This system is compatible only if \(\det A=0\), which turns out to be a polynomial in \(s\) of degree 10, whose complex roots can easily be computed numerically. Among these, we are only interested in the real roots \(s_{*}\) that deliver a positive \(K^{2}\) through either (152) or (153); these are as well all possible double roots of \(W\). We systematically found a single root \(s_{*}\) for all \(0<\rho\leqq 2\) and \(-\pi/2<\chi<-\pi/6\). Figure (a)a shows the critical value \(K_{*}\) of \(K\) corresponding to \(s_{*}\) for \(\chi=-\pi/3\) and \(0\leqq\rho\leqq 2\), along with the graph of the function \(\kappa\) defined in (113). The graph of \(K_{*}\) has two branches connected by a cusp at \(\rho=\rho_{\rm c}\); along the branch with \(\rho<\rho_{\rm c}\), \(s_{*}<0\), whereas \(s_{*}>0\) for \(\rho>\rho_{\rm c}\); for \(\rho=\rho_{\rm c}\), where \(K_{*}\) and \(\kappa\) cross, \(s_{*}=0\). Thus, on both branches of \(K_{*}\) the 5 distinct real roots of \(W\) correspond to 12 critical points of \(\Phi_{\rm o}\) (including the poles), whereas on the cusp the 3 distinct real roots of \(W\) and the 2 background solutions amount to 10 critical points of \(\Phi_{\rm o}\). Figure (b)b illustrates the branches of \(K_{*}\) and their cusp for a sequence of values \(\chi\) in our selected sector (100). 
They play the same separating role that \(g\) and \(f\) play on the planes \(\chi=-\pi/2\) and \(\chi=-\pi/6\), respectively. Together they form a two-vaulted surface, which we call the _separatrix_, traversed by a _groin_ represented by the line of cusps described by (150). Figure 14: Representations of the separatrix \(K=K_{*}(\rho,\chi)\) and the surface \(K=\kappa(\rho,\chi)\) described by (113). Above the separatrix, \(\Phi_{\rm o}\) has 14 critical points, below and on each vault of the separatrix \(\Phi_{\rm o}\) has 12 critical points, whereas it has only 10 on the groin (see figure 15). The behaviour of \(K_{*}\) around a cusp can be obtained from a standard asymptotic analysis. For given \(\chi\), the value of \(K_{*}\) at the cusp is delivered by setting \(\rho=-1/\sin\chi\) in \(h(\rho)\) as defined by (150). For \(\rho\) close to this value, \(K_{*}\) is expressed by \[K_{*}=-\frac{1}{\sqrt{3}}\frac{\cos\chi}{\sin\chi}+\frac{3^{1/6}}{2^{4/3}} \left(\frac{3-4\cos^{2}\chi}{\cos\chi\sin^{2}\chi}\right)\left(\rho+\frac{1}{ \sin\chi}\right)^{2/3}+O\left(\rho+\frac{1}{\sin\chi}\right). \tag{154}\] Figure 16 shows how two critical points of \(\Phi_{\rm o}\) merge upon approaching the cusp from both branches of the separatrix for a given value of \(\chi\): a degenerate saddle with \(\iota=0\) and a standard saddle with \(\iota=-1\) coalesce into a standard saddle. As clearly emerges from combining figures 7, 12, and 15, the number of critical points of \(\Phi_{\rm o}\) suffers discontinuities on the planes \(\chi=-\pi/2\) and \(\chi=-\pi/6\) that delimit the selected sector. Figures 17 and 18 illustrate these transitions. Finally, we show in figure 19 how the number of critical points of \(\Phi_{\rm o}\) changes on the lateral boundary of the selected sector, where \(\rho=2\), upon approaching the plane \(\chi=-\pi/6\). Here the poles are degenerate saddles with \(\iota=0\) for all \(-2\leqq\chi<-\pi/6\); their nature changes as a maximum (minimum) lands on the North (South) pole at \(\chi=-\pi/6\). ### Comparison with previous studies In our previous studies [44, 45], we have taken a combined geometric-analytic approach for the determination of the critical points of \(\Phi_{\rm o}\) (and the corresponding eigenvalues and eigenvectors of the octupolar tensor \(\mathbf{A}\)). Symmetry was at the basis of our geometric considerations, and path continuation was at the basis of our analytic ones. In that approach, the special cases for \(\chi=-\pi/2\) and \(\chi=-\pi/6\) were suggested by symmetry; they were handled directly by solving explicitly the equilibrium equations (104) and (105) for \(\Phi_{\rm o}\). In the algebraic approach put forward by Walcher [120] and fully adopted here, the determination of the critical points of \(\Phi_{\rm o}\) on the symmetry planes in parameter space stems from the study of the roots of low-degree polynomials, for which resolvent formulas are available. The outcomes of this analysis, which has been detailed above, confirmed our previous findings and are summarized in figures 7 and 12. Things were different in the interior of the selected sector in parameter space, representative of its whole. The algebraic approach, albeit perhaps more pedantic (as testified by the detailed case distinction we had to work out not to loose solutions), revealed itself more accurate. The major differences with our previous findings are summarized below. 1. 
We found an explicit, analytic expression (150) for the line of cusps that traverses the separatrix, acting as a groin joining two vaults. 2. We showed that the total number of critical points of the octupolar potential is 10 Figure 16: For \(\chi=-\pi/3\), the cusp in the separatrix is hit at \(\rho\doteq 1.15\) (see figure 14a). Here we show the contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) on the two branches of the separatrix (panels (a) and (c)) and on the cusp (panel (b)). \(\Phi_{\rm o}\) has 12 critical points in (a) and (c), two of which are degenerate saddles with \(\iota=0\) (marked by yellow circles). \(\Phi_{\rm o}\) has 10 critical points in panel (b), none of which has index \(\iota=0\). along the line of cusps, instead of the \(8\) we had found in [45]. 3. We showed that in the interior of the representative sector in parameter space the whole separatrix (away from the line of cusps) bears \(12\) critical points for the octupolar potential, instead of the \(10\) we had found in [45] on one component bordering on the line of cusps. 4. We found another singular case with only \(8\) critical points for the octupolar potential, a whole circle in parameter space (corresponding to \(\rho=1\), \(K=0\) in our representation). What was predominantly responsible for the incompleteness of our previous analyses is a type of potentially baffling critical point of the octupolar potential, which we had partly missed. This is a singular point of the index field \(\mathbf{u}_{\Phi_{\rm o}}\) in (86) that can be lifted by a local surgery of \(\Phi_{\rm o}\). More particularly, a degenerate saddle with index \(\iota=0\), easily missed in a standard topological analysis of the index field \(\mathbf{u}_{\Phi_{\rm o}}\) (usually Figure 17: Comparison between the contour plots of \(\Phi_{\rm o}\) on the plane \((x_{1},x_{3})\) for different points on the separatrix in parameter space, taken for the two values of \(\chi\) corresponding to the graph of \(g\) in figure 7 and to the graph with \(j=1\) in figure 15, respectively. Panels (b) and (e) refer to the two cusps involved. The number of critical points of \(\Phi_{\rm o}\) changes as follows: from \(10\) to \(12\) going from (a) to (d), from \(8\) to \(10\) going from (b) to (e); it is the same in (c) and (f). Yellow circles mark degenerate saddles with index \(\iota=0\). visually associated with the features of a countour plot). The algebraic method, on the other hand, clearly identifies these elusive critical points with the real roots of even multiplicity of the polynomials involved. These are indeed the roots that a slight, surgical perturbation of the polynomial may make either disappear or unfold in a number of simple roots (with vanishing total topological index). Contrariwise, a non-simple root with odd multiplicity cannot be associated with a critical point with index \(\iota=0\), as perturbations of the polynomial cannot remove it. We have seen both these mechanisms at work here: a critical point with index \(\iota=0\) suddenly appearing, disappearing, or splitting; three critical points coming together in a single one with \(\iota\neq 0\). We have seen the first instance on the separatrix and the second on the line of cusps and the circle with the least number of critical points (eight). This also explains, for what is worth, why critical points were missed in [45]. 
These were the degenerate saddles with \(\iota=0\) on the fold of the separatrix that borders the Figure 18: Comparison between the contour plots of \(\Phi_{\mathrm{o}}\) on the plane \((x_{1},x_{3})\) for different points on the separatrix in parameter space, taken for the two values of \(\chi\) corresponding to the graph of \(f\) in figure 12 and to the graph with \(j=9\) in figure 15, respectively. Panels (e) and (b) refer to the cusp involved in one separatrix and to its cusp-free limit in the other, respectively. The number of critical points of \(\Phi_{\mathrm{o}}\) changes as follows: from 10 to 12 going both from (a) to (d) and from (c) to (f), from \(\infty\) to 10 going from in (b) and (e). Yellow circles mark degenerate saddles with index \(\iota=0\). plane \(\chi=-\pi/2\) for \(0\leqq\rho\leqq 1\). No critical point with \(\iota=0\) lives on this border, and so it could not be propagated to the rest of the separatrix, as was instead the one that lives on the adjoining border for \(1<\rho\leqq 2\). ## 5 Trace Extensions Our analysis so far has been confined to fully symmetric octupolar tensors \(\mathbf{A}\) with vanishing traces. Here, we broaden the scope of our study by allowing \(\mathbf{A}\) to have non-vanishing traces, while still retaining full symmetry. This will add 3 more parameters to an already crowded scene. However, the octupolar potential will again prove a useful tool to describe this larger class of tensors. ### General symmetric and trace type tensors Let us consider tensors which are fully symmetric, but not necessarily traceless. The most general potential associated to a fully symmetric tensor is written in (23); there we now make use of definitions (69) and \[\gamma_{1}:=A_{133},\quad\gamma_{2}:=A_{112},\quad\ \gamma_{3}:=A_{223}. \tag{155}\] Traceless tensors are characterized by having \[\gamma_{i}=-(\alpha_{i}+\beta_{i}),\quad i=1,2,3. \tag{156}\] For later reference, we will write \[\gamma_{i}=\frac{1}{3}A_{i}-(\alpha_{i}+\beta_{i}); \tag{157}\] thus the coefficients \(A_{i}\) will characterize the _trace type_ part of tensors: traceless tensors are characterized by having \(A_{i}=0\) for \(i=1,2,3\). Figure 19: Contour plots of \(\Phi_{\mathrm{o}}\) on the plane \((x_{1},x_{3})\) for \(\rho=2\), \(K=1/2\). and two different values of \(\chi\). The number of critical points of \(\Phi_{\mathrm{o}}\) is 12 in (a) and 10 in (b). In going from (a) to (b), a maximum (minimum) lands on the North (South) pole changing its singular nature of degenerate saddle with index \(\iota=0\). We will consider a general fully symmetric tensor as being the sum of a traceless tensor and a _trace type_ tensor; the latter are thus identified as having \(A_{i}\) arbitrary real constants, and \(\alpha_{i}=\beta_{i}=0\). The most general octupolar potential associated with a fully symmetric tensor can be written more compactly (understanding cyclic permutations in \(i\), i.e., \(i=4\) means \(i=1\) and \(i=0\) means \(i=3\)) as \[\Phi_{\rm s}=6\alpha_{0}x_{1}x_{2}x_{3}+\sum_{i=1}^{3}\alpha_{i}x_{i}^{3}+3 \sum_{i=1}^{3}\beta_{i}x_{i}x_{i+1}^{2}+3\sum_{i=1}^{3}\gamma_{i}x_{i}x_{i-1}^ {2}. \tag{158}\] In the same formalism, the most general octupolar potential associated with a _traceless_ fully symmetric tensor in (89) can be rewritten as \[\Phi=6\alpha_{0}x_{1}x_{2}x_{3}+\sum_{i=1}^{3}\alpha_{i}x_{i}\left(x_{i}^{2}-3 x_{i-1}^{2}\right)+\sum_{i=1}^{3}\beta_{i}x_{i}\left(x_{i+1}^{2}-x_{i-1}^{2} \right). 
\tag{159}\] The difference between these is the potential associated with _trace type_ tensors, and turns out to be \[\Phi_{\rm t}:=\Phi_{\rm s}-\Phi=3\sum_{i=1}^{3}(\alpha_{i}+\beta_{i}+\gamma_{ i})x_{i}x_{i-1}^{2}=\sum_{i=1}^{3}A_{i}x_{i}x_{i-1}^{2}. \tag{160}\] **Remark 13**: In the original notation introduced in (23), \(\Phi_{\rm t}\) can alternatively be written as \[\Phi_{\rm t} =(A_{311}+A_{322}+A_{333})x_{3}^{3}+3(A_{111}+A_{122}+A_{133})x_{1 }x_{3}^{2}\] \[+3(A_{211}+A_{222}+A_{233})x_{2}x_{3}^{2}. \tag{161}\] **Remark 14**: Reasoning as in section 2.5.1, we can lower by 4 the number of independent parameters appearing in \(\Phi_{\rm s}\) by _orienting_ the potential in (158). ### Trace type potential Here our attention will be confined to the general trace type potential in (160), which we write in expanded form as \[\Phi_{\rm t}=A_{1}x_{1}x_{3}^{2}+A_{2}x_{2}x_{1}^{2}+A_{3}x_{3}x_{2}^{2}. \tag{162}\] This potential shares several of the remarkable properties of the potential \(\Phi\) corresponding to traceless tensors studied in sections 3 and 4. It is covariant under inversion of \(\bi{x}\), \(x_{i}\to-x_{i}\) (\(i=1,2,3\)), and also under inversion of parameters \(A_{i}\), collected in a vector \(\bi{A}\), \(A_{i}\to-A_{i}\) (\(i=1,2,3\)). Formally, we write these properties as follows \[\Phi_{\rm t}(-\bi{x},\bi{A})=-\Phi_{\rm t}\left(\bi{x},\bi{A}\right)=\Phi_{\rm t }(\bi{x},-\bi{A}), \tag{163}\] which implies that \(\Phi_{\rm t}\) is also invariant under a simultaneous inversion of \(\bi{x}\) and \(\bi{A}\), \[\Phi_{\rm t}(-\bi{x},-\bi{A})=\Phi_{\rm t}(\bi{x},\bi{A}). \tag{164}\] It is likewise invariant under a simultaneous identical permutation of the \(x_{i}\) and of the \(A_{i}\), \[\Phi_{\rm t}(\pi(\bi{x}),\pi(\bi{A}))=\Phi_{\rm t}(\bi{x},\bi{A}). \tag{165}\] By the inversion covariance of \(\Phi_{\rm t}\), we can just study it on a hemisphere (e.g. for \(x_{3}\geqq 0\)) and for non-negative values of one of the control parameters (e.g. for \(A_{2}\geqq 0\)); in the following, we shall explore this possibility. We restrict \(\Phi_{\rm t}\) to the unit sphere \(\mathbb{S}^{2}\); the two standard ways of doing this (which we will use alternatively according to convenience) are: 1. Consider the upper (Northern) and the lower (Southern) hemispheres separately; on these we can just set \[x_{3}=\pm\sqrt{1-x_{1}^{2}-x_{2}^{2}};\] (166) we denote the potential thus obtained as \(\Phi_{\rm t}^{\pm}\). 2. Pass to spherical coordinates: \[x_{1}=\ \cos\theta\cos\phi,\quad x_{2}=\sin\theta\cos\phi,\quad x_{3}=\sin\phi,\] (167) where \(\theta\in[-\pi,\pi]\) and \(\phi\in[-\pi/2,\pi/2]\). We will mostly consider the restriction to the unit sphere using Cartesian coordinates (that is, (a) above); this will lead us to consider separately the potential in the two hemispheres. #### 5.2.1 Oriented potential on hemispheres. The potential in the Northern hemisphere is explicitly written as \[\Phi_{\rm t}^{+}=A_{1}x_{1}(1-x_{1}^{2}-x_{2}^{2})+A_{2}x_{1}^{2}x_{2}+A_{3}x_ {2}^{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}}, \tag{168}\] and its gradient is immediately computed to be \[\nabla\Phi_{\rm t}^{+}=\left(\begin{array}{c}A_{1}(1-3x_{1}^{2}-x_{2}^{2})+ 2A_{2}x_{1}x_{2}-A_{3}\frac{x_{1}x_{2}^{2}}{\sqrt{1-x_{1}^{2}-x_{2}^{2}}}\\ -2A_{1}x_{1}x_{2}+A_{2}x_{1}^{2}+A_{3}\frac{(2-2x_{1}^{2}-3x_{2}^{2})x_{2}}{ \sqrt{1-x_{1}^{2}-x_{2}^{2}}}\end{array}\right). 
\tag{169}\] It should be stressed that \(\Phi_{\rm t}^{+}\) has no special invariance or covariance properties under reflections in the \(x_{1},x_{2}\) variables (together or one at a time), while it retains of course the covariance under reflection in the \(A_{i}\) parameters. On the other hand, \(\Phi_{\rm t}^{+}\) is invariant under either one of the following transformations: \[(A_{1},A_{2},A_{3};x_{1},x_{2},x_{3})\to(-A_{1},A_{2},A_{3};-x_{1 },x_{2},x_{3}),\] \[(A_{1},A_{2},A_{3};x_{1},x_{2},x_{3})\to(A_{1},-A_{2},A_{3};x_{1 },-x_{2},x_{3}). \tag{170}\] We can orient the potential requiring that it has a critical point in the North pole (and hence also in the South pole); the pole corresponds to \(x_{1}=0\), \(x_{2}=0\), and it is immediately seen from the formula for \(\nabla\Phi_{\rm t}^{+}\) above that this is a critical point if and only if \[A_{1}=0. \tag{171}\] We will assume this to be the case. However, there is no guarantee that the critical points at the poles are either a maximum or a minimum. In this way we are led to consider the oriented potential \[\Phi_{\rm t}^{+}=A_{2}x_{1}^{2}x_{2}+A_{3}x_{2}^{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}}. \tag{172}\] Looking at (170), we see that this retains the second of those invariance properties, while the first is now reduced to the statement that the potential is even in \(x_{1}\). The gradient of the oriented potential in (172) is \[\nabla\Phi_{\rm t}^{+}=\left(\begin{array}{c}2A_{2}x_{1}x_{2}-A_{3}\frac{x_ {1}x_{2}^{2}}{\sqrt{1-x_{1}^{2}-x_{2}^{2}}}\\ A_{2}x_{1}^{2}+A_{3}\frac{(2-2x_{1}^{2}-3x_{2}^{2})x_{2}}{\sqrt{1-x_{1}^{2}-x_{ 2}^{2}}}\end{array}\right). \tag{173}\] Similarly, the potential in the Southern hemisphere is \[\Phi_{\rm t}^{-}=A_{1}x_{1}(1-x_{1}^{2}-x_{2}^{2})+A_{2}x_{1}^{2}x_{2}-A_{3}x_ {2}^{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}}, \tag{174}\] and its gradient is immediately computed to be \[\nabla\Phi_{\rm t}^{-}=\left(\begin{array}{c}A_{1}(1-3x_{1}^{2}-x_{2}^{2})+2 A_{2}x_{1}x_{2}-A_{3}\frac{x_{1}x_{2}^{2}}{\sqrt{1-x_{1}^{2}-x_{2}^{2}}}\\ -2A_{1}x_{1}x_{2}+A_{2}x_{1}^{2}-A_{3}\frac{(2-2x_{1}^{2}-3x_{2}^{2})x_{2}}{ \sqrt{1-x_{1}^{2}-x_{2}^{2}}}\end{array}\right). \tag{175}\] Again to guarantee having a critical point in the South pole we need \(A_{1}=0\); this reduces \(\Phi_{\rm t}^{-}\) to \[\Phi_{\rm t}^{-}=A_{2}x_{1}^{2}x_{2}-A_{3}x_{2}^{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}}, \tag{176}\] We are thus left with the two control parameters, \(A_{2}\) and \(A_{3}\). It is convenient to consider separately the cases with \(A_{2}=0\) and with \(A_{2}\neq 0\). #### 5.2.2 The case \(A_{2}=0\). In this case (assuming \(A_{3}\neq 0\), lest \(\Phi_{\rm t}\) would identically vanish), the potential on the Northern hemisphere reduces to \[\Phi_{\rm t}^{+}=A_{3}x_{2}^{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}}. \tag{177}\] This has degenerate critical points on the whole set \(x_{2}=0\) (which corresponds to a meridian on the hemisphere, and by symmetry there is a whole circle \(\mathbb{S}^{1}\subset\mathbb{S}^{2}\) of degenerate critical points), including the pole, and two isolated critical points at \[\left(0,\pm\sqrt{2/3}\right). \tag{178}\] The meridian \(x_{2}=0\) (in the Northern hemisphere) is hyperbolically unstable for \(A_{3}>0\) and hyperbolically stable for \(A_{3}<0\). As for the two isolated critical points (178), these are maxima (for \(A_{3}>0\)). Finally, analyzing the situation on the equator, we detect two critical points at \((1,0)\) and \((-1,0)\); these are degenerate saddles. 
Figures 20 and 21 illustrate and confirm the analysis just performed. #### 5.2.3 The case \(A_{2}\neq 0\). In this case it is convenient to write \[A_{3}=\mu A_{2}, \tag{179}\] so that \[\Phi_{\rm t}^{+}=A_{2}x_{2}\left(x_{1}^{2}+\mu x_{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}} \right), \tag{180}\] and \[\nabla\Phi_{\rm t}^{+}=A_{2}\left(\begin{array}{c}x_{1}x_{2}\left(2-\mu\frac {x_{2}}{\sqrt{1-x_{2}^{2}-x_{3}^{2}}}\right)\\ x_{1}^{2}+\mu\frac{x_{2}\left(2-2x_{1}^{2}-3x_{2}^{2}\right)}{\sqrt{1-x_{2}^{2 }-x_{3}^{2}}}\end{array}\right). \tag{181}\] It is clear from (179) that \(A_{2}\) is a multiplicative factor for the potential; thus, rescaling \(\Phi_{\rm t}^{+}\) we can just consider the cases \(A_{2}=1\), with no prejudice for our analysis. Similar formulas hold for \(\Phi_{\rm t}^{-}\); in view of the inversion symmetry of \(\Phi_{\rm t}\), we can just work with \(\Phi_{\rm t}^{+}\), which we will do from now on. Figure 21: Polar plot of \(\Phi_{\rm t}\) in (162) for the case \(A_{1}=A_{2}=0\), \(A_{3}=1\). As customary here, minima are invaginated under maxima since \(\Phi_{\rm t}\) is odd under central inversion. Figure 20: Contour plot on the plane \((x_{1},x_{2})\) of the potential \(\Phi_{\rm t}^{+}\) in (172) for \(A_{2}=0\), \(A_{3}=1\). A change of sign in \(A_{2}\) (keeping \(\mu\) unchanged, which means changing also the sign of \(A_{3}\)) would just flip the potential--in particular, minima would become maxima, and viceversa--so we can as well consider just the case \(A_{2}=1\), which we do from now on.1 Footnote 1: We stress that this holds as far as we only consider the potential associated with trace type tensors _per se_; if we also consider the potential associated with traceless tensors, the scales of the two potentials cannot be set independently. Note that for \(A_{2}>0\), we always have that \[\Phi^{+}_{\rm t}(x_{1},|x_{2}|)\geq\Phi^{+}_{\rm t}(x_{1},-|x_{2}|); \tag{182}\] more precisely, \[\Phi^{+}_{\rm t}(x_{1},|x_{2}|)-\Phi^{+}_{\rm t}(x_{1},-|x_{2}|)=2A_{2}x_{1}^{2 }|x_{2}|. \tag{183}\] Moreover, for \(x_{1}=0\) the potential is even in \(x_{2}\), i.e., \[\Phi^{+}_{\rm t}(0,x_{2})=-\Phi^{+}_{\rm t}(0,-x_{2}). \tag{184}\] Similar formulas hold, with changes of sign, for \(A_{2}<0\). (We recall that for \(A_{2}=0\) we have a degenerate situation, the meridian \(x_{2}=0\) being critical, see above; this corresponds to a global bifurcation.) Summarizing, we are reduced to study \[\Phi^{+}_{\rm t}=x_{2}\left(x_{1}^{2}+\mu x_{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}} \right), \tag{185}\] and \[\nabla\Phi^{+}_{\rm t}=\left(\begin{array}{c}x_{1}x_{2}\left(2-\mu\frac{x_{ 2}}{\sqrt{1-x_{2}^{2}-x_{3}^{2}}}\right)\\ x_{1}^{2}+\mu\frac{x_{2}\left(2-2x_{1}^{2}-3x_{2}^{2}\right)}{\sqrt{1-x_{2}^{2 }-x_{3}^{2}}}\end{array}\right) \tag{186}\] in the Northern hemisphere; while in the Southern one we have \[\Phi^{-}_{\rm t}=x_{2}\left(x_{1}^{2}-\mu x_{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}} \right), \tag{187}\] and \[\nabla\Phi^{-}_{\rm t}=\left(\begin{array}{c}x_{1}x_{2}\left(2-\mu\frac{x_{ 2}}{\sqrt{1-x_{2}^{2}-x_{3}^{2}}}\right)\\ x_{1}^{2}-\ \mu\frac{x_{2}\left(2-2x_{1}^{2}-3x_{2}^{2}\right)}{\sqrt{1-x_{2}^{2}-x_{3}^ {2}}}\end{array}\right). \tag{188}\] Note that the formulas for the Northern and Southern hemispheres are interchanged under a change of sign in \(\mu\); that is, in an obvious notation, we have that \[\Phi^{-}_{\rm t}(x_{1},x_{2};\mu)=\Phi^{+}_{\rm t}(x_{1},x_{2};-\mu). \tag{189}\] #### 5.2.4 Critical points. 
We will now look at the critical points for \(\Phi^{+}_{\rm t}\); first we determine their location, and then we will study their nature. #### 5.2.5 The case \(\mu=0\). It is convenient to single out the case \(\mu=0\); in this case, we simply have that \[\Phi_{\rm t}^{+}=x_{1}^{2}x_{2}, \tag{190}\] which, being independent of \(x_{3}\), is the same as \(\Phi_{\rm t}^{-}\) (and \(\Phi_{\rm t}\) itself). Despite its simplicity, equation (190) is unfit to reveal the critical points of \(\Phi_{\rm t}\) on the equator of \(\mathbb{S}^{2}\) at \(x_{3}=0\). For this purpose, we find it convenient to consider the representation in spherical coordinates introduced in (167), \[\Phi_{\rm t}=\sin\theta\cos^{2}\theta\cos^{3}\phi. \tag{191}\] Hence \[\nabla\Phi_{\rm t}=\pmatrix{-3\sin\theta\sin\phi\cos^{2}\theta\cos^{2}\phi \cr\cos^{3}\theta\cos^{3}\phi-2\cos^{3}\phi\sin^{2}\theta\cos\theta}. \tag{192}\] The first component of the gradient vanishes for \(\phi=\pm\pi/2\) (these corresponds to North and South poles respectively), for \(\phi=0\) (the equator), and for \(\theta=m\pi/2\). Looking also at the second component, we get that the critical points on the equator are located at \[\theta=\pm\pi/2,\quad\theta=\pm\arccos\left(\pm\sqrt{2/3}\right). \tag{193}\] Looking instead at critical points on \(\theta=\pm\pi\), these reduce again to the poles; as for \(\theta=\pm\pi/2\), the whole curve \(\phi\in[-\pi/2,\pi/2]\) is critical; this is just the \(x_{1}=0\) meridian. The stability of these critical points is also easily analyzed by considering the matrix of second derivatives in the angular coordinates. It turns out that critical points at the poles (\(\phi=\pm\pi/2\)) and on the meridian \(x_{1}=0\) (\(\theta=\pm\pi/2\)) have degenerate stability; as for the other critical points on the equator (\(\phi=0\)), those at \(\theta=\arccos(\sqrt{2/3})\) are maxima, those at \(\theta=-\arccos(\sqrt{2/3})\) are minima. This completes the analysis of the \(\mu=0\) case; figure 22 illustrates it. Figure 22: The potential \(\Phi_{\rm t}^{+}\) in (190) (which is the same as \(\Phi_{\rm t}^{-}\) and \(\Phi_{\rm t}\)) exhibits five critical points. #### 5.2.6 The case \(\mu\neq 0\). As noted above while discussing the case \(\mu=0\), the restriction to Northern (or Southern) hemisphere fails to detect critical points lying on the equator. It is thus convenient to analyze first the equatorial region by considering the spherical coordinates representation (24). This yields (also in view of (171) and (179) with \(A_{2}=1\)) \[\Phi_{\rm t}=\cos^{2}\theta\sin\theta\cos^{3}\phi+\mu\sin\phi\sin^{2}\theta\cos ^{2}\phi. \tag{194}\] The gradient in the spherical coordinates reads as \[\nabla\Phi_{\rm t}=\pmatrix{\cos^{2}\phi\cos\theta\left(\frac{1}{2}\cos\phi(3 \cos 2\theta-1)+2\mu\sin\phi\sin\theta\right)\cr\cos\phi\sin\theta\left(\mu\sin \theta\cos^{2}\phi-3\cos^{2}\theta\sin\phi\cos\phi-2\mu\sin^{2}\phi\sin\theta \right)}. \tag{195}\] Since at this stage we _only_ want to identify the critical points lying on the equator, we set \(\phi=0\) in (195), which becomes \[\nabla\Phi_{\rm t}|_{\phi=0}=\pmatrix{\frac{1}{2}\cos\theta\left(3\cos 2\theta -1\right)\cr\mu\sin^{2}\theta}. \tag{196}\] For \(\mu\neq 0\), vanishing of the second component requires \(\theta=0\) or \(\theta=\pm\pi\); but at these points the first component does not vanish. We conclude that for \(\mu\neq 0\) there are no critical points lying _exactly_ on the equator. 
This analysis assures us that use of Cartesian coordinates and reduction to hemispheres will be able to detect all critical points in the case \(\mu\neq 0\). We shall thus consider \(\Phi_{\rm t}^{+}\) and its gradient \(\nabla\Phi_{\rm t}^{+}\) in the coordinates \((x_{1},x_{2})\). Some standard algebra shows that the equation \(\nabla\Phi_{\rm t}^{+}=0\) has three roots independent of \(\mu\), namely, \[\cases{p_{1}:x_{1}=0,\quad x_{2}=0,\cr p_{2}:x_{1}=0,\quad x_{2}=-\sqrt{2/3}, \cr p_{3}:x_{1}=0,\quad x_{2}=\sqrt{2/3},\cr} \tag{197}\] and two roots depending on \(\mu\), which only exist for \(0<|\mu|\leq\sqrt{2}\), namely, \[p_{4}:x_{1}=-\frac{2}{\sqrt{3}}\cos\xi,\quad x_{2}=\sqrt{\frac{2}{3}}\sin\xi \tag{198}\] \[p_{5}:x_{1}=\frac{2}{\sqrt{3}}\cos\xi,\quad x_{2}=\sqrt{\frac{2}{3}}\sin\xi, \tag{199}\] where \(\xi\) is related to \(\mu\) through the equation \[\xi=\arcsin\left(\mbox{sgn}(\mu)\sqrt{\frac{2}{4-\mu^{2}}}\right) \tag{200}\] and ranges in the interval \(-\pi/2\leqq\xi<-\pi/4\) for \(-\sqrt{2}\leqq\mu<0\) and in the interval \(\pi/4<\xi\leqq\pi/2\) for \(0<\mu\leqq\sqrt{2}\). All critical points \(p_{1}\)-\(p_{5}\) are illustrated in figure 23. In particular, according to (198) and (199), \(p_{4}\) and \(p_{5}\) run on the ellipse \[\frac{3}{4}x_{1}+\frac{3}{2}x_{2}^{2}=1, \tag{201}\] which intersects the unit circle precisely for \(\xi=\pm\pi/4\) and \(\xi=\pm 3\pi/4\). The critical points \(p_{4}\) and \(p_{5}\) are symmetric with respect to the \(x_{2}\)-axis; they are located on the lower half of the ellipse (201) for \(\mu<0\) and on the upper part for \(\mu>0\). For \(\mu=-\sqrt{2}\), both \(p_{4}\) and \(p_{5}\) collapse on \(p_{2}\); as \(\mu\) increases, they separate and move symmetrically until tending to reach the unit circle (equator of \(\mathbb{S}^{2}\)) as \(\mu\to 0\). There, they jump onto the upper part of the ellipse; as \(\mu\) further increases, \(p_{4}\) and \(p_{5}\) move symmetrically towards the \(x_{2}\)-axis, which is reached for \(\mu=\sqrt{2}\), when \(p_{4}\) and \(p_{5}\) coalesce on \(p_{3}\) (see figure 23). **Remark 15**: As noted in section 5.2.5, for \(\mu=0\) the potential \(\Phi_{\rm t}\) has more critical points than those we retrieve from the preceding analysis in the limit as \(\mu\to 0\). In terms of the original parameters \(A_{i}\), we summarize our conclusions as follows: **Proposition 1**: _Let \(A_{1}\) be zero and \(A_{2}\) be nonzero. For \(0<|A_{3}|<\sqrt{2}|A_{2}|\) all the critical points listed above are real, and the potential has five critical points in the Northern hemisphere (and, by symmetry, five critical points in the Southern hemisphere), hence ten critical points in total. For \(|A_{3}|>\sqrt{2}|A_{2}|\) the potential has three critical points in the Northern hemisphere (and, by symmetry, three critical points in the Southern hemisphere), hence six critical points in total._ In the limiting case where \(|A_{3}|=\sqrt{2}|A_{2}|\), all critical points reduce to (197); one of them becomes degenerate, thus hosting a local bifurcation. In summary, **Proposition 2**: _Let \(A_{1}\) be zero and \(A_{2}\) be nonzero. At the bifurcation, i.e., for \(|A_{3}|=\sqrt{2}|A_{2}|\), there are three critical points, \(p_{1}\), \(p_{2}\), and \(p_{3}\), in the Northern hemisphere Figure 23: The critical points of \(\Phi_{\rm t}^{+}\) in (180) for \(A_{2}=1\). Three of them, namely \(p_{1}\), \(p_{2}\), and \(p_{3}\), are independent of \(\mu\); they are marked as black dots. 
The other critical points, namely \(p_{4}\) and \(p_{5}\), exist only for \(0<|\mu|\leq\sqrt{2}\) and are located within the unit circle (equator of \(\mathbb{S}^{2}\)) either on the lower half of the ellipse (201) for \(\mu<0\) (blue dots), or on the upper half for \(\mu>0\) (red dots). For \(\mu=-\sqrt{2}\), \(p_{4}\) and \(p_{5}\) coalesce on \(p_{2}\), while for \(\mu=\sqrt{2}\) they coalesce on \(p_{3}\). At \(\mu=0\), they reach the equator and jump from one side to the other of the ellipse. (and three mirroring critical points in the Southern one), hence a total of six critical points; either the point \(p_{2}\) or the point \(p_{3}\) is degenerate, depending on the sign of \(A_{3}/A_{2}\)._ The values taken by the potential at these critical points are promptly computed: \[\Phi^{+}_{\rm t}(p_{1}) =0, \tag{202}\] \[\Phi^{+}_{\rm t}(p_{2}) =\Phi^{+}_{\rm t}(p_{3})=\frac{2\mu}{3\sqrt{3}},\] (203) \[\Phi^{+}_{\rm t}(p_{4}) =\Phi^{+}_{\rm t}(p_{5})=\mathrm{sgn}(\mu)\frac{4}{3\sqrt{3}\sqrt {4-\mu^{2}}}. \tag{204}\] These critical values are plotted in figure 24 as functions of \(\mu\). **Remark 16**: Note that \(\Phi^{+}_{\rm t}(p_{2})=\Phi^{+}_{\rm t}(p_{3})\), which corresponds to \(\Phi^{+}_{\rm t}\) being even in \(x_{2}\) on the \(x_{1}=0\) line; and \(\Phi^{+}_{\rm t}(p_{4})=\Phi^{+}_{\rm t}(p_{5})\), which corresponds to the invariance of \(\Phi^{+}_{\rm t}\) under inversion in \(x_{1}\). #### 5.2.7 Nature of critical points. Having identified the critical points, we should enquire if these are maxima, minima, or saddles. In order to ascertain the nature of the critical points, we should consider the matrix of second derivatives \[{\sf H}:=\nabla^{2}\Phi^{+}_{\rm t}=\left(\begin{array}{cc}\frac{\partial^{ 2}\Phi^{+}_{\rm t}}{\partial x^{2}_{1}}&\frac{\partial^{2}\Phi^{+}_{\rm t}}{ \partial x_{1}\partial x_{2}}\\ \frac{\partial^{2}\Phi^{+}_{\rm t}}{\partial x_{1}\partial x_{2}}&\frac{ \partial^{2}\Phi^{+}_{\rm t}}{\partial x^{2}_{2}}\end{array}\right), \tag{205}\] and compute its eigenvalues, or at least their sign, at the critical points \(p_{j}\) identified in section 5.2.6 above. We will stick to our assumption that \(A_{1}=0\) and \(A_{2}=+1\) (with \(A_{3}=\mu\)); the case of negative \(A_{2}\) can be recovered recalling that a change of sign in \(A_{2}\) corresponds to a change of sign in the potential, and that \(\mu=A_{3}/A_{2}\) (so a change of sign in \(A_{2}\) leaving \(A_{3}\) unchanged corresponds to a change of sign in \(\mu\), while a change of sign in both \(A_{2}\) and \(A_{3}\) leaves \(\mu\) unchanged). For the first three critical points the eigenvalues are easily computed and provide simple formulas: \[\left\{\begin{aligned} & p_{1}:\lambda_{1}=0,\quad\lambda_{2}=2\mu,\\ & p_{2}:\lambda_{1}=-4\sqrt{3}\mu,\quad\lambda_{2}=-\sqrt{2/3} \left(2+\sqrt{2}\mu\right),\\ & p_{3}:\lambda_{1}=-4\sqrt{3}\mu,\quad\lambda_{2}=+\sqrt{2/3} \left(2-\sqrt{2}\mu\right).\end{aligned}\right. \tag{206}\] Note that in the degenerate case where \(\mu=0\) the first eigenvalue of all these critical point vanishes, while for \(\mu=\pm\sqrt{2}\) the second eigenvalue vanishes in one. **Remark 17**: This simple analysis is not conclusive for the critical point at the North pole. Actually, the series expansion along \(x_{2}=0\) is flat to all orders, as clear from the explicit expression of \(\Phi_{\rm t}\). 
The stability of points \(p_{2}\) and \(p_{3}\) is promptly analyzed: * The point \(p_{2}\) is a minimum for \(\mu>0\), a saddle for \(-\sqrt{2}<\mu<0\), and a maximum for \(\mu<-\sqrt{2}\); * The point \(p_{3}\) is a minimum for \(\mu<0\), a saddle for \(0<\mu<\sqrt{2}\), and a maximum for \(\mu>\sqrt{2}\). For the other critical points, \(p_{4}\) and \(p_{5}\), we find it more convenient to characterize their nature by computing the trace and determinant of the Hessian matrix \(\mathsf{H}\) in (205) in terms of the parameter \(\xi\) introduced in (200), \[\tr\mathsf{H}=\frac{2\sqrt{6}(4\cos^{4}\xi-11\cos^{2}\xi+12)}{3\sin\xi(2\cos^{ 2}\xi-1)},\quad\det\mathsf{H}=\frac{16\cos^{2}\xi}{1-2\cos^{2}\xi}, \tag{207}\] where use has also been made of the inverse of the function in (200), \[\mu=\frac{\sqrt{2}}{\sin\xi}\sqrt{2\sin^{2}\xi-1}. \tag{208}\] It is a simple matter to conclude from the study of the signs of \(\tr\mathsf{H}\) and \(\det\mathsf{H}\) that * For \(-\sqrt{2}<\mu<0\) the points \(p_{4}\) and \(p_{5}\) are local minima; * For \(0<\mu<\sqrt{2}\) the points \(p_{4}\) and \(p_{5}\) are local maxima. These results can be confirmed by computing numerically the index for the different critical points; such computations are summarized in table 2, showing the index of critical points \(p_{2}\)-\(p_{5}\) for different intervals of values of \(\mu\) (recall that \(p_{4}\) and \(p_{5}\) only exist for \(0<|\mu|\leq\sqrt{2}\)). **Remark 18**: As for the degenerate critical point \(p_{1}\), we have not computed directly its index, but the Poincare-Hopf theorem requires \(p_{1}\) to be a saddle, as the total index of all critical points must be \(\iota=+2\) on the whole sphere \(\mathbb{S}^{2}\). In figure 25 we present the contour plots of \(\Phi_{\rm t}^{+}\) for different values of \(\mu\). They are accompanied in figure 26 by the corresponding polar plots. ### Full potential We want now to consider a general potential \(\Phi_{\rm s}\), i.e., the superposition of a traceless potential \(\Phi\), see equation (159), and of a trace type potential \(\Phi_{\rm t}\), see equation (160). The rich phenomenology displayed by the traceless part \(\Phi\) can only be enriched by considering also a trace type part; a complete analysis would most likely lead to a rather complicate discussion. Here our study will be confined to a simple, explanatory case. The most striking feature arising from the analysis of traceless tensor potentials is maybe the presence of an exactly tetrahedral phase [44]; we wonder if such a phase can also exist in the presence of a pure trace type contribution. The general traceless potential \(\Phi\) is written as in (159). The situation in which it enjoys full tetrahedral symmetry is obtained in section 7.2 of [44] for \[\alpha_{0}=0,\ \alpha_{1}=0,\ \alpha_{2}=\pm\frac{1}{\sqrt{2}},\ \alpha_{3}=1,\ \beta_{3}=-\frac{1}{2}. \tag{209}\] In this way the oriented traceless potential \(\Phi\) reads as \[\Phi_{\rm T}:=x_{3}^{3}-\frac{3}{2}\left(x_{1}^{2}+x_{2}^{2}\right)x_{3}+\frac {1}{\sqrt{2}}\left(x_{2}^{2}-3x_{1}^{2}\right)x_{2}, \tag{210}\] where we have set \(\alpha_{2}=1/\sqrt{2}\), for definiteness. 
The four maxima of the potential \(\Phi_{\rm T}\) are located at the vertices of a regular tetrahedron, and more specifically at the points given in three-dimensional Cartesian coordinates by (see [44]) \[\left\{\begin{aligned} p_{1}&=(0,0,1),\\ p_{2}&=\left(0,\frac{2\sqrt{2}}{3},-\frac{1}{3} \right),\ p_{3}=\left(-\sqrt{\frac{2}{3}},-\frac{\sqrt{2}}{3},-\frac{1}{3} \right),\ p_{4}=\left(\sqrt{\frac{2}{3}},-\frac{\sqrt{2}}{3},-\frac{1}{3} \right).\end{aligned}\right. \tag{211}\] #### 5.3.1 Perturbation approach. Here we only consider an extreme case, i.e., that in which one of the two parts (the traceless one) can be considered as dominant, and the other one (the pure trace one) as a perturbation. It should be stressed that we cannot arbitrarily orient both the traceless and the trace type part of the potential at the same time: we can only orient one of these (or Figure 25: Contour plots of the potential \(\Phi_{\rm t}^{+}\) for \(A_{3}=\mu A_{2}\) and \(A_{2}=1\) on the \((x_{1},x_{2})\) plane, for different values of \(\mu\). Figure 26: Polar plots of the potential \(\Phi_{\rm t}\) corresponding to the contour plots in figure 25. Here the protruding lobes designate maxima (and invaginated minima). The origin is \(p_{1}\), a degenerate saddle for all values of \(\mu\), accompanied by a non-degenerate saddle lying on the \(x_{2}\)-axis (either \(p_{2}\) or \(p_{3}\) in (197), depending on the sign of \(\mu\)). their sum). We find it more convenient to orient the traceless part, in particular when considering the trace type part as a perturbation. As mentioned above, we want to investigate if a potential \(\Phi_{\rm s}\) including both the traceless and the pure trace parts can have maxima at the same critical points (211), i.e., display a tetrahedral symmetry for the physical states (identified by maxima; see [44]. It should be noted that we only require the locations of the maxima to be mapped unto one another by the action of the tetrahedral group \(T_{d}\): we are not requiring, in general, that the values of these maxima are the same (as was the case in the tetrahedral potential). Thus, letting \(\Phi_{\rm T}\) be as in (210), we shall write \(\Phi_{\rm s}=\Phi_{\rm T}+\Phi_{\rm t}\) as in (158) with coefficients given by \[\left\{\begin{array}{l}\alpha_{0}=\varepsilon(\delta\alpha_{0}),\ \alpha_{1}= \varepsilon(\delta\alpha_{1}),\ \alpha_{2}=\frac{1}{\sqrt{2}}+\varepsilon(\delta\alpha_{2}),\ \ \alpha_{3}=1+ \varepsilon(\delta\alpha_{3}),\\ \beta_{1}=\varepsilon(\delta\beta_{1}),\ \beta_{2}=\varepsilon(\delta \beta_{2}),\ \beta_{3}=-\frac{1}{2}+\varepsilon(\delta\beta_{3}),\\ A_{1}=\varepsilon(\delta A_{1}),\ A_{2}=\varepsilon(\delta A_{2}),\ A_{3}= \varepsilon(\delta A_{3}).\end{array}\right. \tag{212}\] Here \(\varepsilon\) is a small parameter, all the other newly introduced parameters are expected to be of order one.2 Footnote 2: In general, the prescription \(\alpha_{1}=\beta_{1}=\beta_{2}=0\), which ensures the orientation of the traceless potential, can be violated. We then look for critical points of \(\Phi_{\rm s}\) at first order in \(\varepsilon\), and require the points \(p_{1}\) - \(p_{4}\) in (211) to be still critical points (and, by a perturbation argument, hence necessarily maxima) for \(\Phi_{\rm s}\). 
Through some standard algebra, we find that this is the case, provided that
\[\left\{\begin{array}{l}\delta\alpha_{0}=\frac{\sqrt{2}}{3}\delta\alpha_{1},\\ \delta\beta_{1}=\frac{1}{3}\delta\alpha_{1},\ \delta\beta_{2}=0,\\ \delta A_{1}=4\delta\alpha_{1},\ \delta A_{2}=2\delta\alpha_{2}+\frac{1}{\sqrt{2}}\delta\alpha_{3}+3\sqrt{2}\delta\beta_{3},\ \delta A_{3}=-\sqrt{2}\delta\alpha_{2}+\frac{5}{2}\delta\alpha_{3}+3\delta\beta_{3},\end{array}\right. \tag{213}\]
where \(\delta\alpha_{1}\), \(\delta\alpha_{2}\), \(\delta\alpha_{3}\), and \(\delta\beta_{3}\) are free parameters. We can afford to be a bit more restrictive and consider only perturbations of the traceless part that preserve the orientation of the latter by keeping the constraints
\[\alpha_{1}=\beta_{1}=\beta_{2}=0. \tag{214}\]
We thus arrive at the conditions
\[\left\{\begin{array}{l}\delta A_{1}=0,\\ \delta A_{2}=2\delta\alpha_{2}+\frac{1}{\sqrt{2}}\delta\alpha_{3}+3\sqrt{2}\,\delta\beta_{3},\\ \delta A_{3}=-\sqrt{2}\delta\alpha_{2}+\frac{5}{2}\delta\alpha_{3}+3\delta\beta_{3},\end{array}\right. \tag{215}\]
where \(\delta\alpha_{2}\), \(\delta\alpha_{3}\), and \(\delta\beta_{3}\) are arbitrary constants.
**Remark 19**: We have only required _maxima_ to remain at the same points. If we extend this requirement to all critical points, it turns out that \(\Phi_{\rm s}\) must be proportional to \(\Phi_{\rm T}\), thus neutralizing any contribution from a pure trace tensor.
Making use of (215), (214), and (212) in (158), we can easily express the symmetric potential \(\Phi_{\rm s}=\Phi_{\rm T}+\Phi_{\rm t}\) in the form
\[\begin{split}\Phi_{\rm s}&=\frac{1}{\sqrt{2}}\,x_{2}\left(x_{2}^{2}-3x_{1}^{2}\right)-\frac{3}{2}\left(x_{1}^{2}-x_{2}^{2}\right)x_{3}+x_{3}\left(x_{3}^{2}-3x_{2}^{2}\right)\\ &\quad+\varepsilon\left[\left(2\delta\alpha_{2}+\frac{1}{\sqrt{2}}\delta\alpha_{3}+3\sqrt{2}\delta\beta_{3}\right)x_{2}x_{1}^{2}+\delta\alpha_{2}x_{2}\left(x_{2}^{2}-3x_{1}^{2}\right)\right.\\ &\quad\left.+\left(-\sqrt{2}\delta\alpha_{2}+\frac{5}{2}\delta\alpha_{3}+3\delta\beta_{3}\right)x_{2}^{2}x_{3}+3\delta\beta_{3}\left(x_{1}^{2}-x_{2}^{2}\right)x_{3}+\delta\alpha_{3}x_{3}\left(x_{3}^{2}-3x_{2}^{2}\right)\right],\end{split} \tag{216}\]
which is _not_ equivalent to \(\Phi_{\rm T}\). In cases where the (observable) physics is only described by the _location_ of the maxima (or the minima) of the octupolar potential, we thus have that a perturbation of the tetrahedral potential \(\Phi_{\rm T}\) by a combination of the potentials associated with a traceless and a trace type tensor can still describe the same physics.
**Remark 20**: Physics could also depend on the (relative or absolute) levels of the maxima of the octupolar potential in (216); thus it matters whether or not they are at the same level. A tedious but easy calculation shows that requiring all maxima of \(\Phi_{\rm s}\) in (216) to be equal reduces \(\Phi_{\rm s}\) to a multiple of \(\Phi_{\rm T}\). In other words, the only way to have degenerate maxima at tetrahedral points is with a pure tetrahedral potential.
**Remark 21**: We might ask for a smaller degeneration, i.e., require that the potential \(\Phi_{\rm s}\) in (216) takes the same value at the points \(p_{2}\), \(p_{3}\), and \(p_{4}\) in (211), although this value is allowed to be different from the one taken at the point \(p_{1}\). In this case we have to require
\[\delta\beta_{3}=-\frac{1}{6}\left(2\sqrt{2}\delta\alpha_{2}+\delta\alpha_{3}\right).
\tag{217}\]
With this prescription, we get
\[\Phi_{\rm s}(p_{1})=1+\varepsilon\delta\alpha_{3},\quad\Phi_{\rm s}(p_{2})=\Phi_{\rm s}(p_{3})=\Phi_{\rm s}(p_{4})=1+\varepsilon\left(\frac{8}{9}\sqrt{2}\delta\alpha_{2}+\frac{1}{9}\delta\alpha_{3}\right). \tag{218}\]
We can further set \(\delta\alpha_{3}=0\), so that the value of the potential at the orienting maximum (in the North pole) is unchanged; in this case, setting also \(\delta\alpha_{2}=1\), we get
\[\Phi_{\rm s}(p_{1})=1,\quad\Phi_{\rm s}(p_{2})=\Phi_{\rm s}(p_{3})=\Phi_{\rm s}(p_{4})=1+\varepsilon\frac{8}{9}\sqrt{2}, \tag{219}\]
and, more generally,
\[\begin{split}\Phi_{\rm s}=\Phi_{\rm T}+\varepsilon\biggl[&2\sqrt{2}x_{2}^{2}\sqrt{1-x_{1}^{2}-x_{2}^{2}}+\left(x_{2}^{2}-3x_{1}^{2}\right)x_{2}\\ &+\sqrt{2}\left(x_{1}^{2}-x_{2}^{2}\right)\sqrt{1-x_{1}^{2}-x_{2}^{2}}\biggr].\end{split} \tag{220}\]
#### 5.3.2 Non-perturbation approach.
We can also proceed non-perturbatively. To this end, we consider a general superposition \(\Phi_{\rm s}\) of a traceless potential \(\Phi\) as in (159) and a trace type potential \(\Phi_{\rm t}\) as in (160). We then consider the gradient of \(\Phi_{\rm s}\), evaluate it at the points \(p_{1}\)-\(p_{4}\) in (211), and require it to vanish there. Through standard computations we obtain that this is the case provided some relations hold between the different parameters characterizing the potential. These are as follows,
\[\begin{cases}\alpha_{0}=\sqrt{2}\beta_{1},\quad\alpha_{1}=3\beta_{1},\quad\beta_{2}=0,\\ A_{1}=12\beta_{1},\\ A_{2}=2\alpha_{2}+\frac{1}{\sqrt{2}}\alpha_{3}+3\sqrt{2}\beta_{3},\\ A_{3}=-\sqrt{2}\alpha_{2}+\frac{5}{2}\alpha_{3}+3\beta_{3}.\end{cases} \tag{221}\]
Footnote 2: The similarity between the last two equations in (221) and (215) should be heeded.
Here \(\alpha_{2}\), \(\alpha_{3}\), \(\beta_{1}\), and \(\beta_{3}\) are free parameters.
**Remark 22**: Equations (221) guarantee that the tetrahedral points are critical. If we also require that \(p_{2}\)-\(p_{4}\) in (211) are all on the same level set of \(\Phi_{\rm s}\), we are led to the equations
\[\alpha_{0}=\alpha_{1}=\beta_{1}=0,\quad\beta_{3}=-\frac{1}{6}(2\sqrt{2}\alpha_{2}+\alpha_{3}), \tag{222}\]
which still leave \(\alpha_{2}\) and \(\alpha_{3}\) as free parameters. By (221), the coefficients of the pure trace part \(\Phi_{\rm t}\) then become
\[A_{1}=0,\quad A_{2}=0,\quad A_{3}=2\left(\alpha_{3}-\sqrt{2}\alpha_{2}\right). \tag{223}\]
Footnote 3: Note that we are in the degenerate case \(A_{2}=0\).
By use of both (222) and (223), we give \(\Phi_{\rm s}\) the special form
\[\Phi_{\rm s}=\frac{1}{2}\alpha_{3}x_{3}\left(2x_{3}^{2}-x_{1}^{2}-x_{2}^{2}\right)-\alpha_{2}\left[\left(3x_{2}+\sqrt{2}x_{3}\right)x_{1}^{2}+x_{2}^{2}\left(\sqrt{2}x_{3}-x_{2}\right)\right]. \tag{224}\]
**Remark 23**: If now we also require that \(\Phi_{\rm s}\) in (224) has in \(p_{1}\) the same value as in \(p_{2}\)-\(p_{4}\), we easily see that it must be
\[\alpha_{3}=\sqrt{2}\alpha_{2}, \tag{225}\]
which, by (223), makes \(A_{3}\) vanish as well, so that \(\Phi_{\rm s}\) is eventually proportional (through \(\alpha_{3}\)) to \(\Phi_{\rm T}\).
**Remark 24**: Consider now the potential \(\Phi_{\rm s}\) in (224), _without_ assuming (225). We have seen that it has critical points at the tetrahedral points (211). It should be noted that, while working perturbatively, we were guaranteed that the critical points at \(p_{1}\)-\(p_{4}\) were still maxima for \(\varepsilon\) sufficiently small; in the present case this is not guaranteed.
To this end, we compute as in section 5.2.7 the eigenvalues of the Hessian matrix \({\sf H}\) of \(\Phi_{\rm s}\) in (224):
\[\begin{cases}p_{1}:\ \lambda_{1}=\lambda_{2}=-2\left(\sqrt{2}\alpha_{2}+2\alpha_{3}\right);\\ p_{2},\,p_{3},\,p_{4}:\ \lambda_{1}=-6\sqrt{2}\alpha_{2},\quad\lambda_{2}=-6\left(5\sqrt{2}\alpha_{2}+4\alpha_{3}\right).\end{cases} \tag{226}\]
These are all real, and we want all of them to be negative. It is easily seen that this is the case, provided that
\[\alpha_{2}>0,\quad\alpha_{3}>-\frac{\alpha_{2}}{\sqrt{2}}. \tag{227}\]
Correspondingly, we obtain that \(\Phi_{\rm s}\) in (224) satisfies
\[\Phi_{\rm s}(p_{1})=\alpha_{3},\quad\Phi_{\rm s}(p_{2})=\Phi_{\rm s}(p_{3})=\Phi_{\rm s}(p_{4})=\frac{1}{9}\left(8\sqrt{2}\alpha_{2}+\alpha_{3}\right). \tag{228}\]
So the three degenerate maxima in the Southern hemisphere are higher than the maximum at the North pole if
\[\alpha_{3}<\sqrt{2}\alpha_{2}, \tag{229}\]
and lower than the maximum at the North pole if the inequality in (229) is reversed.
#### 5.3.3 Invariance of the combined potential.
To illustrate the subtleties that may be hidden in the fully symmetric potential \(\Phi_{\rm s}\), we consider here an invariance property that determines uniquely the traceless potential \(\Phi\), but fails to determine the combined potential resulting from adding to it a trace type component \(\Phi_{\rm t}\). In [45], we studied the action of the tetrahedral group \(T_{d}\) on traceless potentials \(\Phi\). Here, we discuss the action of the same group on the particular potential \(\Phi_{\rm s}\) given by (224). We will use the notation of [45], in particular the representation of \(T_{d}\) as a group of matrices acting in \(\mathbb{R}^{3}\). It was shown there that the maximal subgroup \(G\) of \(T_{d}\) leaving the North pole fixed is made of the matrices \(\{M_{1},M_{2},M_{3},M_{13},M_{14},M_{15}\}\), in the notation adopted there. Here we will rename these as \(M_{1}\)-\(M_{6}\), which read explicitly as
\[\left\{\begin{array}{ll}M_{1}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},&M_{2}=\begin{pmatrix}-1/2&\sqrt{3}/2&0\\ -\sqrt{3}/2&-1/2&0\\ 0&0&1\end{pmatrix},\\ M_{3}=\begin{pmatrix}-1/2&-\sqrt{3}/2&0\\ \sqrt{3}/2&-1/2&0\\ 0&0&1\end{pmatrix},&M_{4}=\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\\ M_{5}=\begin{pmatrix}1/2&-\sqrt{3}/2&0\\ -\sqrt{3}/2&-1/2&0\\ 0&0&1\end{pmatrix},&M_{6}=\begin{pmatrix}1/2&\sqrt{3}/2&0\\ \sqrt{3}/2&-1/2&0\\ 0&0&1\end{pmatrix}.\end{array}\right. \tag{230}\]
The commutation relations among these can be read from [45]; the only nontrivial subgroup is \(G_{0}=\{M_{1},M_{2},M_{3}\}\). It is a simple matter to check that for \(\Phi_{\rm s}\) in (224),
\[\Phi_{\rm s}(M\bi{x})=\Phi_{\rm s}(\bi{x})\qquad\forall M\in G. \tag{231}\]
That is, \(\Phi_{\rm s}\) is \(G\)-invariant. One might wonder if the converse is also true, i.e., if the requirement of being \(G\)-invariant does uniquely select \(\Phi_{\rm s}\) in (224). To discuss this matter, we start from the general expression for \(\Phi_{\rm s}\) in (158) and require
\[\Phi_{\rm s}(M_{i}\bi{x})=\Phi_{\rm s}(\bi{x}), \tag{232}\]
for \(i=1,\ldots,6\). Some elementary algebra shows that this amounts to enforcing the conditions
\[\left\{\begin{aligned}&\alpha_{0}=0,\quad\alpha_{1}=0,\quad\beta_{1}=0,\quad\beta_{2}=0,\\ & A_{1}=0,\quad A_{2}=0,\quad A_{3}=3(\alpha_{3}+2\beta_{3}).\end{aligned}\right.
\tag{233}\]
However, by direct computation we see that this choice of parameters does _not_ make \(\Phi_{\rm s}\) in (158) agree with (224), unless we set
\[\beta_{3}=-\frac{1}{6}\left(2\sqrt{2}\alpha_{2}+\alpha_{3}\right), \tag{234}\]
which, incidentally, implies the third of (223). We thus conclude that the condition of \(G\)-invariance does _not_ determine uniquely \(\Phi_{\rm s}\).
**Remark 25**: The \(G\)-invariance condition determines uniquely the traceless potential \(\Phi\), whereas the trace potential \(\Phi_{\rm t}\) is determined up to a multiplicative factor \(A_{3}\).
**Remark 26**: It could also be mentioned that the degeneration of values at the critical points \(p_{2}\)-\(p_{4}\) does not only apply to the whole potential \(\Phi_{\rm s}\), but also to its traceless and trace components separately, although these are not separately invariant.
## 6 Other Approaches
So far we have privileged descriptions of the properties of an octupolar tensor \({\bf A}\) based upon the octupolar potential \(\Phi\) introduced in (20) and the several variants encountered above. Other approaches to these properties have been proposed in the literature. We devote this section to some of these, trying to establish connections with ours.
### Maxwell multipoles
This approach to octupolar tensors is rooted in Maxwell's multipole representation of spherical harmonics [79, pp. 179-214] (see also [28, pp. 514-522]). Our account, phrased in a modern language, follows [131] (see also [32] for a broader perspective). A theorem due to Sylvester [108] (alternative proofs of which can also be found in [7] and [131]) put Maxwell's method on a solid mathematical ground. There is a one-to-one correspondence between a completely symmetric tensor \({\bf A}\in{\cal T}(r,{\sf V})\) and a homogeneous polynomial \(P_{r}({\mathbf{x}})\) of degree \(r\) in \({\mathbf{x}}\in{\sf V}\), with \(\dim{\sf V}=3\), as
\[P_{r}({\mathbf{x}})=\sum_{i_{1}i_{2}\ldots i_{r}=1}^{3}A_{i_{1}i_{2}\ldots i_{r}}x_{i_{1}}x_{i_{2}}\ldots x_{i_{r}}. \tag{235}\]
Sylvester's theorem says that, given a real homogeneous polynomial \(P_{r}({\mathbf{x}})\) of degree \(r\geqq 2\), there are \(r\) vectors \({\mathbf{a}}_{1},{\mathbf{a}}_{2},\ldots,{\mathbf{a}}_{r}\in{\sf V}\) and a real homogeneous polynomial \(P_{r-2}({\mathbf{x}})\) of degree \(r-2\) such that
\[P_{r}({\mathbf{x}})=\prod_{s=1}^{r}({\mathbf{a}}_{s}\cdot{\mathbf{x}})+({\mathbf{x}}\cdot{\mathbf{x}})P_{r-2}({\mathbf{x}}). \tag{236}\]
Building on classical results, Zou and Zheng [131] proved that every tensor \(\mbox{\sf D}^{(m)}\) in the decomposition of a fully symmetric tensor \(\mbox{\sf A}\) in (28) can be represented as
\[\mbox{\sf D}^{(m)}=A_{m}\,\overline{\mathbf{a}_{1}\otimes\mathbf{a}_{2}\cdots\mathbf{a}_{m}}\,, \tag{237}\]
where, for \(m\leqq r\), \(A_{m}>0\) is a scalar and \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{m}\) are vectors on the unit sphere \(\mathbb{S}^{2}\) in \({\sf V}\) determined uniquely by \(\mbox{\sf D}^{(m)}\), to within a change of sign in an even number of them. The poles designated on \(\mathbb{S}^{2}\) by these vectors are called Maxwell's _multipoles_. The connection thus established between fully symmetric traceless tensors and spherical harmonics justifies calling these tensors, as well as the decomposition in (26) for a generic tensor, _harmonic_. This connection is further explored in [4].
Harmonic tensors also play a role in reconstructing the _crystalline orientation function_ for poly-crystalline materials [1, 50, 93]; for this topic the reader is referred to the comprehensive review of Man [75], in particular, to Chapt. 17.
**Remark 27**: As shown in the early work of Backus [7], the multipole representation of \(\mbox{\sf A}\), by its very geometric interpretation, can be effective in identifying the symmetries of \(\mbox{\sf A}\). For more recent contributions to the role played by harmonic decomposition of a tensor \(\mbox{\sf A}\) in identifying all symmetry classes it may belong to, the reader is referred to the works [5, 6, 8, 9, 42, 43]. This is, however, a slippery terrain, as witnessed for example by the disagreement between [46] and [130], which for the piezoelectric tensor (see section 2.4.3) found with different methods 14 and 15 symmetry classes, respectively.
When applied to \(\overline{\mbox{\sf A}}\,\in{\cal T}(r,{\sf V})\), the representation formula in (237) reads as
\[\overline{\mbox{\sf A}}\,=A_{r}\,\overline{\mathbf{a}_{1}\otimes\mathbf{a}_{2}\cdots\mathbf{a}_{r}}\,, \tag{238}\]
which easily identifies both the number \(N(r)\) of independent parameters needed to represent \(\overline{\mbox{\sf A}}\,\) and all the invariants allowed in an isotropic scalar-valued function of \(\overline{\mbox{\sf A}}\,\). Customarily, for \(r\geqq 2\) and \(\dim{\sf V}>2\), \(N(r)\) is given by a combinatoric argument as the difference between two binomial coefficients (see, for example, [104, p. 56]),
\[N(r)=\binom{r+2}{r}-\binom{r}{r-2}, \tag{239}\]
the first representing the number of symmetric arrangements (with repetitions) of \(r\) symbols out of a pool of 3, and the second the number of symmetric arrangements (with repetitions) of \(r-2\) symbols out of the same pool (these latter corresponding to the number of traces). A simple calculation shows that \(N(r)=2r+1\). The same conclusion is reached far more easily from (238) by remarking that \(2r\) parameters are needed to represent the vectors \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{r}\) on \(\mathbb{S}^{2}\) and one more for \(A_{r}\).
**Remark 28**: The case \(\dim{\sf V}=2\) is special. \(N(r)\) is no longer given by (239), but \(N(r)=2\) for all \(r\). Moreover, (238) is replaced by
\[\overline{\mbox{\sf A}}\,=A_{r}\,\overline{\underbrace{\mathbf{e}\otimes\cdots\otimes\mathbf{e}}_{r\ \text{times}}}\,, \tag{240}\]
where all vectors \(\mathbf{a}_{i}\) are the same vector \(\mathbf{e}\) on the unit circle \(\mathbb{S}^{1}\) (see [117] for an explicit construction of \(\mathbf{e}\) when \(r=3\)).
Similarly, since \(\overline{\mbox{\sf A}}\,\) is fully determined by \(A_{r}\) and the multipoles \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{r}\), the classical theorem of Cauchy [20] (see also [112, p. 29]) for the representation of isotropic scalar-valued functions depending on a finite number of vectors requires that the complete list of invariants consists of \(A_{r}\) and the following \(r(r-1)/2\) scalars
\[\alpha_{ij}:=\mathbf{a}_{i}\cdot\mathbf{a}_{j},\quad 1\leqq i<j\leqq r. \tag{241}\]
Then, in the special case where \(r=3\), the total number of scalar invariants of \(\overline{\mbox{\sf A}}\,\) is immediately seen to be 4, in agreement with [103] (see remark 1). Although the multipole representation of \(\overline{\mbox{\sf A}}\,\) in (238) can more easily determine the _number_ of invariants, their explicit identification may be more difficult.
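The count \(N(3)=2\cdot 3+1=7\) can also be checked computationally: build the linear projector onto symmetric traceless rank-3 tensors in three dimensions (the overline operation in (237), with the \(1/5\) trace weights that also appear in (242) below) and compute its rank. A small numerical sketch, added here as an illustration:

```python
import numpy as np
from itertools import permutations

def project(T):
    # symmetrize over all index permutations, then remove traces with the
    # 1/5 weights appropriate for rank 3 in three dimensions
    S = sum(np.transpose(T, p) for p in permutations(range(3))) / 6
    t = np.einsum('kll->k', S)                 # trace vector
    d = np.eye(3)
    C = (np.einsum('ij,k->ijk', d, t) + np.einsum('ik,j->ijk', d, t)
         + np.einsum('jk,i->ijk', d, t))
    return S - C / 5

# assemble the 27x27 matrix of the projector and read off its rank
P = np.zeros((27, 27))
for idx in range(27):
    e = np.zeros(27); e[idx] = 1.0
    P[:, idx] = project(e.reshape(3, 3, 3)).ravel()
print(np.linalg.matrix_rank(P))   # prints 7, in agreement with N(r) = 2r + 1
```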
**Remark 29**: The numbers of isotropic invariants for symmetric traceless tensors \({\sf D}^{(3)}\) and \({\sf D}^{(4)}\), of rank 3 and 4 in three space dimensions, were derived in [103] and [13] from the determination of the appropriate integrity bases, and found to be 4 and 9, respectively. As shown in [131], the direct derivation of this number from the harmonic decomposition in (238) agrees with that in [103] for \({\sf D}^{(3)}\), but it does _not_ agree with that in [13, 103] for \({\sf D}^{(4)}\), as it would predict 7 invariants for the latter instead of 9. This suggests that the invariants in the integrity bases for \({\sf D}^{(4)}\) in [13, 103] are not independent. A table of other inconsistencies similar to this can be found in [131]. A large number of studies are devoted to this issue (which is not central to our review). Some, such as [2, 115], are especially relevant to the mechanics of composite materials.
The octupolar potential \(\Phi\) has played a special role in our review. We now wish to show how \(\Phi\) would be expressed for the harmonic representation in (238) for an octupolar tensor \(\overline{\mbox{\sf A}}\,\) in three space dimensions. A direct computation based on (49) shows that
\[\begin{split}\Phi(\mathbf{x})&=A_{3}\Big\{(\mathbf{a}_{1}\cdot\mathbf{x})(\mathbf{a}_{2}\cdot\mathbf{x})(\mathbf{a}_{3}\cdot\mathbf{x})\\ &\quad-\frac{1}{5}(\mathbf{x}\cdot\mathbf{x})[(\mathbf{a}_{1}\cdot\mathbf{x})(\mathbf{a}_{2}\cdot\mathbf{a}_{3})+(\mathbf{a}_{2}\cdot\mathbf{x})(\mathbf{a}_{3}\cdot\mathbf{a}_{1})+(\mathbf{a}_{3}\cdot\mathbf{x})(\mathbf{a}_{1}\cdot\mathbf{a}_{2})]\Big\}.\end{split} \tag{242}\]
It is instructive to see what becomes of \(\Phi\) in (242) for special choices of the unit vectors \(\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\) and how these special forms of \(\Phi\) relate to those described above in our analysis. First, given a Cartesian frame \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\), we consider the case where \(\mathbf{a}_{1}=\mathbf{a}_{2}=\mathbf{a}_{3}=\mathbf{e}_{3}\). It follows from (242) that \(\Phi\) then reduces to
\[\Phi(\mathbf{x})=x_{3}\left(x_{3}^{2}-\frac{3}{2}x_{1}^{2}-\frac{3}{2}x_{2}^{2}\right), \tag{243}\]
where we have set \(A_{3}=5/2\) so that \(\Phi(\mathbf{e}_{3})=1\). With this normalization, the polar plot of \(\Phi\) is just the same as the one in figure 2a. If we now take \(\mathbf{a}_{i}=\mathbf{e}_{i}\), for \(i=1,2,3\), (242) delivers
\[\Phi(\mathbf{x})=3\sqrt{3}x_{1}x_{2}x_{3}, \tag{244}\]
where we have chosen \(A_{3}=3\sqrt{3}\) so that the maximum value of \(\Phi\) on \(\mathbb{S}^{2}\) be \(\Phi=1\). Figure 27 illustrates the polar plot of the function in (244) in the conventional representation adopted here: it only differs by a rigid rotation from the polar plot shown in figure 5a corresponding to the tetrahedral symmetry \(T_{d}\) studied in section 3.4. The minima and maxima of \(\Phi\) in (244) are attained at the unit vectors \(\bi{n}_{\alpha}\) defined in (61) and illustrated in figure 1. This can also be seen by considering the _tetrahedral_ tensor
\[\mathbf{T}:=T\sum_{\alpha=1}^{4}\bi{n}_{\alpha}\otimes\bi{n}_{\alpha}\otimes\bi{n}_{\alpha}, \tag{245}\]
with \(T\) a normalizing scalar. Since \(\sum_{\alpha=1}^{4}\bi{n}_{\alpha}=\mathbf{0}\), \(\mathbf{T}\) is a symmetric traceless octupolar tensor.
The octupolar potential \(\Phi_{\mathrm{T}}\) associated with it is given by
\[\Phi_{\mathrm{T}}(\bi{x}):=\mathbf{T}\cdot(\bi{x}\otimes\bi{x}\otimes\bi{x})=T[(\bi{n}_{1}\cdot\bi{x})^{3}+(\bi{n}_{2}\cdot\bi{x})^{3}+(\bi{n}_{3}\cdot\bi{x})^{3}+(\bi{n}_{4}\cdot\bi{x})^{3}]=-\frac{8T}{\sqrt{3}}x_{1}x_{2}x_{3}, \tag{246}\]
which reduces to (244) for \(T=-9/8\). Comparing (244) and (210) would also be instructive.
### Curie potential
We have often said that the octupolar potential \(\Phi\), which has been our major tool in this review, identifies completely an octupolar tensor only if this is fully symmetric. Thus, for example, the octupolar potential associated with a piezoelectric tensor \(\mathbf{A}\), as defined in section 2.4.3, would fail to capture all its details. In particular, the definition of generalized eigenvalues and eigenvectors of \(\mathbf{A}\) given in section 2.3 would be missed. A strategy has recently been developed in [24, 63, 64, 122] to overcome this difficulty and to attempt to provide a similar treatment for both piezoelectric and fully symmetric octupolar tensors. Here, we briefly present this strategy, following mainly [24].
Figure 27: Polar plot of the octupolar potential in (244). Its maxima and minima fall on the tetrahedral vectors \(\bi{n}_{\alpha}\) defined in (61) and shown in figure 1.
We start by defining the _Curie_ potential \(\Phi_{\rm C}\) of a piezoelectric tensor \({\bf A}\) in three space dimensions,
\[\Phi_{\rm C}(\mathbf{x},\mathbf{y}):={\bf A}\cdot\mathbf{x}\otimes\mathbf{y}\otimes\mathbf{y}, \tag{247}\]
which is a mapping \(\Phi_{\rm C}:\mathbb{S}^{2}\times\mathbb{S}^{2}\rightarrow\mathbb{R}\). Footnote 1: The theory summarized in [24] applies to a general piezoelectric tensor \({\bf A}\in{\cal T}(3,{\sf V})\) with \(\dim{\sf V}=n\); here, we present the simplified version for \(n=3\), as it is more germane to the rest of our analysis.
In components relative to a Cartesian frame \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\), equation (247) reads as
\[\Phi_{\rm C}(\mathbf{x},\mathbf{y})=A_{ijk}x_{i}y_{j}y_{k}. \tag{248}\]
Reasoning as in section 2.3 (see also equations (102) and (103)), the critical points of \(\Phi_{\rm C}\) on \(\mathbb{S}^{2}\times\mathbb{S}^{2}\) can be viewed as critical points of the unconstrained potential
\[\Phi_{\lambda,\mu}(\mathbf{x},\mathbf{y}):=\Phi_{\rm C}(\mathbf{x},\mathbf{y})-\frac{1}{2}\lambda\left(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\right)-\mu\left(y_{1}^{2}+y_{2}^{2}+y_{3}^{2}\right), \tag{249}\]
where \(\lambda\) and \(\mu\) are independent Lagrange multipliers. Such unconstrained critical points are solutions of the following system of equations,
\[\begin{cases}A_{ijk}y_{j}y_{k}=\lambda x_{i},\\ A_{ijk}x_{i}y_{j}=\mu y_{k}.\end{cases} \tag{250}\]
Multiplying the first by \(x_{i}\) and the second by \(y_{k}\) (summing over repeated indices), and enforcing the constraints that require both \(\mathbf{x}\) and \(\mathbf{y}\) to lie on \(\mathbb{S}^{2}\), we easily conclude that \(\lambda=\mu\) and their common value is precisely the value of \(\Phi_{\rm C}\) at the corresponding critical point. A pair \((\mathbf{x},\mathbf{y})\) that solves (250) is said to consist of a _left_ and a _right C-eigenvector_ of \({\bf A}\), respectively, and \(\lambda=\mu\) is the corresponding _C-eigenvalue_ (here the prefix \(C\) stands for _Curie_) [24]. A number of facts have been established for the C-eigenvalues of a piezoelectric octupolar tensor \({\bf A}\) (see Theorems 2.3 and 2.5 of [24]):
(I) C-eigenvalues and associated left and right C-eigenvectors of \({\bf A}\) do exist.
(II) If \(\lambda\) is a C-eigenvalue of \({\bf A}\) and \((\mathbf{x},\mathbf{y})\) are the corresponding left and right C-eigenvectors, then
\[{\bf A}\cdot\mathbf{x}\otimes\mathbf{y}\otimes\mathbf{y}=\lambda. \tag{251}\]
Moreover, the triples \((\lambda,\mathbf{x},-\mathbf{y})\), \((-\lambda,-\mathbf{x},\mathbf{y})\), and \((-\lambda,-\mathbf{x},-\mathbf{y})\) also designate C-eigenvalues and corresponding C-eigenvectors of \({\bf A}\).
(III) Let \(\lambda_{1}\) denote the largest C-eigenvalue of \({\bf A}\) and let \((\mathbf{x}_{1},\mathbf{y}_{1})\) denote the corresponding left and right C-eigenvectors. Then
\[\lambda_{1}=\max\{{\bf A}\cdot\mathbf{x}\otimes\mathbf{y}\otimes\mathbf{y}:\mathbf{x},\mathbf{y}\in\mathbb{S}^{2}\}. \tag{252}\]
Moreover, \(\lambda_{1}\mathbf{x}_{1}\otimes\mathbf{y}_{1}\otimes\mathbf{y}_{1}\) is the rank-one tensor that best approximates \({\bf A}\), that is, it solves the following optimization problem,
\[\min\{\|{\bf A}-\lambda\mathbf{x}\otimes\mathbf{y}\otimes\mathbf{y}\|^{2}:\lambda\in\mathbb{R},\ \mathbf{x},\mathbf{y}\in\mathbb{S}^{2}\}, \tag{253}\]
where
\[\|\mathbf{A}\|:=\sqrt{A_{ijk}A_{ijk}} \tag{254}\]
designates the Frobenius norm.
(IV) If a piezoelectric tensor \(\mathbf{A}\) has finitely many classes of C-eigenvalues in the complex field \(\mathbb{C}\), their number counted with multiplicity is 13. Footnote: This applies to \(\mathbf{A}\in\mathcal{T}(3,\mathsf{V})\) with \(\dim\mathsf{V}=3\). In general, for \(\dim\mathsf{V}=n\), this number is \((3^{n}-1)/2\), which is derived in [23] by an extension of (54).
Property (III) establishes a connection between C-eigenvalues and the best rank-one approximation of \(\mathbf{A}\). We can think of applying this approximation algorithm recursively, as suggested in [126] (where it was called _incremental_ rank-one approximation), so that the second iterate would deliver the best rank-one approximation \(\lambda_{2}\boldsymbol{x}_{2}\otimes\boldsymbol{y}_{2}\otimes\boldsymbol{y}_{2}\) to \(\mathbf{A}_{1}:=\mathbf{A}-\lambda_{1}\boldsymbol{x}_{1}\otimes\boldsymbol{y}_{1}\otimes\boldsymbol{y}_{1}\), and so on; the existence of C-eigenvalues established in (I) guarantees that this task can be accomplished at each step, up to the \(p\)-th iterate, when \(\mathbf{A}_{p}\) is itself rank-one. According to the definition given in [126], a piezoelectric tensor \(\mathbf{A}\) is said to be _orthogonally decomposable_ if it can be written as the following finite sum,
\[\begin{split}\mathbf{A}=&\sum_{i=1}^{p}\lambda_{i}\boldsymbol{x}_{i}\otimes\boldsymbol{y}_{i}\otimes\boldsymbol{y}_{i},\ \lambda_{i}>0,\ \boldsymbol{x}_{i},\boldsymbol{y}_{i}\in\mathbb{S}^{2},\\ &\mbox{with}\ \boldsymbol{x}_{i}\cdot\boldsymbol{x}_{j}=\boldsymbol{y}_{i}\cdot\boldsymbol{y}_{j}=0\quad\forall i\neq j.\end{split} \tag{255}\]
It was proved in [126] that an orthogonally decomposable tensor \(\mathbf{A}\) possesses a unique decomposition (255) and this is correctly identified by the incremental rank-one approximation algorithm (a different proof of this result can also be found in [62]).
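As an illustration of property (III), the largest C-eigenvalue can be approximated numerically by alternating the two stationarity conditions in (250). The sketch below is only a toy (a plain alternating iteration whose convergence is not guaranteed in general; shifted and more robust variants exist in the literature cited above), but it recovers the obvious answer on a rank-one example:

```python
import numpy as np

def c_eigenpair(A, iters=500, seed=0):
    # alternate the conditions (250): A_ijk y_j y_k = lam x_i and
    # A_ijk x_i y_j = lam y_k, normalizing after each update
    rng = np.random.default_rng(seed)
    x = rng.normal(size=3); x /= np.linalg.norm(x)
    y = rng.normal(size=3); y /= np.linalg.norm(y)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', A, y, y); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,j->k', A, x, y); y /= np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k', A, x, y, y)
    return lam, x, y

# toy check on a rank-one piezoelectric-type tensor x0 (x) y0 (x) y0,
# which is symmetric in its last two slots by construction
x0 = np.array([1.0, 0.0, 0.0]); y0 = np.array([0.0, 1.0, 0.0])
A = np.einsum('i,j,k->ijk', x0, y0, y0)
lam, x, y = c_eigenpair(A)
print(round(lam, 6))          # 1.0, with x = x0 and y = +/- y0
```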
**Remark 30**: The _singular value decomposition_ of a second-rank tensor \(\boldsymbol{L}\in\mathcal{T}(2,\mathsf{V})\) with \(\dim\mathsf{V}=n\) amounts to representing it in the form
\[\boldsymbol{L}=\boldsymbol{U}\boldsymbol{S}\boldsymbol{V}^{\mathsf{T}}, \tag{256}\]
where
\[\boldsymbol{S}=\sum_{i=1}^{n}\sigma_{i}\boldsymbol{e}_{i}\otimes\boldsymbol{e}_{i}\quad\mbox{with}\quad\sigma_{i}\geqq 0\quad\mbox{and}\quad\boldsymbol{e}_{i}\cdot\boldsymbol{e}_{j}=\delta_{ij} \tag{257}\]
and \(\boldsymbol{U}\), \(\boldsymbol{V}\) are orthogonal tensors (such that \(\boldsymbol{U}\boldsymbol{U}^{\mathsf{T}}=\boldsymbol{V}\boldsymbol{V}^{\mathsf{T}}=\boldsymbol{I}\)). This result has a long history (neatly recounted in [106]) that started with the works of Beltrami [10] and Jordan [57, 58]. What makes it relevant to our topic is that (256) was also proved in [33] as resulting from a rank-one approximation of \(\boldsymbol{L}\), much in the same spirit as (255), which could thus be seen as a possible extension of (256).
**Remark 31**: The appropriate version of (255) valid for a generic orthogonally decomposable octupolar tensor \(\mathbf{A}\) is
\[\begin{split}\mathbf{A}=&\sum_{i=1}^{p}\lambda_{i}\boldsymbol{x}_{i}\otimes\boldsymbol{y}_{i}\otimes\boldsymbol{z}_{i},\ \lambda_{i}>0,\ \boldsymbol{x}_{i},\boldsymbol{y}_{i},\boldsymbol{z}_{i}\in\mathbb{S}^{2},\\ &\mbox{with}\ \boldsymbol{x}_{i}\cdot\boldsymbol{x}_{j}=\boldsymbol{y}_{i}\cdot\boldsymbol{y}_{j}=\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{j}=0\ \forall i\neq j.\end{split} \tag{258}\]
The applicability of the incremental rank-one approximation algorithm to establish (258) was also proved in [126].
**Remark 32**: Introducing for a general octupolar tensor \(\mathbf{A}\) the generalized potential
\[\Phi_{\mathrm{G}}(\boldsymbol{x},\boldsymbol{y},\boldsymbol{z}):=\mathbf{A}\cdot\boldsymbol{x}\otimes\boldsymbol{y}\otimes\boldsymbol{z}, \tag{259}\]
one could easily justify the rank-one approximation algorithm delivering (258) as resulting from the search for the maximum of \(\Phi_{\mathrm{G}}\) over \(\mathbb{S}^{2}\times\mathbb{S}^{2}\times\mathbb{S}^{2}\), which entails a further generalized notion of eigenvalues and associated eigenvectors \((\lambda,\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\) of \(\mathbf{A}\).
**Remark 33**: Even when the orthogonal decompositions in (255) and (258) do not apply, the rank-one approximation algorithm is still meaningful. In that case, the orthogonality conditions in both (255) and (258) fail to hold and the decompositions formally delivered by these equations no longer represent \(\mathbf{A}\); they feature the best approximations to \(\mathbf{A}\) provided by its generalized eigenvalues and eigenvectors.
## 7 Selected Applications
The applications of octupolar tensors in physics are countless. Apart from the specific fields that in section 2.4 served as our motivation for this review, other fields have witnessed new or renewed formulations of theories that use octupolar (as well as higher-rank) tensors. Here we give short accounts of just an exemplary few of these fields, pausing longer on liquid crystal science, which is where our interest in the topic of this review originated.
### Gravitation
In this context, octupolar tensors appear in the description of cubic-order spin effects in the dynamics of gravitational waves [77].
Also, they feature in computing invariants connected with tidal interactions that influence the late dynamics of compact binary systems, which have the potential to constitute the prime targets of a network of gravitational-wave detectors [11].
### Spin states
Majorana [74] introduced a geometrical picture to represent quantum states. In this representation, a pure spin-\(j\) state is mapped onto \(2j\) points on the unit sphere \(\mathbb{S}^{2}\) (which in this context is also called the _Bloch sphere_). Recently, a generalization of this picture was proposed in [47], which applies to both pure and mixed spin-\(j\) states; this extended representation employs a symmetric tensor of rank \(2j\) in dimension \(4\) (which is thus an octupolar tensor for fermions with \(j=3/2\)). Along the same lines, the reader will find it useful to consult the works [15, 48, 91].
### Liquid crystals
In classical liquid crystal theory, the nematic director field \(\bi{n}\) describes the average orientation of the molecules that constitute the medium; the elastic distortions of \(\bi{n}\) are locally measured by its gradient \(\nabla\bi{n}\), which may become singular where the director exhibits _defects_ arising from a degradation of molecular order. The orientation of \(\bi{n}\) should be physically indistinguishable from the orientation of \(-\bi{n}\); this notion of invariance embodies the _nematic_ symmetry. In this short account we follow [82], to which the reader is referred for any further details. The two main descriptors, \(\bi{n}\) and \(\nabla\bi{n}\), can be combined into the third-rank octupolar tensor
\[\bi{A}:=\,\overline{\nabla\bi{n}\otimes\bi{n}}\,. \tag{260}\]
It is worth noticing that \(\bi{A}\) defined in (260) is invariant under the change of orientation of \(\bi{n}\), and so it duly enjoys the nematic symmetry, which makes it a good candidate for measuring intrinsically the local distortions of a director field. Selinger [101], extending earlier work [71], suggested a new interpretation of the elastic modes for nematic liquid crystals described by the Oseen-Frank elastic free energy, which penalizes in a quadratic fashion the distortions of \(\bi{n}\) away from any uniform state. The Oseen-Frank energy density \(W_{\rm OF}\) is defined as (see, e.g., [31, Chap. 3] and [116, Chap. 3])
\[\begin{split}W_{\rm OF}&:=\frac{1}{2}K_{11}({\rm div}\,\bi{n})^{2}+\frac{1}{2}K_{22}(\bi{n}\cdot{\rm curl}\,\bi{n})^{2}+\frac{1}{2}K_{33}|\bi{n}\times{\rm curl}\,\bi{n}|^{2}\\ &\quad+K_{24}[{\rm tr}(\nabla\bi{n})^{2}-({\rm div}\,\bi{n})^{2}],\end{split} \tag{261}\]
where \(K_{11}\), \(K_{22}\), \(K_{33}\), and \(K_{24}\) are the _splay_, _twist_, _bend_, and _saddle-splay_ constants, respectively, each associated with a corresponding elastic mode. Footnote 1: The saddle-splay term is a null Lagrangian [38] and an integration over the bulk reduces it to a surface energy. Here, however, the surface-like nature of \(K_{24}\) will not be exploited.
The decomposition of \(W_{\rm OF}\) in independent elastic modes proposed in [101] is achieved through a new decomposition of \(\nabla\bi{n}\).
If we denote by \(\bi{P}(\bi{n})\) and \(\bi{W}(\bi{n})\) the projection onto the plane orthogonal to \(\bi{n}\) and the skew-symmetric tensor with axial vector \(\bi{n}\), respectively, then
\[\nabla\bi{n}=-\bi{b}\otimes\bi{n}+\frac{1}{2}T\bi{W}(\bi{n})+\frac{1}{2}S\bi{P}(\bi{n})+\bi{D}, \tag{262}\]
where \(\bi{b}:=-(\nabla\bi{n})\bi{n}=\bi{n}\times{\rm curl}\,\bi{n}\) is the _bend_ vector, \(T:=\bi{n}\cdot{\rm curl}\,\bi{n}\) is the _twist_ (a pseudoscalar), \(S:={\rm div}\,\bi{n}\) is the _splay_ (a scalar), and \(\bi{D}\) is a symmetric tensor such that \(\bi{D}\bi{n}=\bi{0}\) and \({\rm tr}\,\bi{D}=0\). The properties of \(\bi{D}\) guarantee that when \(\bi{D}\neq\bi{0}\) it can be represented as
\[\bi{D}=q\left(\bi{n}_{1}\otimes\bi{n}_{1}-\bi{n}_{2}\otimes\bi{n}_{2}\right), \tag{263}\]
where \(q\) is the _positive_ eigenvalue of \(\bi{D}\). We shall call \(q\) the _octupolar splay_ for a reason that shall soon be clear. The choice of sign for \(q\) identifies (to within the orientation) the eigenvectors \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) of \(\mathbf{D}\) orthogonal to \(\mathbf{n}\). Since \(\tr\mathbf{D}^{2}=2q^{2}\), we easily obtain from (262) that
\[2q^{2}=\tr(\nabla\mathbf{n})^{2}+\frac{1}{2}T^{2}-\frac{1}{2}S^{2}. \tag{264}\]
\(W_{\mathrm{OF}}\) can then be given the form
\[W_{\mathrm{OF}}=\frac{1}{2}(K_{11}-K_{24})S^{2}+\frac{1}{2}(K_{22}-K_{24})T^{2}+\frac{1}{2}K_{33}b^{2}+K_{24}(2q^{2}), \tag{265}\]
where all quadratic contributions are independent of one another. The first advantage of such an expression is that it explicitly shows when the free energy is positive semi-definite; this is the case when the following inequalities, due to Ericksen [39], are satisfied,
\[K_{11}\geqq K_{24}\geqq 0,\quad K_{22}\geqq K_{24}\geqq 0,\quad K_{33}\geqq 0. \tag{266}\]
Whenever \(q>0\), the frame \((\mathbf{n}_{1},\mathbf{n}_{2},\mathbf{n})\) is identified to within a change of sign in either \(\mathbf{n}_{1}\) or \(\mathbf{n}_{2}\); requiring that \(\mathbf{n}=\mathbf{n}_{1}\times\mathbf{n}_{2}\), we reduce this ambiguity to a simultaneous change in the orientation of \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\). In this frame,
\[\mathbf{P}(\mathbf{n})=\mathbf{I}-\mathbf{n}\otimes\mathbf{n}\quad\text{and}\quad\mathbf{W}(\mathbf{n})=\mathbf{n}_{2}\otimes\mathbf{n}_{1}-\mathbf{n}_{1}\otimes\mathbf{n}_{2}. \tag{267}\]
Since \(\mathbf{b}\cdot\mathbf{n}\equiv 0\), we can represent \(\mathbf{b}\) as \(\mathbf{b}=b_{1}\mathbf{n}_{1}+b_{2}\mathbf{n}_{2}\). The frame \((\mathbf{n}_{1},\mathbf{n}_{2},\mathbf{n})\) is called the _distortion frame_ and \((S,T,b_{1},b_{2},q)\) the _distortion characteristics_ of the director field \(\mathbf{n}\) [118]. In terms of these, (262) can also be written as
\[\begin{split}\nabla\mathbf{n}&=\left(\frac{S}{2}+q\right)\mathbf{n}_{1}\otimes\mathbf{n}_{1}+\left(\frac{S}{2}-q\right)\mathbf{n}_{2}\otimes\mathbf{n}_{2}-b_{1}\mathbf{n}_{1}\otimes\mathbf{n}-b_{2}\mathbf{n}_{2}\otimes\mathbf{n}\\ &\quad+\frac{1}{2}T\left(\mathbf{n}_{2}\otimes\mathbf{n}_{1}-\mathbf{n}_{1}\otimes\mathbf{n}_{2}\right).\end{split} \tag{268}\]
Both (262) and (268) show an intrinsic decomposition of \(\nabla\mathbf{n}\) into four genuine bulk contributions, namely, bend, splay, twist, and octupolar splay. The octupolar tensor \(\mathbf{A}\) defined in (260) revealed itself as a convenient tool to illustrate director distortions [82]. Having, however, symmetrized \(\mathbf{A}\), we have implicitly renounced representing \(T\), so no sign of twist will be revealed by \(\mathbf{A}\).
This is the only piece of lost information.
**Remark 34**: \(T\) is a measure of chirality, and so it cannot be associated with a symmetric tensor. By forming the completely skew-symmetric part of \(\nabla\mathbf{n}\otimes\mathbf{n}\), one would obtain the tensor \(-\frac{1}{6}T\mathbf{\epsilon}\), where \(\mathbf{\epsilon}\) is Ricci's alternator, the most general skew-symmetric, third-rank tensor in three dimensions.
Letting \(\mathbf{x}=x_{1}\mathbf{n}_{1}+x_{2}\mathbf{n}_{2}+x_{3}\mathbf{n}\) be a point on the unit sphere \(\mathbb{S}^{2}\) referred to the distortion frame \((\mathbf{n}_{1},\mathbf{n}_{2},\mathbf{n})\), with the aid of (268), the octupolar potential \(\Phi\) defined by (82) can be written for \(\mathbf{A}\) in (260) as follows
\[\begin{split}\Phi(\mathbf{x})&=\left(\frac{S}{2}+q\right)x_{1}^{2}x_{3}+\left(\frac{S}{2}-q\right)x_{2}^{2}x_{3}-b_{1}x_{1}x_{3}^{2}-b_{2}x_{2}x_{3}^{2}\\ &\quad+\frac{1}{5}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2})(b_{1}x_{1}+b_{2}x_{2}-Sx_{3}).\end{split} \tag{269}\]
As expected, \(\Phi\) does not depend on the twist \(T\), but it does depend on the octupolar splay \(q\). A thorough analysis of \(\Phi\) in (269) is performed in [82]. Here, we only describe the very special cases where one and only one elastic mode is exhibited.
_Splay._ When splay is the only active mode, the choice of \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) in the plane orthogonal to \(\mathbf{n}\) is arbitrary. This fact reverberates in the symmetries of the octupolar potential and also in its critical points. In this case,
\[\Phi(\mathbf{x})=\frac{1}{10}S(3x_{1}^{2}x_{3}+3x_{2}^{2}x_{3}-2x_{3}^{3}). \tag{270}\]
Graphically, \(\Phi(\mathbf{x})\) is depicted in figure 28a, which is nothing but figure 2a turned upside down.
_Octupolar splay._ When both \(S=0\) and \(b=0\), but \(q>0\), the potential \(\Phi\) reduces to
\[\Phi(\mathbf{x})=q(x_{1}^{2}-x_{2}^{2})x_{3}. \tag{271}\]
Figure 28b shows that \(\Phi(\mathbf{x})\) has four identical lobes, spatially distributed at the vertices of a regular tetrahedron; this is just the same plot as in figures 5a and 27, but differently oriented in the reference frame. Accordingly, its maxima are the four points
\[\mathbf{x}_{1,2}=\frac{1}{\sqrt{3}}(\pm\sqrt{2}\mathbf{n}_{1}+\mathbf{n})\quad\mbox{and}\quad\mathbf{x}_{3,4}=\frac{1}{\sqrt{3}}(\pm\sqrt{2}\mathbf{n}_{2}-\mathbf{n}), \tag{272}\]
each with value \(2q/(3\sqrt{3})\).
_Bend._ For pure bend, we can choose \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) such that \(\mathbf{b}=b\mathbf{n}_{1}\) with \(b>0\). Then the potential
\[\Phi(\mathbf{x})=\frac{1}{5}bx_{1}\left(x_{1}^{2}+x_{2}^{2}-4x_{3}^{2}\right)=\frac{b}{5}x_{1}\left(1-5x_{3}^{2}\right) \tag{273}\]
has three lobes: two larger, with equal height \(16b/(15\sqrt{15})\) at
\[\mathbf{x}_{1,2}=\frac{1}{\sqrt{15}}\left(-2\mathbf{n}_{1}\pm\sqrt{11}\mathbf{n}\right), \tag{274}\]
and one smaller at \(\mathbf{x}_{3}=\mathbf{n}_{1}\) with height \(b/5\). As shown in figure 28c, the polar plot of \(\Phi\) is invariant under both a rotation by angle \(\pi\) around \(\mathbf{n}_{1}\) and the mirror symmetry with respect to the plane containing \((\mathbf{n}_{1},\mathbf{n})\).
Figure 28: Polar plots of the octupolar potential \(\Phi\) in (269) for pure elastic modes. Dashed lines are associated with maxima (and conjugated minima). Reprinted with permission from [82].
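The decomposition (262)-(264) lends itself to a direct numerical check. The sketch below is an added illustration: the helical test field and the gradient convention \((\nabla\mathbf{n})_{ij}=\partial n_{i}/\partial x_{j}\) are our assumptions. Consistently with (264), a single-helix field carries both twist and octupolar splay (\(q=\tau/2\)), with no splay or bend.

```python
import numpy as np

TAU = 2.0                                    # assumed helix wavenumber

def director(p):                             # helical field n = (cos, sin, 0)
    return np.array([np.cos(TAU*p[2]), np.sin(TAU*p[2]), 0.0])

def grad_n(p, h=1e-6):                       # (grad n)_ij = d n_i / d x_j
    G = np.zeros((3, 3))
    for j in range(3):
        d = np.zeros(3); d[j] = h
        G[:, j] = (director(p + d) - director(p - d)) / (2*h)
    return G

p = np.array([0.3, -0.1, 0.7])
n, G = director(p), grad_n(p)
S = np.trace(G)                              # splay S = div n
curl = np.array([G[2, 1] - G[1, 2], G[0, 2] - G[2, 0], G[1, 0] - G[0, 1]])
T = n @ curl                                 # twist T = n . curl n
b = np.cross(n, curl)                        # bend vector b = n x curl n
q = np.sqrt(max(0.5*(np.trace(G @ G) + 0.5*T**2 - 0.5*S**2), 0.0))  # (264)
print(S, T, np.linalg.norm(b), q)            # ~ 0, -TAU, 0, TAU/2
```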
**Remark 35**: In a phenomenological theory for a modulated nematic liquid crystal phase recently proposed in [98], octupolar order plays a central role, as molecules are envisioned as stretched tetrahedra. Motivated in part by the properties of the distortion tensor \(\mathbf{D}\) in (263), the authors of this study describe octupolar order through a third-rank tensor, which in our formalism can be written as
\[\mathbf{A}=\mathbf{\Omega}\otimes\mathbf{n}, \tag{275}\]
where \(\mathbf{n}\) is the nematic director and \(\mathbf{\Omega}\) is a second-rank symmetric traceless tensor that annihilates \(\mathbf{n}\). The tensor \(\mathbf{A}\) in (275) falls in yet another category of third-rank tensors, which we have not explicitly considered, but which is amenable to the method outlined here. In three space dimensions, this tensor is represented by 4 scalar parameters; it can be associated with the following octupolar potential on \(\mathbb{S}^{2}\times\mathbb{S}^{2}\),
\[\Phi_{\mathrm{O}}(\mathbf{x},\mathbf{y}):=\mathbf{A}\cdot\mathbf{x}\otimes\mathbf{x}\otimes\mathbf{y}=(\mathbf{x}\cdot\mathbf{\Omega}\mathbf{x})(\mathbf{n}\cdot\mathbf{y}). \tag{276}\]
## 8 Conclusion
Strictly speaking, an _octupolar tensor_ \(\mathbf{A}\) is a third-rank symmetric traceless tensor, which is also called a _harmonic_ tensor in some literature. There is an impressive body of works devoted to this special class of tensors and their application to diverse fields of physics. Here, we endeavoured to review some of these works in an attempt to broaden the scope where this specific mathematical tool can be placed. Not only have we considered fully symmetric tensors, but also partly symmetric ones and fully general tensors. Of course, the more general was the setting, the less simple were the results. In the diverse territories we have traversed we found guidance in the unifying concept of the _octupolar potential_ \(\Phi\), which, being a scalar-valued function representable on the unit sphere, added geometrical charm to a somewhat algid algebra. Seeing diverse approaches displayed before us, a number of questions come naturally to mind, none necessarily with an easy answer. Many--we are sure--have already been heeded by the reader. Here, we mention just two of these, which have especially attracted our attention. First, one wonders whether there is a systematic way to relate the generalized eigenvectors of \(\mathbf{A}\) to its multipoles. Second, one would like to explore further the geometric properties enjoyed by the octupolar potential \(\Phi\) defined for a non-symmetric \(\mathbf{A}\). We hope that these and other issues may be addressed in the future as a result of our attempt to put octupolar tensors within a unifying setting. We trust that practitioners from the diverse fields touched upon in this review may take even a modest advantage from the perspectives we have offered. Should this be the case, our effort would not have been completely in vain.
We are grateful to Rebecca Gillan from IOP for having invited this review and for her patience in tolerating the long delays that this project has suffered from various interferences; her kind perseverance has been one of the major drives for the completion of this work. Both authors are members of the Italian _Gruppo Nazionale per la Fisica Matematica_ (GNFM), an articulation of the Italian _Istituto Nazionale di Alta Matematica_ (INdAM). G.G. thanks the _Santa Marinella Research Institute_ (SMRI), where his part of the present work was carried out.
2301.12597
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi
2023-01-30T00:56:51Z
http://arxiv.org/abs/2301.12597v3
# BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
###### Abstract
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
Machine Learning, ICML
## 1 Introduction
Vision-language pre-training (VLP) research has witnessed a rapid advancement in the past few years, where pre-trained models with increasingly larger scale have been developed to continuously push the state-of-the-art on various downstream tasks (Radford et al., 2021; Li et al., 2021, 2022; Wang et al., 2022; Alayrac et al., 2022; Wang et al., 2022). However, most state-of-the-art vision-language models incur a high computation cost during pre-training, due to end-to-end training using large-scale models and datasets. Vision-language research sits at the intersection between vision and language; therefore, it is naturally expected that vision-language models can harvest from the readily-available unimodal models of the vision and natural language communities. In this paper, we propose a _generic_ and _compute-efficient_ VLP method by bootstrapping from off-the-shelf pre-trained vision models and language models. Pre-trained vision models offer high-quality visual representation. Pre-trained language models, in particular _large language models_ (LLMs), offer strong language generation and zero-shot transfer abilities. To reduce computation cost and counteract the issue of catastrophic forgetting, the unimodal pre-trained models remain frozen during the pre-training. In order to leverage pre-trained unimodal models for VLP, it is key to facilitate cross-modal alignment. However, since LLMs have not seen images during their unimodal pre-training, freezing them makes vision-language alignment particularly challenging. In this regard, existing methods (_e.g._ Frozen (Tsimpoukelli et al., 2021), Flamingo (Alayrac et al., 2022)) resort to an image-to-text generation loss, which we show is insufficient to bridge the modality gap. To achieve effective vision-language alignment with frozen unimodal models, we propose a Querying Transformer (Q-Former) pre-trained with a new two-stage pre-training strategy. As shown in Figure 1, Q-Former is a lightweight transformer which employs a set of learnable query vectors to extract visual features from the frozen image encoder. It acts as an information bottleneck between the frozen image encoder and the frozen LLM, where it feeds the most useful visual feature for the LLM to output the desired text.
Figure 1: Overview of BLIP-2's framework. We pre-train a lightweight Querying Transformer following a two-stage strategy to bridge the modality gap. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen LLM, which enables zero-shot instructed image-to-text generation (see Figure 4 for more examples).
In the first pre-training stage, we perform vision-language representation learning which forces the Q-Former to learn visual representation most relevant to the text. In the second pre-training stage, we perform vision-to-language generative learning by connecting the output of the Q-Former to a frozen LLM, and train the Q-Former such that its output visual representation can be interpreted by the LLM. We name our VLP framework BLIP-2: Bootstrapping Language-Image Pre-training with frozen unimodal models. The key advantages of BLIP-2 include:
* BLIP-2 effectively leverages both frozen pre-trained image models and language models. We bridge the modality gap using a Q-Former pre-trained in two stages: a representation learning stage and a generative learning stage. BLIP-2 achieves state-of-the-art performance on various vision-language tasks including visual question answering, image captioning, and image-text retrieval.
* Powered by LLMs (_e.g._ OPT (Zhang et al., 2022), FlanT5 (Chung et al., 2022)), BLIP-2 can be prompted to perform zero-shot image-to-text generation that follows natural language instructions, which enables emerging capabilities such as visual knowledge reasoning, visual conversation, etc. (see Figure 4 for examples).
* Due to the use of frozen unimodal models and a lightweight Q-Former, BLIP-2 is more compute-efficient than existing state-of-the-art methods. For example, BLIP-2 outperforms Flamingo (Alayrac et al., 2022) by 8.7% on zero-shot VQAv2, while using \(54\times\) fewer trainable parameters. Furthermore, our results show that BLIP-2 is a generic method that can harvest more advanced unimodal models for better VLP performance.
## 2 Related Work
### End-to-end Vision-Language Pre-training
Vision-language pre-training aims to learn multimodal foundation models with improved performance on various vision-and-language tasks. Depending on the downstream task, different model architectures have been proposed, including the dual-encoder architecture (Radford et al., 2021; Jia et al., 2021), the fusion-encoder architecture (Tan and Bansal, 2019; Li et al., 2021), the encoder-decoder architecture (Cho et al., 2021; Wang et al., 2021; Chen et al., 2022), and more recently, the unified transformer architecture (Li et al., 2022; Wang et al., 2022). Various pre-training objectives have also been proposed over the years, and have progressively converged to a few time-tested ones: image-text contrastive learning (Radford et al., 2021; Yao et al., 2022; Li et al., 2021, 2022), image-text matching (Li et al., 2021, 2022; Wang et al., 2021), and (masked) language modeling (Li et al., 2021, 2022; Yu et al., 2022; Wang et al., 2022). Most VLP methods perform end-to-end pre-training using large-scale image-text pair datasets. As the model size keeps increasing, the pre-training can incur an extremely high computation cost. Moreover, it is inflexible for end-to-end pre-trained models to leverage readily-available unimodal pre-trained models, such as LLMs (Brown et al., 2020; Zhang et al., 2022; Chung et al., 2022).
### Modular Vision-Language Pre-training
More similar to us are methods that leverage off-the-shelf pre-trained models and keep them frozen during VLP. Some methods freeze the image encoder, including the early work which adopts a frozen object detector to extract visual features (Chen et al., 2020; Li et al., 2020; Zhang et al., 2021), and the recent LiT (Zhai et al., 2022) which uses a frozen pre-trained image encoder for CLIP (Radford et al., 2021) pre-training. Some methods freeze the language model to use the knowledge from LLMs for vision-to-language generation tasks (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Chen et al., 2022). The key challenge in using a frozen LLM is to align visual features to the text space. To achieve this, Frozen (Tsimpoukelli et al., 2021) finetunes an image encoder whose outputs are directly used as soft prompts for the LLM. Flamingo (Alayrac et al., 2022) inserts new cross-attention layers into the LLM to inject visual features, and pre-trains the new layers on billions of image-text pairs. Both methods adopt the language modeling loss, where the language model generates texts conditioned on the image. Different from existing methods, BLIP-2 can effectively and efficiently leverage both frozen image encoders and frozen LLMs for various vision-language tasks, achieving stronger performance at a lower computation cost.
## 3 Method
We propose BLIP-2, a new vision-language pre-training method that bootstraps from frozen pre-trained unimodal models. In order to bridge the modality gap, we propose a Querying Transformer (Q-Former) pre-trained in two stages: (1) a vision-language representation learning stage with a frozen image encoder and (2) a vision-to-language generative learning stage with a frozen LLM. This section first introduces the model architecture of Q-Former, and then delineates the two-stage pre-training procedures.
### Model Architecture
We propose Q-Former as the trainable module to bridge the gap between a frozen image encoder and a frozen LLM. It extracts a fixed number of output features from the image encoder, independent of input image resolution. As shown in Figure 2, Q-Former consists of two transformer submodules that share the same self-attention layers: (1) an image transformer that interacts with the frozen image encoder for visual feature extraction, (2) a text transformer that can function as both a text encoder and a text decoder. We create a set number of learnable query embeddings as input to the image transformer. The queries interact with each other through self-attention layers, and interact with frozen image features through cross-attention layers (inserted every other transformer block). The queries can additionally interact with the text through the same self-attention layers. Depending on the pre-training task, we apply different self-attention masks to control query-text interaction. We initialize Q-Former with the pre-trained weights of \(\text{BERT}_{\text{base}}\) (Devlin et al., 2019), whereas the cross-attention layers are randomly initialized. In total, Q-Former contains 188M parameters. Note that the queries are considered as model parameters. In our experiments, we use 32 queries where each query has a dimension of 768 (same as the hidden dimension of the Q-Former). We use \(Z\) to denote the output query representation. The size of \(Z\) (\(32\times 768\)) is much smaller than the size of frozen image features (_e.g._ \(257\times 1024\) for ViT-L/14).
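To make the dimensions concrete, here is a minimal, hedged sketch of the query bottleneck just described: 32 learnable queries cross-attend to frozen image features and return a fixed-size \(Z\). It is an illustration only; the actual Q-Former is a BERT-style stack (initialized from \(\text{BERT}_{\text{base}}\)) with shared self-attention, cross-attention inserted every other block, and a text transformer, none of which is reproduced here.

```python
import torch
import torch.nn as nn

class QueryBottleneck(nn.Module):
    """Toy single block in the spirit of Q-Former's image transformer:
    learnable queries self-attend, cross-attend to frozen image features,
    then pass through a feed-forward layer. Layer norms, attention masks,
    and the text transformer are omitted for brevity."""
    def __init__(self, num_queries=32, dim=768, img_dim=1024, heads=12):
        super().__init__()
        self.queries = nn.Parameter(0.02 * torch.randn(num_queries, dim))
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, kdim=img_dim,
                                                vdim=img_dim, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, image_feats):
        # image_feats: (B, 257, 1024), e.g. frozen ViT-L/14 patch features
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        q = q + self.self_attn(q, q, q, need_weights=False)[0]
        q = q + self.cross_attn(q, image_feats, image_feats,
                                need_weights=False)[0]
        return q + self.ffn(q)               # Z: (B, 32, 768)

Z = QueryBottleneck()(torch.randn(2, 257, 1024))
print(Z.shape)                               # torch.Size([2, 32, 768])
```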
This bottleneck architecture works together with our pre-training objectives to force the queries to extract visual information that is most relevant to the text.
### Bootstrap Vision-Language Representation Learning from a Frozen Image Encoder
In the representation learning stage, we connect Q-Former to a frozen image encoder and perform pre-training using image-text pairs. We aim to train the Q-Former such that the queries can learn to extract visual representation that is most informative of the text. Inspired by BLIP (Li et al., 2022), we jointly optimize three pre-training objectives that share the same input format and model parameters. Each objective employs a different attention masking strategy between queries and text to control their interaction (see Figure 2). **Image-Text Contrastive Learning** (ITC) learns to align image representation and text representation such that their mutual information is maximized. It does so by contrasting the image-text similarity of a positive pair against those of negative pairs. We align the output query representation \(Z\) from the image transformer with the text representation \(t\) from the text transformer, where \(t\) is the output embedding of the [CLS] token. Since \(Z\) contains multiple output embeddings (one from each query), we first compute the pairwise similarity between each query output and \(t\), and then select the highest one as the image-text similarity. To avoid information leak, we employ a unimodal self-attention mask, where the queries and text are not allowed to see each other. Due to the use of a frozen image encoder, we can fit more samples per GPU compared to end-to-end methods. Therefore, we use in-batch negatives instead of the momentum queue in BLIP. **Image-grounded Text Generation** (ITG) loss trains the Q-Former to generate texts, given input images as the condition. Since the architecture of Q-Former does not allow direct interactions between the frozen image encoder and the text tokens, the information required for generating the text must first be extracted by the queries, and then passed to the text tokens via self-attention layers. Therefore, the queries are forced to extract visual features that capture all the information about the text. We employ a multimodal causal self-attention mask to control query-text interaction, similar to the one used in UniLM (Dong et al., 2019). The queries can attend to each other but not the text tokens. Each text token can attend to all queries and its previous text tokens. We also replace the [CLS] token with a new [DEC] token as the first text token to signal the decoding task. **Image-Text Matching** (ITM) aims to learn fine-grained alignment between image and text representation. It is a binary classification task where the model is asked to predict whether an image-text pair is positive (matched) or negative (unmatched). We use a bi-directional self-attention mask where all queries and texts can attend to each other. The output query embeddings \(Z\) thus capture multimodal information. We feed each output query embedding into a two-class linear classifier to obtain a logit, and average the logits across all queries as the output matching score. We adopt the hard negative mining strategy from Li et al. (2021, 2022) to create informative negative pairs.
Figure 2: (**Left**) Model architecture of Q-Former and BLIP-2's first-stage vision-language representation learning objectives. We jointly optimize three objectives which enforce the queries (a set of learnable embeddings) to extract visual representation most relevant to the text. (**Right**) The self-attention masking strategy for each objective to control query-text interaction.
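The ITC similarity just described (the maximum over the 32 query outputs, with in-batch negatives) reduces to a few lines. The sketch below is our paraphrase of that computation; the temperature value is an assumption, not a reported hyper-parameter.

```python
import torch
import torch.nn.functional as F

def itc_logits(Z, t, temp=0.07):
    # Z: (B, 32, D) query outputs; t: (B, D) text [CLS] embeddings.
    # Compare every query against every text and keep the highest-scoring
    # query per image-text pair, as described above.
    Zn, tn = F.normalize(Z, dim=-1), F.normalize(t, dim=-1)
    sim = torch.einsum('iqd,jd->ijq', Zn, tn).max(dim=-1).values
    return sim / temp                        # (B, B) image-to-text logits

B, Q, D = 4, 32, 768
Z, t = torch.randn(B, Q, D), torch.randn(B, D)
logits = itc_logits(Z, t)
labels = torch.arange(B)                     # in-batch negatives: diagonal pairs match
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
print(loss.item())
```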
Figure 2: (**Left**) Model architecture of Q-Former and BLIP-2's first-stage vision-language representation learning objectives. We jointly optimize three objectives which enforce the queries (a set of learnable embeddings) to extract visual representation most relevant to the text. (**Right**) The self-attention masking strategy for each objective to control query-text interaction. ### Bootstrap Vision-to-Language Generative Learning from a Frozen LLM In the generative pre-training stage, we connect Q-Former (with the frozen image encoder attached) to a frozen LLM to harvest the LLM's generative language capability. As shown in Figure 3, we use a fully-connected (FC) layer to linearly project the output query embeddings \(Z\) into the same dimension as the text embedding of the LLM. The projected query embeddings are then prepended to the input text embeddings. They function as _soft visual prompts_ that condition the LLM on visual representation extracted by the Q-Former. Since the Q-Former has been pre-trained to extract language-informative visual representation, it effectively functions as an information bottleneck that feeds the most useful information to the LLM while removing irrelevant visual information. This reduces the burden of the LLM to learn vision-language alignment, thus mitigating the catastrophic forgetting problem. We experiment with two types of LLMs: decoder-based LLMs and encoder-decoder-based LLMs. For decoder-based LLMs, we pre-train with the language modeling loss, where the frozen LLM is tasked to generate the text conditioned on the visual representation from Q-Former. For encoder-decoder-based LLMs, we pre-train with the prefix language modeling loss, where we split a text into two parts. The prefix text is concatenated with the visual representation as input to the LLM's encoder. The suffix text is used as the generation target for the LLM's decoder.
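A minimal sketch of this interface follows, with illustrative dimensions: the LLM width of 2048, the vocabulary size, and the midpoint prefix split are assumptions of the example, not values from the paper.

```python
# Sketch of the second-stage interface: project Z to the LLM's embedding
# width and prepend it as a soft visual prompt; for encoder-decoder LLMs,
# split the caption into prefix/suffix for prefix language modeling.
import torch
import torch.nn as nn

llm_dim = 2048                          # assumed hidden size of the frozen LLM
proj = nn.Linear(768, llm_dim)          # the fully-connected bridging layer

Z = torch.randn(2, 32, 768)             # Q-Former output queries
text_emb = torch.randn(2, 20, llm_dim)  # embedded caption tokens (from the LLM)

soft_prompt = proj(Z)                                    # (2, 32, llm_dim)
llm_inputs = torch.cat([soft_prompt, text_emb], dim=1)   # prepend the queries

# prefix language modeling split for an encoder-decoder LLM
caption_ids = torch.randint(0, 32000, (2, 20))           # toy token ids
split = caption_ids.size(1) // 2                         # split point is illustrative
prefix_ids, suffix_ids = caption_ids[:, :split], caption_ids[:, split:]
# encoder sees [soft_prompt; prefix]; decoder is trained to emit the suffix
```

For a decoder-based LLM, the language modeling loss is then computed on the caption tokens only (loss masking over the soft-prompt positions is the standard choice assumed here).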
### Model Pre-training **Pre-training data.** We use the same pre-training dataset as BLIP with 129M images in total, including COCO (Lin et al., 2014), Visual Genome (Krishna et al., 2017), CC3M (Sharma et al., 2018), CC12M (Changpinyo et al., 2021), SBU (Ordonez et al., 2011), and 115M images from the LAION400M dataset (Schuhmann et al., 2021). We adopt the CapFilt method (Li et al., 2022) to create synthetic captions for the web images. Specifically, we generate 10 captions using the BLIP\({}_{\rm large}\) captioning model, and rank the synthetic captions along with the original web caption based on the image-text similarity produced by a CLIP ViT-L/14 model. We keep the top-two captions per image as training data and randomly sample one at each pre-training step. **Pre-trained image encoder and LLM.** For the frozen image encoder, we explore two state-of-the-art pre-trained vision transformer models: (1) ViT-L/14 from CLIP (Radford et al., 2021) and (2) ViT-G/14 from EVA-CLIP (Fang et al., 2022). We remove the last layer of the ViT and use the second-to-last layer's output features, which leads to slightly better performance. For the frozen language model, we explore the unsupervised-trained OPT model family (Zhang et al., 2022) for decoder-based LLMs, and the instruction-trained FlanT5 model family (Chung et al., 2022) for encoder-decoder-based LLMs. **Pre-training settings.** We pre-train for 250k steps in the first stage and 80k steps in the second stage. We use a batch size of 2320/1680 for ViT-L/ViT-G in the first stage and a batch size of 1920/1520 for OPT/FlanT5 in the second stage. During pre-training, we convert the frozen ViTs' and LLMs' parameters into FP16, except for FlanT5 where we use BFloat16. We found no performance degradation compared to using 32-bit models. Due to the use of frozen models, our pre-training is more computationally friendly than existing large-scale VLP methods. For example, using a single 16-A100(40G) machine, our largest model with ViT-G and FlanT5-XXL requires less than 6 days for the first stage and less than 3 days for the second stage. The same set of pre-training hyper-parameters is used for all models. We use the AdamW (Loshchilov and Hutter, 2017) optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), and a weight decay of 0.05. We use a cosine learning rate decay with a peak learning rate of 1e-4 and a linear warmup of 2k steps. The minimum learning rate at the second stage is 5e-5. We use images of size 224\(\times\)224, augmented with random resized cropping and horizontal flipping. Figure 3: BLIP-2's second-stage vision-to-language generative pre-training, which bootstraps from frozen large language models (LLMs). (**Top**) Bootstrapping a decoder-based LLM (e.g. OPT). (**Bottom**) Bootstrapping an encoder-decoder-based LLM (e.g. FlanT5). The fully-connected layer adapts from the output dimension of the Q-Former to the input dimension of the chosen LLM. Figure 4: Selected examples of **instructed zero-shot image-to-text generation** using a BLIP-2 model w/ ViT-G and FlanT5\({}_{\text{XXL}}\), where it shows a wide range of capabilities including visual conversation, visual knowledge reasoning, visual commonsense reasoning, storytelling, personalized image-to-text generation, etc. ## 4 Experiment Table 1 provides an overview of the performance of BLIP-2 on various zero-shot vision-language tasks. Compared to previous state-of-the-art models, BLIP-2 achieves improved performance while requiring substantially fewer trainable parameters during vision-language pre-training. ### Instructed Zero-shot Image-to-Text Generation BLIP-2 effectively enables an LLM to understand images while preserving its capability in following text prompts, which allows us to control image-to-text generation with instructions. We simply append the text prompt after the visual prompt as input to the LLM. Figure 4 shows examples that demonstrate a wide range of zero-shot image-to-text capabilities including visual knowledge reasoning, visual commonsense reasoning, visual conversation, personalized image-to-text generation, etc. **Zero-shot VQA**. We perform quantitative evaluation on the zero-shot visual question answering task. For OPT models, we use the prompt "Question: {} Answer:". For FlanT5 models, we use the prompt "Question: {} Short answer:". During generation, we use beam search with a beam width of 5. We also set the length-penalty to -1, which encourages shorter answers that align better with human annotation. As shown in Table 2, BLIP-2 achieves state-of-the-art results on the VQAv2 (Goyal et al., 2017) and GQA (Hudson and Manning, 2019) datasets. It outperforms Flamingo80B by 8.7% on VQAv2, despite having 54x fewer trainable parameters. On the OK-VQA (Marino et al., 2019) dataset, BLIP-2 is second to Flamingo80B. We hypothesize that this is because OK-VQA focuses more on open-world knowledge than visual understanding, and the 70B Chinchilla (Hoffmann et al., 2022) language model from Flamingo80B possesses more knowledge than the 11B FlanT5\({}_{\text{XXL}}\).
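As a usage illustration, the zero-shot VQA recipe above can be reproduced in a few lines; the HuggingFace transformers port and the model id below are assumptions of this sketch rather than artifacts of the paper.

```python
# Hedged sketch of the zero-shot VQA setup, assuming the community
# transformers port of BLIP-2; "Salesforce/blip2-opt-2.7b" is an assumed
# checkpoint id, not a name taken from the paper.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16).to("cuda")

image = Image.open("example.jpg")                     # any local test image
prompt = "Question: what is in the picture? Answer:"  # OPT-style VQA prompt
inputs = processor(images=image, text=prompt,
                   return_tensors="pt").to("cuda", torch.float16)

# beam width 5 and a negative length penalty to favor short, VQA-style answers
out = model.generate(**inputs, num_beams=5, length_penalty=-1.0,
                     max_new_tokens=10)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```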
We make a promising observation from Table 2: **a stronger image encoder or a stronger LLM both lead to better performance.** This observation is supported by several facts: (1) ViT-G outperforms ViT-L for both OPT and FlanT5. (2) Within the same LLM family, larger models outperform smaller ones. (3) FlanT5, an instruction-tuned LLM, outperforms the unsupervised-trained OPT on VQA. This observation validates BLIP-2 as a **generic vision-language pre-training method** that can efficiently harvest the rapid advances in vision and natural language communities. Table 1: Overview of BLIP-2 results on various **zero-shot** vision-language tasks. Compared with previous state-of-the-art models, BLIP-2 achieves the highest zero-shot performance while requiring the least number of trainable parameters during vision-language pre-training. **Effect of Vision-Language Representation Learning.** The first-stage representation learning pre-trains the Q-Former to learn visual features relevant to the text, which reduces the burden of the LLM to learn vision-language alignment.
Without the representation learning stage, Q-Former relies solely on the vision-to-language generative learning to bridge the modality gap, which is similar to the Perceiver Resampler in Flamingo. Figure 5 shows the effect of representation learning on generative learning. Without representation learning, both types of LLMs give substantially lower performance on zero-shot VQA. In particular, OPT suffers from catastrophic forgetting where performance drastically degrades as training proceeds. \begin{table} \begin{tabular}{l l|c c c c c} \hline \hline \multirow{2}{*}{Models} & \#Trainable & \#Total & \multicolumn{2}{c}{VQAv2} & OK-VQA & GQA \\ & Params & Params & val & test-dev & test & test-dev \\ \hline VL-T5\({}_{\text{no-vqa}}\) & 224M & 269M & 13.5 & - & 5.8 & 6.3 \\ FewVLM (Jin et al., 2022) & 740M & 785M & 47.7 & - & 16.5 & 29.3 \\ Frozen (Tsimpoukelli et al., 2021) & 40M & 7.1B & 29.6 & - & 5.9 & - \\ VLKD (Dai et al., 2022) & 406M & 832M & 42.6 & 44.5 & 13.3 & - \\ Flamingo3B (Alayrac et al., 2022) & 1.4B & 3.2B & - & 49.2 & 41.2 & - \\ Flamingo9B (Alayrac et al., 2022) & 1.8B & 9.3B & - & 51.8 & 44.7 & - \\ Flamingo80B (Alayrac et al., 2022) & 10.2B & 80B & - & 56.3 & **50.6** & - \\ \hline BLIP-2 ViT-L OPT\({}_{\text{2.7B}}\) & 104M & 3.1B & 50.1 & 49.7 & 30.2 & 33.9 \\ BLIP-2 ViT-G OPT\({}_{\text{2.7B}}\) & 107M & 3.8B & 53.5 & 52.3 & 31.7 & 34.6 \\ BLIP-2 ViT-G OPT\({}_{\text{6.7B}}\) & 108M & 7.8B & 54.3 & 52.6 & 36.4 & 36.4 \\ BLIP-2 ViT-L FlanT5\({}_{\text{XL}}\) & 103M & 3.4B & 62.6 & 62.3 & 39.4 & 44.4 \\ BLIP-2 ViT-G FlanT5\({}_{\text{XL}}\) & 107M & 4.1B & 63.1 & 63.0 & 40.7 & 44.2 \\ BLIP-2 ViT-G FlanT5\({}_{\text{XXL}}\) & 108M & 12.1B & **65.2** & **65.0** & 45.9 & **44.7** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with state-of-the-art methods on zero-shot visual question answering. ### Image Captioning We finetune BLIP-2 models for the image captioning task, which asks the model to generate a text description for the image's visual content. We use the prompt "a photo of" as an initial input to the LLM and train the model to generate the caption with the language modeling loss. We keep the LLM frozen during finetuning, and update the parameters of the Q-Former together with the image encoder. We experiment with ViT-G and various LLMs. Detailed hyperparameters can be found in the appendix. We perform finetuning on COCO, and evaluate on both the COCO test set and zero-shot transfer to the NoCaps (Agrawal et al., 2019) validation set. The results are shown in Table 3. BLIP-2 achieves state-of-the-art performance with significant improvement on NoCaps over existing methods, demonstrating strong generalization ability to out-domain images. ### Visual Question Answering Given annotated VQA data, we finetune the parameters of the Q-Former and the image encoder while keeping the LLM frozen. We finetune with the open-ended answer generation loss, where the LLM receives Q-Former's output and the question as input, and is asked to generate the answer. In order to extract image features that are more relevant to the question, we additionally condition Q-Former on the question. Specifically, the question tokens are given as input to the Q-Former and interact with the queries via the self-attention layers, which can guide the Q-Former's cross-attention layers to focus on more informative image regions.
Following BLIP, our VQA data includes the training and validation splits from VQAv2, as well as training samples from Visual Genome. Table 4 demonstrates the state-of-the-art results of BLIP-2 among open-ended generation models. \begin{table} \begin{tabular}{l c c c} \hline \hline Models & \#Trainable & \multicolumn{2}{c}{VQAv2} \\ & Params & val & test-dev \\ \hline \multicolumn{4}{l}{_Open-ended generation models_} \\ ALBEF Li et al. (2021) & 314M & 75.84 & 76.04 \\ BLIP Li et al. (2022) & 385M & 78.25 & 78.32 \\ OFA Wang et al. (2022) & 930M & 82.00 & 82.00 \\ Flamingo80B Alayrac et al. (2022) & 10.6B & 82.00 & 82.10 \\ **BLIP-2** ViT-G FlanT5\({}_{\rm XL}\) & 1.2B & 81.55 & 81.66 \\ **BLIP-2** ViT-G OPT\({}_{\rm 2.7B}\) & 1.2B & 81.59 & 81.74 \\ **BLIP-2** ViT-G OPT\({}_{\rm 6.7B}\) & 1.2B & **82.19** & **82.30** \\ \hline \multicolumn{4}{l}{_Closed-ended classification models_} \\ VinVL & 345M & 76.52 & 76.60 \\ SimVLM Wang et al. (2021) & \(\sim\)1.4B & 80.03 & 80.34 \\ CoCa Yu et al. (2022) & 2.1B & 82.30 & 82.30 \\ BEiT-3 Wang et al. (2022) & 1.9B & **84.19** & **84.03** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with state-of-the-art models fine-tuned for visual question answering. Figure 5: Effect of vision-language representation learning on vision-to-language generative learning. Without representation learning, the Q-Former fails to bridge the modality gap, leading to significantly lower performance on zero-shot VQA. \begin{table} \begin{tabular}{l c|c c c c c c c c|c c} \hline \hline \multirow{3}{*}{Models} & \multirow{3}{*}{\#Trainable Params} & \multicolumn{8}{c|}{NoCaps Zero-shot (validation set)} & \multicolumn{2}{c}{COCO Fine-tuned} \\ & & \multicolumn{2}{c}{in-domain} & \multicolumn{2}{c}{near-domain} & \multicolumn{2}{c}{out-domain} & \multicolumn{2}{c|}{overall} & \multicolumn{2}{c}{Karpathy test} \\ & & C & S & C & S & C & S & C & S & B@4 & C \\ \hline OSCAR Li et al. (2020) & 345M & - & - & - & - & - & - & 80.9 & 11.3 & 37.4 & 127.8 \\ VinVL Zhang et al. (2021) & 345M & 103.1 & 14.2 & 96.1 & 13.8 & 88.3 & 12.1 & 95.5 & 13.5 & 38.2 & 129.3 \\ BLIP Li et al. (2022) & 446M & 114.9 & 15.2 & 112.1 & 14.9 & 115.3 & 14.4 & 113.2 & 14.8 & 40.4 & 136.7 \\ OFA Wang et al. (2022) & 930M & - & - & - & - & - & - & - & - & **43.9** & 145.3 \\ Flamingo Alayrac et al. (2022) & 10.6B & - & - & - & - & - & - & - & - & - & 138.1 \\ SimVLM Wang et al. (2021) & \(\sim\)1.4B & 113.7 & - & 110.9 & - & 115.2 & - & 112.2 & - & 40.6 & 143.3 \\ \hline BLIP-2 ViT-G OPT\({}_{\rm 2.7B}\) & 1.1B & 123.0 & 15.8 & 117.8 & 15.4 & 123.4 & **15.1** & 119.7 & 15.4 & 43.7 & **145.8** \\ BLIP-2 ViT-G OPT\({}_{\rm 6.7B}\) & 1.1B & **123.7** & 15.8 & 119.2 & 15.3 & 124.4 & 14.8 & 121.0 & 15.3 & 43.5 & 145.2 \\ BLIP-2 ViT-G FlanT5\({}_{\rm XL}\) & 1.1B & **123.7** & **16.3** & **120.2** & **15.9** & **124.8** & **15.1** & **121.6** & **15.8** & 42.4 & 144.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with state-of-the-art image captioning methods on NoCaps and COCO Caption. All methods optimize the cross-entropy loss during finetuning. C: CIDEr, S: SPICE, B@4: BLEU@4. ### Image-Text Retrieval Since image-text retrieval does not involve language generation, we directly finetune the first-stage-pretrained model w/o LLM. Specifically, we finetune the image encoder together with Q-Former on COCO using the same objectives (_i.e_. ITC, ITM, and ITG) as pre-training. We then evaluate the model for both image-to-text retrieval and text-to-image retrieval on COCO and Flickr30K (Plummer et al., 2015) datasets. During inference, we follow Li et al. (2021, 2022), which first selects \(k=128\) candidates based on the image-text feature similarity, followed by a re-ranking based on pairwise ITM scores.
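This two-stage inference is easy to state precisely given an ITC similarity matrix and a pairwise ITM scorer; the sketch below uses toy stand-ins for both and is schematic rather than the released evaluation code.

```python
# Sketch of two-stage retrieval: rank all candidates by (cheap) ITC
# similarity, then re-rank the top-k with the (expensive) pairwise ITM head.
import torch

def retrieve_texts(sim, itm_score, k=128):
    # sim: (num_images, num_texts) ITC similarity matrix
    topk = sim.topk(k, dim=1).indices                 # candidate texts per image
    ranked = []
    for i in range(sim.size(0)):
        scores = torch.stack([itm_score(i, int(j)) for j in topk[i]])
        order = scores.argsort(descending=True)
        ranked.append(topk[i][order])                 # final ranking for image i
    return torch.stack(ranked)

# toy stand-ins: 8 images, 100 texts, random ITM scorer
sim = torch.randn(8, 100)
ranking = retrieve_texts(sim, itm_score=lambda i, j: torch.randn(()), k=16)
print(ranking.shape)                                  # torch.Size([8, 16])
```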
We experiment with both ViT-L and ViT-G as the image encoder. Detailed hyperparameters can be found in the appendix. The results are shown in Table 5. BLIP-2 achieves state-of-the-art performance with significant improvement over existing methods on zero-shot image-text retrieval. \begin{table} \begin{tabular}{l l|c c c c c c|c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multirow{3}{*}{\#Trainable Params} & \multicolumn{6}{c|}{Flickr30K Zero-shot (1K test set)} & \multicolumn{6}{c}{COCO Fine-tuned (5K test set)} \\ & & \multicolumn{3}{c}{Image \(\rightarrow\) Text} & \multicolumn{3}{c|}{Text \(\rightarrow\) Image} & \multicolumn{3}{c}{Image \(\rightarrow\) Text} & \multicolumn{3}{c}{Text \(\rightarrow\) Image} \\ & & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline \multicolumn{14}{l}{_Dual-encoder models_} \\ CLIP (Radford et al., 2021) & 428M & 88.0 & 98.7 & 99.4 & 68.7 & 90.6 & 95.2 & - & - & - & - & - & - \\ ALIGN (Jia et al., 2021) & 820M & 88.6 & 98.7 & 99.7 & 75.7 & 93.8 & 96.8 & 77.0 & 93.5 & 96.9 & 59.9 & 83.3 & 89.8 \\ FILIP (Yao et al., 2022) & 417M & 89.8 & 99.2 & 99.8 & 75.0 & 93.4 & 96.3 & 78.9 & 94.4 & 97.4 & 61.2 & 84.3 & 90.6 \\ Florence (Yuan et al., 2021) & 893M & 90.9 & 99.1 & - & 76.7 & 93.6 & - & 81.8 & 95.2 & - & 63.2 & 85.7 & - \\ BEiT-3 (Wang et al., 2022b) & 1.9B & 94.9 & 99.9 & **100.0** & 81.5 & 95.6 & 97.8 & 84.8 & 96.5 & 98.3 & 67.2 & **87.7** & **92.8** \\ \hline \multicolumn{14}{l}{_Fusion-encoder models_} \\ UNITER (Chen et al., 2020) & 303M & 83.6 & 95.7 & 97.7 & 68.7 & 89.2 & 93.9 & 65.7 & 88.6 & 93.8 & 52.9 & 79.9 & 88.0 \\ OSCAR (Li et al., 2020) & 345M & - & - & - & - & - & - & 70.0 & 91.1 & 95.5 & 54.0 & 80.8 & 88.5 \\ VinVL (Zhang et al., 2021) & 345M & - & - & - & - & - & - & 75.4 & 92.9 & 96.2 & 58.8 & 83.5 & 90.3 \\ \hline \multicolumn{14}{l}{_Dual encoder + Fusion encoder reranking_} \\ ALBEF (Li et al., 2021) & 233M & 94.1 & 99.5 & 99.7 & 82.8 & 96.3 & 98.1 & 77.6 & 94.3 & 97.2 & 60.7 & 84.3 & 90.5 \\ BLIP (Li et al., 2022) & 446M & 96.7 & **100.0** & **100.0** & 86.7 & 97.3 & 98.7 & 82.4 & 95.4 & 97.9 & 65.1 & 86.3 & 91.8 \\ **BLIP-2** ViT-L & 474M & 96.9 & **100.0** & **100.0** & 88.6 & 97.6 & **98.9** & 83.5 & 96.0 & 98.0 & 66.3 & 86.5 & 91.8 \\ **BLIP-2** ViT-G & 1.2B & **97.6** & **100.0** & **100.0** & **89.7** & **98.1** & **98.9** & **85.4** & **97.0** & **98.5** & **68.3** & **87.7** & 92.6 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison with state-of-the-art image-text retrieval methods, finetuned on COCO and zero-shot transferred to Flickr30K. The ITC and ITM losses are essential for image-text retrieval as they directly learn image-text similarity. In Table 6, we show that the ITG (image-grounded text generation) loss is also beneficial for image-text retrieval. This result supports our intuition in designing the representation learning objectives: the ITG loss enforces the queries to extract visual features most relevant to the text, thus improving vision-language alignment. \begin{table} \begin{tabular}{l|c c|c c} \hline COCO finetuning & \multicolumn{2}{c|}{Image \(\rightarrow\) Text} & \multicolumn{2}{c}{Text \(\rightarrow\) Image} \\ objectives & R@1 & R@5 & R@1 & R@5 \\ \hline ITC + ITM & 84.5 & 96.2 & 67.2 & 87.1 \\ ITC + ITM + ITG & 85.4 & 97.0 & 68.3 & 87.7 \\ \hline \end{tabular} \end{table} Table 6: The image-grounded text generation (ITG) loss improves image-text retrieval performance by enforcing the queries to extract language-relevant visual features. ## 5 Limitation Recent LLMs can perform in-context learning given few-shot examples. However, our experiments with BLIP-2 do not observe an improved VQA performance when providing the LLM with in-context VQA examples. We attribute the lack of in-context learning capability to our pre-training dataset, which only contains a single image-text pair per sample. The LLMs cannot learn from it the correlation among multiple image-text pairs in a single sequence. The same observation is also reported in the Flamingo paper, which uses a closed-source interleaved image and text dataset (M3W) with multiple image-text pairs per sequence. We aim to create a similar dataset in future work. BLIP-2's image-to-text generation could have unsatisfactory results due to various reasons including inaccurate knowledge from the LLM, activating an incorrect reasoning path, or not having up-to-date information about new image content (see Figure 7). Furthermore, due to the use of frozen models, BLIP-2 inherits the risks of LLMs, such as outputting offensive language, propagating social bias, or leaking private information. Remediation approaches include using instructions to guide the model's generation or training on a filtered dataset with harmful content removed. ## 6 Conclusion We propose BLIP-2, a generic and compute-efficient method for vision-language pre-training that leverages frozen pre-trained image encoders and LLMs. BLIP-2 achieves state-of-the-art performance on various vision-language tasks while having a small number of trainable parameters during pre-training.
BLIP-2 also demonstrates emerging capabilities in zero-shot instructed image-to-text generation. We consider BLIP-2 as an important step towards building a multimodal conversational AI agent.
2309.00969
High-efficiency, high-speed, and low-noise photonic quantum memory
We present a demonstration of simultaneous high-efficiency, high-speed, and low-noise operation of a photonic quantum memory. By leveraging controllable collisional dephasing in a neutral barium atomic vapor, we demonstrate a significant improvement in memory efficiency and bandwidth over existing techniques. We achieve greater than 95% storage efficiency and 26% total efficiency of 880 GHz bandwidth photons, with $\mathcal{O}(10^{-5})$ noise photons per retrieved pulse. These ultrabroad bandwidths enable rapid quantum information processing and contribute to the development of practical quantum memories with potential applications in quantum communication, computation, and networking.
Kai Shinbrough, Tegan Loveridge, Benjamin D. Hunt, Sehyun Park, Kathleen Oolman, Thomas O. Reboli, J. Gary Eden, Virginia O. Lorenz
2023-09-02T15:34:35Z
http://arxiv.org/abs/2309.00969v1
# High-efficiency, high-speed, and low-noise photonic quantum memory ###### Abstract We present a demonstration of simultaneous high-efficiency, high-speed, and low-noise operation of a photonic quantum memory. By leveraging controllable collisional dephasing in a neutral barium atomic vapor, we demonstrate a significant improvement in memory efficiency and bandwidth over existing techniques. We achieve greater than 95% storage efficiency and 26% total efficiency of 880 GHz bandwidth photons, with \(\boldsymbol{\mathcal{O}(10^{-5})}\) noise photons per retrieved pulse. These ultrabroad bandwidths enable rapid quantum information processing and contribute to the development of practical quantum memories with potential applications in quantum communication, computation, and networking. Photonic quantum memories play a vital role in many quantum information processing applications by enabling the on-demand storage and retrieval of traveling qubits -- quantum states of light [1, 2, 3, 4, 5]. Achieving high-efficiency, high-speed, and low-noise operation of such memories is of utmost importance for the practical realization of these applications. In recent years, significant progress has been made in the development of efficient photonic quantum memories using various physical systems, including solid-state materials, cold atomic gases, and warm atomic vapors [4, 6, 7, 8]. As we show, a trade-off exists in these atomic ensemble-based systems between memory bandwidth and storage efficiency, owing to the mismatch between broad photon bandwidths and typically narrow atomic linewidths. This trade-off presents a problem for high-speed photonic quantum information processing utilizing quantum memories, and for interfacing quantum memories with typically broadband quantum light sources based on parametric down-conversion or four-wave mixing, which are the workhorse for many quantum optics experiments [9, 10, 11, 12, 13, 14]. To overcome this limitation, we present a novel approach to atomic ensemble-based quantum memory that relies on homogeneous collisional broadening of an intermediate atomic state via noble gas perturbers at controllable pressure. This collisional broadening reduces linewidth-bandwidth mismatch and thereby enhances memory efficiency in the broadband regime. The contributions of this work are twofold. First, we demonstrate a generic and scalable approach to enhancing memory efficiency in atomic-vapor quantum memories based on collisional broadening. This demonstration is performed in a new medium -- atomic barium vapor -- with an orbital \(\Lambda\)-system, and our approach leads to a measured storage efficiency of 95.6(3)%. Second, we present a comprehensive characterization of our system's performance, including measurement of memory efficiency, lifetime, noise level, and full reconstruction of retrieved photon amplitude and phase. This characterization reveals a regime of operation for atomic ensemble memories we name Near-Off-Resonant Memory (NORM) operation, which leads to enhancements in total efficiency of around 5% in our system. Throughout, we highlight the unique advantages of our choice of atomic species for this application, including the absence of four-wave mixing noise, the telecom-compatibility of the control field wavelength, and the small collisional dephasing rate and long natural lifetime of the chosen storage state (0.25 seconds in the bare atom [15]).
As a proof of principle, we store and retrieve weak coherent states with \(\lesssim\)1 average photon per pulse; expansion of this experiment to storage of single-photon Fock states is straightforward, and may be the subject of future work. The achievements of this work pave the way for the realization of practical, high-speed photonic quantum memories, with potential applications in quantum communication, computing, and networking. In the following sections, we describe the principles behind our approach, the experimental setup, our measurement results, and we discuss the implications and future prospects of our work. ## 1 Results ### Collisional broadening as a resource for improving broadband memory efficiency. It is well-known that the resonant optical depth, \(d\), of a three-level atomic-ensemble quantum memory sets an upper bound on the maximum achievable storage efficiency of the memory, of the form \(\eta_{\rm opt}\approx 1-2.9/d\) [16, 17, 18]. Optical depth, proportional to atom number, is therefore a resource for increasing memory efficiency. Here we propose and demonstrate experimentally another resource for atomic-ensemble quantum memories: the intermediate-state homogeneous linewidth. The value of this resource is most clear when considering the so-called absorb-then-transfer (ATT) memory protocol [17, 19, 20, 21, 22]. In this quantum storage protocol, resonant linear absorption maps a photonic qubit--the signal field--onto an atomic polarization of the form \(P\sim\sum_{j=1}^{N}b_{j}e^{ik_{s}z_{j}}|g_{1}\cdots e_{j}\cdots g_{N}\rangle\), where the sum runs over atoms 1 to \(N\), each with a spatially dependent amplitude (\(b_{j}\)) and phase (\(k_{s}z_{j}\)). This collective Dicke state involving the ground state \(|g\rangle\) and the excited state \(|e\rangle\) is then mapped via a \(\pi\)-pulse optical control field onto a long-lived "spin-wave" state of the form \(B\sim\sum_{j=1}^{N}c_{j}e^{i(k_{s}-k_{c})z_{j}}|g_{1}\cdots s_{j}\cdots g_{N}\rangle\), involving the storage state \(|s\rangle\) (see Fig. 1b). The protocol thus has two stages: resonant linear absorption, and \(\pi\)-pulse population transfer. The efficiency of the second stage is ensured simply by accurate tuning of the control field pulse area, but the efficiency of the first stage depends critically on both the resonant optical depth and linewidth of the \(|g\rangle\rightarrow|e\rangle\) transition. A complete discussion of the dependence of linear absorption on these two memory parameters can be found in Ref. [6], but the fundamental physical intuition is clear: When the signal field bandwidth is broader than the transition linewidth, the frequency components of the signal field outside of the linewidth are not absorbed by the ensemble, and therefore contribute to transmission loss or memory inefficiency. This loss can be compensated by either increasing optical depth or increasing the transition linewidth, but in the ultra-broadband regime where the signal bandwidth is much greater than the atomic linewidth (\(\delta_{s}\gg\gamma\)), increasing linewidth is significantly more effective than increasing optical depth. Figure 1: **Enhancement in memory efficiency due to collisional broadening.** **a**, Comparison of storage efficiencies and signal field bandwidths for atomic ensemble quantum memories in the broadband regime (\(>10\) MHz). References are specified by first author initials and publication year; for complete reference information see Supplementary Information. **b**, Atomic energy level structure for \(\Lambda\)-type quantum memory (\(\Delta\), detuning; \(\Gamma\), excited state linewidth; red arrow, signal field; black arrow, control field). **c**, Measured collisionally broadened excited state linewidths for the \({}^{1}S_{0}\)\(\rightarrow\)\({}^{1}P_{1}\) (\(|g\rangle\rightarrow|e\rangle\)) transition in barium as a function of argon (Ar) buffer gas pressure (inset: absorption spectra). **d**, Spectral waveforms of the signal field before storage (blue, left panel), after quantum storage (grey, left panel), before retrieval (grey, right panel), and after retrieval (blue, right panel).
A larger homogeneous linewidth is also a resource for the resonant protocols of electromagnetically induced transparency (EIT) and Autler-Townes splitting (ATS), but whether a given memory is 'linewidth limited' or 'optical-depth limited' in general depends on the specific parameters of the system. Another factor motivates the use of homogeneous linewidth broadening as a resource for improving memory efficiency: Optical transitions in warm atomic vapors are typically dominated by Doppler broadening, which introduces inhomogeneous dephasing during the storage and retrieval operations and therefore decreases the coherent reemission probability of the memory. Introducing an intentional source of homogeneous broadening larger than the Doppler linewidth removes this source of memory inefficiency while simultaneously increasing memory bandwidth and decreasing protocol time. In general, this approach may also reduce memory lifetime, as the storage state may also undergo collisional broadening beyond its Doppler linewidth. In our experiment, however, the collisional cross-section of the atoms in the storage state is at least an order of magnitude smaller than that of the atoms in the excited state. Therefore, for a fixed buffer gas pressure the collisional broadening of the excited state is significantly larger than the collisional broadening of the storage state. Experimentally, we are able to demonstrate Doppler-limited memory lifetimes while simultaneously taking advantage of large collisionally broadened excited state linewidths (see Sec. 1.2 and Methods). In order to harness linewidth broadening as a resource, we begin with a Doppler-broadened vapor of neutral barium in a home-built heat pipe oven vapor cell (described further in Methods). We introduce argon buffer gas perturbers into the vapor cell at controllable pressure, with a ratio of roughly \(10^{4}\) argon atoms per gaseous barium atom. The argon perturbers interact with the barium atoms via impact broadening at timescales longer than \(\mathcal{O}(1-10)\) ns, depending on argon pressure, and interact via quasi-static broadening at shorter timescales (further details in Methods). In both cases, the collisional broadening of the intermediate \({}^{1}P_{1}\) excited state in the barium atoms is homogeneous and of order \(\Gamma=100\) GHz (full width at half maximum), significantly in excess of the 1-10 GHz temperature-dependent Doppler linewidth. We perform a modified version of the ATT protocol outlined above, where both signal and control fields are detuned \(\Delta=5\Gamma\) below resonance. This modification ensures less than 1% absorption of the signal field in the absence of the control field, such that when the control field is turned on, any increase in absorption (or decrease in transmission) is attributable to quantum storage.
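This intuition can be checked numerically. The sketch below, with illustrative parameters rather than the authors' model, computes the fraction of a Gaussian-spectrum pulse (880 GHz FWHM, as in this work) absorbed by a homogeneously broadened Lorentzian line of fixed resonant optical depth: at fixed \(d\), widening the line sharply increases the absorbed fraction in the \(\delta_{s}\gg\gamma\) regime.

```python
# Illustration of linewidth as a resource: transmission through a Lorentzian
# line of resonant optical depth d is T(w) = exp(-d * g^2 / (g^2 + w^2)),
# with g the half-linewidth; the absorbed fraction of a broadband Gaussian
# spectrum grows rapidly with g at fixed d.
import numpy as np

def absorbed_fraction(d, gamma_hwhm, sig_fwhm):
    w = np.linspace(-8 * sig_fwhm, 8 * sig_fwhm, 20001)      # detuning grid (GHz)
    spec = np.exp(-4 * np.log(2) * (w / sig_fwhm) ** 2)      # Gaussian |S(w)|^2
    T = np.exp(-d * gamma_hwhm**2 / (gamma_hwhm**2 + w**2))  # Lorentzian line
    return 1 - np.trapz(spec * T, w) / np.trapz(spec, w)

d = 40                      # illustrative resonant optical depth
for gamma in [5, 50, 200]:  # half-linewidths: Doppler-like up to collisional
    print(gamma, absorbed_fraction(d, gamma, sig_fwhm=880))  # 880 GHz signal
```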
We summarize the key results of this demonstration in Fig. 1. Fig. 1a presents a direct comparison of the storage efficiency and bandwidth achieved with our approach against those achieved with existing techniques based on either lifetime-broadened, Doppler-broadened, or inhomogeneously broadened three-level quantum memories. These existing techniques employ a variety of different storage protocols [colors in Fig. 1a], but in all cases the region of high-efficiency operation has to date been limited to \(\leq\mathcal{O}(1)\) GHz. In the ultrabroadband regime investigated in this work, storage efficiencies have previously been limited to roughly 25%. We attribute the significant increase in storage efficiency demonstrated in this work to the use of a collisionally broadened linewidth [Fig. 1b-c] as an additional resource. Fig. 1c shows measured collisionally broadened linewidths in our system between 300 and 400 GHz, centered on the 553.5 nm \({}^{1}S_{0}\rightarrow{}^{1}P_{1}\) (\(|g\rangle\rightarrow|e\rangle\)) transition in barium, that are linearly dependent upon argon (Ar) buffer gas pressure as expected for collisional broadening. Fig. 1d shows the raw spectrally resolved storage and retrieval data in our system, from which we extract \(95.6\pm 0.3\%\) storage efficiency and \(26\pm 1\%\) total (end-to-end) efficiency near zero time delay. The total efficiency of our memory is limited by available control field power and can be improved significantly with a higher pulse energy control field. We note that a trend similar to the one shown in Fig. 1a also exists for total efficiency as a function of signal bandwidth, though the trend is considerably noisier, in part due to system-specific inefficiencies such as poor phasematching and insufficient available control field power. ### Memory characterization and performance. In addition to the near-resonant storage and retrieval demonstrated in Fig. 1d, we perform several experiments to characterize the performance of our photonic quantum memory. First, we measure the response of the memory to increasing control field power during the storage operation. As shown in Fig. 2a, at 800 \({}^{\circ}\)C we observe a maximum in storage efficiency near \(\pi\) control field pulse area, as expected for the ATT protocol. At 900 \({}^{\circ}\)C, with higher optical depth and larger collisional broadening we are able to achieve larger control field pulse areas with the same total available control field power, and we observe an optimal control field pulse area of \(1.25\pi\). The experimental data in Fig. 2a are fit to a numerical model based on the Maxwell-Bloch equations (see Methods) for Ar pressures of 670 and 13 mbar for 800 and 900 \({}^{\circ}\)C, respectively. The dashed horizontal lines represent the optimal bound on storage efficiency for a given temperature and independently measured optical depth (\(\eta_{\rm opt}\)). We achieve near saturation of this bound after a half Rabi oscillation, confirming the coherence of our memory and the applicability of the Maxwell-Bloch model. Next, we turn to the coherence lifetime of our memory. The collisional broadening we employ is not intrinsically state-selective; in addition to collisional broadening of the intermediate excited state, the presence of noble gas perturbers also leads to collisional broadening of the metastable or storage state.
The linewidth of this state determines the coherence time of our memory, and as such a trade-off exists between increasing memory efficiency and maintaining a long coherence time. To this end, we measure the coherence lifetime of our memory as a function of argon buffer gas pressure, shown in Fig. 2b. The horizontal lines in Fig. 2b represent the limit to memory lifetime imposed by Doppler broadening at each temperature, due to the thermal motion of barium atoms which decoheres the spatially varying phase of the spin-wave. We observe qualitatively different behavior for the two vapor cell temperatures investigated in this work. At 800 \({}^{\circ}\)C, we observe a short memory lifetime at low argon pressure, which then increases to a maximum around 200 mbar before decaying according to an inverse model indicative of collisional broadening. At low pressures, we believe the memory lifetime may be reduced due to the formation of a sub-ensemble of weakly bound barium-argon molecules [23, 24, 25], but further work is needed to investigate this effect. At 900 \({}^{\circ}\)C, we observe a memory lifetime that asymptotes to the Doppler limit at low pressure and that follows the expected collisional model for increasing argon pressure. As the memory efficiency is unchanged for this range of argon pressures, we are able to achieve Doppler-limited memory lifetimes while still benefiting from the enhancement to memory efficiency due to collisional broadening. The reason for this is an asymmetry in the collisional cross sections of the \({}^{1}P_{1}\) excited state and the \({}^{1}D_{2}\) storage state, where the excited state collisional cross section is significantly larger. This means that for fixed argon pressure, the excited state will experience significantly larger collisional broadening than the storage state, thus allowing for high-efficiency memory operation without a significant trade-off in memory lifetime. The memory lifetimes demonstrated in this work are almost a factor of 2 longer than previous ultrabroadband photonic quantum memories [26, 27, 28, 29]. We note that the radiative lifetime of the \({}^{1}D_{2}\) state in the bare atom is 0.25 sec [15], which leaves significant room for improvement of our memory lifetime using recently developed dephasing-protection techniques [30], whereas previous ultrabroadband memories have been limited by more fundamental constraints on memory lifetime [26, 27, 28, 29]. We also measure the carrier-frequency dependence of our memory. Keeping the signal and control fields in two-photon resonance, we vary the detuning, \(\Delta\), shown in Fig. 1b, and measure the total, end-to-end efficiency of our memory. At both temperatures we observe a maximum total efficiency at non-zero detuning, an effect we name Near-Off-Resonant Memory (NORM) operation. This regime of memory operation occurs when the adiabaticity of the control field is less than the adiabaticity of the memory set by the optical depth, excited state linewidth, and signal bandwidth (for more details, see Methods). Most often, as in our experiment, this occurs when control field power is limited. We note this adiabaticity criterion is a sufficient, but not necessary, condition for NORM operation, as many sets of control field parameters do not possess a well-defined adiabaticity according to our definition. In this regime, it is beneficial to reduce the light-matter coupling of the signal field and atomic ensemble by detuning the signal field slightly off resonance.
This operation regime has been observed experimentally in previous work [31, 32], but we explore the effect theoretically for the first time, and offer an intuitive explanation in the Methods section. Our model fit is in good agreement with the data in Fig. 2c, where we observe an optimal detuning around \(\Delta=5\Gamma\) at 800 \({}^{\circ}\)C (190 mbar) and \(\Delta>25\Gamma\) at 900 \({}^{\circ}\)C (270 mbar). At 900 \({}^{\circ}\)C we note the enhanced memory efficiency at 1550 nm control field wavelength, which highlights the telecom-compatibility of this memory. With optical pumping of the initial population into the storage state, we can in principle implement storage and retrieval with the same efficiency at a 1550 nm signal field wavelength. At 1550 nm, blackbody radiation of the heat pipe oven will introduce additional noise, but experimentally we measure blackbody radiation below the current noise level, which is dominated by two-photon absorption of the control field and fluorescence (see Sec. 1.3). Theoretically, blackbody radiation noise can be reduced to arbitrarily low levels by increasing the distance between heat pipe and collection optics (thereby decreasing the collected solid angle). Figure 2: **Telecom-compatible memory characterization.** **a**, Optimization of storage efficiency with respect to control field pulse area at 800 \({}^{\circ}\)C (orange) and 900 \({}^{\circ}\)C (blue). Horizontal lines represent the theoretical optimal bound on storage efficiency at each temperature. Statistical errors are smaller than the marker size. Solid curves are numerical fits based on the Maxwell-Bloch equations (see Methods). **b**, Memory lifetime as a function of argon (Ar) buffer gas pressure. Horizontal lines represent the limit on memory lifetime set by Doppler or motional dephasing at each temperature. Error bars represent statistical error propagated through a decay model fit (see Methods). Solid curves are fits to an inverse function representing collisional broadening. **c**, Total (end-to-end) memory efficiency measured as a function of detuning, showing near-off-resonant memory (NORM) operation. Error bars represent systematic error due to control field power variation. Solid curves: numerical fit to Maxwell-Bloch model (see Methods). ### Single-photon-level retrieval with full amplitude and phase reconstruction. We now turn to the noise performance of our memory. Most \(\Lambda\)-type quantum memories are limited in noise performance by four-wave mixing (FWM) noise, wherein the control field coupling the storage and excited states also acts off-resonantly along the ground-to-excited-state transition, generating spurious Stokes and anti-Stokes photons. The anti-Stokes photons generated in this process overlap exactly with the retrieved signal field in time, frequency, polarization, and spatial mode, but carry none of the quantum information stored in the original signal field. Several techniques have been developed to mitigate FWM noise in \(\Lambda\)-type quantum memories [33, 34, 35, 36], typically at the expense of additional optical fields, cavities, or more complex beam routing. By contrast, our barium \(\Lambda\)-type quantum memory is intrinsically FWM noise free. This is due to the large, 340 THz ground-storage state splitting of the \({}^{1}S_{0}\) and \({}^{1}D_{2}\) states.
As this splitting is larger than the excited-storage state splitting (\({}^{1}P_{1}\)-\({}^{1}D_{2}\), 200 THz) defining the resonant control field frequency, to first order the control field does not possess sufficient energy per photon to excite FWM noise. In Fig. 3a we show the measured signal-to-noise ratio of our memory, defined as the ratio of the average retrieved signal field photon number to the average noise photon number, as a function of the average input photon number of a weak coherent state. We fit the measured data to a linear function in order to extract a signal-to-noise ratio of SNR = 1800 at an average of 1 input photon per pulse. This represents a retrieved single-photon fidelity of \(\mathcal{F}=1-1/(\text{SNR}+1)=0.9994\), which is the highest noise-limited fidelity of any \(\Lambda\)-type quantum memory to date [6]. We believe the noise performance of our memory is limited by two-photon absorption of our control field and fluorescence from either a high-lying atomic orbital or the silica windows of our heat pipe oven, both of which are near the dark count rate of our detectors. The ultra-low noise performance of our memory combined with its ultrabroad bandwidth allows us to perform a novel fidelity characterization experiment. In Figs. 3b and 3c we show the predicted and reconstructed amplitude and phase of our output signal field, respectively. The predicted amplitude and phase are generated via integration of the Maxwell-Bloch equations with an input signal field generated via the Fourier transform of our measured signal spectrum, assuming a flat spectral phase (see Methods). The output field has two components, the field transmitted after the storage control field pulse, and the field retrieved via the retrieval pulse. The amplitude and phase in Fig. 3c are reconstructed from measured spectral interferograms between the output field and a known reference (see Methods). This spectral interference technique relies on the low-noise, broadband operation of our memory, which makes high-resolution measurement of spectral interference experimentally feasible. The agreement between the output field amplitude and phase in Figs. 3b and 3c is imperfect due to the limited stability and acquisition time of our interferometer, the assumption of a flat input phase, and the resolution of our spectrometer, but the agreement is sufficient to accurately extract linear and quadratic components of the retrieved signal field temporal phase, which are necessary for spectral-temporal compression and shaping of the retrieved field, as some applications may require. ## 2 Discussion In this work, we have demonstrated a novel and scalable approach to enhancing atomic-vapor quantum memory efficiency via tunable collisional broadening. This approach overcomes the trade-off between memory bandwidth and storage efficiency present in the literature, constituting a significant advance in photonic quantum memory performance. Our barium-based quantum memory exhibits simultaneously near-optimal storage efficiency, a Doppler-limited lifetime, a telecom-wavelength control field, and the highest noise-limited fidelity of any \(\Lambda\)-type quantum memory to date.
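The quoted fidelity follows directly from the measured SNR:

```python
# Noise-limited fidelity implied by the measured signal-to-noise ratio
snr = 1800
print(f"{1 - 1 / (snr + 1):.4f}")   # 0.9994, the value quoted above
```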
Our memory operates in a Near-Off-Resonant Memory or NORM regime, which balances resonant reabsorption loss and weak off-resonant light-matter coupling, and the absence of four-wave mixing noise combined with the broad bandwidth of our signal field allows for full amplitude and phase reconstruction of the output signal field via spectral interference. Taken as a whole, this performance makes our memory a promising candidate for applications in quantum communication, multiphoton state preparation, and local quantum processing, but further work is needed to transform our memory into a practically usable device. We discuss these limitations and future improvements here. Figure 3: **Noise performance and full signal field amplitude and phase reconstruction.** **a**, Measured ratio of average retrieved photon number to average noise photon number (signal-to-noise ratio, SNR). **b**, Predicted amplitude and phase of photonic field after the memory, including transmitted (left) and retrieved (right) components. **c**, Experimental amplitude and phase of the photonic field after the memory, reconstructed from spectral interference with a known reference. The most important bottleneck for broadband ensemble-based quantum memories lies in their limited storage lifetime [6]. A common figure of merit often quoted for ensemble-based memories is the time-bandwidth product -- \(\mathrm{TBP}=T\times BW\), where \(T\) is the \(1/e\) memory lifetime and \(BW\) is the signal field bandwidth. TBP represents the memory lifetime in multiples of the pulse duration, and is therefore a rough estimate of how many operations could be performed in a photonic quantum processor during the storage time at a clock rate comparable to the signal bandwidth. The time-bandwidth product in this work of \(\mathrm{TBP}=980\) is comparable to the state-of-the-art for broadband ensemble memories [29, 37, 38, 39], but this simple calculation conceals the fact that clock rates beyond a few GHz are not compatible with contemporary fast electronics and electro-optics, and may not be usable in a practical device. The time-clock-rate product \(\mathrm{TRP}=T\times R\), where \(R\) is clock rate, represents the number of clock cycles for which a quantum memory can store a photonic qubit. For our memory the TRP is only \(\sim 1\) for a standard 2 GHz CPU clock, and is even lower for most quantum photon pair source repetition rates, which are typically in the MHz range [40, 41, 42]. This means that in a quantum processor with 2 GHz clock rate, our memory can only store a photonic qubit for 1 clock cycle. A TRP equal to (but ideally much greater than) 1 is a prerequisite for a useful memory device, and our memory is the first in the ultrabroadband regime to meet this threshold. As noted in Sec. 1.2, our barium memory has significant room for improvement in memory lifetime (\(T\)), and therefore in TRP. The lifetime of the \({}^{1}D_{2}\) state in the bare atom is 0.25 sec [15]; if our memory lifetime could saturate this bound, this would correspond to a 2-GHz TRP of \(5\times 10^{8}\). Recent work on mitigating Doppler dephasing in atomic ensemble quantum memories [30] has direct application to this work, and may lead to significant improvement of our TRP.
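The TBP and TRP arithmetic is simple enough to verify directly; inferring the \(1/e\) lifetime from the quoted TBP and bandwidth is an assumption of this sketch.

```python
# Back-of-envelope check of the figures of merit quoted above
BW = 880e9               # signal bandwidth (Hz)
TBP = 980                # quoted time-bandwidth product
T = TBP / BW             # implied 1/e lifetime, ~1.1 ns (inferred, not quoted)
R = 2e9                  # a standard 2 GHz clock
print(T, T * R)          # ~1.1e-09 s and TRP of order 1, as stated

T_bare = 0.25            # bare-atom 1D2 radiative lifetime (s)
print(T_bare * R)        # 5e8: the quoted 2-GHz TRP bound
print(T_bare * 100e9)    # ~2.5e10: the O(10^10) bound with a 100 GHz-class clock
```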
Our memory TRP could also be improved by increasing the clock rate \(R\); currently, broadband single-photon sources tend to operate at 1-10 MHz count rates, often limited by detector saturation, but in principle future improvements could bring these count rates up to the \(\mathcal{O}(100)\) GHz bandwidth of our memory. With these two improvements (longer memory lifetime and higher repetition rate sources), the TRP for our memory could in principle improve to a bound of \(\mathcal{O}(10^{10})\). Instead of using the \(1/e\) lifetime in calculating TRP, which is defined relative to the end-to-end efficiency at zero storage time, one could further consider an absolute lifetime in the calculation of TRP, e.g., the lifetime defined by 90% absolute end-to-end efficiency, which is a considerably more demanding metric (no broadband quantum memory to date can achieve \(>\)1 TRP with this threshold). In the context of scalability, one practical limitation of this work in particular is the use of a high-temperature heat pipe oven vapor cell with large spatial extent (\(\sim\)1 ft) and power consumption (1000 W) (see Methods). Cold atomic ensemble quantum memories tend to have similar practical limitations, but miniaturization of these experiments is in principle possible [43, 44, 45, 46]. Due to the low vapor pressure and high melting temperature of barium, the generation of a thermal vapor of sufficient optical depth requires long propagation lengths and/or temperatures above 900 \({}^{\circ}\)C, for which creating long-lifetime vapor cells is difficult [47]. Light-induced or electrically-induced atomic desorption [48, 49] and laser ablation [50, 51] are alternative methods that may be used to generate a dense cloud of barium vapor, and may be more amenable to miniaturization, though the optical depths generated in these processes tend to be low (\(<\)10). Work on a compact, high-density, low-power source of atomic barium is ongoing. In conclusion, we have experimentally demonstrated a simultaneously high-efficiency, high-speed, and low-noise atomic ensemble quantum memory. Our approach to increasing memory efficiency via tunable collisional broadening is resource-efficient, and leads to record efficiencies in the ultrabroadband regime. We have carefully considered the strengths and limitations of this work, and have suggested avenues for future improvement. With these improvements, the quantum memory developed in this work may serve as a critical enabling technology for quantum applications in communication, metrology, and computing. Acknowledgments. This work was supported by NSF grant Nos. 1640968, 1806572, and 2207822; and NSF Award DMR1747426. We thank Andrey Mironov and Kavita Desai for helpful discussion related to barium-argon molecule formation; Ran Finkelstein, Eilon Poem, and Ofer Firstenberg for helpful discussion related to mitigation of Doppler dephasing; Yujie Zhang and Dong Beom Kim for helpful discussion related to the apparatus and measurement; and Ernest Northern and Jim Brownfield for expert machining of the heat-pipe oven. ## 3 Methods ### Open-ended barium heat pipe oven. We employ a home-built stainless steel open-ended heat pipe oven [52, 53, 54] to generate a neutral ensemble of atomic barium in the presence of controllable argon buffer gas pressure (0-1333 mbar).
The heat pipe is loaded with natural abundance solid barium metal fragments (American Elements) at room temperature under argon atmosphere, which melt and vaporize when the heat pipe is brought to 800-900 \({}^{\circ}\)C (Ba melting point: 727 \({}^{\circ}\)C) via an external resistive heater. A stainless steel mesh wick is inserted into the main chamber of the heat pipe to ensure convective flow and to prevent "hot spots." The two regions of the chamber before the windows are water cooled to 18 \({}^{\circ}\)C, and two \({}^{1}\!/\!{}_{4}\) inch diameter apertures are inserted into the chamber, one on each end, to reduce the flow of barium vapor toward the windows. The heated region of the oven is 12 inches in length, and is surrounded by three clamshell resistive heaters. Two heaters with power consumption of 218 W heat the top of the heat pipe, and one heater with 600 W power consumption heats the bottom of the heat pipe. ### Experimental setup. We provide a simplified experimental diagram in Fig. 4. We employ an \(\mathcal{O}(1)\) mJ pulse energy, \(\mathcal{O}(100)\) fs, 1 kHz repetition rate, 800 nm Ti:Sapphire amplified laser system (Spectra-Physics) cascaded with a tunable white-light seeded amplified wavelength converter (Light Conversion) to produce \(\mathcal{O}(100)\) \(\mu\)J control field pulses between 1400 and 1700 nm center wavelength. We use a frequency-resolved optical gating (FROG) device (Mesa Photonics) to verify our control field pulses are Fourier-transform limited. A small fraction of this control field is split off and used to generate our signal field via sum-frequency generation with an 877 nm continuous wave diode laser in a room-temperature \(\beta\)-barium-borate (BBO) crystal (Newlight Photonics). The phasematching function of the sum-frequency generation process sets the signal field spectrum and 880 GHz full-width at half-maximum (FWHM) bandwidth (500 fs Fourier-limited FWHM duration). The control field is split into two pulses with controllable delay (retrieval delay in Fig. 4) before being focused and overlapped with the signal field on a dichroic mirror (Semrock). The signal field is also split into two pulses, one of which is sent to the heat pipe while one is reserved to act as a reference for the spectral interference measurements. The signal and control field waist radii in the center of the heat pipe oven are 109(3) and 247(4) \(\mu\)m, respectively. After the heat pipe, the signal field is split from the control field with a dichroic mirror and 4 cascaded interference filters (Semrock), each with \(>\)93% transmission at the signal field wavelength, before being recombined with the reference field and coupled into single-mode fiber. The in-fiber signal field is sent to either a high-quantum-efficiency spectrometer (Oxford Instruments, Andor) or a Si avalanche photodiode (Excelitas) and time-to-digital converter (ID Quantique). Figure 4: **Experimental schematic.** Simplified diagram of the barium quantum memory experiment. BS: Beam Splitter; QWP: Quarter Wave Plate; HWP: Half Wave Plate; PBS: Polarizing Beam Splitter; APD: Avalanche Photodiode. ### Collisionally broadened \({}^{1}P_{1}\) state linewidth measurements. We perform three separate spectroscopic measurements of the \({}^{1}S_{0}\) to \({}^{1}P_{1}\) transition at varying temperature and argon buffer gas pressure: white-light spectroscopy, scanning narrowband spectroscopy (147 MHz bandwidth, 3 ns probe), and coherent femtosecond spectroscopy (4.4 THz, 100 fs probe). The optical depth and collisionally broadened linewidth extracted from each method agree within measurement error with each other and with previous work [55].
Measured peak optical depths range from 25 to 50, depending on the heat pipe temperature, and the measured \({}^{1}P_{1}\) state linewidth is \(\mathcal{O}(100)\) GHz, and depends linearly on argon buffer gas pressure, as shown in Fig. 1c. The equivalent optical depth of our system in a natural-linewidth (\(\Gamma_{\mathrm{nat}}=120\) MHz [56]) atomic ensemble (the so-called 'cold OD') is roughly \(d\Gamma/\Gamma_{\mathrm{nat}}=10^{5}\). From kinetic theory [57, 58] and the Van der Waals radii of barium and argon, we calculate the barium-barium and barium-argon diffusion coefficients, mean free path, and mean time between collisions for varying argon pressure and system temperature. For the experimental settings in this work (argon pressures between 1 and 1000 mbar and temperatures of 800-900 \({}^{\circ}\)C), the expected mean time between collisions is \(\mathcal{O}(1-10)\) ns. At longer timescales probed via white light spectroscopy and scanning narrowband spectroscopy, we probe the impact broadening regime where phase discontinuities in the time domain emission (or absorption) of the radiating dipoles account for the broadened spectral line, whereas for the shorter timescales probed via coherent femtosecond spectroscopy, we probe the quasistatic broadening regime where the presence of many perturbers requires an averaging over the atom-perturber potential energy surface, and therefore line broadening [59, 60, 23, 24, 25, 61].
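A back-of-the-envelope kinetic-theory estimate reproduces this collisional timescale; the Van der Waals radii and the 100 mbar, 1100 K operating point below are illustrative textbook values rather than numbers taken from the paper.

```python
# Hard-sphere kinetic-theory estimate of the mean time between Ba-Ar
# collisions, reproducing the O(1-10) ns scale quoted above.
import numpy as np

kB = 1.380649e-23                  # Boltzmann constant (J/K)
amu = 1.66053907e-27               # atomic mass unit (kg)
T = 1100.0                         # vapor temperature (K), illustrative
p = 100e2                          # 100 mbar argon, in Pa, illustrative
r_Ba, r_Ar = 2.68e-10, 1.88e-10    # assumed Van der Waals radii (m)

n = p / (kB * T)                   # perturber number density (ideal gas)
sigma = np.pi * (r_Ba + r_Ar)**2   # hard-sphere collision cross section
mu = (137.33 * 39.95) / (137.33 + 39.95) * amu   # Ba-Ar reduced mass
v_rel = np.sqrt(8 * kB * T / (np.pi * mu))       # mean relative speed
tau_coll = 1 / (n * sigma * v_rel)
print(f"{tau_coll * 1e9:.1f} ns")                # a few ns at 100 mbar
```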
### Maxwell-Bloch equations. We model our experiment using the well-known Maxwell-Bloch equations [62, 16, 17, 63, 22]: \[\partial_{z}A(z,\tau) =-\sqrt{d}P(z,\tau) \tag{1}\] \[\partial_{\tau}P(z,\tau) =-\bar{\gamma}P(z,\tau)+\sqrt{d}A(z,\tau)-i\frac{\Omega(\tau)}{2} B(z,\tau)\] (2) \[\partial_{\tau}B(z,\tau) =-\gamma_{B}B(z,\tau)-i\frac{\Omega^{*}(\tau)}{2}P(z,\tau), \tag{3}\] where \(z\) represents the one-dimensional spatial coordinate of the atomic ensemble normalized to the ensemble length [i.e., \(z=0\) (\(z=1\)) represents the beginning (end) of the ensemble]; \(\tau=t-z/c\) represents time measured in the comoving frame of the signal photon (\(t\) represents time in the lab frame) normalized to the excited-state coherence decay rate \(\gamma=\Gamma/2\) (\(\Gamma\) is the total excited-state population decay rate, or the linewidth of the \(\left|g\right\rangle\leftrightarrow\left|e\right\rangle\) transition); \(A(z,\tau)\) is the spatially and temporally dependent signal photonic field; \(P(z,\tau)\) and \(B(z,\tau)\), referred to as the atomic polarization and spin wave fields, respectively, are macroscopic field operators representing the atomic coherences \(\left|g\right\rangle\leftrightarrow\left|e\right\rangle\) and \(\left|g\right\rangle\leftrightarrow\left|s\right\rangle\), which are delocalized across the length of the medium and are shown in Fig. 1b as orange and blue shaded regions, respectively; \(d\) is the resonant optical depth of the memory; \(\bar{\gamma}=(\gamma-i\Delta)/\gamma\) is the normalized complex detuning, where the detuning \(\Delta\) is shown schematically in Fig. 1b; and \(\Omega(\tau)\) is the control field Rabi frequency coupling the \(\left|e\right\rangle\) and \(\left|s\right\rangle\) states. All atomic population is assumed to start in the ground state, and the metastable storage state is assumed to have a coherence decay rate \(\gamma_{B}\) that is much smaller than the excited state decay rate (\(\gamma_{B}\ll 1\), in normalized units). The set of equations (1)-(3) defines a map between the input photonic field \(A(z=0,\tau)\) and either the output spin-wave field \(B(z,\tau\rightarrow\infty)\), in the case of the quantum storage operation, or the output photonic field \(A(z=1,\tau)\), in the case of storage and retrieval operations. In quantum storage, the storage efficiency is defined as \(\eta=\int_{0}^{1}dz\left|B(z,\infty)\right|^{2}\big{/}\int_{-\infty}^{\infty}d\tau\left|A(0,\tau)\right|^{2}\), which is well-approximated by \(\eta\approx 1-\int_{-\infty}^{\infty}d\tau\left|A(1,\tau)\right|^{2}\big{/}\int_{-\infty}^{\infty}d\tau\left|A(0,\tau)\right|^{2}\) when all of the signal field population entering the atomic system is transferred into the storage state (i.e., no spontaneous emission loss). This condition is met, for example, in the absorb-then-transfer protocol when there is unit efficiency transfer between \(P\) and \(B\) fields (ensured by a \(\pi\)-pulse-area control field) and no excited-state decay during the storage operation (ensured when the storage-control-field delay \(\Delta\tau^{\rm{ctrl}}\) is much shorter than the decay time \(1/\Gamma\), which is trivially the case in the broadband regime when the pulse duration \(\tau_{\rm{FWHM}}\ll 1/\Gamma\) and \(\Delta\tau^{\rm{ctrl}}\sim\tau_{\rm{FWHM}}\)). This condition is also met in the idealized EIT regime with complete adiabatic elimination of the atomic polarization field, and in the off-resonant regime when the detuning is sufficiently large that no linear absorption takes place in the absence of the control field, and the presence of the control field maps population only to the storage state. This approximation to \(\eta\) is useful as it allows for measurement of storage efficiency through photon counting alone. After storage and retrieval, the total efficiency \(\eta_{\mathrm{tot}}=\int_{-\infty}^{\infty}d\tau\big{|}A(1,\tau>\tau_{\mathrm{ret}})\big{|}^{2}\Big{/}\int_{-\infty}^{\infty}d\tau\big{|}A(0,\tau)\big{|}^{2}\), where \(\tau_{\mathrm{ret}}\) is the retrieval delay or storage time, can also be measured via photon counting by subtracting the photon counts corresponding to transmission during the storage operation ('leaked' photons) from the total output photon counts. Eqs. (1)-(3) can also be used as a numerical fitting function. The fits performed in Fig. 2a and 2c assume Fourier-limited signal and control fields with experimentally measured bandwidth, control field power, signal-control-field delay, detuning, and waist radii; literature values for the control field transition dipole matrix element, and the signal and control field center frequencies; and use the optical depth and excited-state coherence decay rate as fit parameters.
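To make the map defined by Eqs. (1)-(3) concrete, the sketch below integrates them with a simple first-order finite-difference scheme; the optical depth, grids, and pulse parameters are illustrative placeholders (chosen near the absorb-then-transfer regime), not the values used in our fits.

```python
# Sketch: first-order integration of the Maxwell-Bloch equations (1)-(3).
# All parameters are illustrative placeholders in normalized units.
import numpy as np

nz, nt = 200, 8000
z = np.linspace(0.0, 1.0, nz)                # normalized position in ensemble
tau = np.linspace(-2.0, 8.0, nt)             # comoving time, units of 1/gamma
dz, dt = z[1] - z[0], tau[1] - tau[0]

d = 5.0                                      # resonant optical depth
gamma_bar = 1.0 - 0.0j                       # normalized complex detuning (Delta = 0)
gamma_B = 1e-3                               # spin-wave decay, << 1
tau_fwhm = 0.1                               # broadband signal: tau_FWHM << 1/Gamma

A_in = np.exp(-2 * np.log(2) * (tau / tau_fwhm) ** 2)             # signal A(0,tau)
Omega = 30.0 * np.exp(-2 * np.log(2) * ((tau - 0.1) / 0.1) ** 2)  # ~pi-area control

P = np.zeros(nz, dtype=complex)              # polarization field P(z,tau)
B = np.zeros(nz, dtype=complex)              # spin-wave field B(z,tau)

for it in range(nt):
    # Propagate the signal through the medium at fixed tau, Eq. (1).
    A = A_in[it] - dz * np.sqrt(d) * np.concatenate(([0.0], np.cumsum(P[:-1])))
    # Update atomic polarization and spin wave, Eqs. (2)-(3), forward Euler.
    dP = -gamma_bar * P + np.sqrt(d) * A - 0.5j * Omega[it] * B
    dB = -gamma_B * B - 0.5j * np.conj(Omega[it]) * P
    P, B = P + dt * dP, B + dt * dB

# Storage efficiency per the definition above (simple Riemann sums).
eta = (np.abs(B) ** 2).sum() * dz / ((np.abs(A_in) ** 2).sum() * dt)
print(f"storage efficiency estimate: {eta:.3f}")
```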
### Near-Off-Resonant Memory (NORM) operation. It may be unexpected _a priori_ that an ensemble-based memory with a given optical depth, linewidth, and control field parameters can possess a finite, non-zero optimal detuning. Especially in the ultrabroadband regime considered in this work, where we employ the absorb-then-transfer protocol that is limited in memory efficiency by linear absorption, one would naively expect that the maximal memory efficiency always occurs on resonance, where linear absorption is maximized. Here we aim to provide some physical intuition as to why this is not the case. Our intuition relies on understanding the control fields used in ensemble-based memory to possess their own adiabaticity, distinct from the memory adiabaticity. As in the rest of the literature, we define the free-space memory adiabaticity as \(\chi=d\tau_{\mathrm{FWHM}}\gamma\)[16, 17, 63], which gives an indication of how slowly varying the signal field is relative to the response of the medium defined by \(d\) and \(\gamma\). As in Refs. [22, 64], we define the memory parameters \(\mathcal{M}\equiv(d,\tau_{\mathrm{FWHM}}\gamma)\), and for simplicity we limit our analysis to the case of Gaussian control fields, where the optimal Gaussian control field for a given set of memory parameters is uniquely defined by \(\mathcal{G}(\mathcal{M})\). For a given control field, we can invert this function and find the corresponding optimal memory parameters \(\mathcal{M}^{\prime}(\mathcal{G})=(d^{\prime},\tau_{\mathrm{FWHM}}\gamma^{\prime})\) and effective adiabaticity for a particular control field \(\chi^{\prime}\). When \(\chi^{\prime}<\chi\) (the control-field adiabaticity is less than the memory adiabaticity) we observe NORM operation: a maximal total memory efficiency at finite, non-zero detuning. In this case, the memory's linear absorption is stronger than what is optimal for the control field being used, and it is beneficial to increase the detuning slightly to effectively reduce the linear absorption of the memory. Another way to explain this behavior is in terms of reabsorption loss. In optically thick atomic ensembles, some fraction of the retrieved signal field must propagate through the atomic ensemble before reaching the free-space output port of the memory. This propagation introduces additional loss to the memory operation, as the retrieved signal field has some probability of being re-absorbed by the ensemble. This is a well-known source of memory inefficiency [65], and is strongest on resonance, where the signal field overlaps with the absorption spectrum of the ensemble. It is therefore often beneficial to detune from resonance to avoid reabsorption loss, but increasing the detuning without increasing the control field strength can also decrease the light-matter coupling and therefore the memory efficiency. A tradeoff between these two effects leads to a maximal memory efficiency at non-zero two-photon detuning. We note, however, that this explanation is incomplete, as it does not account for a control field sufficiently strong that it opens a transparency window at the signal frequency and eliminates reabsorption loss. The description above in terms of memory and control-field adiabaticities is more general and complete. In Fig. 6, we numerically simulate several combinations of memory parameters and control field parameters that elucidate NORM operation and verify our intuition. We choose three sets of memory parameters that correspond to the three resonant memory regimes: \(\mathcal{M}=(5,0.1)\), \((7.5,0.4)\), and \((50,1.5)\) for the ATT, ATS, and EIT regimes, respectively. For each set of memory parameters, we find the three sets of unique, optimal Gaussian control field parameters \(\mathcal{G}(\mathcal{M})=\left(\theta,\Delta\tau^{\mathrm{ctrl}},\tau^{\mathrm{ctrl}}_{\mathrm{FWHM}}\right)\), where \(\theta\) is the control field pulse area (units of \(\pi\)), \(\Delta\tau^{\mathrm{ctrl}}\) is the control field delay relative to the signal field (units of \(\tau_{\mathrm{FWHM}}\)), and \(\tau^{\mathrm{ctrl}}_{\mathrm{FWHM}}\) is the control field duration (intensity FWHM, units of \(\tau_{\mathrm{FWHM}}\)). Fig. 6: **Simulations of near-off-resonant-memory operation.** **a-c**, Total memory efficiencies plotted as a function of detuning for the absorb-then-transfer (ATT) (**a**), Autler-Townes splitting (ATS) (**b**), and electromagnetically induced transparency (EIT) (**c**) protocols. Solid curves show the memory efficiencies for each protocol when applied in the ATT (blue), ATS (orange), and EIT (green) memory parameter regimes (see Methods section 3.6). Maximal efficiency occurs at non-zero detuning when the memory adiabaticity (dependent on memory regime) is larger than the effective adiabaticity for a particular control field (dependent on memory protocol).
As the control field parameters define the memory protocol, each \(\mathcal{G}(\mathcal{M})\) constitutes a different memory protocol; we use \(\mathcal{G}(\mathcal{M})=(1.0789,0.76176,0.52137)\), \((2.63177,-0.23817,1.23829)\), and \((10.05845,-0.54359,1.33658)\) for the ATT, ATS, and EIT protocols, respectively. Fig. 6a shows numerical applications of the ATT protocol in the ATT (blue), ATS (orange), and EIT (green) regimes, and Fig. 6b and c show the same for the ATS and EIT protocols, respectively. Clear near-off-resonant-memory operation occurs in Fig. 6a and b when applying a less adiabatic memory protocol in a more adiabatic memory regime. In some cases, e.g., when using the ATT and ATS protocols in the EIT regime, the memory efficiency is near zero on resonance and only increases to appreciable values for near-resonant detunings. As one would expect, each memory protocol is most efficient in its respective regime. Our experiment and the data shown in Fig. 2c are most similar to the ATS regime in Fig. 6a, where we employ the ATT protocol (due to the available control field power and pulse duration) in a regime more suitable to the ATS (or a mixed ATT-ATS) protocol.
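The adiabaticity bookkeeping behind these observations can be restated in a few lines of code (our illustration; it uses only the regime parameters quoted above and the fact that each protocol \(\mathcal{G}(\mathcal{M})\) is optimal for, and hence shares the adiabaticity of, its defining regime).

```python
# Sketch of the NORM criterion chi' < chi using the memory parameters above.
regimes = {"ATT": (5.0, 0.1), "ATS": (7.5, 0.4), "EIT": (50.0, 1.5)}  # (d, tau*gamma)

def chi(d, tau_gamma):
    """Free-space memory adiabaticity chi = d * tau_FWHM * gamma."""
    return d * tau_gamma

# Each protocol G(M) is optimal for its defining regime, so its effective
# adiabaticity chi' equals the adiabaticity of that regime.
for protocol, mp in regimes.items():
    chi_prime = chi(*mp)
    for regime, mr in regimes.items():
        norm_expected = chi_prime < chi(*mr)   # NORM operation predicted
        print(f"{protocol} protocol in {regime} regime: "
              f"chi'={chi_prime:5.1f}, chi={chi(*mr):5.1f}, NORM: {norm_expected}")
```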
### Amplitude and phase reconstruction via spectral interferometry. In order to reconstruct the retrieved signal field amplitude and phase from spectral interference measurements, we first consider the general case of two ultrafast pulses with electric fields \(A_{1}(\tau)\) [\(A_{1}(\omega)\)] and \(A_{2}(\tau)\) [\(A_{2}(\omega)\)] in the temporal (spectral) domain. We take \(A_{1}(\tau)\) to be a reference pulse with known amplitude and phase and \(A_{2}(\tau)\) to be a modified pulse similar in spectral bandwidth to \(A_{1}(\tau)\) but with differing amplitude \(|A_{2}(\tau)|\) and temporal phase \(\phi_{2}(\tau)\), which we aim to measure. If we combine \(A_{1}(\tau)\) and \(A_{2}(\tau)\) on a beamsplitter with time delay \(\Delta\tau\), the resulting interference spectrum in one output port of the beamsplitter can be recorded on a spectrometer and used to reconstruct \(A_{2}(\tau)\). The details of this process are as follows. The Fourier decomposition of \(A_{j}(\tau)\) (for \(j=1,2\)) is given by \(A_{j}(\tau)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}d\omega\,|A_{j}(\omega)|e^{i[\omega\tau+\phi_{j}(\omega)]}\), where \(A_{j}(\omega)=|A_{j}(\omega)|e^{i\phi_{j}(\omega)}\). The Fourier decomposition of the signal impinging on the spectrometer is therefore: \(\frac{1}{2\sqrt{\pi}}\int_{-\infty}^{\infty}d\omega\,\big{[}i|A_{1}(\omega)|e^{i\phi_{1}(\omega)}\,+\,|A_{2}(\omega)|e^{i[\omega\Delta\tau+\phi_{2}(\omega)]}\big{]}e^{i\omega\tau}\), where we have assumed (without loss of generality) that \(A_{1}(\tau)\) is reflected at the beamsplitter and \(A_{2}(\tau)\) is transmitted. The spectrometer detects the spectral intensity of the signal impinging on it: \[S(\omega)=\left|A_{1}(\omega)\right|^{2}+\left|A_{2}(\omega)\right|^{2}+2|A_{1}(\omega)||A_{2}(\omega)|\sin{[\omega\Delta\tau+\phi_{\rm{dif}}(\omega)]}, \tag{4}\] where \(\phi_{\rm{dif}}(\omega)=\phi_{2}(\omega)-\phi_{1}(\omega).\) Both \(\left|A_{1}(\omega)\right|^{2}\) and \(\left|A_{2}(\omega)\right|^{2}\) (and therefore \(\left|A_{1}(\omega)\right|\) and \(\left|A_{2}(\omega)\right|\) by applying the square root) can be measured trivially by blocking the opposite input port of the beamsplitter and recording the single-path spectrum. If \(\Delta\tau\) is known, and is smaller than the inverse of the spectrometer resolution, everything in Eq. (4) is known except for \(\phi_{\mathrm{dif}}(\omega)\). We can therefore measure the interferogram \(S(\omega)\), fit to Eq. (4), extract \(\phi_{\mathrm{dif}}(\omega)\) [and therefore \(\phi_{2}(\omega)\) for known \(\phi_{1}(\omega)\)], and reconstruct \(A_{2}(\omega)\) and \(A_{2}(\tau)\) using the relations above.
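A compact numerical sketch of this reconstruction on synthetic data is given below. Note that it recovers \(\phi_{\mathrm{dif}}(\omega)\) from the analytic signal of the fringe term rather than by an explicit fit to Eq. (4); all pulse shapes, the phase, and the delay are placeholders.

```python
# Sketch: extract phi_dif(omega) from the interferogram of Eq. (4).
# Synthetic placeholder data throughout; phase is recovered up to 2*pi*k.
import numpy as np
from scipy.signal import hilbert

n = 4000
omega = np.linspace(-3.0, 3.0, n)               # detuning grid (arb. units)
A1 = np.exp(-omega**2)                           # reference amplitude |A1(omega)|
A2 = 0.7 * np.exp(-omega**2)                     # signal amplitude |A2(omega)|
phi_dif = 0.5 * omega**2 - 0.3 * omega           # placeholder phase difference
delta_tau = 50.0                                 # known interferometer delay

S = A1**2 + A2**2 + 2 * A1 * A2 * np.sin(omega * delta_tau + phi_dif)  # Eq. (4)

# |A1|^2 and |A2|^2 are measured separately (one input port blocked at a time),
# so the pure fringe term sin(omega*delta_tau + phi_dif) can be isolated:
R = (S - A1**2 - A2**2) / (2 * A1 * A2)
theta = np.unwrap(np.angle(hilbert(R))) + np.pi / 2   # analytic-signal phase
phi_rec = theta - omega * delta_tau                   # phi_dif up to 2*pi*k

err = (phi_rec - phi_dif) - (phi_rec[n // 2] - phi_dif[n // 2])
print(f"max interior reconstruction error: {np.max(np.abs(err[n//4:3*n//4])):.2e}")
```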
In experiment, we follow this procedure by splitting the signal field as generated via sum-frequency generation (see Sec. 3.2) into two paths; one path is sent to the barium heat pipe quantum memory for storage and retrieval, and the other path propagates in free space and acts as a reference. The two paths are recombined on a beamsplitter before being sent to a single-photon-level spectrometer with \(\sim\)0.04 nm resolution. With the memory off (heat pipe at 20 \({}^{\circ}\)C, where the number density of gas phase barium atoms is negligible), we observe spectral interference between the two paths as shown in Fig. 7a. Figure 7: **Spectral Interferometry.** **a**, Non-unit visibility measured spectral interference between the two interferometer paths with the memory off (solid blue curve) occurs at finite integration time due to averaging of many short-duration interference spectra with varying phase (shaded blue curves). **b**, Measured spectral interference visibility (markers) as a function of spectrometer integration time. Error bars represent statistical error from photon counting statistics propagated through a sinusoidal fit. Solid curve is a fit to the data based on a time-varying Gaussian phase distribution (see Methods). **c-d**, Spectral interference between the transmitted and reference (**c**), and transmitted, retrieved, and reference (**d**) pulses with the memory on. Shaded regions designate the expected bounds for non-unit interference visibility according to our model (see Methods). At spectrometer integration times less than 0.1 sec we achieve near unit visibility spectral interference, but the statistical error due to photon counting statistics is significant and makes spectral reconstruction of the signal field with the memory on impossible. We must therefore consider longer spectrometer integration times, where the interference visibility is significantly below unity. Measured interference visibilities as a function of spectrometer integration time are shown in Fig. 7b, along with a model fit derived from first principles: We define the time-averaged visibility \(\overline{V}=(\overline{I_{\mathrm{max}}}-\overline{I_{\mathrm{min}}})/(\overline{I_{\mathrm{max}}}+\overline{I_{\mathrm{min}}})\) in terms of the time-averaged maximum and minimum intensities, \(\overline{I_{\mathrm{max}}}\) and \(\overline{I_{\mathrm{min}}}\), respectively. We assume that one can convert the spectral interference in, e.g., Fig. 7a into a purely sinusoidal interference spectrum and an envelope function (which, in experiment, we measure by blocking one interferometer path at a time and summing the resulting single-path spectra). In this case, the time-averaged maximum and minimum sinusoid intensities are \(\overline{I_{\mathrm{max}}}=\int d\phi\,P(\phi,t)\sin^{2}(\phi+\pi/2)\) and \(\overline{I_{\mathrm{min}}}=\int d\phi\,P(\phi,t)\sin^{2}(\phi)\), for a phase distribution \(P(\phi,t)\) at integration time \(t\). For a Gaussian phase distribution, \(P(\phi,t)\sim e^{-\phi^{2}/[2\sigma(t)^{2}]}\), the time-dependence is contained solely in the parameter \(\sigma(t)\), and leads to a time-averaged visibility \(\overline{V}=e^{-2\sigma(t)^{2}}\). The exact time dependence of \(\sigma(t)\) depends on the specific experimental apparatus. We achieve reasonable agreement with experimental data using the general form \(\sigma(t)=f_{1}t^{f_{2}}\) for fit parameters \(f_{1}\) and \(f_{2}\), which evaluate to \(f_{1}=0.06\) and \(f_{2}=0.3\) in our experiment, as shown in Fig. 7b. Using this model for non-unit visibility interference, along with the measured spectral intensities of the transmitted, retrieved, and reference fields, we estimate the expected upper and lower bounds for non-unit visibility spectral interference shown in Fig. 7c and d. Fig. 7c shows the measured interference spectrum after combining the transmitted and reference pulses on a beamsplitter after the memory. We isolate the transmitted pulse from the retrieved pulse experimentally by blocking the retrieval control field. The interference spectrum shows significantly asymmetrical visibility, as expected for a transmitted pulse with linear temporal phase relative to the reference pulse. When unblocking the retrieval control field, we observe the interference spectrum in Fig. 7d, which again has an asymmetrical interference visibility, but with a significantly more complicated interference pattern. We use the method described above to reconstruct the amplitude and phase (relative to the reference pulse) of the transmitted and retrieved signal field pulses, leading to the results shown in Fig. 4(c). ### Signal-field frequency dependence. As a further characterization step, we measure the signal-field frequency dependence of our memory. We keep the center frequency of our control field fixed at around \(5\Gamma\) off resonance and vary the center frequency of our signal field, scanning over the near-resonant coupling spectrum due to our Gaussian control field. In Fig. 8 we report the measured storage, retrieval, and total memory efficiencies along with fits to independent Gaussian functions. The storage efficiency spectrum is approximately a factor of \(\sqrt{2}\) wider than the retrieval and total efficiency spectra as the storage operation requires the action of a single control field pulse, whereas the retrieval (and therefore total) operation requires the action of two control field pulses.
2306.16756
Comment on "Multitime quantum communication: Interesting but not counterfactual"
In a recent paper, Robert Griffiths [Phys. Rev. A 107, 062219 (2023)] analyzed a protocol for transmission of information between two parties introduced by Salih et al. [Phys. Rev. Lett. 110, 170502 (2013)]. There is considerable controversy about the counterfactuality of this protocol, and Griffiths suggested resolving it by introducing a new measure of channel usage, which he called "Cost". I argue that this measure is not appropriate, because the original interaction-free measurement protocol which triggered the definition of the concept of counterfactuality is not counterfactual according to this measure.
Lev Vaidman
2023-06-29T07:53:57Z
http://arxiv.org/abs/2306.16756v1
# Comment on "Multitime quantum communication: Interesting but not counterfactual" ###### Abstract In a recent paper, Robert Griffiths [Phys. Rev. A **107**, 062219 (2023)] analyzed a protocol for transmission of information between two parties introduced by Salih et al. [Phys. Rev. Lett. **110**, 170502 (2013)]. There is considerable controversy about the counterfactuality of this protocol, and Griffiths suggested resolving it by introducing a new measure of channel usage, which he called "Cost". I argue that this measure is not appropriate, because the original interaction-free measurement protocol which triggered the definition of the concept of counterfactuality is not counterfactual according to this measure. Griffiths [1] analyzed the counterfactuality of the communication protocol [2]. The term 'counterfactual' for describing quantum protocols was coined by Penrose [3] in describing the interaction-free measurement (IFM) introduced by Elitzur and Vaidman [4]: "Counterfactuals are things that might have happened, although they did not in fact happen." In a successful run of the IFM, the presence of an opaque object was found with the help of a probe that could have been absorbed by the object, but actually was not. Jozsa [5] applied this idea to 'counterfactual computation', a setup in which one particular outcome of a computation becomes known despite the fact that the computer did not run the algorithm. The controversy arose when Hosten et al. [6] modified the Jozsa setup, claiming to achieve counterfactuality for all outcomes of the computation. In the language of the IFM, the Hosten et al. protocol finds both the presence and the absence of an opaque object in a counterfactual manner. The difficulty in defining the counterfactuality of the protocol for the case of the absence of the object is that we cannot say that the probe was not present because it was not absorbed by the object. Instead, the argument for counterfactuality was that the probe was not present in a particular place because, if it were there, it could not have reached the final detector. Vaidman [7] pointed out that this classical way of considering the location of the quantum probe leads to a contradiction with the symmetry of the quantum description of the probe in the two places, one in which the probe is claimed to be absent and the other in which everyone agrees that it was present. Instead of this classical physics argument, Vaidman suggested an operational definition of the presence of the probe as the place where it left a trace similar to the trace of a probe that was well localized there. According to this definition, the Hosten et al. protocol was not counterfactual. Salih et al. [2] applied the Hosten et al. idea to "counterfactual communication", claiming that in their communication protocol the particle was not present in the transmission channel. Vaidman objected again [8], claiming that it is counterfactual only according to the classical physics argument, which cannot be accepted due to the associated contradiction, and that it is not counterfactual according to the trace criterion. The controversy continued with numerous publications [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25], but essentially all of them were about counterfactuality in the case of finding that the place is empty, not about the counterfactuality of the original interaction-free measurement of the presence of an object.
In particular, when the transmitted bit was 1, corresponding to the blocking of Bob's channel, the trace left in the communication channel was exactly zero, so the protocol was counterfactual according to both definitions. The controversy was only about the case of bit 0, when Bob did not block the channel. In this case, some trace was left in the channel, and the discussion was about its size and about the justification for naming the protocol "counterfactual" when the trace was small but not vanishing. A separate question in discussions of the protocols, apart from the counterfactuality, was the efficiency of the protocols. Sometimes the particle did not return to Alice, and these events corresponded to the failure of the protocol. The original IFM protocol had an efficiency of only \(\frac{1}{4}\), while in the Salih et al. protocol, depending on parameters, the efficiency could (theoretically) be arbitrarily close to 1. In the event of failure, the particle was in the transmission channel. It is an essential part of the counterfactual phenomenon: we get information without the particle being in the transmission channel due to the possibility of the particle being there, even though in the legitimate events of the communication protocol the particle was not there. In the IFM case, in the legitimate events, the detector in the dark port of the Mach-Zehnder interferometer clicked, and in the Salih et al. protocol these were the clicks of detectors \(D_{1}\) and \(D_{2}\) (but not \(D_{3}\)). Griffiths in his paper tried to clarify the controversy by analyzing the presence of the probe in the communication channel, but he missed the target. Contrary to the literature on this subject, he attributed the term "counterfactual" to the issue of the efficiency of the protocol. He writes: "The term "counterfactual" in the original SLAZ paper has the following significance.... if the number of steps in an SLAZ protocol is sufficiently large, the magnitude of the amplitude sent through the channel in each step can be made very small and vanishes in the limit as the number of steps tends to infinity." Griffiths introduced a new criterion that quantified the presence of the probe in the communication channel: "a well-defined measure of channel usage here called "Cost", equal to the absolute square of the amplitude sent through the channel". The problem is that Cost measures the average usage of the communication channel, including the cases in which the communication fails, where the usage of the channel should not be taken into account. In the IFM [4], which uses a balanced Mach-Zehnder interferometer, the "absolute square of the amplitude sent through the channel" is \(\frac{1}{2}\); that is, according to the Cost criterion, the protocol is not counterfactual, in spite of the fact that it _defined_ the term 'counterfactual'. Therefore, Griffiths' analysis of the presence of the particle in the transmission channel of the Salih et al. protocol based on Cost might be interesting, but it sheds no light on the question of the counterfactuality of communication protocols. This work has been supported in part by the U.S.-Israel Binational Science Foundation Grant No. 735/18 and the Israel Science Foundation Grant No. 2064/19.
2304.03244
Exploring fully heavy scalar tetraquarks $QQ\overline{Q}\overline{Q}$
The masses, current couplings and widths of the fully heavy scalar tetraquarks $X_{\mathrm{4Q}}=QQ\overline{Q}\overline{Q}$, $Q=c, b$ are calculated by modeling them as four-quark systems composed of axial-vector diquark and antidiquark. The masses $m^{(\prime)}$ and couplings $ f^{(\prime)}$ of these tetraquarks are computed in the context of the QCD sum rule method by taking into account a nonperturbative term proportional to the gluon condensate $\langle \alpha _{s}G^{2}/ \pi \rangle$. Results $ m=(6570 \pm 55)~\mathrm{MeV}$ and $m^{\prime}=(18540 \pm 50)~\mathrm{MeV}$ are used to fix kinematically allowed hidden-flavor decay channels of these states. It turns out that, the processes $X_{\mathrm{4c}}\rightarrow J/\psi J/\psi $, $X_{\mathrm{4c}}\rightarrow \eta _{c}\eta _{c}$, and $X_{\mathrm{4c }}\rightarrow \eta _{c}\chi _{c1}(1P)$ are possible decay modes of $X_{ \mathrm{4c}}$. The partial widths of these channels are evaluated by means of the couplings $g_{i}, i=1,2,3$ which describe strong interactions of tetraquark $X_{\mathrm{4c}}$ and mesons at relevant vertices. The couplings $ g_{i}$ are extracted from the QCD three-point sum rules by extrapolating corresponding form factors $g_{i}(Q^2) $ to the mass-shell of a final meson. The mass of the scalar tetraquark $X_{\mathrm{4b}}$ is below the $\eta_b \eta_b$ and $\Upsilon(1S)\Upsilon(1S)$ thresholds, therefore it does not fall apart to these bottomonia, but transforms to conventional particles through other mechanisms. Comparing $m=(6570 \pm 55)~\mathrm{MeV}$ and $ \Gamma _{\mathrm{4c}}=(110 \pm 21)~\mathrm{MeV}$ with parameters of structures observed by the LHCb, ATLAS and CMS collaborations, we interpret $ X_{4c}$ as the resonance $X(6600)$ reported by CMS. Comparisons are made with other theoretical predictions.
S. S. Agaev, K. Azizi, B. Barsbay, H. Sundu
2023-04-06T17:26:31Z
http://arxiv.org/abs/2304.03244v2
# Exploring fully heavy scalar tetraquarks \(QQ\overline{Q}\overline{Q}\) ###### Abstract The masses, current couplings and widths of the fully heavy scalar tetraquarks \(X_{4\mathrm{Q}}=QQ\overline{Q}\overline{Q}\), \(Q=c,b\) are calculated by modeling them as four-quark systems composed of an axial-vector diquark and antidiquark. The masses \(m^{(\prime)}\) and couplings \(f^{(\prime)}\) of these tetraquarks are computed in the context of the QCD sum rule method by taking into account a nonperturbative term proportional to the gluon condensate \(\langle\alpha_{s}G^{2}/\pi\rangle\). The results obtained for \(m=(6570\pm 55)\) MeV and \(m^{\prime}=(18540\pm 50)\) MeV are used to fix kinematically allowed decay channels of these states. It turns out that the processes \(X_{4\mathrm{c}}\to J/\psi J/\psi\), \(X_{4\mathrm{c}}\to\eta_{\mathrm{c}}\eta_{\mathrm{c}}\), and \(X_{4\mathrm{c}}\to\eta_{\mathrm{c}}\chi_{\mathrm{c1}}(1P)\) are possible decay modes of \(X_{4\mathrm{c}}\). Partial widths of these channels are evaluated by means of the couplings \(g_{i},i=1,2,3\), which describe strong interactions of the tetraquark \(X_{4\mathrm{c}}\) and mesons at the relevant vertices. The couplings \(g_{i}\) are extracted from QCD three-point sum rules by extrapolating the corresponding form factors \(g_{i}(Q^{2})\) to the mass shell of the final meson. The mass of the scalar tetraquark \(X_{4\mathrm{b}}\) is below the \(\eta_{b}\eta_{b}\) and \(\Upsilon(1S)\Upsilon(1S)\) thresholds; therefore, \(X_{4\mathrm{b}}\) is a strong-interaction-stable particle. Comparing \(m=(6570\pm 55)\) MeV and \(\Gamma_{4\mathrm{c}}=(110\pm 21)\) MeV with the parameters of structures observed by the LHCb, ATLAS and CMS collaborations, we interpret \(X_{4\mathrm{c}}\) as the resonance \(X(6600)\) reported by CMS. Comparisons are made with other theoretical predictions. ## I Introduction Conventional hadron spectroscopy encompasses a variety of quark-antiquark mesons and three-quark (antiquark) baryons with different contents and spin-parities. But the existence of multiquark particles composed of more than three valence partons is not forbidden by any physical theory or model. Features of such exotic states became an object of theoretical studies just after the invention of the quark-parton model and the non-abelian field theory of strong interactions. Quantitative investigations of multiquark hadrons started from analyses performed by Jaffe in Refs. [1; 2] using the MIT quark-bag model. In Ref. [1] he made an assumption about the four-quark \(q^{2}\overline{q}^{2}\) nature of light mesons from the lowest scalar nonet to explain the mass hierarchy of these particles. Another intriguing result is connected with a state composed of six light quarks, \(S=uuddss\)[2]. This double-strange multiquark compound would be stable against strong decays provided such a particle really exists. Then the hexaquark \(S\) may transform to ordinary hadrons only through weak processes and, as a result, have a mean lifetime \(\tau\approx 10^{-10}\)s, which is considerably longer than that of conventional mesons. Stability against strong and/or electromagnetic decays is an important question of exotic meson physics: Stable four-quark particles (tetraquarks) with long mean lifetimes may be discovered in various hadronic processes relatively easily. Therefore, theoretical investigations of such tetraquarks were and remain on the agenda of high energy physics. Compounds containing heavy \(QQ\) diquarks (\(Q=c\) or \(b\)) and light antidiquarks are real candidates for stable exotic mesons.
A group of hypothetical particles \(QQ\overline{Q}^{\prime}\overline{Q}^{(\prime)}\) and \(QQ\overline{q}\overline{q}\) were explored already in Refs. [3; 4; 5], in which it was shown that exotic mesons built of only heavy quarks are unstable particles. But states with content \(QQ\overline{q}\overline{q}\) may form stable structures if the ratio \(m_{Q}/m_{q}\) is large. Conclusions about the stable nature of the isoscalar axial-vector tetraquark \(T^{-}_{bb\overline{u}\overline{d}}\) were also made in Ref. [6], whereas four-quark mesons with heavy diquarks \(bc\) and \(cc\) may be either stable or unstable particles. More detailed analyses of the fully heavy four-quark mesons \(X_{4\mathrm{c}}=cc\overline{c}\overline{c}\), \(X_{2\mathrm{bc}}=bc\overline{b}\overline{c}\) and \(X_{4\mathrm{b}}=bb\overline{b}\overline{b}\) were performed in Refs. [7; 8; 9; 10; 11; 12], in which different features of these particles were explored by means of numerous methods and schemes. For instance, in Ref. [7] the masses of fully heavy tetraquarks were found by solving the nonrelativistic Schrodinger equation. In accordance with this article, the scalar and axial-vector tetraquarks \(X_{4\mathrm{c}}\), \(X_{2\mathrm{bc}}\) are under the di-\(J/\psi\) and \(J/\psi\Upsilon(1S)\) thresholds, and only tensor particles can be seen in the di-\(J/\psi\) and \(J/\psi\Upsilon(1S)\) invariant mass distributions. At the same time, all fully beauty exotic mesons \(X_{4\mathrm{b}}\) reside below the \(\Upsilon(1S)\Upsilon(1S)\) threshold, and cannot be observed in this mass distribution. Masses of the scalar tetraquarks \(X_{4\mathrm{c}}\) and \(X_{4\mathrm{b}}\) were estimated also in Ref. [8]. The results obtained there, \(m(X_{4\mathrm{c}})=(6192\pm 25)\) MeV and \(m(X_{4\mathrm{b}})=(18826\pm 25)\) MeV, allowed the authors to study decay channels and production of these particles. Because \(m(X_{4\mathrm{c}})\) is below the di-\(J/\psi\) but above the \(\eta_{\mathrm{c}}\eta_{\mathrm{c}}\) threshold, \(X_{4\mathrm{c}}\) does not decay to \(J/\psi\) mesons, while the process \(X_{4\mathrm{c}}\to\eta_{\mathrm{c}}\eta_{\mathrm{c}}\) is a kinematically allowed mode. Similarly, \(X_{4\mathrm{b}}\) cannot decay to a pair of mesons \(\Upsilon(1S)\Upsilon(1S)\), whereas \(X_{4\mathrm{b}}\to\eta_{\mathrm{b}}\eta_{\mathrm{b}}\) is the possible channel. Interesting predictions about the particles \(X_{4\mathrm{c}}\) and \(X_{4\mathrm{b}}\) were made in Ref. [10], in which masses of the states \(cc\overline{c}\overline{c}\) and \(bb\overline{b}\overline{b}\) with different spin-parities were calculated by applying the sum rule method. It was demonstrated that masses of the scalar \(J^{\rm PC}=0^{++}\) tetraquarks \(X_{\rm 4c}\) and \(X_{\rm 4b}\), except ones with \(C\otimes C\) structures, vary inside the limits \(6.44-6.59\) GeV and \(18.45-18.59\) GeV, respectively. Subsequently, \(X_{\rm 4c}\) decays to \(\eta_{c}\eta_{c}\), \(J/\psi J/\psi\), and \(\eta_{c}\chi_{c1}(1P)\) meson pairs, whereas \(X_{\rm 4b}\) is a strong-interaction-stable particle: Presumably a scalar four-quark state \(X_{\rm 4b}\) built of a pseudoscalar diquark and antidiquark can decay to \(\eta_{b}\eta_{b}\) and \(\Upsilon(1S)\Upsilon(1S)\) mesons. In accordance with Ref. [11], the scalar and tensor \(X_{\rm 4c}\) have masses \(5.99\) GeV and \(6.09\) GeV, respectively, and can decay to mesons \(\eta_{c}\eta_{c}\), whereas the di-\(J/\psi\) channel remains forbidden. Experimental studies of two-charmonia or two-bottomonia production in \(pp\) and \(p\overline{p}\) collisions provided valuable information on the nature and decay channels of fully heavy exotic mesons.
Thus, a pair of \(J/\psi\) mesons was observed by the LHCb, CMS and D0 collaborations [13; 14; 15], respectively. The \(J/\psi\Upsilon(1S)\) and \(\Upsilon(1S)\Upsilon(1S)\) pairs were fixed and investigated by the D0 and CMS experiments [16; 17]. In the four-quark picture such final states imply production of the intermediate states \(cc\overline{c}\overline{c}\), \(bc\overline{b}\overline{c}\) and \(bb\overline{b}\overline{b}\) with their subsequent decays to pairs of heavy conventional mesons. The discovery of the doubly charmed baryon \(\Xi_{cc}^{++}=ccu\) by the LHCb collaboration [18] gave strong impetus to investigations of doubly and fully heavy tetraquarks. Thus, the mass of \(\Xi_{cc}^{++}\) was used as an input parameter to estimate the mass of the axial-vector tetraquark \(T^{-}_{bb;\overline{u}\overline{d}}\)[19]. Conclusions about the strong-interaction stable nature of the tetraquarks \(bb\overline{u}\overline{d}\), \(bb\overline{u}\overline{s}\), and \(bb\overline{d}\overline{s}\) were made on the basis of heavy-quark symmetry as well [20]. Weak decays of stable double-heavy tetraquarks were explored in our articles [21; 22; 23; 24; 25; 26; 27]. In these papers, we calculated the masses and current couplings of the tetraquarks \(bb\overline{u}\overline{d}\), \(bb\overline{u}\overline{s}\) and \(bc\overline{u}\overline{d}\) with spin-parities \(J^{\rm P}=0^{+}\), \(1^{+}\), as well as parameters of the scalar state \(bs\overline{u}\overline{d}\). We evaluated the full widths of these structures by considering their numerous semileptonic and nonleptonic weak decay channels. The class of fully heavy exotic mesons \(QQ^{(\prime)}\overline{Q}\overline{Q}^{(\prime)}\) was explored in Refs. [28; 29; 30; 31; 32]. Predictions of some of these papers [30; 31] confirm in a modified form the results discussed above. But there are also publications which contradict such conclusions. In fact, using lattice simulations the authors of Ref. [28] did not find evidence for tetraquarks \(X_{\rm 4b}\) with different spin-parities below the lowest thresholds in the relevant channels. Recently, LHCb reported new structures in the di-\(J/\psi\) mass distribution extracted from \(pp\) data at c.m. energies 7, 8, and 13 TeV [33]. The LHCb observed a threshold enhancement in nonresonant di-\(J/\psi\) production from 6.2 to 6.8 GeV centered at 6.49 GeV. A narrow peak at about 6.9 GeV and a resonance around 7.2 GeV were seen as well. The narrow state labeled \(X(6900)\) has the parameters \[m_{1}^{\rm LHCb} = (6905\pm 11\pm 7)~{}{\rm MeV},\] \[\Gamma_{1}^{\rm LHCb} = (80\pm 19\pm 33)~{}{\rm MeV}, \tag{1}\] when assuming no interference with the nonresonant single-parton scattering (NRSPS) continuum, and \[m_{2}^{\rm LHCb} = (6886\pm 11\pm 11)~{}{\rm MeV},\] \[\Gamma_{2}^{\rm LHCb} = (168\pm 33\pm 69)~{}{\rm MeV}, \tag{2}\] when one takes into account interference of NRSPS with a threshold enhancement. This experimental information was detailed and extended by the ATLAS and CMS collaborations [34; 35]. In Ref. [34] ATLAS announced three resonances \(X(6200)\), \(X(6600)\), and \(X(6900)\) in the di-\(J/\psi\) channel with the parameters \[m_{0}^{\rm ATL} = 6220\pm 50^{+40}_{-50}~{}{\rm MeV},\] \[\Gamma_{0}^{\rm ATL} = 310\pm 120^{+70}_{-80}~{}{\rm MeV}, \tag{3}\] \[m_{1}^{\rm ATL} = 6620\pm 30^{+20}_{-10}~{}{\rm MeV},\] \[\Gamma_{1}^{\rm ATL} = 310\pm 90^{+60}_{-110}~{}{\rm MeV}, \tag{4}\] and \[m_{2}^{\rm ATL} = 6870\pm 30^{+60}_{-10}~{}{\rm MeV},\] \[\Gamma_{2}^{\rm ATL} = 120\pm 40^{+30}_{-10}~{}{\rm MeV}. \tag{5}\]
The resonance \(X(7300)\) with the mass and width \[m_{3}^{\rm ATL} = 7220\pm 30^{+20}_{-30}~{}{\rm MeV},\] \[\Gamma_{3}^{\rm ATL} = 100^{+130+60}_{-70-50}~{}{\rm MeV}, \tag{6}\] was fixed in the \(J/\psi\psi^{\prime}\) channel. The \(X(6200)\) and \(X(6600)\) are new resonances which belong to the enhancement in the \(6.2-6.8\) GeV region observed by LHCb. It seems reasonable to suppose that LHCb fixed a superposition of these structures. The resonance \(X(7300)\) is close to the structure at 7.2 GeV reported by LHCb. The resonances \(X(6600)\), \(X(6900)\) and \(X(7300)\) discovered and studied by CMS have the following masses and widths: \[m_{1}^{\rm CMS} = (6552\pm 10\pm 12)~{}{\rm MeV},\] \[\Gamma_{1}^{\rm CMS} = (124\pm 29\pm 34)~{}{\rm MeV}, \tag{7}\] \[m_{2}^{\rm CMS} = (6927\pm 9\pm 5)~{}{\rm MeV},\] \[\Gamma_{2}^{\rm CMS} = (122\pm 22\pm 19)~{}{\rm MeV}, \tag{8}\] and \[m_{3}^{\rm CMS} = (7287\pm 19\pm 5)~{}{\rm MeV},\] \[\Gamma_{3}^{\rm CMS} = (95\pm 46\pm 20)~{}{\rm MeV}, \tag{9}\] respectively. The CMS collaboration measured with good precision the parameters of the structure seen by LHCb around 7.2 GeV. Summing up, we can state that there are four resonances in the range \(6.2-7.3\) GeV discovered by different collaborations in the di-\(J/\psi\) and \(J/\psi\psi^{\prime}\) mass distributions. One of them, \(X(6900)\), was confirmed by all three collaborations, whereas \(X(6600)\) only by ATLAS and CMS. The observations made by LHCb stimulated further detailed studies of fully heavy exotic mesons [36; 37; 38; 39; 40; 41; 42; 43]. Needless to say, all models and technical tools available in high energy physics were activated to explore these problems. Interesting results concerning properties of fully heavy tetraquarks were obtained using the sum rule method in Refs. [36; 37; 38; 39]. For example, depending on the type of interpolating current, the mass of the scalar tetraquark \(cc\overline{c}\overline{c}\) was found within the limits \(6.44-6.47\) GeV [36]. Fully heavy diquark-antidiquark states and hadronic molecules were analyzed also in Ref. [39], in which the resonance \(X(6900)\) was interpreted as a molecule \(\chi_{c0}\chi_{c0}\) or/and a tetraquark built of a pseudoscalar diquark and antidiquark. The LHCb data were considered in Ref. [41] in the framework of a coupled-channel approach: It was argued that in the di-\(J/\psi\) system there exists a near-threshold state \(X(6200)\) with spin-parities \(0^{++}\) or \(2^{++}\). Coupled-channel effects may also generate a pole structure, which in Ref. [43] was identified with the resonance \(X(6900)\). The analysis performed there allowed the authors to predict also a bound state \(X(6200)\), and broad and narrow resonances \(X(6680)\) and \(X(7200)\), respectively. Information from the ATLAS and CMS collaborations considerably clarified the experimental status of structures above the di-\(J/\psi\) threshold, and generated new interesting assumptions about their nature [44; 45; 46; 47; 48; 49]. Indeed, in Ref. [44] the \(X(6200)\) was assigned to be the ground-level tetraquark state with \(J^{\rm PC}=0^{++}\) or \(1^{+-}\), whereas its first radial excitation was interpreted as \(X(6600)\). Using the relativized Godfrey-Isgur diquark model, the authors of Ref. [47] proposed to consider the resonances starting from \(X(6200)\) as the \(1S,\,\,1P/2S,\,\,1D/2P,\) and \(2D/3P/4S\) tetraquark states. Similar interpretations were suggested in the context of the relativistic quark model as well [45].
As is seen, there are numerous alternatives to describe the structures reported by the different collaborations. In the present article, we address these new data and explore to this end the fully charmed tetraquark \(X_{4\rm c}\) with \(J^{\rm PC}=0^{++}\) by calculating its mass, current coupling and width. We model \(X_{4\rm c}\) as a diquark-antidiquark structure, and apply the two-point sum rule method to derive the relevant correlation function, including a nonperturbative term \(\sim\langle\alpha_{s}G^{2}/\pi\rangle\). This allows us to determine the mass \(m\) and coupling \(f\) of the tetraquark \(X_{4\rm c}\), and also to fix its possible decay channels. It turns out that the processes \(X_{4\rm c}\to J/\psi J/\psi,\,\,X_{4\rm c}\to\eta_{c}\eta_{c}\), and \(X_{4\rm c}\to\eta_{c}\chi_{c1}(1P)\) are allowed decay modes of \(X_{4\rm c}\). To calculate their partial widths, we make use of the three-point sum rule approach and compute the strong form factors \(g_{i}(q^{2}),\ i=1,2,3\), describing the interaction of particles at the vertices \(X_{4\rm c}J/\psi J/\psi,\,\,X_{4\rm c}\eta_{c}\eta_{c}\), and \(X_{4\rm c}\eta_{c}\chi_{c1}(1P)\), respectively. The obtained predictions for \(g_{i}(q^{2})\), after extrapolation to the mass shell of one of the final mesons, are used to calculate the widths of the aforementioned decay channels and to estimate the full width \(\Gamma_{4\rm c}\) of the tetraquark \(X_{4\rm c}\). Such rather detailed information about \(X_{4\rm c}\) places its comparison with available data on a solid basis and leads to more reliable conclusions. We evaluate also the mass \(m^{\prime}\) of the state \(X_{4\rm b}\) and show that in the \(C\gamma_{\mu}\otimes\gamma^{\mu}C\) diquark-antidiquark model \(X_{4\rm b}\) is a strong-interaction-stable compound. This article is structured in the following way: In Section II, we calculate the masses and current couplings of the tetraquarks \(X_{4\rm c}\) and \(X_{4\rm b}\). The strong decay of \(X_{4\rm c}\) to \(J/\psi J/\psi\) is considered in Sec. III. Partial widths of the processes \(X_{4\rm c}\to\eta_{c}\eta_{c}\) and \(X_{4\rm c}\to\eta_{c}\chi_{c1}(1P)\) are computed in Sec. IV. Here, we find also the full width \(\Gamma_{4\rm c}\) of the tetraquark \(X_{4\rm c}\). The last section is reserved for a discussion of the results and concluding notes. ## II Spectroscopic parameters of the tetraquarks \(X_{4\rm c}\) and \(X_{4\rm b}\) In this section, we calculate the masses \(m^{(\prime)}\) and current couplings \(f^{(\prime)}\) of the tetraquarks \(X_{4\rm c}\) and \(X_{4\rm b}\) by means of the two-point sum rule approach [50; 51]. It is a powerful nonperturbative method developed to investigate features of conventional mesons and baryons. But the QCD sum rule method can also be applied to study multiquark hadrons, such as tetraquarks and pentaquarks. To derive the sum rules necessary for extracting the masses and current couplings of the scalar tetraquarks \(X_{4\rm c}\) and \(X_{4\rm b}\), we begin from an analysis of the two-point correlation function \[\Pi(p)=i\int d^{4}xe^{ipx}\langle 0|{\cal T}\{J(x)J^{\dagger}(0)\}|0\rangle, \tag{10}\] where \({\cal T}\) is the time-ordered product of two currents, and \(J(x)\) is the interpolating current for these states. We model the tetraquarks \(X_{4\rm c}\) and \(X_{4\rm b}\) as structures formed by the axial-vector diquark \(Q^{T}C\gamma_{\mu}Q\) and axial-vector antidiquark \(\overline{Q}\gamma_{\mu}C\overline{Q}^{T}\).
The corresponding interpolating current is given by the formula \[J(x)=Q_{a}^{T}(x)C\gamma_{\mu}Q_{b}(x)\overline{Q}_{a}(x)\gamma^{\mu}C\overline{Q}_{b}^{T}(x), \tag{11}\] where \(a\) and \(b\) are color indices. In Eq. (11) \(Q(x)\) denotes either the \(c\) or \(b\) quark field, and \(C\) is the charge conjugation matrix. The current \(J(x)\) describes the tetraquark with spin-parities \(J^{\rm PC}=0^{++}\). In what follows, we write down formulas for the tetraquark \(X_{4\rm c}\): Expressions for the state \(X_{4\rm b}\) can be obtained from them trivially. The physical side of the sum rule, \(\Pi^{\rm Phys}(p)\), \[\Pi^{\rm Phys}(p)=\frac{\langle 0|J|X_{4\rm c}(p)\rangle\langle X_{4\rm c}(p)|J^{\dagger}|0\rangle}{m^{2}-p^{2}}+\cdots, \tag{12}\] is derived from Eq. (10) by inserting a complete set of intermediate states with the quark content and spin-parities of the tetraquark \(X_{4{\rm c}}\), and performing integration over \(x\). Let us note that in \(\Pi^{\rm Phys}(p)\) the ground-state term is written down explicitly, whereas contributions of higher resonances and continuum states are shown by dots. The correlation function \(\Pi^{\rm Phys}(p)\) can be simplified using the matrix element \[\langle 0|J|X_{4{\rm c}}(p)\rangle=fm, \tag{13}\] which leads to the following expression \[\Pi^{\rm Phys}(p)=\frac{f^{2}m^{2}}{m^{2}-p^{2}}+\cdots. \tag{14}\] The correlator \(\Pi^{\rm Phys}(p)\) has a simple Lorentz structure proportional to \({\rm I}\). Therefore, the rhs of Eq. (14) is the invariant amplitude \(\Pi^{\rm Phys}(p^{2})\). The QCD side of the sum rule, \(\Pi^{\rm OPE}(p)\), has to be computed in the operator product expansion (OPE) with certain accuracy. For this purpose, one substitutes the current \(J(x)\) into the correlator \(\Pi(p)\), contracts the relevant quark fields, and replaces contractions by heavy quark propagators. These manipulations lead to the formula \[\Pi^{\rm OPE}(p)=i\int d^{4}xe^{ipx}\left\{{\rm Tr}\left[\gamma_{\mu}\widetilde{S}_{c}^{b^{\prime}b}(-x)\gamma_{\nu}S_{c}^{a^{\prime}a}(-x)\right]\right.\] \[\times\left[{\rm Tr}\left[\gamma^{\nu}\widetilde{S}_{c}^{aa^{\prime}}(x)\gamma^{\mu}S_{c}^{bb^{\prime}}(x)\right]-{\rm Tr}\left[\gamma^{\nu}\widetilde{S}_{c}^{ba^{\prime}}(x)\gamma^{\mu}\right.\right.\] \[\left.\left.\times S_{c}^{ab^{\prime}}(x)\right]\right]+{\rm Tr}\left[\gamma_{\mu}\widetilde{S}_{c}^{a^{\prime}b}(-x)\gamma_{\nu}S_{c}^{b^{\prime}a}(-x)\right]\] \[\times\left[{\rm Tr}\left[\gamma^{\nu}\widetilde{S}_{c}^{ba^{\prime}}(x)\gamma^{\mu}S_{c}^{ab^{\prime}}(x)\right]-{\rm Tr}\left[\gamma^{\nu}\widetilde{S}_{c}^{aa^{\prime}}(x)\gamma^{\mu}S_{c}^{bb^{\prime}}(x)\right]\right], \tag{15}\] where \[\widetilde{S}_{c}(x)=CS_{c}^{T}(x)C, \tag{16}\] with \(S_{c}(x)\) being the \(c\)-quark propagator. An explicit expression for the heavy quark propagator \(S_{Q}(x)\) can be found in the Appendix. In the case under analysis, the QCD side of the sum rules depends exclusively on propagators of heavy quarks. The heavy quark propagator \(S_{Q}^{ab}(x)\), apart from a perturbative term, contains also components which are linear and quadratic in the gluon field strength. It does not depend on light quark or mixed quark-gluon vacuum condensates, which are the sources of the main nonperturbative contributions to correlation functions. The \(\Pi^{\rm OPE}(p)\) has a simple Lorentz structure \(\sim{\rm I}\): In what follows, the corresponding invariant amplitude will be denoted as \(\Pi^{\rm OPE}(p^{2})\).
Having equated the two functions \(\Pi^{\rm Phys}(p^{2})\) and \(\Pi^{\rm OPE}(p^{2})\), applied the Borel transformation to suppress contributions of higher resonances and continuum states, and subtracted these contributions by employing the assumption of quark-hadron duality [50; 51], we find the required sum rules for the mass and coupling of the tetraquark \(X_{4{\rm c}}\). Calculation of the function \(\Pi^{\rm OPE}(p^{2})\) is the next step in our efforts to derive the sum rules for \(m\) and \(f\). Analyses demonstrate that the Borel-transformed and continuum-subtracted amplitude \(\Pi(M^{2},s_{0})\) has the form \[\Pi(M^{2},s_{0})=\int_{16m_{c}^{2}}^{s_{0}}ds\rho^{\rm OPE}(s)e^{-s/M^{2}}. \tag{17}\] Here, \(\rho^{\rm OPE}(s)\) is the two-point spectral density, which is found as the imaginary part of the invariant amplitude \(\Pi^{\rm OPE}(p^{2})\). The function \(\rho^{\rm OPE}(s)\) contains a perturbative term \(\rho^{\rm pert.}(s)\) and a dimension-4 nonperturbative contribution proportional to \(\langle\alpha_{s}G^{2}/\pi\rangle\). In the Appendix, we write down an analytical expression for \(\rho^{\rm pert.}(s)\), and refrain from presenting the dimension-4 term, which is rather lengthy. Then, the sum rules for \(m\) and \(f\) are given by the formulas \[m^{2}=\frac{\Pi^{\prime}(M^{2},s_{0})}{\Pi(M^{2},s_{0})}, \tag{18}\] and \[f^{2}=\frac{e^{m^{2}/M^{2}}}{m^{2}}\Pi(M^{2},s_{0}), \tag{19}\] respectively. In Eq. (18), we use the notation \(\Pi^{\prime}(M^{2},s_{0})=d\Pi(M^{2},s_{0})/d(-1/M^{2})\). The sum rules Eqs. (18) and (19) depend on the gluon vacuum condensate and on the masses of the \(c\) and \(b\) quarks, the numerical values of which are listed below: \[\langle\frac{\alpha_{s}G^{2}}{\pi}\rangle=(0.012\pm 0.004)~{}{\rm GeV}^{4},\] \[m_{c}=(1.27\pm 0.02)~{}{\rm GeV},\] \[m_{b}=4.18^{+0.03}_{-0.02}~{}{\rm GeV}. \tag{20}\] The choice of working windows for the parameters \(M^{2}\) and \(s_{0}\) is another problem of sum rule computations. They should be fixed in such a way as to meet the constraint imposed on the pole contribution (PC) and to ensure convergence of the operator product expansion. Because in the present article we consider only the nonperturbative term \(\sim\langle\alpha_{s}G^{2}/\pi\rangle\), the pole contribution plays a decisive role in determining \(M^{2}\) and \(s_{0}\). To estimate PC, we use the expression \[{\rm PC}=\frac{\Pi(M^{2},s_{0})}{\Pi(M^{2},\infty)}, \tag{21}\] and require fulfillment of the constraint \({\rm PC}\geq 0.5\). The PC is employed to fix the upper limit of the Borel parameter \(M^{2}\). The lower boundary for \(M^{2}\) is found from a restriction imposed on the nonperturbative term: it should not exceed \(5\%\) of the whole result. The two values of \(M^{2}\) extracted in this way fix the boundaries of the region where \(M^{2}\) can be varied. Calculations for the tetraquark \(X_{4{\rm c}}\) show that the intervals \[M^{2}\in[5.5,7]~{}{\rm GeV}^{2},~{}s_{0}\in[49,50]~{}{\rm GeV}^{2}, \tag{22}\] are appropriate for the parameters \(M^{2}\) and \(s_{0}\), and comply with the limits on PC and the nonperturbative term. Thus, at \(M^{2}=7\ {\rm GeV}^{2}\) the pole contribution is \(0.51\), whereas at \(M^{2}=5.5\ {\rm GeV}^{2}\) it becomes equal to \(0.82\). At the minimum \(M^{2}=5.5\ {\rm GeV}^{2}\), the contribution of the nonperturbative term is negative and forms \(2\%\) of the correlation function. To demonstrate the dynamics of the pole contribution, we plot PC in Fig. 1 as a function of \(M^{2}\) at different \(s_{0}\).
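For illustration, the toy sketch below implements the machinery of Eqs. (17)-(19) and (21) with a placeholder spectral density (the actual \(\rho^{\rm OPE}(s)\) is given in the Appendix); the numbers it prints therefore demonstrate the procedure only, not the physical results quoted below.

```python
# Toy implementation of Eqs. (17)-(19) and (21); the spectral density is a
# placeholder, so the output illustrates the procedure, not the physics.
import numpy as np
from scipy.integrate import quad

mc = 1.27                      # c-quark mass, GeV, Eq. (20)
s_min = 16.0 * mc**2           # four-quark threshold 16*m_c^2, GeV^2

def rho_toy(s):
    """Placeholder spectral density rising from threshold (not the Appendix rho)."""
    return 1e-5 * (s - s_min) ** 3

def Pi(M2, s0):
    """Borel-transformed, continuum-subtracted correlator, Eq. (17)."""
    return quad(lambda s: rho_toy(s) * np.exp(-s / M2), s_min, s0)[0]

def Pi_prime(M2, s0):
    """d Pi / d(-1/M^2): an extra factor of s appears under the integral."""
    return quad(lambda s: s * rho_toy(s) * np.exp(-s / M2), s_min, s0)[0]

M2, s0 = 6.1, 49.5             # central working-window values, Eq. (22)
m = np.sqrt(Pi_prime(M2, s0) / Pi(M2, s0))            # Eq. (18)
f = np.sqrt(np.exp(m**2 / M2) * Pi(M2, s0) / m**2)    # Eq. (19)
PC = Pi(M2, s0) / Pi(M2, 1e4)                          # Eq. (21), s0 -> infinity
print(f"toy mass m = {m:.2f} GeV, coupling f = {f:.2e} GeV^4, PC = {PC:.2f}")
```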
It is seen that the pole contribution exceeds \(0.5\) for all values of the parameters \(M^{2}\) and \(s_{0}\) from Eq. (22). We extract the mass \(m\) and coupling \(f\) of the tetraquark \(X_{4{\rm c}}\) by calculating them at different \(M^{2}\) and \(s_{0}\), and determining their mean values averaged over the regions in Eq. (22). Our predictions for \(m\) and \(f\) read \[m = (6570\pm 55)\ {\rm MeV},\] \[f = (5.61\pm 0.39)\times 10^{-2}\ {\rm GeV}^{4}. \tag{23}\] The results in Eq. (23) correspond to the sum rules' predictions at approximately the middle point of the regions in Eq. (22), i.e., to predictions at the point \(M^{2}=6.1\ {\rm GeV}^{2}\) and \(s_{0}=49.5\ {\rm GeV}^{2}\), where the pole contribution is \({\rm PC}\approx 0.70\). This fact guarantees the dominance of PC in the obtained results, and confirms the ground-state nature of the tetraquark \(X_{4{\rm c}}\). The dependence of \(m\) on the parameters \(M^{2}\) and \(s_{0}\) is depicted in Fig. 2. In the case of the tetraquark \(X_{4{\rm b}}\), the relevant analysis yields for the working intervals of the Borel and continuum subtraction parameters \[M^{2} \in [17.5,18.5]\ {\rm GeV}^{2},\] \[s_{0} \in [375,380]\ {\rm GeV}^{2}. \tag{24}\] The pole contribution in the interval for \(M^{2}\) changes within the limits \[0.72\geq{\rm PC}\geq 0.66. \tag{25}\] At \(M^{2}=17.5\ {\rm GeV}^{2}\) the dimension-4 term constitutes \(\simeq-1.5\%\) of the result. The mass and current coupling of \(X_{4{\rm b}}\) are \[m^{\prime} = (18540\pm 50)\ {\rm MeV},\] \[f^{\prime} = (6.1\pm 0.4)\times 10^{-1}\ {\rm GeV}^{4}. \tag{26}\] Figure 1: The pole contribution PC as a function of the Borel parameter \(M^{2}\) at different \(s_{0}\). The limit \({\rm PC}=0.5\) is shown by the horizontal line. The red triangle shows the point where the mass \(m\) of the tetraquark \(X_{4{\rm c}}\) has been extracted from the sum rule. Figure 2: Mass of the tetraquark \(X_{4{\rm c}}\) as a function of the Borel parameter \(M^{2}\) (left), and as a function of the continuum threshold \(s_{0}\) (right). The behavior of \(m^{\prime}\) as a function of \(M^{2}\) and \(s_{0}\) is shown in Fig. 3. Figure 3: The same as in Fig. 1, but for the mass \(m^{\prime}\) of the tetraquark \(X_{4{\rm b}}\). The mass \(m\) of the tetraquark \(X_{4{\rm c}}\) obtained in the present article nicely agrees with the mass of the resonance \(X(6600)\) fixed by the ATLAS and CMS collaborations, and belongs to the wide threshold enhancement \(6.2-6.8\) GeV in the \(J/\psi J/\psi\) mass distribution seen by LHCb. Therefore, at this level of our knowledge, we consider the tetraquark \(X_{4{\rm c}}\) as a candidate for the \(X(6600)\) state. For a more detailed comparison with the ATLAS and CMS data, and for credible statements about the nature of \(X_{4{\rm c}}\), we need to evaluate its full width. It turns out that the second diquark-antidiquark state considered in this section, i.e., \(X_{4{\rm b}}\), has the mass \(m^{\prime}=18540\) MeV, which is below the lowest \(\eta_{b}\eta_{b}\) threshold in the sector of fully beauty ordinary mesons. In other words, \(X_{4{\rm b}}\) is stable against strong decays to conventional \(b\overline{b}\) mesons. Such a conclusion contradicts earlier publications [3; 4; 5], but confirms the results of recent studies [7; 10]. Therefore, in the next sections we consider strong decays of \(X_{4{\rm c}}\) and calculate its full width.
## III Decay \(X_{4{\rm c}}\to J/\psi J/\psi\) The mass of the tetraquark \(X_{4{\rm c}}\) exceeds the two-meson thresholds in both the \(J/\psi J/\psi\) and \(\eta_{c}\eta_{c}\) channels, therefore the \(S\)-wave processes \(X_{4{\rm c}}\to J/\psi J/\psi\) and \(X_{4{\rm c}}\to\eta_{c}\eta_{c}\) are allowed decay modes of this particle. Another channel, which will be considered in the present article, is the \(P\)-wave decay mode \(X_{4{\rm c}}\to\eta_{c}\chi_{c1}(1P)\). We begin our investigations with an analysis of the process \(X_{4{\rm c}}\to J/\psi J/\psi\). The partial width of this decay is determined by the strong coupling \(g_{1}\) of the particles at the vertex \(X_{4{\rm c}}J/\psi J/\psi\). In the context of the QCD sum rule method, \(g_{1}\) can be extracted from the three-point correlation function \[\Pi_{\mu\nu}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx}\langle 0|{\cal T}\{J_{\mu}^{J/\psi}(y)\] \[\times J_{\nu}^{J/\psi}(0)J^{\dagger}(x)\}|0\rangle, \tag{27}\] where \(J_{\mu}^{J/\psi}(x)\) is the interpolating current for the \(J/\psi\) meson. The current \(J(x)\) is given by Eq. (11); for \(J_{\mu}^{J/\psi}(x)\) we use \[J_{\mu}^{J/\psi}(x)=\overline{c}_{i}(x)\gamma_{\mu}c_{i}(x), \tag{28}\] where \(i=1,2,3\) are the color indices. The 4-momentum of the tetraquark \(X_{4{\rm c}}\) is \(p\), whereas the momenta of the \(J/\psi\) mesons are \(p^{\prime}\) and \(q=p-p^{\prime}\), respectively. We follow the standard prescriptions of the sum rule method and express the correlation function \(\Pi_{\mu\nu}(p,p^{\prime})\) in terms of the phenomenological parameters of the involved particles. Isolating the ground-state contribution to the correlation function (27) from the effects of higher resonances and continuum states, for the physical side of the sum rule, \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\), we get \[\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})=\frac{\langle 0|J_{\mu}^{J/\psi}|J/\psi(p^{\prime})\rangle}{p^{\prime 2}-m_{1}^{2}}\frac{\langle 0|J_{\nu}^{J/\psi}|J/\psi(q)\rangle}{q^{2}-m_{1}^{2}}\] \[\times\langle J/\psi(p^{\prime})J/\psi(q)|X_{4{\rm c}}(p)\rangle\frac{\langle X_{4{\rm c}}(p)|J^{\dagger}|0\rangle}{(p^{2}-m^{2})}+\cdots, \tag{29}\] with \(m_{1}\) being the mass of the \(J/\psi\) meson. The function \(\Pi_{\mu\nu}^{\rm Phys}(p,p^{\prime})\) can be simplified by employing the matrix elements of the tetraquark \(X_{4{\rm c}}\) and \(J/\psi\) meson. The matrix element of \(X_{4{\rm c}}\) is given by Eq. (13), whereas for \(\langle 0|J_{\mu}^{J/\psi}|J/\psi(p)\rangle\) we use \[\langle 0|J_{\mu}^{J/\psi}|J/\psi(p)\rangle=f_{1}m_{1}\varepsilon_{\mu}(p), \tag{30}\] where \(f_{1}\) and \(\varepsilon_{\mu}\) are the decay constant and polarization vector of the \(J/\psi\) meson, respectively. We also model the vertex \(\langle J/\psi(p^{\prime})J/\psi(q)|X_{4{\rm c}}(p)\rangle\) in the form \[\langle J/\psi(p^{\prime})J/\psi(q)|X_{4{\rm c}}(p)\rangle=g_{1}(q^{2})\left[q\cdot p^{\prime}\varepsilon^{*}(p^{\prime})\cdot\varepsilon^{*}(q)\right.\] \[\left.-q\cdot\varepsilon^{*}(p^{\prime})p^{\prime}\cdot\varepsilon^{*}(q)\right]. \tag{31}\]
After these transformations \(\Pi^{\rm Phys}_{\mu\nu}(p,p^{\prime})\) is given by the formula \[\Pi^{\rm Phys}_{\mu\nu}(p,p^{\prime})=g_{1}(q^{2})\frac{fmf_{1}^{ 2}m_{1}^{2}}{\left(p^{2}-m^{2}\right)\left(p^{\prime 2}-m_{1}^{2}\right) \left(q^{2}-m_{1}^{2}\right)}\] \[\times\left[\frac{1}{2}\left(m^{2}-m_{1}^{2}-q^{2}\right)g_{\mu \nu}-q_{\mu}p^{\prime}_{\nu}\right]+\cdots, \tag{32}\] where the ellipses stand for contributions of higher resonances and continuum states. The correlator Eq. (32) contains different Lorentz structures, which may be used to construct the sum rule for \(g_{1}(q^{2})\). We choose to work with the term \(\sim g_{\mu\nu}\) and denote the relevant invariant amplitude by \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\). The Borel transformations over \(p^{2}\) and \(p^{\prime 2}\) of the amplitude \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\) yield \[{\cal B}\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})=g_{1}(q^{2}) fmf_{1}^{2}m_{1}^{2}\] \[\times\frac{m^{2}-m_{1}^{2}-q^{2}}{2(q^{2}-m_{1}^{2})}e^{-m^{2}/M _{1}^{2}}e^{-m_{1}^{2}/M_{2}^{2}}+\cdots. \tag{33}\] The correlation function \(\Pi_{\mu\nu}(p,p^{\prime})\) calculated in terms of heavy quark propagators reads \[\Pi^{\rm OPE}_{\mu\nu}(p,p^{\prime})=-2i^{2}\int d^{4}xd^{4}ye^{ ip^{\prime}y}e^{-ipx}\] \[\times\left\{{\rm Tr}\left[\gamma_{\mu}S_{c}^{ib}(y-x)\gamma_{ \alpha}\widetilde{S}_{c}^{ja}(-x)\gamma_{\nu}\widetilde{S}_{c}^{bj}(x)\gamma ^{\alpha}S_{c}^{ai}(x-y)\right]\right.\] \[\left.-{\rm Tr}\left[\gamma_{\mu}S_{c}^{ia}(y-x)\gamma_{\alpha} \widetilde{S}_{c}^{jb}(-x)\gamma_{\nu}\widetilde{S}_{c}^{bj}(x)\gamma^{\alpha }S_{c}^{ai}(x-y)\right]\right\}. \tag{34}\] The double Borel transform of the invariant amplitude \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\), which corresponds to the term \(\sim g_{\mu\nu}\) in Eq. (34), constitutes the QCD side of the sum rule. After the Borel transformation and subtraction procedures it can be expressed in terms of the spectral density \(\rho(s,s^{\prime},q^{2})\); the latter is determined as the relevant imaginary part of \(\Pi^{\rm OPE}_{\mu\nu}(p,p^{\prime})\), \[\Pi({\bf M}^{2},{\bf s}_{0},q^{2})=\int_{16m_{c}^{2}}^{s_{0}}ds \int_{4m_{c}^{2}}^{s_{0}^{\prime}}ds^{\prime}\rho(s,s^{\prime},q^{2})\] \[\times e^{-s/M_{1}^{2}}e^{-s^{\prime}/M_{2}^{2}}. \tag{35}\] Here, \({\bf M}^{2}=(M_{1}^{2},M_{2}^{2})\) and \({\bf s}_{0}=(s_{0},s_{0}^{\prime})\) are the Borel and continuum threshold parameters, respectively. By equating the Borel transforms of the amplitudes \(\Pi^{\rm OPE}(p^{2},p^{\prime 2},q^{2})\) and \(\Pi^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\), and carrying out the continuum subtraction, one finds the sum rule for \(g_{1}(q^{2})\), which is determined by the expression \[g_{1}(q^{2})=\frac{2}{fmf_{1}^{2}m_{1}^{2}}\frac{q^{2}-m_{1}^{2} }{m^{2}-m_{1}^{2}-q^{2}}\] \[\times e^{m^{2}/M_{1}^{2}}e^{m_{1}^{2}/M_{2}^{2}}\Pi({\bf M}^{2},{ \bf s}_{0},q^{2}), \tag{36}\] where, for simplicity, we do not show explicitly \({\bf M}^{2}\) and \({\bf s}_{0}\) as arguments of the function \(g_{1}(q^{2})\).

The form factor \(g_{1}(q^{2})\) depends on the masses and current couplings (decay constants) of the tetraquark \(X_{4{\rm c}}\) and meson \(J/\psi\), which appear in the numerical computations as input parameters. Information about these parameters is collected in Table 1. It also contains the spectroscopic parameters of the \(\eta_{c}\) and \(\chi_{c1}(1P)\) mesons required to investigate the two other decays of \(X_{4{\rm c}}\). The masses of all mesons are borrowed from Ref. [52].
For the decay constant of the meson \(J/\psi\), we employ the experimental value reported in Ref. [53]. As \(f_{\eta_{c}}\) and \(f_{\chi_{c1}}\), we use predictions made on the basis of the sum rule method in Refs. [54; 55], respectively. To carry out the numerical computations it is also necessary to choose the working regions for the parameters \({\bf M}^{2}\) and \({\bf s}_{0}\). The constraints imposed on \({\bf M}^{2}\) and \({\bf s}_{0}\) are standard restrictions of sum rule calculations and were explained in the previous section. For \(M_{1}^{2}\) and \(s_{0}\), associated with the \(X_{4{\rm c}}\) channel, we use the working windows from Eq. (22).

\begin{table} \begin{tabular}{|c|c|} \hline \hline Parameters & Values (in MeV units) \\ \hline \hline \(m_{1}[m_{J/\psi}]\) & \(3096.900\pm 0.006\) \\ \(f_{1}[f_{J/\psi}]\) & \(409\pm 15\) \\ \(m_{2}[m_{\eta_{c}}]\) & \(2983.9\pm 0.4\) \\ \(f_{2}[f_{\eta_{c}}]\) & \(320\pm 40\) \\ \(m_{3}[m_{\chi_{c1}}]\) & \(3510.67\pm 0.05\) \\ \(f_{3}[f_{\chi_{c1}}]\) & \(344\pm 27\) \\ \hline \hline \end{tabular} \end{table} Table 1: Masses and decay constants of \(\overline{c}c\) mesons, which have been used in numerical computations.

The parameters \((M_{2}^{2},\ s_{0}^{\prime})\) for the \(J/\psi\) channel are changed inside the borders \[M_{2}^{2}\in[4,5]\ {\rm GeV}^{2},\ s_{0}^{\prime}\in[12,13]\ {\rm GeV}^{2}. \tag{37}\] It is known that the sum rule method leads to reliable predictions in the deep-Euclidean region \(q^{2}<0\). For our purposes, it is convenient to introduce a new variable \(Q^{2}=-q^{2}\) and denote the obtained function by \(g_{1}(Q^{2})\). The range of \(Q^{2}\) studied by the sum rule analysis covers the region \(Q^{2}=1-10\ {\rm GeV}^{2}\). The results of the calculations are plotted in Fig. 4.

Figure 4: The sum rule predictions and fit function for the strong coupling \(g_{1}(Q^{2})\). The red diamond denotes the point \(Q^{2}=-m_{1}^{2}\).

But the width of the decay \(X_{4{\rm c}}\to J/\psi J/\psi\) is determined by the form factor \(g_{1}(q^{2})\) at the mass shell \(q^{2}=m_{1}^{2}\). Stated differently, one has to find \(g_{1}(Q^{2}=-m_{1}^{2})\). To solve this problem, we use a fit function \({\cal G}_{1}(Q^{2})\), which at momenta \(Q^{2}>0\) gives the same values as the sum rule calculations, but can be extrapolated to the region \(Q^{2}<0\). In this paper, we employ the functions \({\cal G}_{i}(Q^{2}),\ i=1,2,3\), \[{\cal G}_{i}(Q^{2})={\cal G}_{i}^{0}\exp\left[c_{i}^{1}\frac{Q^{2}}{m^{2}}+c_{ i}^{2}\left(\frac{Q^{2}}{m^{2}}\right)^{2}\right], \tag{38}\] with parameters \({\cal G}_{i}^{0}\), \(c_{i}^{1}\) and \(c_{i}^{2}\). Calculations prove that \({\cal G}_{1}^{0}=1.17\ {\rm GeV}^{-1}\), \(c_{1}^{1}=2.55\), and \(c_{1}^{2}=-2.79\) give good agreement with the sum rule data for \(g_{1}(Q^{2})\) shown in Fig. 4. At the mass shell \(q^{2}=m_{1}^{2}\) the function \({\cal G}_{1}(Q^{2})\) is equal to \[g_{1}\equiv{\cal G}_{1}(-m_{1}^{2})=(5.8\pm 1.2)\times 10^{-1}\ {\rm GeV}^{-1}. \tag{39}\] The partial width of the process \(X_{4{\rm c}}\to J/\psi J/\psi\) can be obtained by employing the following expression \[\Gamma\left[X_{4{\rm c}}\to J/\psi J/\psi\right]=g_{1}^{2}\frac{\lambda}{8\pi} \left(\frac{m_{1}^{4}}{m^{2}}+\frac{2\lambda^{2}}{3}\right), \tag{40}\] where \(\lambda=\lambda(m,m_{1},m_{1})\) and \[\lambda(a,b,c)=\frac{\sqrt{a^{4}+b^{4}+c^{4}-2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{ 2})}}{2a}. \tag{41}\] Then it is easy to find \[\Gamma\left[X_{4{\rm c}}\to J/\psi J/\psi\right]=(43\pm 13)\ {\rm MeV}. \tag{42}\]
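As a sanity check, the extrapolation and the width formula are easy to evaluate numerically. The Python sketch below reproduces Eqs. (38)-(42) at the level of central values (no error propagation), using the fit parameters and masses quoted above; it illustrates the arithmetic only, not the full sum rule analysis.

```python
import math

m, m1 = 6.570, 3.0969          # masses of X_4c and J/psi in GeV

def G_fit(Q2, G0, c1, c2):
    """Fit function of Eq. (38); Q2 in GeV^2."""
    x = Q2 / m**2
    return G0 * math.exp(c1 * x + c2 * x**2)

def kallen(a, b, c):
    """lambda(a, b, c) of Eq. (41)."""
    return math.sqrt(a**4 + b**4 + c**4
                     - 2 * (a**2 * b**2 + a**2 * c**2 + b**2 * c**2)) / (2 * a)

# Extrapolate to the mass shell Q^2 = -m1^2, Eq. (39):
g1 = G_fit(-m1**2, G0=1.17, c1=2.55, c2=-2.79)          # ~0.58 GeV^-1

# Partial width of X_4c -> J/psi J/psi, Eq. (40):
lam = kallen(m, m1, m1)
width = g1**2 * lam / (8 * math.pi) * (m1**4 / m**2 + 2 * lam**2 / 3)
print(f"g1 = {g1:.2f} GeV^-1, Gamma = {width * 1e3:.0f} MeV")   # ~43 MeV
```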
## IV Processes \(X_{4{\rm c}}\to\eta_{c}\eta_{c}\) and \(X_{4{\rm c}}\to\eta_{c}\chi_{c1}(1P)\)

The decays \(X_{4{\rm c}}\to\eta_{c}\eta_{c}\) and \(X_{4{\rm c}}\to\eta_{c}\chi_{c1}(1P)\) can be explored in a similar way. The strong coupling \(g_{2}\) that describes the vertex \(X_{4{\rm c}}\eta_{c}\eta_{c}\) can be extracted from the correlation function \[\Pi(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{-ipx} \langle 0|{\cal T}\{J^{\eta_{c}}(y)\] \[\times J^{\eta_{c}}(0)J^{\dagger}(x)\}|0\rangle, \tag{43}\] where the current \(J^{\eta_{c}}(x)\) is \[J^{\eta_{c}}(x)=\overline{c}_{i}(x)i\gamma_{5}c_{i}(x). \tag{44}\] Separating the ground-state contribution from the effects of higher resonances and continuum states, we write the correlation function (43) in the following form \[\Pi^{\rm Phys}(p,p^{\prime})=\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(p ^{\prime})\rangle}{p^{\prime 2}-m_{2}^{2}}\frac{\langle 0|J^{\eta_{c}}|\eta_{c}(q) \rangle}{q^{2}-m_{2}^{2}}\] \[\times\langle\eta_{c}(p^{\prime})\eta_{c}(q)|X_{4{\rm c}}(p) \rangle\frac{\langle X_{4{\rm c}}(p)|J^{\dagger}|0\rangle}{(p^{2}-m^{2})}+ \cdots, \tag{45}\] where \(m_{2}\) is the mass of the \(\eta_{c}\) meson. We define the vertex composed of a scalar and two pseudoscalar particles by means of the formula \[\langle\eta_{c}(p^{\prime})\eta_{c}(q)|X_{4{\rm c}}(p)\rangle=g_{2}(q^{2})p \cdot p^{\prime}. \tag{46}\] To express the correlator \(\Pi^{\rm Phys}(p,p^{\prime})\) in terms of the physical parameters of the particles \(X_{4{\rm c}}\) and \(\eta_{c}\), we use the matrix element Eq. (13) and \[\langle 0|J^{\eta_{c}}|\eta_{c}\rangle=\frac{f_{2}m_{2}^{2}}{2m_{c}}, \tag{47}\] with \(f_{2}\) being the decay constant of the \(\eta_{c}\) meson. The correlation function \(\Pi^{\rm Phys}(p,p^{\prime})\) then takes the form \[\Pi^{\rm Phys}(p,p^{\prime})=g_{2}(q^{2})\frac{fmf_{2}^{2}m_{2} ^{4}}{4m_{c}^{2}\left(p^{2}-m^{2}\right)\left(p^{\prime 2}-m_{2}^{2}\right)}\] \[\times\frac{m^{2}+m_{2}^{2}-q^{2}}{q^{2}-m_{2}^{2}}+\cdots. \tag{48}\] The function \(\Pi^{\rm Phys}(p,p^{\prime})\) has a simple Lorentz structure proportional to \(\rm I\); hence, the right-hand side of Eq. (48) is the corresponding invariant amplitude \(\widetilde{\Pi}^{\rm Phys}(p^{2},p^{\prime 2},q^{2})\). Using heavy quark propagators, we find for the QCD side of the sum rule \[\Pi^{\rm OPE}(p,p^{\prime})=2i^{2}\int d^{4}xd^{4}ye^{ip^{\prime} y}e^{-ipx}\] \[\times\left\{{\rm Tr}\left[\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha} \widetilde{S}_{c}^{jb}(-x)\gamma_{5}\widetilde{S}_{c}^{bj}(x)\gamma^{\alpha}S _{c}^{ai}(x-y)\right]\right.\] \[\left.-{\rm Tr}\left[\gamma_{5}S_{c}^{ia}(y-x)\gamma_{\alpha} \widetilde{S}_{c}^{jb}(-x)\gamma_{5}\widetilde{S}_{c}^{ai}(x)\gamma^{\alpha}S _{c}^{bi}(x-y)\right]\right\}. \tag{49}\]

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline \(i\) & Channels & \(g_{i}\) (\({\rm GeV}^{-1}\)) & \(\Gamma_{i}\) (\({\rm MeV}\)) \\ \hline \(1\) & \(X_{4{\rm c}}\to J/\psi J/\psi\) & \((5.8\pm 1.2)\times 10^{-1}\) & \(43\pm 13\) \\ \(2\) & \(X_{4{\rm c}}\to\eta_{c}\eta_{c}\) & \((2.9\pm 0.6)\times 10^{-1}\) & \(51\pm 15\) \\ \(3\) & \(X_{4{\rm c}}\to\eta_{c}\chi_{c1}(1P)\) & \(10.9\pm 2.8^{*}\) & \(16\pm 6\) \\ \hline \hline \end{tabular} \end{table} Table 2: Decay channels of the tetraquark \(X_{4{\rm c}}\), strong couplings \(g_{i}\), and partial widths \(\Gamma_{i}\). The coupling \(g_{3}\) marked by a star is dimensionless.
The sum rule for the strong form factor \(g_{2}(q^{2})\) reads \[g_{2}(q^{2})=\frac{4m_{c}^{2}}{fmf_{2}^{2}m_{2}^{4}}\frac{q^{2}-m_ {2}^{2}}{m^{2}+m_{2}^{2}-q^{2}}\] \[\times e^{m^{2}/M_{1}^{2}}e^{m_{2}^{2}/M_{2}^{2}}\widetilde{\Pi}( \mathbf{M}^{2},\mathbf{s}_{0},q^{2}), \tag{50}\] with \(\widetilde{\Pi}(\mathbf{M}^{2},\mathbf{s}_{0},q^{2})\) being the invariant amplitude \(\widetilde{\Pi}^{\text{OPE}}(p^{2},p^{\prime 2},q^{2})\) corresponding to the correlator \(\Pi^{\text{OPE}}(p,p^{\prime})\) after the Borel transformations and subtractions. We carry out the numerical computations using Eq. (50), the parameters of the meson \(\eta_{c}\) from Table 1, and the working regions for \(\mathbf{M}^{2}\) and \(\mathbf{s}_{0}\). The Borel and continuum subtraction parameters \(M_{1}^{2}\) and \(s_{0}\) in the \(X_{4\text{c}}\) channel are chosen as in Eq. (22), whereas for \(M_{2}^{2}\) and \(s_{0}^{\prime}\), which correspond to the \(\eta_{c}\) channel, we employ \[M_{2}^{2}\in[3.5,4.5]\ \text{GeV}^{2},\ s_{0}^{\prime}\in[11,12]\ \text{GeV}^{2}. \tag{51}\] The interpolating function \(\mathcal{G}_{2}(Q^{2})\) has the following parameters: \(\mathcal{G}_{2}^{0}=0.65\ \text{GeV}^{-1}\), \(c_{2}^{1}=3.19\), and \(c_{2}^{2}=-3.34\). For the strong coupling \(g_{2}\), we get \[g_{2}\equiv\mathcal{G}_{2}(-m_{2}^{2})=(2.9\pm 0.6)\times 10^{-1}\ \text{GeV}^{-1}. \tag{52}\] The width of the process \(X_{4\text{c}}\to\eta_{c}\eta_{c}\) is determined by means of the formula \[\Gamma\left[X_{4\text{c}}\to\eta_{c}\eta_{c}\right]=g_{2}^{2}\frac{m_{2}^{2} \widetilde{\lambda}}{8\pi}\left(1+\frac{\widetilde{\lambda}^{2}}{m_{2}^{2}} \right), \tag{53}\] where \(\widetilde{\lambda}=\lambda(m,m_{2},m_{2})\). Finally, we obtain \[\Gamma\left[X_{4\text{c}}\to\eta_{c}\eta_{c}\right]=(51\pm 15)\ \text{MeV}. \tag{54}\] The treatment of the \(P\)-wave decay \(X_{4\text{c}}\to\eta_{c}\chi_{c1}(1P)\) does not generate additional technical details and is performed in the usual manner. The three-point correlator to be considered is \[\Pi_{\mu}(p,p^{\prime})=i^{2}\int d^{4}xd^{4}ye^{ip^{\prime}y}e^{ -ipx}\langle 0|\mathcal{T}\{J_{\mu}^{\chi_{c1}}(y)\] \[\times J^{\eta_{c}}(0)J^{\dagger}(x)\}|0\rangle, \tag{55}\] where \(J_{\mu}^{\chi_{c1}}(x)\) is the interpolating current for the meson \(\chi_{c1}(1P)\), \[J_{\mu}^{\chi_{c1}}(x)=\overline{c}_{j}(x)\gamma_{5}\gamma_{\mu}c_{j}(x). \tag{56}\] In terms of the physical parameters of the involved particles the correlation function has the form \[\Pi_{\mu}^{\text{Phys}}(p,p^{\prime})=g_{3}(q^{2})\frac{fmf_{2}m_ {2}^{2}f_{3}m_{3}}{2m_{c}\left(p^{2}-m^{2}\right)\left(p^{\prime 2}-m_{3}^{2}\right)}\] \[\times\frac{1}{q^{2}-m_{2}^{2}}\left[\frac{m^{2}-m_{3}^{2}-q^{2}}{ 2m_{3}^{2}}p^{\prime}_{\mu}-q_{\mu}\right]+\cdots. \tag{57}\] In Eq. (57) \(m_{3}\) and \(f_{3}\) are the mass and decay constant of the meson \(\chi_{c1}(1P)\). To derive the correlator \(\Pi_{\mu}^{\text{Phys}}(p,p^{\prime})\), we have used the known matrix elements of the tetraquark \(X_{4\text{c}}\) and the meson \(\eta_{c}\), as well as the new matrix elements \[\langle 0|J_{\mu}^{\chi_{c1}}|\chi_{c1}(p^{\prime})\rangle=f_{3}m_{3}\varepsilon _{\mu}^{*}(p^{\prime}), \tag{58}\] and \[\langle\eta_{c}(q)\chi_{c1}(p^{\prime})|X_{4\text{c}}(p)\rangle=g_{3}(q^{2})p \cdot\varepsilon^{*}(p^{\prime}), \tag{59}\] where \(\varepsilon_{\mu}^{*}(p^{\prime})\) is the polarization vector of \(\chi_{c1}(1P)\).
The QCD side \(\Pi_{\mu}^{\text{OPE}}(p,p^{\prime})\) is given by the formula \[\Pi_{\mu}^{\text{OPE}}(p,p^{\prime})=2i^{3}\int d^{4}xd^{4}ye^{ip ^{\prime}y}e^{-ipx}\] \[\times\left\{\text{Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x) \gamma_{\alpha}\widetilde{S}_{c}^{jb}(-x)\gamma_{5}\widetilde{S}_{c}^{bj}(x) \gamma^{\alpha}S_{c}^{ai}(x-y)\right]\right.\] \[\left.-\text{Tr}\left[\gamma_{\mu}\gamma_{5}S_{c}^{ia}(y-x) \gamma_{\alpha}\widetilde{S}_{c}^{jb}(-x)\gamma_{5}\widetilde{S}_{c}^{aj}(x) \gamma^{\alpha}S_{c}^{bi}(x-y)\right]\right\}. \tag{60}\] The sum rule for \(g_{3}(q^{2})\) is derived using the invariant amplitudes corresponding to the terms \(\sim p^{\prime}_{\mu}\) in \(\Pi_{\mu}^{\text{Phys}}(p,p^{\prime})\) and \(\Pi_{\mu}^{\text{OPE}}(p,p^{\prime})\). In the numerical analysis, \(M_{2}^{2}\) and \(s_{0}^{\prime}\) in the \(\chi_{c1}\) channel are chosen in the following way \[M_{2}^{2}\in[4,5]\ \text{GeV}^{2},\ s_{0}^{\prime}\in[13,14]\ \text{GeV}^{2}. \tag{61}\] For the parameters of the fit function \(\mathcal{G}_{3}(Q^{2})\), we get \(\mathcal{G}_{3}^{0}=24.08\), \(c_{3}^{1}=2.98\), and \(c_{3}^{2}=-4.26\). Then the strong coupling \(g_{3}\) is equal to \[g_{3}\equiv\mathcal{G}_{3}(-m_{2}^{2})=10.9\pm 2.8. \tag{62}\] The width of the decay \(X_{4\text{c}}\to\eta_{c}\chi_{c1}(1P)\) can be calculated by means of the expression \[\Gamma\left[X_{4\text{c}}\to\eta_{c}\chi_{c1}(1P)\right]=g_{3}^{2}\frac{ \widetilde{\lambda}^{3}}{24\pi m_{3}^{2}}, \tag{63}\] where \(\widetilde{\lambda}=\lambda(m,m_{2},m_{3})\). Then the width of this process is \[\Gamma\left[X_{4\text{c}}\to\eta_{c}\chi_{c1}(1P)\right]=(16\pm 6)\ \text{MeV}. \tag{64}\] The widths of all three decays are collected in Table 2. Based on these results, it is not difficult to find that \[\Gamma_{4\text{c}}=(110\pm 21)\ \text{MeV}, \tag{65}\] which agrees nicely with the CMS datum \(\Gamma_{1}^{\text{CMS}}\).
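As with the \(J/\psi J/\psi\) channel, these numbers are straightforward to reproduce at the level of central values. The sketch below evaluates the fit functions \(\mathcal{G}_{2,3}\) at the mass shell \(Q^{2}=-m_{2}^{2}\) and then applies Eqs. (53) and (63); small differences from the quoted values reflect rounding of the inputs, and no error propagation is attempted.

```python
import math

m = 6.570                          # X_4c mass, GeV
m2, m3 = 2.9839, 3.51067           # eta_c and chi_c1(1P) masses, GeV

def G_fit(Q2, G0, c1, c2):         # fit function of Eq. (38)
    x = Q2 / m**2
    return G0 * math.exp(c1 * x + c2 * x**2)

def kallen(a, b, c):               # lambda(a, b, c) of Eq. (41)
    return math.sqrt(a**4 + b**4 + c**4
                     - 2 * (a**2*b**2 + a**2*c**2 + b**2*c**2)) / (2 * a)

g2 = G_fit(-m2**2, 0.65, 3.19, -3.34)     # ~0.29 GeV^-1, Eq. (52)
g3 = G_fit(-m2**2, 24.08, 2.98, -4.26)    # ~10.9 (dimensionless), Eq. (62)

lam2, lam3 = kallen(m, m2, m2), kallen(m, m2, m3)
G_etacetac = g2**2 * m2**2 * lam2 / (8 * math.pi) * (1 + lam2**2 / m2**2)  # Eq. (53)
G_etacchic = g3**2 * lam3**3 / (24 * math.pi * m3**2)                      # Eq. (63)

print(f"Gamma[eta_c eta_c]  ~ {G_etacetac*1e3:.0f} MeV")   # ~50, cf. (51 +/- 15)
print(f"Gamma[eta_c chi_c1] ~ {G_etacchic*1e3:.0f} MeV")   # ~15, cf. (16 +/- 6)
print(f"sum with 43 MeV for J/psi J/psi ~ {43 + (G_etacetac + G_etacchic)*1e3:.0f} MeV")
```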
## V Discussion and concluding notes

In the present article, we have performed a detailed analysis of the tetraquark \(X_{4\text{c}}\) by calculating the mass \(m\) and full width \(\Gamma_{4\text{c}}\) of this scalar diquark-antidiquark state. Our findings are in agreement with the experimental data \(m_{1}^{\text{CMS}}=(6552\pm 10\pm 12)\ \text{MeV}\) and \((124\pm 29\pm 34)\) MeV reported by the CMS collaboration. The mass of \(X_{4c}\) is also compatible with \(m_{1}^{\rm ATL}\) if one takes into account the existing experimental and theoretical errors. We have interpreted the ground-level \(1S\) tetraquark \(X_{4c}\), built of axial-vector constituents, as the resonance \(X(6600)\). The dominant decay mode of \(X_{4c}\) is the channel \(X_{4c}\to\eta_{c}\eta_{c}\), whose partial width is larger than \(\Gamma\left[X_{4c}\to J/\psi J/\psi\right]\).

The new fully charmed resonances were observed in the di-\(J/\psi\) mass distribution through \(4\mu\) final states. It is known that decays to the lepton pairs \(e^{+}e^{-}\) and \(\mu^{+}\mu^{-}\) are among the important modes of the \(J/\psi\) meson [52]. But the main channels of the \(\eta_{c}\) meson are decays to hadronic resonances, for example, to \(\rho\rho\) mesons. Naturally, the process \(X_{4c}\to\eta_{c}\eta_{c}\) could not be seen in \(4\mu\) events.

There are numerous publications in which the properties of the tetraquark \(X_{4c}\) were studied using various methods (for a complete list of relevant publications see Ref. [45]). These investigations intensified after the discovery of the resonances \(X(6200)\), \(X(6600)\), \(X(6900)\) and \(X(7300)\). Comparing our result for the mass of \(X_{4c}\) with \((6.46\pm 0.16)\) GeV and \(6.46^{+0.13}_{-0.17}\) GeV from Refs. [10; 36], we see that though \(m\) exceeds them, within the ambiguities of the calculations all predictions are comparable with each other. What is more important, decays to \(J/\psi J/\psi\) pairs are kinematically allowed channels for these structures.

The first resonance \(X(6200)\) in the list of fully charmed states may be a manifestation of the hadronic molecule \(\eta_{c}\eta_{c}\) in the \(J/\psi J/\psi\) spectrum. But to be detected in this spectrum the mass of \(\eta_{c}\eta_{c}\) must exceed the di-\(J/\psi\) threshold \(\simeq 6195\) MeV. In Ref. [39] the authors predicted \(M_{\eta_{c}\eta_{c}}=6029\pm 198\) MeV, which at its upper limit overshoots the di-\(J/\psi\) threshold. Alternatively, the appearance of the near-threshold state \(X(6200)\) may be explained by coupled-channel effects [41].

The next structure \(X(6900)\) can be considered in the tetraquark model provided it is composed of a pseudoscalar diquark and antidiquark. Thus, the mass of such a tetraquark was estimated around \((6.82\pm 0.18)\) GeV and \((6.80\pm 0.27)\) GeV in Refs. [10; 39], respectively. The hadronic molecule \(\chi_{c0}\chi_{c0}\) with the mass \(\simeq 6.93\) GeV is an alternative candidate for the resonance \(X(6900)\) [39].

The heaviest state \(X(7300)\) from this list is presumably a radially excited \(X_{4c}(2S)\) tetraquark. An argument in favor of this assumption came from the ATLAS collaboration, which fixed the resonances \(X(6600)\) and \(X(7300)\) in the \(J/\psi J/\psi\) and \(J/\psi\psi^{\prime}\) mass distributions, respectively. In other words, \[X(7300) \to J/\psi\psi^{\prime},\] \[X(6600) \to J/\psi J/\psi, \tag{66}\] are decay modes of these resonances. The mass gap between \(\psi^{\prime}\) and \(J/\psi\) is around 590 MeV, whereas the mass difference of \(X(7300)\) and \(X(6600)\) equals 600 MeV (ATLAS) and 735 MeV (CMS). It is then natural to suppose that \(X(7300)\) is the first radially excited state of \(X(6600)\). Originally, a similar hypothesis was made in Ref. [56] while considering the main decay channels of the resonances \(Z_{c}(3900)\) and \(Z_{c}(4330)\), \[Z_{c}(4330) \to \psi^{\prime}\pi,\] \[Z_{c}(3900) \to J/\psi\pi. \tag{67}\] It was supposed that \(Z_{c}(4330)\) is the first radial excitation of the tetraquark \(Z_{c}(3900)\). This idea was later confirmed by studies carried out using the diquark-antidiquark model and the sum rule method [57; 58]. In light of this analysis, the assumption about the \(2S\) excited nature of \(X(7300)\) looks plausible and awaits confirmation.

We have also calculated the mass of the fully beauty scalar state \(X_{\rm 4b}\). It turned out that its mass \(m^{\prime}=(18540\pm 50)\) MeV is smaller than the \(\eta_{b}\eta_{b}\) threshold, and hence \(X_{\rm 4b}\) is a strong-interaction stable particle. This tetraquark cannot be observed in the \(\eta_{b}\eta_{b}\) or \(\Upsilon(1S)\Upsilon(1S)\) mass distributions. The stable nature of \(X_{\rm 4b}\) was already predicted in Refs. [7; 10]. Transformation of \(X_{\rm 4b}\) to ordinary mesons may proceed through its weak leptonic and nonleptonic decays. It is clear that the controversial character of conclusions about the nature of fully heavy resonances is connected with the different models and schemes employed for their investigations.
In some articles, for instance, \(X_{\rm 4b}\) can decay to a pair of pseudoscalar mesons \(\eta_{b}\eta_{b}\) but is stable against the \(\Upsilon(1S)\Upsilon(1S)\) channel, whereas in other publications \(X_{\rm 4b}\) is a strong-interaction stable compound. In the case of fully charmed states the same resonance may be interpreted within both the molecule and diquark-antidiquark models. We would like to emphasize that all conclusions about the ground-state and excited \(X_{\rm 4c}\) and \(X_{\rm 4b}\) exotic mesons were drawn using information on the masses of these structures. In our view, in scenarios with four-quark mesons one has to calculate also their widths; otherwise, statements made by relying only on the masses of these states remain not fully convincing. Results of such comprehensive investigations to account for the observed fully charmed resonances will be reported in our forthcoming publications.

Appendix A Heavy quark propagator \(S_{Q}^{ab}(x)\) and spectral density \(\rho^{\rm pert.}(s,\alpha,\beta,\gamma)\)

In the current article, for the heavy quark propagator \(S_{Q}^{ab}(x)\) (\(Q=c,\ b\)), we employ \[S_{Q}^{ab}(x)=i\int\frac{d^{4}k}{(2\pi)^{4}}e^{-ikx}\Bigg\{\frac {\delta_{ab}\left(\not{k}+m_{Q}\right)}{k^{2}-m_{Q}^{2}}-\frac{g_{s}G_{ab}^{ \alpha\beta}}{4}\frac{\sigma_{\alpha\beta}\left(\not{k}+m_{Q}\right)+\left( \not{k}+m_{Q}\right)\sigma_{\alpha\beta}}{(k^{2}-m_{Q}^{2})^{2}}\] \[+\frac{g_{s}^{2}G^{2}}{12}\delta_{ab}m_{Q}\frac{k^{2}+m_{Q}\not{k }}{(k^{2}-m_{Q}^{2})^{4}}+\cdots\Bigg\}. \tag{10}\] Here, we have used the notations \[G_{ab}^{\alpha\beta}\equiv G_{A}^{\alpha\beta}\lambda_{ab}^{A}/2,\ \ G^{2}=G_{\alpha\beta}^{A}G_{A}^{\alpha\beta}, \tag{11}\] where \(G_{A}^{\alpha\beta}\) is the gluon field-strength tensor, and \(\lambda^{A}\) are the Gell-Mann matrices. The indices \(A,B,C\) run in the range \(1,2,\ldots 8\). The invariant amplitude \(\Pi(M^{2},s_{0})\) obtained after the Borel transformation and subtraction procedures is given by the expression \[\Pi(M^{2},s_{0})=\int_{16m_{Q}^{2}}^{s_{0}}ds\rho^{\rm OPE}(s)e^{-s/M^{2}},\] where the spectral density \(\rho^{\rm OPE}(s)\) is determined by the formula \[\rho^{\rm OPE}(s)=\rho^{\rm pert.}(s)+\langle\alpha_{s}G^{2}/\pi\rangle\rho^{ \rm Dim4}(s). \tag{12}\] The components \(\rho^{\rm pert.}(s)\) and \(\rho^{\rm Dim4}(s)\) of the spectral density are \[\rho^{\rm pert.(Dim4)}(s)=\int_{0}^{1}d\alpha\int_{0}^{1-\alpha}d\beta\int_{0}^{1- \alpha-\beta}d\gamma\rho^{\rm pert.(Dim4)}(s,\alpha,\beta,\gamma), \tag{13}\] where the variables \(\alpha\), \(\beta\), and \(\gamma\) are Feynman parameters.
The function \(\rho^{\rm pert.}(s,\alpha,\beta,\gamma)\) has the form \[\rho^{\rm pert.}(s,\alpha,\beta,\gamma)=\frac{\Theta(L_{1})N_{1} ^{2}}{64\pi^{6}N_{2}^{8}N_{3}^{5}(1-\gamma-\beta)^{2}}\left\{-6m_{Q}^{4}(\beta +\gamma-1)^{2}N_{2}^{4}N_{3}^{3}+m_{Q}^{2}N_{2}^{2}N_{3}\left\{3Ls\alpha\left[ N_{2}^{2}(N_{3}-L\alpha)\right.\right.\right.\] \[\left.\left.+LN_{2}\alpha\gamma(-N_{3}(2N_{3}+(\beta+\gamma-1)^{ 2})+4N_{3}\alpha(\beta+\gamma-1)-2\alpha^{2}(L^{2}-2N_{3}))+L^{2}\alpha^{2} \gamma^{2}\left(N_{3}(N_{3}+(\gamma+\beta-1)^{2})\right.\right.\right.\] \[\left.\left.-2N_{3}\alpha(\beta+\gamma-1)+\alpha^{2}(L^{2}-2N_{3} )\right)\right]+2N_{1}\left[-LN_{3}^{2}\alpha-L\alpha(\gamma(\beta+\gamma-1)+ \alpha(\gamma+\beta-1)+\alpha^{2})\right.\] \[\left.\left.\times(\beta(\gamma+\beta-1)+\alpha(\beta+\gamma-1)+ \alpha^{2})+N_{3}\left(\gamma\beta(\beta+\gamma-1)^{2}+\alpha(\beta+\gamma-1) ^{2}+\alpha^{2}(\beta+\gamma-1)(2\beta+2\gamma-1)\right.\right.\right.\] \[\left.\left.\left.+4\alpha^{3}(\beta+\gamma-1)+2\alpha^{4}) \right]\right\}-3L\alpha(L\alpha-N_{3})\left\{2L^{2}s^{2}\alpha^{2}\gamma(N_{3 }-L\alpha)(N_{2}-L\alpha\gamma)+2LN_{1}s\alpha\left[2L^{2}\alpha^{2}\gamma^{2 }(N_{3}-L\alpha)^{3}\right.\right.\] \[\left.\left.+N_{2}^{2}(N_{3}+\gamma(\beta+\gamma-1)-\alpha L)+LN_{2} \alpha\gamma(-3N_{3}-\gamma(\beta+\gamma-1)+3\alpha L)+N_{1}^{2}(-LN_{3} \alpha+(\beta^{2}+(\beta+\alpha)(\alpha+\gamma-1))\right.\right.\] \[\left.\left.\left.\times(\gamma^{2}+(\gamma+\alpha)(\alpha+\beta-1 ))\right]\right\}\right\}, \tag{14}\] In expressions above, \(\Theta(z)\) is the Unit Step function. We have used also the following notations \[N_{1}=s\alpha\beta\gamma\left[\gamma^{3}+2\gamma^{2}(\beta+\alpha -1)+\alpha(\beta+\alpha-1)+\gamma\left(1+\beta^{2}-3\alpha+2\alpha^{2}\right.\right.\] \[\left.\left.+\beta(-2+3\alpha))\right]-m_{Q}^{2}\left[\beta\alpha^{2}( \alpha+\beta-1)^{2}+\gamma^{4}(\alpha+\beta)+\gamma\alpha(\alpha+\beta-1)^{ 2}(2\beta+\alpha)\right.\right.\] \[\left.\left.+2\gamma^{3}(\beta^{2}+\alpha(\alpha-1)+\beta(2\alpha -1))+\gamma^{2}(\beta^{3}+\beta^{2}(5\alpha-2)+\alpha(1-3\alpha+2\alpha^{2})+ \beta(1-6\alpha+6\alpha^{2}))\right],\] \[N_{2}=\beta\alpha(\alpha+\beta-1)+\gamma^{2}(\alpha+\beta)+ \gamma\left[\beta^{2}+\alpha(\alpha-1)+\beta(2\alpha-1)\right],\] \[N_{3}=\gamma^{2}+(\gamma+\alpha)(\beta+\alpha-1),\ \ L=\alpha+\beta+\gamma-1,\ L_{1}=N_{1}/N_{2}^{2}. \tag{15}\]
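The structure of these expressions maps directly onto nested numerical quadrature: a triple integral over the Feynman parameters as in Eq. (13), followed by the Borel integral over \(s\). The Python sketch below shows that generic shape only; `rho_density` is a toy stand-in for the integrand of Eq. (14) (a real implementation would evaluate \(N_{1}\), \(N_{2}\), \(N_{3}\), \(L\), and the \(\Theta(L_{1})\) support cut), so the printed value is illustrative.

```python
import numpy as np
from scipy import integrate

m_Q = 1.27  # c-quark mass in GeV (illustrative input)

def rho_density(s, a, b, g):
    """Toy stand-in for rho^pert.(s, alpha, beta, gamma) of Eq. (14);
    a smooth positive integrand that just keeps the script runnable."""
    L = a + b + g - 1.0
    return max(s - 16.0 * m_Q**2, 0.0) * a * b * g * np.exp(L)

def rho_ope(s):
    # Eq. (13): triple integral over the Feynman parameter simplex.
    val, _ = integrate.tplquad(
        lambda g, b, a: rho_density(s, a, b, g),
        0.0, 1.0,                       # alpha
        0.0, lambda a: 1.0 - a,         # beta
        0.0, lambda a, b: 1.0 - a - b,  # gamma
    )
    return val

def Pi(M2, s0):
    # Borel-transformed, continuum-subtracted amplitude Pi(M^2, s_0).
    val, _ = integrate.quad(lambda s: rho_ope(s) * np.exp(-s / M2),
                            16.0 * m_Q**2, s0)
    return val

print(Pi(6.1, 49.5))
```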
2302.01905
On the Maximum Atom-Bond Sum-Connectivity Index of Graphs
The atom-bond sum-connectivity (ABS) index of a graph $G$ with edges $e_1,\cdots,e_m$ is the sum of the numbers $\sqrt{1-2(d_{e_i}+2)^{-1}}$ over $1\le i \le m$, where $d_{e_i}$ is the number of edges adjacent with $e_i$. In this paper, we study the maximum values of the ABS index over graphs with given parameters. More specifically, we determine the maximum ABS index of connected graphs of a given order and with a fixed (i) minimum degree, (ii) maximum degree, (iii) chromatic number, (iv) independence number, or (v) number of pendent vertices. We also characterize the graphs attaining the maximum ABS values in all of these classes.
Tariq Alraqad, Hisham Saber, Akbar Ali, Abeer M. Albalahi
2023-01-30T03:56:28Z
http://arxiv.org/abs/2302.01905v2
# On the Maximum Atom-Bond Sum-Connectivity Index of Graphs

###### Abstract

Let \(G\) be a graph with \(m\) edges, namely \(e_{1},\cdots,e_{m}\). For \(i\in\{1,\cdots,m\}\), let \(d_{e_{i}}\) be the number of edges adjacent with \(e_{i}\). The sum of the numbers \(\sqrt{1-2(d_{e_{i}}+2)^{-1}}\) over \(1\leq i\leq m\) is known as the atom-bond sum-connectivity (ABS) index of \(G\). A pendant vertex of \(G\) is a vertex having degree \(1\). A subset \(S\) of the vertex set of \(G\) is said to be independent if the vertices of \(S\) are pairwise non-adjacent in \(G\). The maximum number among the cardinalities of all independent sets of \(G\) is known as the independence number of \(G\). The least number of colors required to color the vertices of a graph, so that no two adjacent vertices have the same color, is termed the chromatic number. This paper characterizes the graphs attaining the greatest values of the ABS index over the classes of graphs of a given order and with a fixed (i) chromatic number, (ii) independence number, or (iii) number of pendant vertices.

**keywords**: topological index; atom-bond sum-connectivity; graph

**Mathematics Subject Classification:** 05C07, 05C90.

## 1 Introduction

In this paper, only finite and simple graphs are taken into account. The sets of edges and vertices of a graph \(G\) are denoted, respectively, by \(E(G)\) and \(V(G)\). The degree of a vertex \(v\in V(G)\) is indicated by \(d_{v}(G)\), or just \(d_{v}\) if the graph being discussed is unambiguous. We utilize the conventional notation and nomenclature of (chemical) graph theory, and we refer readers to the relevant books, for example, [1, 2].

Graphs are used to model chemical structures by replacing the atoms and bonds of the structures with vertices and edges, respectively. In this way, it is possible to study chemical structures using the concepts of graph theory. Such a field of study is usually referred to as chemical graph theory. Graph invariants that adopt quantitative values are widely termed topological indices in chemical graph theory. The connectivity index (also known as the Randic index) [3], a well-known topological index, was devised in the 1970s by the chemist Milan Randic under the name "branching index" [4]. Soon after its discovery, the connectivity index quickly found a variety of uses [5, 6, 7] in chemistry, and consequently it became one of the most applied and well-researched topological indices. For a graph \(G\), the connectivity index is defined as \[R(G)=\sum_{vw\in E(G)}\frac{1}{\sqrt{d_{v}d_{w}}}.\] The Randic index has been modified in several ways. Here, we mention two topological indices which were introduced by taking into consideration the definition of the Randic index, namely the "sum-connectivity (SC) index" [8] and the "atom-bond connectivity (ABC) index" [9]. These indices have the following definitions for a graph \(G\): \[SC(G)=\sum_{vw\in E(G)}\frac{1}{\sqrt{d_{v}+d_{w}}}\] and \[ABC(G)=\sum_{vw\in E(G)}\sqrt{\frac{d_{v}+d_{w}-2}{d_{v}d_{w}}}.\] Details regarding the mathematical aspects of the SC and ABC indices may be found in the review papers [10] and [11], respectively. By utilizing the definitions of the ABC and SC indices, a novel topological index - the atom-bond sum-connectivity (ABS) index - has recently been proposed in [12].
For a graph \(G\), this index is defined as \[ABS(G)=\sum_{uv\in E(G)}\left(\frac{d_{u}+d_{v}-2}{d_{u}+d_{v}}\right)^{\frac {1}{2}}\,.\] In the paper [12], the graphs possessing the maximum and minimum values of the ABS index were characterized over the classes of graphs and (chemical) trees of a given order; such extremal results for unicyclic graphs were found in [13], where chemical applications of the ABS index were also reported. The preprint [14] is concerned with the problems of determining the graphs possessing the minimum ABS index among all trees of a fixed order and/or a given number of pendant vertices; see also [15], where one of these two problems is attacked independently.

A pendant vertex in a graph is a vertex of degree 1. The least number of colors required to color the vertices of a graph, so that no two adjacent vertices have the same color, is termed the chromatic number. A subset \(S\) of the vertex set of \(G\) is said to be independent if the vertices of \(S\) are pairwise non-adjacent in \(G\). The maximum number among the cardinalities of all independent sets of \(G\) is known as the independence number of \(G\), and it is denoted by \(\alpha(G)\). This paper characterizes the graphs attaining the greatest values of the ABS index over the classes of graphs of a given order and with a fixed (i) chromatic number, (ii) independence number, or (iii) number of pendant vertices.

## 2 Results

In order to avoid trivialities, throughout this section we consider only connected graphs. To prove our results, we need a few technical lemmas.

**Lemma 1** (see [12]): _Let \(u\) and \(v\) be non-adjacent vertices in a connected graph \(G\). If \(G+uv\) is the graph obtained from \(G\) by adding the edge \(uv\) in \(G\), then_ \[ABS(G+uv)>ABS(G).\]

**Lemma 2**: _Let_ \[f(x,y)=\left(\frac{x+y-2}{x+y}\right)^{\frac{1}{2}},\] _where \(\min\{x,y\}\geq 1\). For every positive real number \(s\), define the function \(g_{s}(x,y)=f(x+s,y)-f(x,y)\). Then \(f\) is strictly increasing in \(x\) and in \(y\). The function \(g_{s}\) is strictly decreasing and convex in \(x\) and in \(y\)._

Proof. The first and second partial derivatives of \(f\) with respect to \(x\) and \(y\) are calculated as \[\frac{\partial f}{\partial x}(x,y)=\frac{\partial f}{\partial y}(x,y)=(x+y-2)^ {-\frac{1}{2}}(x+y)^{-\frac{3}{2}},\] \[\frac{\partial^{2}f}{\partial x^{2}}(x,y)=\frac{\partial^{2}f}{\partial y^{2}} (x,y)=-\frac{1}{2}(x+y-2)^{-\frac{3}{2}}(x+y)^{-\frac{3}{2}}-\frac{3}{2}(x+y-2 )^{-\frac{1}{2}}(x+y)^{-\frac{5}{2}}.\] Clearly \(\frac{\partial f}{\partial x}(x,y)>0\) whenever \(x>1\), and thus \(f\) is strictly increasing in \(x\) and in \(y\). Since \(\frac{\partial^{2}f}{\partial x^{2}}(x,y)<0\) whenever \(x>1\), we get that \(\frac{\partial f}{\partial x}(x,y)\) is strictly decreasing in \(x\) when \(x\geq 1\). This implies that \(\frac{\partial g_{s}}{\partial x}(x,y)=\frac{\partial f}{\partial x}(x+s,y)- \frac{\partial f}{\partial x}(x,y)<0\) when \(x\geq 1\), and thus \(g_{s}(x,y)=f(x+s,y)-f(x,y)\) is strictly decreasing in \(x\) when \(x\geq 1\). Additionally, \(\frac{\partial^{2}f}{\partial x^{2}}(x,y)\) is strictly increasing in \(x\) when \(x\geq 1\). So \(\frac{\partial^{2}g_{s}}{\partial x^{2}}(x,y)=\frac{\partial^{2}f}{\partial x^{2}} (x+s,y)-\frac{\partial^{2}f}{\partial x^{2}}(x,y)>0\), and hence \(g_{s}(x,y)\) is convex in \(x\) when \(x\geq 1\). \(\Box\)
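The monotonicity and convexity claims of Lemma 2 are easy to sanity-check numerically. A short sketch (an illustration only, not part of the proof):

```python
import numpy as np

f = lambda x, y: np.sqrt((x + y - 2) / (x + y))
g = lambda s, x, y: f(x + s, y) - f(x, y)

xs = np.linspace(1.0, 10.0, 200)
y, s = 3.0, 1.0

fx = f(xs, y)
gx = g(s, xs, y)

assert np.all(np.diff(fx) > 0)        # f strictly increasing in x
assert np.all(np.diff(gx) < 0)        # g_s strictly decreasing in x
assert np.all(np.diff(gx, 2) > 0)     # discrete convexity of g_s in x
print("Lemma 2 checks pass on the sampled grid")
```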
**Lemma 3**: _Let \(M\) and \(N\) be real numbers satisfying \(1\leq M\leq N\). Then for every positive real number \(s\), the function \(h_{s}(x)=g_{s}(x,N)-g_{s}(x,M)\) is increasing in \(x\) when \(x\geq 1\)._

**Proof.** When \(x\geq 1\), we have that \(\frac{\partial^{2}f}{\partial x\partial y}\) is strictly increasing in \(x\). So \(\frac{\partial^{2}g_{s}}{\partial x\partial y}(x,y)=\frac{\partial^{2}f}{ \partial x\partial y}(x+s,y)-\frac{\partial^{2}f}{\partial x\partial y}(x,y)>0\). Thus \(\frac{\partial g_{s}}{\partial x}\) is increasing in \(y\), and hence \(h^{\prime}_{s}(x)=\frac{\partial g_{s}}{\partial x}(x,N)-\frac{\partial g_{ s}}{\partial x}(x,M)>0\). Therefore \(h_{s}(x)\) is increasing in \(x\) when \(x\geq 1\). \(\Box\)

The next theorem gives a sharp upper bound on the \(ABS\) value of all connected graphs of a fixed order and a fixed chromatic number. A graph whose vertex set can be partitioned into \(r\) sets \(V_{1},V_{2},\ldots,V_{r}\) in such a way that all the vertices in every \(V_{i}\) (with \(1\leq i\leq r\)) are pairwise non-adjacent is known as an \(r\)-partite graph, where \(r\geq 2\), and the sets \(V_{1},V_{2},\ldots,V_{r}\) are called the partite sets. If, in addition, every vertex of partite set \(V_{i}\) is adjacent to all the vertices of the other partite sets for \(i=1,2,\ldots,r\), then the graph is called the complete \(r\)-partite graph. We denote by \(T_{n,\chi}\) the complete \(\chi\)-partite graph of order \(n\) such that \(|n_{i}-n_{j}|\leq 1\), where \(n_{i}\), with \(i=1,2,\cdots,\chi\), is the number of vertices in the \(i\)-th partite set of \(T_{n,\chi}\).

**Theorem 1**: _Let \(G\) be a connected graph of order \(n\geq 5\) with chromatic number \(\chi\geq 3\). Then_ \[ABS(G)\leq \frac{(\chi-r)(\chi-r-1)q^{2}}{2}\sqrt{\frac{n-q-1}{n-q}}+r(\chi-r)q(q+1) \sqrt{\frac{2n-2q-3}{2n-2q-1}}\] \[+\frac{r(r-1)(q+1)^{2}}{2}\sqrt{\frac{n-q-2}{n-q-1}}, \tag{1}\] _where \(q\) and \(r\) are non-negative integers such that \(n=q\chi+r\) and \(r<\chi\) (so that \(T_{n,\chi}\) has \(r\) partite sets of size \(q+1\) and \(\chi-r\) partite sets of size \(q\)). Moreover, equality holds in (1) if and only if \(G\cong T_{n,\chi}\)._

**Proof.** Let \(G\) be a graph having the maximum \(ABS\) value in the class of all connected graphs of a fixed order \(n\) and with a fixed chromatic number \(\chi\), where \(3\leq\chi\leq n-1\) and \(n\geq 5\). Note that the vertex set \(V(G)\) of \(G\) can be partitioned into \(\chi\) independent subsets, say \(V_{1},V_{2},\cdots,V_{\chi}\), such that \(|V_{i}|=n_{i}\) for \(i=1,2,\cdots,\chi\), where \(n_{1}\leq n_{2}\leq\cdots\leq n_{\chi}\). Consequently, \(G\) is isomorphic to a \(\chi\)-partite graph and hence, by Lemma 1, it must be isomorphic to the complete \(\chi\)-partite graph \(K_{n_{1},n_{2},\cdots,n_{\chi}}\). To complete the proof, we have to show that \(n_{\chi}-n_{1}\leq 1\). Contrarily, assume that \(n_{\chi}-n_{1}\geq 2\).
Let \(G^{\prime}\cong K_{n^{\prime}_{1},n^{\prime}_{2},\cdots,n^{\prime}_{\chi}}\), where \(n^{\prime}_{1}=n_{1}+1\), \(n^{\prime}_{\chi}=n_{\chi}-1\), and \(n^{\prime}_{i}=n_{i}\) for every \(i\in\{2,\cdots,\chi-1\}\). Then \[ABS(G^{\prime})-ABS(G) = (n_{1}+1)(n_{\chi}-1)f(n-n_{1}-1,n-n_{\chi}+1)-n_{1}n_{\chi}f(n-n _{1},n-n_{\chi})\] \[+\sum_{i=2}^{\chi-1}\left[n_{i}(n_{1}+1)f(n-n_{1}-1,n-n_{i})-n_{1} n_{i}f(n-n_{1},n-n_{i})\right]\] \[+\sum_{i=2}^{\chi-1}\left[n_{i}(n_{\chi}-1)f(n-n_{\chi}+1,n-n_{i} )-n_{\chi}n_{i}f(n-n_{\chi},n-n_{i})\right]\] \[= (n_{\chi}-n_{1}-1)f(n-n_{1},n-n_{\chi})\] \[+\sum_{i=2}^{\chi-1}n_{i}\left[f(n-n_{1}-1,n-n_{i})-f(n-n_{\chi}+ 1,n-n_{i})\right]\] \[+\sum_{i=2}^{\chi-1}n_{i}\left[n_{\chi}g_{1}(n-n_{\chi},n-n_{i})- n_{1}g_{1}(n-n_{1}-1,n-n_{i})\right].\] Since \(n-n_{1}-1\geq n-n_{\chi}+1\), from Lemma 2 we get that for each \(i=2,\cdots,\chi-1\), \(f(n-n_{1}-1,n-n_{i})-f(n-n_{\chi}+1,n-n_{i})\geq 0\) and \(n_{\chi}g_{1}(n-n_{\chi},n-n_{i})-n_{1}g_{1}(n-n_{1}-1,n-n_{i})>n_{1}\left[g_{1 }(n-n_{\chi},n-n_{i})-g_{1}(n-n_{1}-1,n-n_{i})\right]\geq 0.\) So \(ABS(G^{\prime})-ABS(G)>0\), a contradiction. Thus \(n_{\chi}-n_{1}\leq 1\). \(\square\)

The next theorem gives a sharp upper bound on the \(ABS\) value of all connected graphs of a fixed order and a fixed independence number.

**Theorem 2**: _If \(G\) is a connected graph of order \(n\) and independence number \(\alpha\), then_ \[ABS(G)\leq\alpha(n-\alpha)\sqrt{\frac{2n-\alpha-3}{2n-\alpha-1}}+\frac{(n- \alpha)(n-\alpha-1)}{2}\sqrt{\frac{n-2}{n-1}},\] _with equality if and only if \(G\cong N_{\alpha}+K_{n-\alpha}\)._

**Proof.** Let \(G\) be a connected graph which has the maximum \(ABS\) value among all connected graphs of order \(n\) and independence number \(\alpha\). Let \(S\) be an independent set in \(G\) with \(|S|=\alpha\). Assume that there is a vertex \(u\in S\) that is not adjacent to a vertex \(v\in V(G)-S\). Then \(G+uv\) has order \(n\) and independence number \(\alpha\), and \(ABS(G+uv)>ABS(G)\), a contradiction. Thus, each vertex in \(S\) is adjacent to every vertex in \(G-S\). Furthermore, every pair of vertices in \(G-S\) are adjacent, for otherwise an edge could be added by the same argument; hence \(G[V(G)-S]\cong K_{n-\alpha}\). Thus \(G\cong N_{\alpha}+K_{n-\alpha}\). In this graph the \(\alpha(n-\alpha)\) edges joining \(S\) to \(V(G)-S\) have end-degrees \(n-\alpha\) and \(n-1\), while the \((n-\alpha)(n-\alpha-1)/2\) edges inside \(V(G)-S\) have end-degrees \(n-1\) and \(n-1\). Therefore, \[ABS(G)=\alpha(n-\alpha)\sqrt{\frac{2n-\alpha-3}{2n-\alpha-1}}+\frac{(n-\alpha )(n-\alpha-1)}{2}\sqrt{\frac{n-2}{n-1}}.\ \square\]

The next theorem gives sharp upper bounds on the \(ABS\) value of all connected graphs of a fixed order and a fixed number of pendant vertices. We denote by \(S_{n-1}\) the star graph of order \(n\), and by \(S_{m,n-m}\) the double star graph of order \(n\) whose internal vertices have degrees \(m\) and \(n-m\). We also denote by \(K_{m}^{p}\) the graph of order \(m+p\) with \(p\) pendant vertices such that the induced subgraph on the internal vertices is the complete graph \(K_{m}\) and all pendant vertices are adjacent to the same internal vertex.

**Theorem 3**: _Let \(G\) be a connected graph of order \(n\) having \(p\) pendant vertices._

1. _If_ \(p=n-1\)_, then_ \(G\cong S_{n-1}\)_, and thus_ \(ABS(G)=(n-1)\sqrt{\frac{n-2}{n}}\)_._
2. _If_ \(p=n-2\)_, then_ \(ABS(G)\leq\frac{1}{\sqrt{3}}+\sqrt{\frac{n-2}{n}}+(n-3)\sqrt{\frac{n-3}{n-1}}\)_, with equality if and only if_ \(G\cong S_{2,n-2}\)_._
3. _If_ \(p\leq n-3\)_, then_ \[ABS(G)\leq p\sqrt{\frac{n-2}{n}}+(n-p-1)\sqrt{\frac{2n-p-4}{2n-p-2}}+\frac{ 1}{2}\sqrt{n-p-1}(n-p-2)^{\frac{3}{2}},\] _with equality if and only if_ \(G\cong K_{n-p}^{p}\)_._

**Proof.** (1) Straightforward.

(2) Let \(u,v\) be the internal vertices of \(G\).
We may assume that there are \(t\) pendant vertices adjacent to \(u\) and \(p-t\) pendant vertices adjacent to \(v\), where \(1\leq t\leq p-1\) (otherwise \(u\) or \(v\) would itself be pendant). Thus \(d_{u}=t+1\), \(d_{v}=p-t+1\), and \[ABS(G) =tf(1,d_{u})+(p-t)f(1,d_{v})+f(d_{u},d_{v})\] \[=tf(1,t+1)+(p-t)f(1,p-t+1)+\sqrt{\frac{p}{p+2}},\] since \(f(t+1,p-t+1)=\sqrt{p/(p+2)}\) does not depend on \(t\). Consider the function \(h(t)=tf(1,t+1)+(p-t)f(1,p-t+1)\). Then \[h^{\prime}(t)=\frac{M-N}{(t+2)(p-t+2)\sqrt{(t+2)(p-t+2)}},\] where \(M=((p-1)t-t^{2}+3p+6)\sqrt{pt-t^{2}+2t}\) and \(N=(p-t+3)(t+2)\sqrt{(p-t)(t+2)}\). Clearly, both \(M>0\) and \(N>0\) when \(1\leq t\leq p-1\). Thus the sign of \(h^{\prime}(t)\) is determined by the sign of \((M-N)(M+N)=M^{2}-N^{2}\). Now \[M^{2}-N^{2}=(2t-p)(3tp(p-t)+10t(p-t)+8p^{2}+48p+72).\] Hence \(h^{\prime}(t)<0\) when \(1\leq t<p/2\) and \(h^{\prime}(t)>0\) when \(p/2<t\leq p-1\). Thus \(h(t)\) attains its maximum value at \(t=1\) or \(t=p-1\), and by symmetry \(h(1)=h(p-1)\). Therefore \[ABS(G)\leq h(1)+\sqrt{\frac{p}{p+2}}=\frac{1}{\sqrt{3}}+\sqrt{\frac{p}{p+2}}+(p-1) \sqrt{\frac{p-1}{p+1}},\] with equality if and only if \(G\cong S_{2,p}=S_{2,n-2}\).

(3) Let \(P\) be the set of pendant vertices in \(G\). If there are two non-adjacent vertices \(u,v\in V(G)\setminus P\), then \(G+uv\) is a connected graph of order \(n\) with \(p\) pendant vertices and, by Lemma 1, \(ABS(G+uv)>ABS(G)\), a contradiction. Thus the induced subgraph \(G[V(G)\setminus P]\) is \(K_{n-p}\). Label the vertices of \(G[V(G)\setminus P]\) by \(v_{1},\ldots,v_{n-p}\), and for each \(i=1,\ldots,n-p\) let \(a_{i}=|N(v_{i})\cap P|\), so that \(a_{1}\geq a_{2}\geq\ldots\geq a_{n-p}\). To obtain the desired result we need to show that \(a_{1}=p\) and \(a_{2}=\ldots=a_{n-p}=0\). Seeking a contradiction, assume that \(a_{i}\geq 1\) for some \(i\geq 2\). Then \(a_{1}\geq a_{2}\geq 1\). Let \(x\in P\cap N(v_{2})\) and take \(G^{\prime}=G-\{xv_{2}\}+\{xv_{1}\}\). Note that for each \(i=1,\ldots,n-p\), \(\deg_{G}(v_{i})=a_{i}+n-p-1\). Then \[ABS(G^{\prime})-ABS(G) =f(1,a_{1}+n-p)-f(1,a_{2}+n-p-1)\] \[+a_{1}(f(1,a_{1}+n-p)-f(1,a_{1}+n-p-1))\] \[-(a_{2}-1)(f(1,a_{2}+n-p-1)-f(1,a_{2}+n-p-2))\] \[+\sum_{i=3}^{n-p}(f(a_{i}+n-p-1,a_{1}+n-p)-f(a_{i}+n-p-1,a_{1}+n-p- 1))\] \[-\sum_{i=3}^{n-p}(f(a_{i}+n-p-1,a_{2}+n-p-1)-f(a_{i}+n-p-1,a_{2}+n -p-2))\] \[=f(1,a_{1}+n-p)-f(1,a_{2}+n-p-1)\] \[+a_{1}g_{1}(a_{1}+n-p-1,1)-(a_{2}-1)g_{1}(a_{2}+n-p-2,1)\] \[+\sum_{i=3}^{n-p}(g_{1}(a_{1}+n-p-1,a_{i}+n-p-1)-g_{1}(a_{2}+n-p-2,a_{i}+n-p-1)).\] Now for each \(i=3,\ldots,n-p\) we have \(a_{i}\geq 0\), and so by Lemma 3, \[g_{1}(a_{1}+n-p-1,a_{i}+n-p-1)- g_{1}(a_{2}+n-p-2,a_{i}+n-p-1)\geq\] \[g_{1}(a_{1}+n-p-1,n-p-1)-g_{1}(a_{2}+n-p-2,n-p-1).\] Thus \[ABS(G^{\prime})-ABS(G)\geq f(1,a_{1}+n-p)-f(1,a_{2}+n-p-1)\] \[+a_{1}g_{1}(a_{1}+n-p-1,1)-(a_{2}-1)g_{1}(a_{2}+n-p-2,1)\] \[+(n-p-2)(g_{1}(a_{1}+n-p-1,n-p-1)-g_{1}(a_{2}+n-p-2,n-p-1))\] \[=w(a_{1})-w(a_{2}-1),\] where \[w(t)=f(t+n-p,1)+tg_{1}(t+n-p-1,1)+(n-p-2)g_{1}(t+n-p-1,n-p-1).\] Our next aim is to show that \(w(t)\) is increasing in \(t\).
Note that \[w^{\prime}(t)= \frac{\partial f}{\partial t}(t+n-p,1)+g_{1}(t+n-p-1,1)+t\frac{ \partial g_{1}}{\partial t}(t+n-p-1,1)\] \[+(n-p-2)\frac{\partial g_{1}}{\partial t}(t+n-p-1,n-p-1).\] Since \(\frac{\partial g_{1}}{\partial x}(x,y)\) is increasing in \(y\) when \(y\geq 1\), we get \[\frac{\partial g_{1}}{\partial t}(t+n-p-1,n-p-1)\geq\frac{\partial g_{1}}{ \partial t}(t+n-p-1,1),\] so \[w^{\prime}(t)\geq \frac{\partial f}{\partial t}(t+n-p,1)+g_{1}(t+n-p-1,1)+(t+n-p-1 )\frac{\partial g_{1}}{\partial t}(t+n-p-1,1)\] \[=L-K,\] where \[L=\frac{(t+n-p)^{2}+(t+n-p)-1}{(t+n-p-1)^{\frac{1}{2}}(t+n-p+1)^{\frac{3}{2}}} \text{ and }K=\frac{(t+n-p)^{2}-(t+n-p)-1}{(t+n-p-2)^{\frac{1}{2}}(t+n-p)^{\frac{3}{2} }}.\] Since \[L^{2}-K^{2}=\frac{2(t+n-p)^{5}-3(t+n-p)^{4}-8(t+n-p)^{3}+3(t+n-p)^{2}+4(t+n-p )+1}{(t+n-p-1)(t+n-p+1)^{3}(t+n-p-2)(t+n-p)^{3}}>0,\] we get \(w^{\prime}(t)\geq L-K>0\), and thus \(w(t)\) is increasing in \(t\), as desired. This implies that \(ABS(G^{\prime})-ABS(G)\geq w(a_{1})-w(a_{2}-1)>0\), a contradiction. So \(a_{1}=p\) and \(a_{i}=0\) for all \(i=2,\ldots,n-p\), and hence \(G\cong K_{n-p}^{p}\). \(\square\)
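The extremal values above are easy to verify computationally. The following sketch (using networkx; an illustration, not part of the paper) compares a brute-force evaluation of the ABS index of \(K_{n-p}^{p}\) with the closed form in Theorem 3(3):

```python
import math
import networkx as nx

def abs_index(G):
    """ABS(G) = sum over edges uv of sqrt((d_u + d_v - 2)/(d_u + d_v))."""
    d = dict(G.degree())
    return sum(math.sqrt((d[u] + d[v] - 2) / (d[u] + d[v])) for u, v in G.edges())

def K_with_pendants(m, p):
    """The graph K_m^p: a clique on m internal vertices, with p pendant
    vertices all attached to internal vertex 0."""
    G = nx.complete_graph(m)
    G.add_edges_from((0, m + j) for j in range(p))
    return G

n, p = 9, 3
G = K_with_pendants(n - p, p)
closed = (p * math.sqrt((n - 2) / n)
          + (n - p - 1) * math.sqrt((2*n - p - 4) / (2*n - p - 2))
          + 0.5 * math.sqrt(n - p - 1) * (n - p - 2) ** 1.5)
print(abs_index(G), closed)   # the two values agree
```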
2307.15176
RCT Rejection Sampling for Causal Estimation Evaluation
Confounding is a significant obstacle to unbiased estimation of causal effects from observational data. For settings with high-dimensional covariates -- such as text data, genomics, or the behavioral social sciences -- researchers have proposed methods to adjust for confounding by adapting machine learning methods to the goal of causal estimation. However, empirical evaluation of these adjustment methods has been challenging and limited. In this work, we build on a promising empirical evaluation strategy that simplifies evaluation design and uses real data: subsampling randomized controlled trials (RCTs) to create confounded observational datasets while using the average causal effects from the RCTs as ground-truth. We contribute a new sampling algorithm, which we call RCT rejection sampling, and provide theoretical guarantees that causal identification holds in the observational data to allow for valid comparisons to the ground-truth RCT. Using synthetic data, we show our algorithm indeed results in low bias when oracle estimators are evaluated on the confounded samples, which is not always the case for a previously proposed algorithm. In addition to this identification result, we highlight several finite data considerations for evaluation designers who plan to use RCT rejection sampling on their own datasets. As a proof of concept, we implement an example evaluation pipeline and walk through these finite data considerations with a novel, real-world RCT -- which we release publicly -- consisting of approximately 70k observations and text data as high-dimensional covariates. Together, these contributions build towards a broader agenda of improved empirical evaluation for causal estimation.
Katherine A. Keith, Sergey Feldman, David Jurgens, Jonathan Bragg, Rohit Bhattacharya
2023-07-27T20:11:07Z
http://arxiv.org/abs/2307.15176v3
# RCT Rejection Sampling for Causal Estimation Evaluation ###### Abstract Confounding is a significant obstacle to unbiased estimation of causal effects from observational data. For settings with high-dimensional covariates--such as text data, genomics, or the behavioral social sciences--researchers have proposed methods to adjust for confounding by adapting machine learning methods to the goal of causal estimation. However, empirical evaluation of these adjustment methods has been challenging and limited. In this work, we build on a promising empirical evaluation strategy that simplifies evaluation design and uses real data: subsampling randomized controlled trials (RCTs) to create confounded observational datasets while using the average causal effects from the RCTs as ground-truth. We contribute a new sampling algorithm, which we call _RCT rejection sampling_, and provide theoretical guarantees that causal identification holds in the observational data to allow for valid comparisons to the ground-truth RCT. Using synthetic data, we show our algorithm indeed results in low bias when oracle estimators are evaluated on the confounded samples, which is not always the case for a previously proposed algorithm. In addition to this identification result, we highlight several finite data considerations for evaluation designers who plan to use RCT rejection sampling on their own datasets. As a proof of concept, we implement an example evaluation pipeline and walk through these finite data considerations with a novel, real-world RCT--which we release publicly--consisting of approximately 70k observations and text data as high-dimensional covariates. Together, these contributions build towards a broader agenda of improved empirical evaluation for causal estimation. ## 1 Introduction Across the empirical sciences, confounding is a significant obstacle to unbiased estimation of causal effects from observational data. Covariate adjustment on a relevant set of confounders aka _backdoor adjustment_(Pearl, 2009) is a popular technique for mitigating such confounding bias. In settings with only a few covariates, simple estimation strategies--e.g., parametric models or contingency tables--often suffice to compute the adjusted estimates. However, modern applications of causal inference have had to contend with thousands of covariates in fields like natural language processing (Keith et al., 2020; Feder et al., 2022), genetics (Stekhoven et al., 2012), or the behavioral social sciences (Li et al., 2016; Eckles and Bakshy, 2021). In these high-dimensional scenarios, more sophisticated methods are needed and often involve machine learning. Recent approaches include non-parametric and semi-parametric estimators (Hill, 2011; Chernozhukov et al., 2018; Athey et al., 2018; Farrell et al., 2021; Bhattacharya et al., 2022), causally-informed covariate selection (Maathuis et al., 2009; Belloni et al., 2014; Shortreed & Ertefaie, 2017), proxy measurement and correction (Kuroki & Pearl, 2014; Wood-Doughty et al., 2018), and causal representation learning (Johansson et al., 2016; Shi et al., 2019; Veitch et al., 2020). Despite all this recent work targeted at high-dimensional confounding, these methods have not been systematically and empirically benchmarked. Such evaluations are essential in determining which methods work well in practice and under what conditions. 
However, unlike supervised learning problems which have ground-truth labels available for evaluating predictive performance on a held-out test set, analogous causal estimation problems require ground-truth labels for counterfactual outcomes of an individual under multiple versions of the treatment, data that is generally impossible to measure (Holland, 1986). A promising evaluation strategy is to directly subsample data from a randomized controlled trial (RCT) in a way that induces confounding. Causal effect estimates obtained using the confounded observational samples can then be compared against the ground-truth estimates from the RCT to assess the performance of different causal estimators. This idea has appeared in works like Hill (2011) and Zhang & Bareinboim (2021) and was recently formalized by Gentzel et al. (2021). We contribute to this evaluation strategy -- which we subsequently refer to as _RCT subsampling_ -- via theory that clarifies why and how RCT subsampling algorithms should be constrained in order to produce valid downstream empirical comparisons. In particular, we prove previous subsampling algorithms can produce observational samples from which the causal effect is provably not identified, which makes recovery of the RCT ground-truth impossible (even with infinite samples). To address this issue, we present a new RCT subsampling algorithm, which we call _RCT rejection sampling_, that appropriately constrains the subsampling such that the observed data distribution permits identification.

In addition to improving the theoretical foundations of RCT subsampling, we provide evaluation designers a scaffolding to apply the theory. We implement a proof of concept evaluation pipeline with a novel, real-world RCT dataset--which we release publicly--consisting of approximately 70k observations and text data as high-dimensional covariates. We highlight important finite data considerations: selecting an RCT dataset and examining when empirical evaluation is appropriate; empirically verifying a necessary precondition for RCT subsampling; specifying and diagnosing an appropriate confounding function using finite samples; applying baseline estimation models; and briefly speculating on additional challenges that could arise. For each of these considerations, we walk through specific approaches we take in our proof of concept pipeline. In summary, our contributions are

* We provide a proof using existing results in causal graphical models showing that previous RCT subsampling procedures (e.g., Gentzel et al. (2021)) may draw observational data in a way that prevents non-parametric identification of the causal effect due to selection bias (§3.3).
* We propose a new subsampling algorithm, which we call _RCT rejection sampling_, that is theoretically guaranteed to produce an observational dataset where samples are drawn according to a distribution where the effect is identified via a backdoor functional (§3.4). Using three settings of synthetic data, we show our algorithm results in low bias, which is not always the case for a previous algorithm (§3.5).
* For evaluation designers who plan to use RCT rejection sampling for their own datasets, we highlight several finite data considerations and implement a proof of concept pipeline with a novel, real-world RCT dataset and application of baseline estimation models (§4).
* We release this novel, real-world RCT dataset of approximately 70k observations that has text as covariates (§4.1.1).
We also release our code.1 Footnote 1: Code and data at [https://github.com/kakeith/rct_rejection_sampling](https://github.com/kakeith/rct_rejection_sampling). These contributions build towards a more extensive future research agenda in empirical evaluation for causal estimation (§5).

## 2 Related Work in Empirical Evaluation of Causal Estimators

As we discussed briefly in Section 1, empirical evaluation of causal estimation methods for observational data is difficult but important. We argue an evaluation strategy should in general (i) reduce the _evaluation designers' degrees of freedom_, i.e., limit the number of choices researchers have to (inadvertently) pick an evaluation that favors their own method (Gentzel et al., 2019); (ii) have the necessary data (e.g. RCTs) available; and (iii) ensure the data generating process (DGP) reflects the real world. For applications of backdoor adjustment, we argue non-trivial evaluation should additionally (iv) include a high number of covariates and (v) make the data publicly available for reuse and reproducibility. Table 1 compares select previous work (and our own) according to the above desiderata. We briefly discuss these and other related work, and make a qualitative argument for the RCT subsampling strategy we contribute to.

**Synthetic evaluations** are ones in which researchers specify the entire DGP, e.g. D'Amour and Franks (2021); Schmidt et al. (2022). This allows for infinite data availability, but is prone to encoding researcher preferences and can lead to over-simplification (or overly complex DGPs) compared to real-world observational scenarios.

**Semi-synthetic evaluations** use some real data but specify the rest of the synthetic DGP. This approach has been used in causal inference competitions (Dorie et al., 2019; Shimoni et al., 2018) and settings with text-data as confounding variables (Roberts et al., 2020; Veitch et al., 2020; Weld et al., 2022). Other semi-synthetic work fits generative models to real-world data (Neal et al., 2020; Parikh et al., 2022) or uses pre-trained language models to generate high-dimensional confounders from variables in a synthetic DGP (Wood-Doughty et al., 2021). Although more realistic than synthetic data, semi-synthetic DGPs can also make unrealistic assumptions; for example, Reisach et al. (2021) demonstrate this issue in the context of evaluating causal discovery algorithms.

**Constructed observational studies** (COSs) start with RCTs and then find non-experimental control samples that come from a similar population (LaLonde, 1986; Hill et al., 2004; Arceneaux et al., 2006; Shadish et al., 2008; Jaciw, 2016; Gordon et al., 2019; Eckles and Bakshy, 2021; Zeng et al., 2022; Gordon et al., 2022). The advantage of COSs over (semi-)synthetic data is that they have few researcher degrees of freedom; however, non-experimental control groups often do not exist or do not come from similar-enough populations; see Dahabreh et al. (2022) for more details on identification from COSs.

**Subsampling RCTs** uses an RCT as ground-truth and then subsamples the RCT data to create a confounded observational dataset. For example, Zhang and Bareinboim (2021) subsample from the International Stroke Trial (IST) of roughly 20k patients to estimate the treatment effect of aspirin allocation. This strategy also appears in prior work (Hill, 2011; Kallus et al., 2018) and was recently formalized by Gentzel et al. (2021).
While this approach is limited by the availability of RCTs and sampling decreases the number of units available to the estimation methods, it does not require the comparable non-experimental control group required by COSs, resulting in greater data availability. There are also fewer researcher degrees of freedom compared to synthetic or semi-synthetic approaches. Because of these tradeoffs, we believe this is one of the most promising strategies for empirically evaluating causal estimation methods, and we build upon this strategy in the remainder of this work.

\begin{table} \begin{tabular}{l l|l|l|l|l|l|l} **Dataset** & **Eval. strategy** & \multicolumn{3}{c|}{**DGP**} & \multicolumn{3}{c|}{**General**} & \multicolumn{1}{c}{**Application to Backdoor Adjustment**} \\ \hline Simulation, normal samples (D'Amour and Franks, 2021) & Synthetic & ✘ Many & \# High & ✘ Low & ✓ High (100\(\cdot\)) & ✓ Yes \\ IHDP-ACIC 2016 (Dorie et al., 2019) & Semi-synthetic & ✘ Many & ✓ High & ✘ Medium & ✘ Medium (58) & ✓ Yes \\ PeerRead theorem (Veitch et al., 2020) & Semi-synthetic & ✘ Many & ✘ High & ✘ Medium & ✓ High (Text vocab) & ✓ Yes \\ RCT repositories (Gentzel et al., 2021) & RCT subsampling & ✘ Few & ✓ High-RCTs & ✘ High & ✘ Low (-2) & ✓ Yes \\ Job training (LaLonde, 1986) & COS & ✘ Few & ✘ Low & ✓ High & ✘ Low (4) & ✓ Yes \\ Facebook peer effects (Eckles and Bakshy, 2021) & COS & ✘ Few & ✘ Low & ✓ High & ✓ High (3700) & ✘ No \\ \hline This work & RCT subsampling & ✘ Few & ✓ High-RCTs & ✓ High & ✓ High (Text vocab) & ✓ Yes \\ \hline \end{tabular} \end{table} Table 1: Select related work in empirical evaluation of causal estimators compared on general desiderata of: ✔ few degrees of freedom (DoF) for the evaluation designer, ✔ high data availability, and ✔ realistic data-generating processes (DGP). We also examine the accompanying datasets presented for evaluating backdoor adjustment. Here, we want a ✔ high number of covariates to make the evaluation non-trivial and ✔ public availability of the data for reuse and reproducibility.

## 3 Subsampling from RCTs

We preface this section with a brief description of causal graphs, a prerequisite to understanding subsequent results. Then we provide identification and non-identification proofs for RCT subsampling algorithms and evidence from synthetic data.

### 3.1 Background: Causal graphical models

A causal model of a directed acyclic graph (causal DAG) \(\mathcal{G}(V)\) can be viewed as the set of distributions induced by a system of structural equations: For each variable \(V_{i}\in V\) there exists a structural equation \(V_{i}\gets f_{i}(\operatorname{pa}_{i},\epsilon_{i})\) (Pearl, 2009). This function maps the variable's parents' values--\(\operatorname{pa}_{i}\) of \(V_{i}\) in \(\mathcal{G}(V)\)--and an exogenous noise term2, \(\epsilon_{i}\), to values of \(V_{i}\). The system of equations induces a joint distribution \(P(V)\) that is Markov relative to the DAG \(\mathcal{G}(V)\), i.e., \(P(V=v)=\prod_{V_{i}\in V}P(V_{i}=v_{i}\mid\operatorname{pa}_{i})\). Independences in the distribution can be read off from \(\mathcal{G}\) via the well-known d-separation criterion (Pearl, 2009). Interventions in the model are typically formalized using the do-operator (Pearl, 2009), where \(Y|\operatorname{do}(T=t)\) denotes the value of an outcome \(Y\) under an intervention that sets the treatment \(T\) to value \(t\).
Footnote 2: Typically these noise terms are assumed to be mutually independent, but this assumption is not strictly necessary (Richardson & Robins, 2013). Our results are non-parametric in the sense that we do not require any distributional assumptions on the noise terms or the specific form (e.g. linearity) of the structural equations \(f_{i}(\cdot)\).

Here, our causal estimand of interest is the average treatment effect (ATE), defined as

\[\operatorname{ATE}\equiv\mathbb{E}[Y\mid\operatorname{do}(T=t)]-\mathbb{E}[Y\mid\operatorname{do}(T=t^{\prime})], \tag{1}\]

where \(t\) and \(t^{\prime}\) denote distinct values of \(T\). A causal parameter is said to be _identified_ if it can be expressed as a function of the observed data \(P(V)\). Given a set of variables \(Z\subset V\) that satisfies the _backdoor criterion_ w.r.t. \(T\) and \(Y\)3, the ATE is identified via the well-known _backdoor adjustment functional_ (Pearl, 1995).

Footnote 3: The set \(Z\) satisfies the backdoor criterion if no variable in \(Z\) is a causal descendant of \(T\), and \(Z\) blocks all backdoor paths between \(T\) and \(Y\), i.e., all paths of the form \(T\leftarrow\dots\to Y\).

### RCT subsampling: Setup and conditions

We now describe the specific setup and objectives of RCT subsampling.4 We start with a dataset \(D_{\text{RCT}}\) consisting of \(n\) iid draws from an RCT of pre-treatment covariates \(C=\{C_{1},\dots,C_{k}\}\), treatment \(T\), and outcome \(Y\). Since this data is assumed to come from an RCT, the observed data distribution \(P(C,T,Y)\) is Markov relative to the causal DAG shown in Fig. 1(a), where \(T\perp\!\!\!\perp C\). The goal of RCT subsampling is to construct an observational dataset \(D_{\text{OBS}}\) that consists of \(m\leq n\) iid draws from \(D_{\text{RCT}}\) satisfying the following conditions, which enable appropriate evaluation of causal estimation methods:

Figure 1: Causal DAGs (a) corresponding to an RCT; (b) representing a sampling procedure; (c) corresponding to an observational study where \(C\) satisfies the backdoor criterion.

(I) **Dependence induced.** \(D_{\text{OBS}}\) consists of samples drawn according to a new distribution \(P^{*}(C,T,Y)\) that satisfies the dependence relation \(T\not\perp\!\!\!\perp C\).

(II) **ATE identified.** There exists a functional \(g\) of the RCT distribution and a functional \(h\) of the subsampled data distribution such that \(\text{ATE}=g(P(C,T,Y))=h(P^{*}(C,T,Y))\).

Here, (II) is an important identification pre-condition that ensures that it is possible, at least in theory, to compute estimates of the ATE from \(D_{\text{OBS}}\) that match the ATE from \(D_{\text{RCT}}\), the latter of which is treated as ground-truth in evaluation. From Fig. 1(a) it is clear that two sets of variables satisfy the backdoor criterion: the set \(C\) and the empty set. Thus, the ATE is identified from the RCT distribution \(P(C,T,Y)\) via the following two backdoor adjustment functionals:

\[\operatorname{ATE}=\sum_{c}P(c)\times\left(\mathbb{E}[Y\mid t,c]-\mathbb{E}[Y\mid t^{\prime},c]\right) \tag{2}\]
\[\operatorname{ATE}=\mathbb{E}[Y\mid t]-\mathbb{E}[Y\mid t^{\prime}]. \tag{3}\]

Thus, a subsampling algorithm satisfies (II) if there is a functional \(h(P^{*}(C,T,Y))\) that is equal to equation 2 or equation 3. For our purposes, we add condition (I) so that estimation in the observational data does not reduce to equation 3. That is, we aim to produce samples according to a distribution \(P^{*}\) such that some adjustment is in fact necessary to produce unbiased ATE estimates.
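To make these two functionals concrete, the following is a minimal sketch, assuming binary \(T\), a discrete covariate \(C\), and a pandas DataFrame with illustrative columns `C`, `T`, and `Y`; `ate_naive` computes equation 3 and `ate_backdoor` equation 2.

```python
import pandas as pd

def ate_naive(df: pd.DataFrame) -> float:
    # Equation 3: difference of conditional means; unbiased when T is randomized.
    return df.loc[df["T"] == 1, "Y"].mean() - df.loc[df["T"] == 0, "Y"].mean()

def ate_backdoor(df: pd.DataFrame) -> float:
    # Equation 2: stratum-specific contrasts weighted by the marginal P(c).
    ate = 0.0
    for c, p_c in df["C"].value_counts(normalize=True).items():
        stratum = df[df["C"] == c]
        ate += p_c * (stratum.loc[stratum["T"] == 1, "Y"].mean()
                      - stratum.loc[stratum["T"] == 0, "Y"].mean())
    return ate
```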
We note that condition (I) by itself is not sufficient to guarantee that adjustment is necessary; RCT subsampling procedures also require that there exists at least one pre-treatment covariate correlated with the outcome, i.e., \(\exists\ C_{i}\in C\) such that \(C_{i}\not\perp\!\!\!\perp Y\) in \(P(C,T,Y)\) (Gentzel et al., 2021). However, this condition is easily testable, and we implement these checks in our synthetic experiments and real-world proof of concept (§4.2). We now show a theoretical gap in existing approaches to subsampling RCTs, and propose a new algorithm that is theoretically guaranteed to satisfy conditions (I) and (II).

### Non-identification in prior work

We claim that prior work that proposes RCT subsampling can result in observational samples from which the causal effect is _not identified_ non-parametrically unless additional constraints are placed on the subsampling process. We consider Algorithm 2 in Gentzel et al. (2021), which does not explicitly impose such constraints and can be summarized as follows. Let \(S\) be a binary variable indicating selection into the observational data from \(D_{\text{RCT}}\). A structural equation \(S\leftarrow\mathbb{1}\left(T=\text{Bernoulli}(f(C))\right)\) is used to generate the selection variable, where \(f\) is a function defined by the researcher and \(\mathbb{1}\) corresponds to the indicator function. \(D_{\text{OBS}}\) is created by retaining only samples from \(D_{\text{RCT}}\) where \(S=1\). This results in \(P^{*}(C,T,Y)=P(C,T,Y\mid S=1)\), which is Markov relative to the causal DAG in Fig. 1(b). From this DAG, it is easy to check via d-separation that condition (I) is satisfied, as \(T\not\perp C\mid S=1\). However, the following proposition shows that condition (II) is not satisfied.

**Proposition 3.1**.: _Given \(n\) iid samples from a distribution \(P\) that is Markov relative to Fig. 1(a), Algorithm 2 in Gentzel et al. (2021) draws samples according to a distribution \(P^{*}\) such that condition (II) is not satisfied._

We provide a proof in Appendix A. The intuition behind the proof of Proposition 3.1 is as follows. Identification of the ATE relies on two pieces: the conditional mean of the outcome given treatment and covariates, and the marginal distribution of covariates. From Fig. 1(b), we have \(\mathbb{E}[Y|T,C]=\mathbb{E}[Y|T,C,S=1]\), but \(P(C)\neq P(C|S=1)\). Indeed, this marginal distribution cannot be identified via any non-parametric functional of the subsampled distribution \(P^{*}(C,T,Y)\) (Bareinboim & Tian, 2015). However, this non-identification result holds assuming that there is no additional knowledge of, or constraints on, how \(P^{*}\) is generated; in the next section we modify the sampling to place constraints on the generated distribution \(P^{*}\) that mitigate this issue.

### RCT rejection sampling

We propose Algorithm 1, which uses a rejection sampling procedure to subsample RCTs.

```
Algorithm 1: RCT rejection sampling
Inputs:  D_RCT consisting of n i.i.d. draws from P(C, T, Y);
         P*(T|C), a function specified by evaluation designers;
         M >= sup P*(T|C) / P(T), a constant computed empirically
Output:  D_OBS, a subset of D_RCT constructed according to a distribution
         P*(C, T, Y) which satisfies conditions (I) and (II)

for each unit i in D_RCT do
    Sample U_i uniform on (0, 1)
    if U_i > P*(T = t_i | C_i) / (P(T = t_i) * M) then
        Discard i
    end if
end for
Return: D_OBS <- D_RCT - {discarded units}
```

Rejection sampling is useful when the target distribution is difficult to sample from but there exists a proposal distribution which is easier to sample from, such that the proposal distribution (times a constant) forms an "upper envelope" for the target distribution (Murphy, 2012, Chapter 23.2). Similar ideas on resampling data based on ratios of propensity scores appear in Thams et al. (2023) and Bhattacharya & Nabi (2022) in the context of testing independence constraints in post-intervention distributions. Though the rejection sampler also selects samples based on a function of \(T\) and \(C\), as in Fig. 1(b), we prove that additional constraints placed by the sampling strategy ensure identification holds in the new observed data distribution.

The intuition behind our algorithm is as follows. Sufficient constraints for maintaining identifiability of the ATE in \(P^{*}(C,T,Y)\) via the functional in equation 2 are to ensure that \(P^{*}(C)=P(C)\) and \(P^{*}(Y\mid T,C)=P(Y\mid T,C)\).5 When this holds, it follows that equation 2 is equivalent to the adjustment functional \(h(P^{*}(C,T,Y))=\sum_{c}P^{*}(c)\times(\mathbb{E}^{*}[Y\mid T=t,c]-\mathbb{E}^{*}[Y\mid T=t^{\prime},c])\), where \(\mathbb{E}^{*}\) denotes the expectation taken w.r.t. \(P^{*}(Y\mid T,C)\). To also satisfy (I), we propose resampling with weights that modify \(P(T)\) to a new conditional distribution \(P^{*}(T\mid C)\).

Footnote 5: One could also consider maintaining equality of just the conditional mean of \(Y\) rather than the full conditional density.

The considerations listed in the prior paragraph inform our choice of an acceptance probability of \(\frac{1}{M}\times\frac{P^{*}(T|C)}{P(T)}\) in the rejection sampler, where \(M\) is the usual upper bound on the likelihood ratio used in rejection sampling, which in our case is \(\frac{P^{*}(T|C)}{P(T)}\).6 Here, \(P^{*}(T\mid C)\) is a function specified by the evaluation designer that satisfies positivity (\(\forall c,0<P^{*}(T\mid C=c)<1\) almost surely) and is a non-trivial function of \(C\), in the sense that \(P^{*}(T\mid C)\neq P^{*}(T)\) for at least some values of \(T\) and \(C\).

Footnote 6: In practice, we approximate \(M\) from \(D_{\text{RCT}}\) as \(\frac{\max_{i\in\{1,\ldots,n\}}P^{*}(T=t_{i}\mid C_{i})}{\min_{i\in\{1,\ldots,n\}}\widehat{P}(T=t_{i})}\).
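In code, Algorithm 1 reduces to a vectorized accept/reject step. The following NumPy sketch assumes a binary treatment, estimates \(P(T)\) empirically, and approximates \(M\) as in footnote 6; the function name and interface are our own.

```python
import numpy as np

def rct_rejection_sample(t: np.ndarray, p_star: np.ndarray, seed: int = 0) -> np.ndarray:
    """Return a boolean mask selecting D_OBS from D_RCT (Algorithm 1).

    t      -- observed binary treatments t_i from the RCT
    p_star -- P*(T = t_i | C_i) per unit, chosen by the evaluation designer
    """
    rng = np.random.default_rng(seed)
    # Empirical marginal P(T = t_i) for each unit's observed treatment.
    p_t = np.where(t == 1, t.mean(), 1.0 - t.mean())
    # M >= sup P*(T|C) / P(T), approximated from the sample as in footnote 6.
    M = p_star.max() / p_t.min()
    u = rng.uniform(size=len(t))
    # Keep unit i iff U_i <= P*(t_i | C_i) / (P(t_i) * M); discard otherwise.
    return u <= p_star / (p_t * M)
```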
**Theorem 3.2**.: _Given \(n\) iid samples from a distribution \(P\) that is Markov relative to Fig. 1(a), a confounding function \(P^{*}(T|C)\) satisfying positivity, and \(M\geq\sup\frac{P^{*}(T|C)}{P(T)}\), the rejection sampler in Algorithm 1 draws samples from a distribution \(P^{*}\), such that conditions (I) and (II) are satisfied._

Proof.: Rejection sampling generates samples from a target distribution \(P^{*}(V_{1},\ldots,V_{k})\) by accepting samples from a proposal distribution \(P(V_{1},\ldots,V_{k})\) with probability

\[\frac{1}{M}\times\frac{P^{*}(V_{1},\ldots,V_{k})}{P(V_{1},\ldots,V_{k})},\]

where \(M\) is a finite upper bound on the likelihood ratio \(P^{*}/P\) over the support of \(V_{1},\ldots,V_{k}\). We start with samples from an RCT, so our proposal distribution factorizes according to the causal DAG in Fig. 1(a): \(P(C,T,Y)=P(C)\times P(T)\times P(Y\mid T,C)\). Our target distribution is one where \(T\not\perp\!\!\!\perp C\), and factorizes as \(P^{*}(C,T,Y)=P^{*}(C)\times P^{*}(T\mid C)\times P^{*}(Y\mid T,C)\), with the additional constraints that \(P^{*}(C)=P(C)\) and \(P^{*}(Y\mid T,C)=P(Y\mid T,C)\). This establishes the likelihood ratio

\[\frac{P^{*}(C,T,Y)}{P(C,T,Y)}=\frac{P(C)\times P^{*}(T\mid C)\times P(Y\mid T,C)}{P(C)\times P(T)\times P(Y\mid T,C)}=\frac{P^{*}(T\mid C)}{P(T)},\]

and any choice of \(M\geq\sup\frac{P^{*}(T\mid C)}{P(T)}\) used in the rejection sampler in Algorithm 1 produces samples from the desired distribution \(P^{*}\), where the additional constraints satisfy the identification condition (II), and specification of \(P^{*}(T\mid C)\) such that it truly depends on \(C\) satisfies condition (I).

Since \(P^{*}\) satisfies \(T\not\perp\!\!\!\perp C\) and yields identification via the usual adjustment functional obtained in a conditionally ignorable causal model, Algorithm 1 can be thought of as producing samples exhibiting confounding bias similar to the causal DAG in Fig. 1(c), despite the selection mechanism. A longer argument for this qualitative claim is in Appendix B.

We conclude this section by noting that, similar to prior works on RCT subsampling algorithms, the subsampling strategy in Algorithm 1 only requires researchers to specify a single function, \(P^{*}(T\mid C)\). Hence, our procedure satisfies our original desideratum of limited researcher degrees of freedom, while providing stronger theoretical guarantees for downstream empirical evaluation. However, specification of \(P^{*}(T\mid C)\) may still be challenging when \(C\) is high-dimensional. In Section 4.4, we discuss this finite data consideration, and we use a proxy strategy for our proof of concept in which we have a low-dimensional confounding set \(C\) along with a set of high-dimensional covariates \(X\) that serve as proxies of this confounding set.

### Evidence from synthetic data

Using synthetic DGPs for \(D_{\text{RCT}}\), we produce \(D_{\text{OBS}}\) using Algorithm 2 from Gentzel et al. (2021) and separately via our RCT rejection sampler. We then compute ATE estimates using equation 2 on \(D_{\text{OBS}}\) and compare them to the ground-truth estimates using equation 3 on \(D_{\text{RCT}}\). Appendix C gives the full details of the data-generating processes (DGPs) for three settings. Briefly, the DGP in Setting 1 has a single confounding covariate \(C\), sets \(P(T=1)=0.3\), and has an interaction term \(TC\) in the structural equation for \(Y\). Setting 2 is the same as Setting 1 except we set \(P(T=1)=0.5\). Setting 3 is a non-linear DGP with five covariates, \(C_{1},\ldots,C_{5}\).
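The exact structural equations live in Appendix C and are not reproduced here. The harness below is only an illustrative Setting-1-style stand-in (single binary confounder, \(P(T=1)=0.3\), a \(TC\) interaction in \(Y\)), reusing the helper functions sketched earlier, to show how the sampler and estimators fit together; on data like this, the naive estimate on \(D_{\text{OBS}}\) should be biased while the adjusted estimate should recover the gold ATE.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 100_000
# Illustrative stand-in only; the paper's exact structural equations are in Appendix C.
C = rng.binomial(1, 0.5, n)
T = rng.binomial(1, 0.3, n)                       # randomized, so T is independent of C
Y = T + 2.0 * C + 1.5 * T * C + rng.normal(size=n)
d_rct = pd.DataFrame({"C": C, "T": T, "Y": Y})

gold_ate = ate_naive(d_rct)                        # equation 3 is valid on D_RCT
# Designer-chosen P*(T = t_i | C_i), here with P*(T=1|C=0)=0.85, P*(T=1|C=1)=0.15.
p_t1 = np.where(C == 1, 0.15, 0.85)
p_star = np.where(T == 1, p_t1, 1.0 - p_t1)
d_obs = d_rct[rct_rejection_sample(T, p_star)]

# Naive estimate on D_OBS is now biased; backdoor adjustment recovers gold_ate.
print(gold_ate, ate_naive(d_obs), ate_backdoor(d_obs))
```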
All methods are provided with the true adjustment set and functional form for the outcome regression, i.e., our experiments here use oracle estimators to validate the identification theory proposed in the previous subsection. Table 2 shows that our proposed RCT rejection sampler results in a reduction of absolute bias compared to Algorithm 2 from Gentzel et al. (2021) by a factor of over 24 for Setting 1 (0.22/0.009) and a factor of 21 in Setting 3 (0.252/0.012). For Setting 3, Gentzel et al.'s procedure results in almost a 100% increase in bias relative to the gold RCT ATE of \(-0.26\). In Setting 2, where \(P(T=1)=0.5\), the differences in absolute bias between the two algorithms are less pronounced.7 The results of the simulation are consistent with our theoretical findings that our algorithm permits identifiability under more general settings than prior work.

Footnote 7: We note Gentzel et al. (2021) primarily focus on the setting for which \(P(T=1)=P(T=0)=0.5\). However, their approach does not seem to generalize well outside of this setting, theoretically and empirically.

\begin{table}
\begin{tabular}{c l l r r}
\hline \hline
 & **Synthetic DGP Setting** & **Sampling Algorithm** & **Abs. Bias (std.)** & **Rel. Abs. Bias (std.)** \\
\hline
1 & \(|C|=1\), \(P(T=1)=0.3\) & Algorithm 2 from Gentzel et al. (2021) & 0.222 (0.010) & 0.089 (0.004) \\
 & & RCT rejection sampling (This work) & **0.009 (0.007)** & **0.004 (0.003)** \\
\hline
2 & \(|C|=1\), \(P(T=1)=0.5\) & Algorithm 2 from Gentzel et al. (2021) & 0.009 (0.006) & 0.003 (0.003) \\
 & & RCT rejection sampling & 0.007 (0.005) & 0.003 (0.002) \\
\hline
3 & \(|C|=5\), Nonlinear & Algorithm 2 from Gentzel et al. (2021) & 0.252 (0.010) & 0.979 (0.037) \\
 & & RCT rejection sampling & **0.012 (0.009)** & **0.046 (0.034)** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Absolute bias (abs. bias) between the ATE from \(D_{\text{RCT}}\) and the estimated ATE via backdoor adjustment on \(D_{\text{OBS}}\) created by each sampling algorithm. We also report abs. bias relative to the RCT ATE (rel. abs. bias), and the mean and standard deviation (std.) across samples from 1000 random seeds. The DGPs for Settings 1-3 are given in Appendix C.

## 4 Finite Data Considerations and Proof of Concept

In the previous section, we provided theoretical guarantees for RCT rejection sampling and confirmed the algorithm results in low bias on synthetic data. In this section, we demonstrate how to put this proposed theory into practice and highlight considerations when working with finite real-world data. Our goal is to surface questions that must be asked and answered in creating useful and high-quality causal evaluation. We also describe our specific approach towards each consideration as we create a proof of concept pipeline for empirical evaluation of high-dimensional backdoor adjustment methods. For this proof of concept, we use a large-scale, real-world RCT dataset with text data as covariates. Although our approaches are specific to our proof of concept dataset, we believe other evaluation designers will benefit from a real-world example of how to put the theory and considerations into practice.

### Considerations prior to using a specific RCT dataset

A necessary component for RCT subsampling is obtaining a real-world RCT dataset. This ensures a more realistic data generating process compared to synthetic or semi-synthetic approaches (see Table 1).
As Gentzel et al. (2021) note, there are many RCT repositories from a variety of disciplines from which evaluation designers could gather data. However, Gentzel et al. find that many of these existing datasets only have one or two covariates that satisfy \(C\not\perp Y\) (see Consideration #1 below). As we briefly mentioned in Section 1, for settings with just a few covariates one can often use simple estimation strategies with theoretical guarantees--e.g., parametric models or contingency tables--and empirical evaluation may not be particularly informative in this setting. Along these lines, we recommend that evaluation designers first ask themselves, _Is empirical evaluation of causal estimators appropriate and necessary for this setting?_ Not all settings are in need of empirical evaluation.

A potentially untapped resource for RCT rejection sampling data is A/B tests from large online platforms. Other work, e.g., Eckles and Bakshy (2021), has used these types of experiments for constructed observational studies, and we use such a dataset in our proof of concept. The large scale of these experiments can be advantageous, since RCT subsampling reduces the number of units in the observational dataset by roughly half. Further, they often contain rich metadata and many covariates, which can be used to induce confounding in a way that emulates a high-dimensional setting.

#### 4.1.1 Proof of concept approach

For our proof of concept, we choose a setting for which empirical evaluation is appropriate and needed: high-dimensional backdoor adjustment. Our high-dimensional covariates are the thousands of vocabulary words from text data, an application area that has generated a large amount of interest from applied practitioners; see Keith et al. (2020); Feder et al. (2022). We use and publicly release a large, novel, real-world RCT (approximately 70k observations) that was run on an online scholarly search engine.8 Users arrive on a webpage that hosts metadata about a single academic paper and proceed to interact with this page. The RCT's randomized binary treatment is swapping the ordering of two buttons--a PDF reader and a new "enhanced reader". We set \(T=1\) as the setting where the "enhanced reader" is displayed first. The outcome of interest is a user clicking (\(Y=1\)) or not clicking (\(Y=0\)) on the enhanced reader button; the former action transports the user to a different webpage that provides a more interactive view of the publication. The RCT suggests that the treatment has a positive causal effect, with an ATE of 0.113 computed using a simple difference of conditional means in the treated and untreated populations. See Appendix D for more details about the RCT.

### Consideration #1: Checking necessary precondition \(C\not\perp\!\!\!\perp Y\)

As we mentioned in Section 3, a necessary precondition for RCT subsampling in general is the existence of a causal edge between \(C\) and \(Y\), i.e. \(C\not\perp\!\!\!\perp Y\). The relationship between \(C\) and \(Y\) is naturally occurring (not modified by evaluation designers), and the amount of confounding induced by sampling is, in part, contingent on this relationship (Gentzel et al., 2021). One can empirically check \(C\not\perp Y\) via independence tests, e.g., evaluating the odds ratio when both \(C\) and \(Y\) are binary variables. If there do not exist covariates \(C\) such that \(C\not\perp\!\!\!\perp Y\), one cannot move forward in the evaluation pipeline using RCT subsampling.
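When both variables are binary, this check can be as simple as a \(2\times 2\) contingency-table odds ratio; a sketch (a formal independence test could be substituted):

```python
import pandas as pd

def odds_ratio(c: pd.Series, y: pd.Series) -> float:
    # OR = 1 under independence; values far from 1 indicate C and Y are dependent.
    tab = pd.crosstab(c, y)
    return (tab.loc[1, 1] * tab.loc[0, 0]) / (tab.loc[1, 0] * tab.loc[0, 1])
```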
#### 4.2.1 Proof of concept approach

For our proof of concept, we use a subpopulation strategy to ensure the precondition \(C\not\perp\!\!\!\perp Y\) is satisfied. We choose a single interpretable covariate to induce confounding: the field of study of manuscripts. Since \(C\perp\!\!\!\perp Y\) if and only if the odds ratio between \(C\) and \(Y\) is 1, we choose subpopulations of the full RCT that have a high odds ratio between a subset of the categorical field of study variable and the outcome \(Y\). Specifically, we choose \(C\) to be a binary covariate representing one of two fields of study; for _Subpopulation A_, the field is either Physics or Medicine. In Appendix G, we implement the evaluation pipeline for an additional subpopulation with \(C\) chosen as the articles with Engineering or Business as the field of study. Substantively, one can interpret this high odds ratio as natural differences in click rates from users viewing articles from different fields of study. We combine this subpopulation strategy with a proxy strategy in the next section to ensure that the estimation procedures only have access to high-dimensional covariates instead of our low-dimensional \(C\). This has the benefit of simplifying the process of specifying \(P^{*}(T\mid C)\) while ensuring that downstream modeling must still contend with high-dimensional adjustment.

### Consideration #2: Specification of \(P^{*}(T|C)\)

Evaluation designers using RCT rejection sampling have one degree of freedom: specification of \(P^{*}(T|C)\). We describe one specific approach to choosing \(P^{*}(T|C)\) in our proof of concept, but we anticipate that evaluation designers using RCT rejection sampling to create large empirical benchmarks may want to include many different parameterizations of \(P^{*}(T|C)\) to evaluate the empirical performance of methods under numerous settings. Consideration #3 describes approaches to diagnosing the choice of \(P^{*}(T|C)\) for a specific finite dataset.

Figure 2: Proof of concept approach. **Left figure.** Causal DAG for the proxy strategy. The blue edges are confirmed to empirically exist in the finite dataset. The red edge is selected by the evaluation designers via \(P^{*}(T|C)\). **Right table.** RCT dataset descriptive statistics, including the number of units in the population/subpopulation (\(n\)) and the odds ratio, \(OR(C,Y)\).

#### 4.3.1 Proof of concept approach

**Proxy strategy.** In Section 3, we briefly mentioned that specifying a suitable confounding function \(P^{*}(T\mid C)\) may be difficult when \(C\) is high-dimensional. A key property of our RCT is that it has high-dimensional text data, \(X\), that is a proxy (with almost perfect predictive accuracy) for low-dimensional structured metadata--categories of scientific articles, e.g., Physics or Medicine. We use this structured metadata as the covariates \(C\) in our RCT rejection sampler, but provide the causal estimation methods only \(X,Y\) and \(T\). Note that, as evaluation designers, we still have access to \(C\) to run diagnostics. This proxy strategy helps simplify the specification of \(P^{*}(T\mid C)\) we use in the rejection sampler and avoids direct specification of a function involving high-dimensional covariates. Others have used similar proxy strategies for text in semi-synthetic evaluations, e.g., Roberts et al. (2020); Veitch et al. (2020). Such a technique may also be applied in other RCTs, e.g., healthcare studies where one or two important biomarkers serve as low-dimensional confounding variables, and the larger electronic health record data serves as the proxy \(X\).
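Equation 4 itself is not reproduced in this excerpt. Based on the \(\zeta_{0},\zeta_{1}\) parameterization reported with the results (e.g., \(\zeta_{0}=0.85\), \(\zeta_{1}=0.15\)), one plausible form for a binary confounder, stated here as an assumption rather than the paper's exact definition, is \(P^{*}(T=1\mid C=c)=\zeta_{c}\):

```python
def p_star_t1_given_c(c: int, zeta0: float = 0.85, zeta1: float = 0.15) -> float:
    """Assumed form of equation 4: P*(T=1 | C=c) = zeta_c for binary c.

    Positivity holds whenever 0 < zeta0, zeta1 < 1, and condition (I) is
    non-trivial whenever zeta0 != zeta1.
    """
    return zeta1 if c == 1 else zeta0
```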
**Using \(X\).** For each document \(i\), the high-dimensional covariate vector \(X_{i}\) is a bag-of-words representation of the document's concatenated title and abstract, given a 2000-unigram vocabulary. A new vocabulary is created for each RCT subpopulation; see Appendix E for details. We check that there is high predictive accuracy of \(P(C|X)\) to ensure the plausibility of causal estimation models only having access to \(X\).9 To measure this predictive accuracy, we model \(P(C|X)\) with a logistic regression10 classifier. Averaged across held-out test folds, the F1 score is 0.98 and the average precision is 0.99 for Subpopulation A (Physics, Medicine).

Footnote 10: Using scikit-learn (Pedregosa et al., 2011) and an elasticnet penalty, L1 ratio 0.1, class-weight balanced, and the SAGA solver. We tune the regularization parameter \(C\) via cross-validation over the set \(C\in\{10^{-4},10^{-3},10^{-2},10^{-1},10^{0},10^{1}\}\).

### Consideration #3: Diagnosing the choice of \(P^{*}(T|C)\)

#### 4.4.1 Proof of concept approach

For our proof of concept pipeline, we step through the following diagnostics on the sampling procedure. Recall the notation from Section 3: \(D_{\text{RCT}}\) is our RCT dataset (here, from Subpopulation A) and \(D_{\text{OBS}}\) is the resulting observational dataset after RCT rejection sampling. First, we check empirically that overlap is satisfied for \(C\) in \(D_{\text{OBS}}\), i.e., \(0<\hat{P}(T=1|C=c)<1\) for all \(c\). Second, in Figure 3 we compare the amount of confounding induced to the error in the oracle adjustment for different \(\zeta_{0},\zeta_{1}\) in Equation 4 across 100 random seeds (blue dots). On the y-axis, we plot the absolute difference between the ATE for \(D_{\text{RCT}}\) (GoldATE) and exact backdoor estimates11 obtained from using the oracle adjustment set \(C\). On the x-axis, we plot the absolute difference between the ATE on \(D_{\text{RCT}}\) (GoldATE) and the unadjusted naive estimate on \(D_{\text{OBS}}\). In general, we want more samples to fall below the \(y=x\) line (shown in red), since this means that more confounding was induced than there is error in estimation. Samples above the \(y=x\) line have more error from the sampling process than the amount of confounding induced, and thus are not useful in benchmarking methods that adjust for confounding. Of the settings in Figure 3, \(\zeta_{0}=0.85\) and \(\zeta_{1}=0.15\) had the best proportion of sampled datasets lying below the red line, so we choose these parameters for our proof of concept pipeline. We leave to future work providing more guidance on choosing settings of \(P^{*}(T|C)\) for a comprehensive benchmark.

Footnote 11: The exact backdoor estimate is tractable because \(T\), \(C\) and \(Y\) are all binary. We leave to future work correcting for measurement error with noisy proxies \(X\); see Wood-Doughty et al. (2018).
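Per sampled dataset, the Figure 3 diagnostic reduces to two numbers; a sketch reusing the estimator helpers from Section 3 (a useful seed is one where the oracle error is smaller than the induced confounding):

```python
def sampling_diagnostic(d_rct, d_obs):
    gold = ate_naive(d_rct)                       # GoldATE from D_RCT (equation 3)
    confounding = abs(gold - ate_naive(d_obs))    # x-axis of Figure 3
    oracle_err = abs(gold - ate_backdoor(d_obs))  # y-axis of Figure 3
    return confounding, oracle_err                # want oracle_err < confounding
```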
### Consideration #4: Modeling

The primary goal of this work is to create clear steps to follow during the evaluation design phase. Although this stage precedes a thorough modeling effort, we recommend that one runs baseline models to check for potential issues.

#### 4.5.1 Proof of concept approach

As a proof of concept, we apply baseline causal estimation models to the resulting \(D_{\text{OBS}}\) datasets after RCT rejection sampling (with many random seeds), as we mention above. We implement12 commonly-used causal estimation methods via two steps: (1) fitting base learners and (2) using causal estimators that combine the base learners via plug-in principles or second-stage regression. We took care to ensure we use the _same_ pre-trained base learners--same functional form and learned weights--as inputs into any appropriate causal estimator.

Footnote 12: We attempted to use the EconML package (Microsoft Research, 2019), but as of this writing it did not support using sparse matrices, which is required for our high-dimensional datasets.

**Base learners.** We implement base learners for:13

\[Q_{T_{0}}(x):=\mathbb{E}[Y|T=0,X=x] \tag{5}\]
\[Q_{T_{1}}(x):=\mathbb{E}[Y|T=1,X=x] \tag{6}\]
\[g(x):=P(T=1|X=x) \tag{7}\]
\[Q_{X}(x):=\mathbb{E}[Y|X=x] \tag{8}\]

In our application both \(T\) and \(Y\) are binary variables, so we use an ensemble of gradient boosted decision trees (catboost)14 and logistic regression15 for our base learners. We fit our models using cross-fitting (Hansen, 2000; Newey & Robins, 2018) and cross-validation; see Appendix F for more details.

Footnote 13: Note that for a binary outcome \(Y\), we can rewrite the above equations with probabilities, as \(\mathbb{E}[Y\mid\cdot]=P(Y=1\mid\cdot)\).

Footnote 14: Using CatBoost (Dorogush et al., 2018) with default parameters and without cross-validation.

Footnote 15: Using scikit-learn (Pedregosa et al., 2011) and an elasticnet penalty, L1 ratio 0.1, balanced class weights, and the SAGA solver. We tune the regularization parameter \(C\) via cross-validation over the set \(C\in\{10^{-4},10^{-3},10^{-2},10^{-1},10^{0},10^{1}\}\).

**Causal estimators.** After training base learners with cross-fitting, we implement the following plug-in causal estimators: backdoor adjustment (outcome regression) (Q), inverse propensity of treatment weighting (IPTW), and augmented inverse propensity of treatment weighting (AIPTW) (Robins et al., 1994). We also use DoubleML (Chernozhukov et al., 2018), which applies ordinary least squares to residuals from the base learners. See Appendix F for exact estimation equations.
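As one example of how the base learners are combined, here is a sketch of the AIPTW estimate from cross-fitted (out-of-fold) predictions; the exact estimation equations are in Appendix F, and the propensity clipping is our own stabilizing assumption rather than part of the paper's specification.

```python
import numpy as np

def aiptw_ate(y, t, q0, q1, g, eps=0.01):
    """Doubly robust AIPTW estimate of the ATE (Robins et al., 1994).

    y, t   -- outcomes and binary treatments
    q0, q1 -- out-of-fold predictions of E[Y | T=0, X] and E[Y | T=1, X]
    g      -- out-of-fold propensity predictions P(T=1 | X)
    """
    g = np.clip(g, eps, 1 - eps)             # clip for numerical stability (assumption)
    mu1 = q1 + t * (y - q1) / g              # augmented estimate of E[Y | do(T=1)]
    mu0 = q0 + (1 - t) * (y - q0) / (1 - g)  # augmented estimate of E[Y | do(T=0)]
    return float(np.mean(mu1 - mu0))
```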
**Modeling results.** Table 3 shows results for Subpopulation A. Since both \(T\) and \(Y\) are binary, we report average precision (AP) for the base learners on both the training and inference folds; this metric is only calculated for observed (not counterfactual) observations. We also report the relative absolute error (RAE) between estimators on \(D_{\text{OBS}}\) and the ATE for \(D_{\text{RCT}}\). Comparing predictive base learners, the propensity score model, \(g(x)\), has much higher AP on inference folds than models that involve the outcome. As we previously mentioned, RCT subsampling allows us to set the relationship between \(X\) (via proxy for \(C\)) and \(T\), but not \(X\) and \(Y\), so the low AP for outcome models could reflect the difficulty in estimating this "natural" relationship between \(X\) and \(Y\). See Section 4.6.1 for additional discussion on low average precision for the outcome models (\(Q\)). For causal estimators, we see that the doubly robust estimator AIPTW using catboost has the lowest estimation error--on par with estimates obtained using the oracle backdoor adjustment set \(C\). It appears the doubly robust estimators using linear models do not recover from the poor predictive performance of the outcome models, and IPTW is better in this setting.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
 & \multicolumn{2}{c}{\(\hat{g}(x)\)} & \multicolumn{2}{c}{\(\hat{Q}_{T_{0}}(x)\)} & \multicolumn{2}{c}{\(\hat{Q}_{T_{1}}(x)\)} & \multicolumn{2}{c}{\(\hat{Q}_{X}(x)\)} \\
**Prediction Ave. Prec. (\(\uparrow\) better)** & train & inference & train & inference & train & inference & train & inference \\
\hline
linear & 0.85 (0.07) & 0.59 (0.02) & 0.63 (0.32) & 0.03 (0.01) & 0.85 (0.17) & 0.13 (0.03) & 0.71 (0.2) & 0.06 (0.01) \\
catboost (nonlinear) & 0.57 (0.0) & 0.5 (0.02) & 1.0 (0.0) & 0.03 (0.01) & 0.99 (0.01) & 0.13 (0.02) & **0.98 (0.01)** & 0.05 (0.01) \\
\hline \hline
\end{tabular}
\begin{tabular}{l c c c c c c}
\hline \hline
**Causal Rel. Abs. Error (\(\downarrow\) better)** & Unadjusted (baseline) & Backdoor \(C\) (oracle) & \(\hat{\tau}_{Q}\) & \(\hat{\tau}_{\text{IPTW}}\) & \(\hat{\tau}_{\text{AIPTW}}\) & \(\hat{\tau}_{\text{DML}}\) \\
\hline
linear & 0.21 (0.08) & 0.12 (0.09) & 1.0 (0.04) & 0.24 (0.01) & 0.19 (0.01) & 0.19 (0.01) \\
catboost (nonlinear) & 0.21 (0.08) & 0.12 (0.09) & 0.24 (0.01) & 0.14 (0.11) & 0.11 (0.1) & 0.13 (0.1) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Modeling results for Subpopulation A. **Top:** Predictive models' average precision (ave. prec.) for training (yellow) and inference (green) data splits. **Bottom:** Causal estimation models' relative absolute error (rel. abs. error) between the models' estimated ATE and the RCT ATE. Here, darker shades of red indicate worse causal estimates. Baselines, the unadjusted conditional mean on the samples (unadjusted) and backdoor adjustment with the oracle \(C\) (backdoor C), are uncolored. We use two base learner settings: linear and catboost (nonlinear). We report both the average and standard deviation (in parentheses) over 100 random seeds during sampling. All settings use \(P^{*}(T|C)\) in equation 4 parameterized by \(\zeta_{0}=0.85\), \(\zeta_{1}=0.15\).

Though it seems counterintuitive that the linear and catboost models have similar predictive performance but large differences in causal estimation error, this discrepancy between predictive performance and causal estimation error is consistent with theoretical results on fitting nuisance models in causal inference (Tsiatis, 2007; Shortreed and Ertefaie, 2017) and empirical results from semi-synthetic evaluation (Shi et al., 2019; Wood-Doughty et al., 2021). Although we used best practices from machine learning to fit our base learners, these results suggest future work is needed to adapt machine learning practices to the goals of causal estimation.

### Consideration #5: Additional finite data challenges

The broader purpose of this line of empirical evaluation is to understand the real-world settings for which certain estimation approaches are successful or unsuccessful. We believe bridging the gap between theory that holds asymptotically and finite data is important for drawing valid causal inference, but evaluation designers might encounter unforeseen challenges particular to finite data that need to be examined carefully.

#### 4.6.1 Proof of concept approach

In our proof of concept pipeline, we hypothesize the outcome models have very low average precision (Table 3) because of finite data issues with class imbalance. In particular, for Subpopulation A, 82% of our data is \(C=1\) and \(\mathbb{E}[Y]=0.07\), so there are few examples to learn from in the smallest category (\(C=0\), \(Y=1\)): only 34 documents.
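This diagnosis amounts to tabulating support in each \((C,Y)\) cell of the subsampled data, e.g.:

```python
import pandas as pd

def cell_support(d_obs: pd.DataFrame) -> pd.DataFrame:
    # Counts per (C, Y) cell; very sparse cells (here, 34 documents for
    # C=0, Y=1 in Subpopulation A) warn that outcome models may be unstable.
    return pd.crosstab(d_obs["C"], d_obs["Y"])
```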
This lack of support shows that even though our RCT subpopulation is relatively large (roughly 4k units) compared to other real-world RCTs, there are challenges with having sufficient support.

## 5 Discussion and Future Work

Unlike predictive evaluation, empirical evaluation for causal estimation is challenging and still at a nascent stage. In this work, we argue that one of the most promising paths forward is to use an RCT subsampling strategy, and to this line of work we contribute an RCT rejection sampler with theoretical guarantees. We showed the utility of this algorithm in a proof of concept pipeline with a novel, real-world RCT to empirically evaluate high-dimensional backdoor adjustment methods.

Of course, there are critics of empirical evaluation. Hernán (2019) pejoratively compared a competition for causal estimation to evaluating "spherical cows in a vacuum" and claimed this discounted necessary subject-matter expertise. Even in the machine learning community, researchers warn against "mindless bake-offs" of methods (Langley et al., 2011), and in some cases the creation of benchmarks has led to the community overfitting to benchmarks, e.g. Recht et al. (2019). However, in the absence of theory, or when theoretical assumptions do not match reality, we see empirical evaluation as a necessary, but not exclusive, part of the broader field of causal inference.

A fruitful future direction is for evaluation designers to use our RCT rejection sampler to create comprehensive benchmarks for various phenomena of interest: not only high-dimensional confounding but also heterogeneous treatment effects, unmeasured confounding, missing data, etc. This would involve gathering more RCTs and establishing interesting ways to set \(P^{*}(T|C)\). Our proof of concept evaluation pipeline demonstrated the utility of RCT subsampling, but there were many avenues we chose not to pursue, such as measuring confidence interval coverage, measurement error, null causal effects, or moving to more sophisticated natural language processing approaches beyond bag-of-words, e.g. the CausalBERT model (Veitch et al., 2020). In another direction, applied practitioners need guidance on which causal estimation method to use given their specific observational data. Although other work has attempted to link observational data to experimental data (real or synthetic) in which the ground-truth is known (Neal et al., 2020; Kallus et al., 2018), we believe RCT subsampling could help with meta-analyses of which combinations of techniques work best under which settings. Overall, we see this work as contributing to a much larger research agenda on empirical evaluation for causal estimation.

#### Broader Impact Statement

We conducted this research with ethical due diligence. Our real-world RCT dataset was implemented by owners of the online platform and in full compliance with the platform's user agreement. The platform owners gave us explicit permission to use and access this dataset. Our dataset contains paper titles and abstracts, which are already publicly available from many sources, and we have removed any potentially personally identifiable information from the dataset, e.g. author names, user ids, user IP addresses, or session ids. By releasing this data, we do not anticipate any harm to authors or users. Like any technological innovation, our proposed RCT rejection sampling algorithm and evaluation pipeline have the potential for dual use: to either benefit or harm society, depending on the actions of the humans using the technology.
We anticipate there could be substantial societal benefit from more accurate estimation of causal effects of treatments in the medical or public policy spheres. However, other applications of causal inference could potentially harm society by controlling or manipulating individuals. Despite these tradeoffs in downstream applications, we feel strongly that this paper's contributions will result in a net overall benefit to the research community and society at large.

## Author Contributions

KK conceived the original idea of the project and managed the project. RB contributed the ideas behind Algorithm 1 as well as the proofs in Section 3 and the Appendix. RB and KK implemented the synthetic experiments in Section 3. KK gathered and cleaned the data for the proof of concept pipeline in Section 4. KK and SF implemented the proof of concept empirical pipeline in Section 4. KK and RB wrote the first draft of the manuscript. KK, SF, DJ, JB, and RB guided the research ideas and experiments and edited the manuscript.

### Acknowledgments

The authors gratefully thank David Jensen, Amanda Gentzel, Purva Pruthi, Doug Downey, Brandon Stewart, Zach Wood-Doughty and Jacob Eisenstein for comments on earlier drafts of this manuscript. The authors also thank anonymous reviewers from ICML for helpful comments. Special thanks to the Semantic Scholar team at the Allen Institute for Artificial Intelligence for help gathering the real-world RCT dataset.
2307.12619
Sparse annotation strategies for segmentation of short axis cardiac MRI
Short axis cardiac MRI segmentation is a well-researched topic, with excellent results achieved by state-of-the-art models in a supervised setting. However, annotating MRI volumes is time-consuming and expensive. Many different approaches (e.g. transfer learning, data augmentation, few-shot learning, etc.) have emerged in an effort to use fewer annotated data and still achieve similar performance as a fully supervised model. Nevertheless, to the best of our knowledge, none of these works focus on which slices of MRI volumes are most important to annotate for yielding the best segmentation results. In this paper, we investigate the effects of training with sparse volumes, i.e. reducing the number of cases annotated, and sparse annotations, i.e. reducing the number of slices annotated per case. We evaluate the segmentation performance using the state-of-the-art nnU-Net model on two public datasets to identify which slices are the most important to annotate. We have shown that training on a significantly reduced dataset (48 annotated volumes) can give a Dice score greater than 0.85 and results comparable to using the full dataset (160 and 240 volumes for each dataset respectively). In general, training on more slice annotations provides more valuable information compared to training on more volumes. Further, annotating slices from the middle of volumes yields the most beneficial results in terms of segmentation performance, and the apical region the worst. When evaluating the trade-off between annotating volumes against slices, annotating as many slices as possible instead of annotating more volumes is a better strategy.
Josh Stein, Maxime Di Folco, Julia Schnabel
2023-07-24T08:49:20Z
http://arxiv.org/abs/2307.12619v1
# Sparse annotation strategies for segmentation of short axis cardiac MRI

###### Abstract

Short axis cardiac MRI segmentation is a well-researched topic, with excellent results achieved by state-of-the-art models in a supervised setting. However, annotating MRI volumes is time-consuming and expensive. Many different approaches (e.g. transfer learning, data augmentation, few-shot learning, etc.) have emerged in an effort to use fewer annotated data and still achieve similar performance as a fully supervised model. Nevertheless, to the best of our knowledge, none of these works focus on _which_ slices of MRI volumes are most important to annotate for yielding the best segmentation results. In this paper, we investigate the effects of training with sparse volumes, i.e. reducing the number of cases annotated, and sparse annotations, i.e. reducing the number of slices annotated per case. We evaluate the segmentation performance using the state-of-the-art nnU-Net model on two public datasets to identify which slices are the most important to annotate. We have shown that training on a significantly reduced dataset (48 annotated volumes) can give a Dice score greater than 0.85 and results comparable to using the full dataset (160 and 240 volumes for each dataset respectively). In general, training on more slice annotations provides more valuable information compared to training on more volumes. Further, annotating slices from the middle of volumes yields the most beneficial results in terms of segmentation performance, and the apical region the worst. When evaluating the trade-off between annotating volumes against slices, annotating as many slices as possible instead of annotating more volumes is a better strategy.

Keywords: Cardiac MRI; Segmentation; Sparse annotations

## 1 Introduction

Cardiac image segmentation constitutes a fundamental initial phase in various applications. Segmentation is often the first step in evaluating cardiac functionality in order to diagnose disease. By partitioning the image into distinct, semantically meaningful regions, typically aligned with anatomical structures, it facilitates the extraction of quantitative measures critical for further analysis and interpretation [1]. Deep learning methods have become the state-of-the-art approach for this task, but they require the collection and annotation of data, which are time-consuming and laborious processes.

Much effort has been spent improving methods that require fewer ground truth annotations. Some popular approaches include data augmentation [2], transfer learning [3]-[5], semi-supervised learning [6], [7], and self-supervised learning [8], [9]; many others exist [10]. In general, the data used for these methods are _sparse_ or _limited_. The exact definition varies from context to context and usually falls into one of three broad categories [10]. First, the annotation of data volumes could be sparse, where only particular patients may be annotated. Second, the annotation of slices within volumes could be sparse: instead of having a fully annotated 3D volume, only particular slices within the volume may be annotated. Third, there could be sparsity in the slice annotations themselves: instead of having a pixel-accurate ground truth annotation, there may be bounding boxes, scribbles, or particular labelled points. These categories are not mutually exclusive; for example, there may be sparse slices that are sparsely annotated.
In cardiac imaging, segmentation of the short-axis view on MRI data has been well studied, thanks to public segmentation challenges [11], [12]. Nowadays, we consider fully supervised short-axis cMRI segmentation a well-researched task, with state-of-the-art approaches surpassing human performance. In this paper, we purposely choose not to use any approaches designed for sparse data, in order to answer the following questions, which are still unclear in the literature:

1. How much annotated data is needed to achieve reasonable results with the state-of-the-art nnU-Net?
2. Which cardiac regions (basal, mid or apical) contribute the most to segmentation performance?
3. Is there a particular annotation strategy one should prefer: annotating more volumes or annotating more slices per volume?

In this work, we investigate the effects on segmentation performance, on two public datasets [11], [12], of removing volumes (reducing the number of annotated cases), removing slice annotations (both randomly and from particular cardiac regions), and the balance between these two.

## 2 Related work

Recent works on cardiac imaging segmentation have focused on using sparse annotations while still achieving results that compare to using a fully annotated dataset. Bitarafan et al. [13] use a single annotated 2D slice with registration and self-training to propagate and train on label propagations. They achieve approximately a 10% reduction in Dice score using a single annotation compared to using a fully annotated volume. Bai et al. [14] also use label propagation, in combination with a recurrent neural network (RNN), to incorporate both spatial and temporal information. Using two annotated frames, they are able to out-compete a baseline U-Net model (trained on all available annotated frames). Contrastive learning strategies have also been explored. Zeng et al. [9] use a contrastive loss between slice positions in a self-supervised pre-training stage and achieve a Dice score similar to a fully annotated dataset using only 15 annotated volumes (the achieved Dice score is 3% lower compared to using the fully annotated set). You et al. [15] present a contrastive semi-supervised 2D medical segmentation framework for very limited annotations and accomplish a Dice score of 0.82 using only 1% of labels, and similar performance to the fully annotated set using only 10% of the labels.

## 3 Methods

### Segmentation network

We use nnU-Net [16], the current state-of-the-art model for cardiac segmentation (achieving first place in the ACDC and M&Ms challenges). We do not modify the standard nnU-Net processing pipeline, except to change the data sampling strategy. We evaluate on mean foreground Dice score, Hausdorff Distance (HD) and Mean Absolute Distance (MAD). We evaluate both 3D and 2D nnU-Net models. For both datasets, we evaluate only the 3D high-resolution model (i.e. not the low-resolution or cascaded model). The reader is referred to the original nnU-Net paper [16] for clarification on the differences between these models and their processing pipelines. The 2D models are trained by sampling a slice from a given volume.

### Definition of sparsity

For each dataset, we investigate the effects of removing volumes (i.e. enforcing sparsity of volumes), zeroing out slices (i.e. enforcing sparsity of slices), and the balance between these two.

_Sparse volumes:_ To investigate the sparsity of volumes, we randomly sample a percentage of the total patient cardiac volumes, which are then used for training. By iteratively training on smaller samples of the dataset, we aim to determine how many annotated volumes are needed to achieve results comparable to training on the full dataset.
_Sparse slices:_ We investigate slice sparsity by sampling and training on several slices from within a volume. All non-sampled slices are zeroed, which allows us to maintain the volume shape. This prevents the need to modify network parameters, which would otherwise need to continuously adapt based on the input volume. Slices can either be sampled randomly (from within the entire volume) or explicitly sampled from the apical, mid-ventricular, and/or basal regions. We assume that each volume is split into equal thirds, where the first third corresponds to apical slices, the second to mid-ventricular slices and the third to basal slices. Sampling from various permutations of these regions allows us to investigate which (if any) regions are most important for segmentation performance.

_Annotation strategy:_ Finally, we investigate the balance and relative importance of sparse volumes vs. sparse slices by sampling a percentage of the volumes and then randomly sampling different percentages of slices from within the sampled volumes. A sketch of both sampling strategies is given below.
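The following NumPy sketch illustrates both sampling strategies for a single volume of shape (slices, H, W); the equal-thirds region split follows the description above, while function and variable names are our own.

```python
import numpy as np

REGIONS = {"apical": (0.0, 1 / 3), "mid": (1 / 3, 2 / 3), "basal": (2 / 3, 1.0)}

def sparsify_slices(volume, regions=("apical", "mid", "basal"), n_slices=None, seed=0):
    """Zero all slices of a (slices, H, W) volume except those sampled from
    the requested cardiac regions, keeping the volume shape fixed."""
    rng = np.random.default_rng(seed)
    num = volume.shape[0]
    candidates = []
    for region in regions:                        # equal thirds along the long axis
        lo, hi = REGIONS[region]
        candidates.extend(range(int(lo * num), int(hi * num)))
    if n_slices is not None:                      # random sub-sampling within regions
        candidates = sorted(rng.choice(candidates, size=n_slices, replace=False))
    sparse = np.zeros_like(volume)                # non-sampled slices stay zeroed
    sparse[candidates] = volume[candidates]
    return sparse

def sample_volumes(case_ids, fraction, seed=0):
    """Sparse volumes: keep only a random fraction of annotated cases."""
    rng = np.random.default_rng(seed)
    k = max(1, int(fraction * len(case_ids)))
    return list(rng.choice(case_ids, size=k, replace=False))
```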
## 4 Experiments and results

### Dataset

We use the Automatic Cardiac Diagnosis Challenge (ACDC) [11] and the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation Challenge (M&Ms) [12]. These datasets are popular in the literature, and have been used to investigate a variety of supervised and unsupervised segmentation methods. They are both inherently sparse (across volumes), as they provide annotations at only the end-diastolic and end-systolic phases.

ACDC has a set of 100 training cases, each of which has two fully annotated volumes (one at end-diastole, one at end-systole). We train on all 200 volumes, using 160 volumes for training and the remaining 40 volumes for validation. The test set is composed of 100 cases, each of which is again fully annotated at end-diastole and end-systole. After nnU-Net preprocessing transformations, all volumes have 20 slices. The total number of available training slices is therefore \(20\times 160=3200\).

Similarly, M&Ms has a set of 150 training cases, each of which has end-diastole and end-systole volumes annotated. We split the resulting training volumes into 240 for training and 60 for validation. The test set is composed of 136 cases (each case again has end-diastole and end-systole annotated). After nnU-Net preprocessing transformations, there are 14 slices per volume; the total number of available training slices is therefore \(14\times 240=3360\). The M&Ms dataset contains images from different vendors and centers. We keep the same dataset split as described in [12]. Please refer to the corresponding paper for details on the acquisition protocol.

### Sparse volumes

The results of using a reduced dataset for both ACDC and M&Ms are shown in Table 1. First, we observe that the networks trained on ACDC generally outperform those trained on M&Ms. We believe this is due to the different MRI domains present in the M&Ms dataset. Second, we note that using more than 48 volumes (approximately 30% of the ACDC dataset, and 20% of the M&Ms dataset), regardless of dataset or model dimensionality, yields a Dice score greater than 0.85. However, using fewer volumes leads to increases in the corresponding HD and MAD scores, and a decrease in Dice scores. Further, using fewer than 48 volumes leads to worse performance for networks trained on ACDC data compared to those trained on M&Ms. However, when the number of volumes is severely restricted (e.g. using 8 volumes), we see similarly poor performance for both datasets. Finally, we note that the difference in performance between 2D and 3D networks is more pronounced for networks trained on ACDC than those trained on M&Ms. This is especially true when considering the difference in surface distance metrics between 2D and 3D networks. We conclude that having a variety of domains within the M&Ms dataset makes it more difficult to achieve higher Dice scores, while simultaneously allowing for better generalisation when removing annotated volumes.

### Sparse annotations

The results of training 3D nnU-Net only on particular cardiac regions are shown in Table 2. As expected, the best performance is achieved by using all three cardiac regions (i.e. the most slices). Using only two regions, all combinations achieve a Dice score greater than 0.8, except for the network trained on the apical and basal combination on M&Ms data, which achieves 0.79. We also observe that training on combinations using mid-ventricular slices (i.e. apical and mid, or basal and mid combinations) yields the best-performing networks. The worst performance is achieved by networks trained on only a single cardiac region (the network trained solely on the apical region of M&Ms data performs very poorly). The best-performing single-region networks are those trained on the basal region for ACDC data and the middle region for M&Ms data.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & & & \multicolumn{8}{c|}{Number of training volumes} \\
\hline
Network & Dataset & Evaluation metric & 8 & 24 & 32 & 48 & 80 & 160 & 192 & 240 \\
\hline
\multirow{3}{*}{2D nnU-Net} & \multirow{3}{*}{ACDC} & Dice & 0.62 & 0.71 & 0.74 & 0.85 & 0.89 & **0.91** & - & - \\
 & & HD (mm) & 36.03 & 22.15 & 18.56 & 14.61 & 7.20 & **5.06** & - & - \\
 & & MAD (mm) & 10.16 & 5.67 & 8.95 & 2.81 & 1.65 & **1.16** & - & - \\
\hline
\multirow{3}{*}{3D nnU-Net} & \multirow{3}{*}{ACDC} & Dice & 0.57 & 0.66 & 0.71 & 0.85 & 0.85 & **0.91** & - & - \\
 & & HD (mm) & 54.93 & 43.63 & 29.34 & 8.81 & 7.75 & **4.4** & - & - \\
 & & MAD (mm) & 18.03 & 13.61 & 8.95 & 2.17 & 1.94 & **1.16** & - & - \\
\hline
\multirow{3}{*}{2D nnU-Net} & \multirow{3}{*}{M\&Ms} & Dice & 0.6 & 0.82 & 0.83 & 0.85 & 0.86 & 0.86 & **0.87** \\
 & & HD (mm) & 32.43 & 9.3 & 9.42 & 8.81 & 7.75 & 6.84 & 6.87 & **6.54** \\
 & & MAD (mm) & 8.9 & 2.39 & 2.38 & 2.17 & 1.94 & 1.74 & 1.74 & **1.74** \\
\hline
\multirow{3}{*}{3D nnU-Net} & \multirow{3}{*}{M\&Ms} & Dice & 0.54 & 0.82 & 0.82 & 0.84 & 0.85 & 0.86 & 0.86 & **0.87** \\
 & & HD (mm) & 37.41 & 9.1 & 8.86 & 6.98 & 6.44 & 5.89 & **5.8** & 6.02 \\
 & & MAD (mm) & 11.26 & 2.25 & 2.32 & 1.79 & 1.65 & 1.56 & **1.53** & 1.6 \\
\hline
\end{tabular}
\end{table}
Table 1: Effect of training on sparse annotated volumes. Note that the ACDC dataset only has 160 volumes.

We then train networks on randomly sampled slices from all three cardiac regions, the results of which are shown in Table 3. The aforementioned cardiac regions each correspond to using one third of all slices. For ACDC, a single cardiac region corresponds to sampling approximately 6 slices and two cardiac regions to approximately 13 slices; the corresponding numbers for M&Ms are approximately 5 and 10 slices, respectively.
Randomly sampling a third of the available slices yields better results than sampling from any single cardiac region (although there is still a similar drop in performance when using a limited number of slices). Randomly sampling two thirds of the available slices yields results similar to sampling from either the apical and middle slices or the middle and basal slices, which in turn is similar to using the full set of available slices. We also observe a Dice score greater than 0.8 when using only approximately 40% of slices (i.e. 8 slices for ACDC, 6 slices for M&Ms), with a slight decrease in the surface distance metrics. Annotating 10 slices is sufficient to achieve a Dice score above 0.85 for both datasets. This corresponds to half the slices annotated for ACDC and around 70% for M&Ms.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
 & & \multicolumn{7}{c|}{Cardiac regions trained on} \\
\hline
Dataset & Metric & A + M + B & A + M & M + B & A + B & A & M & B \\
\hline
\multirow{3}{*}{ACDC} & Dice & **0.91** & 0.88 & 0.89 & 0.81 & 0.54 & 0.5 & 0.69 \\
 & HD (mm) & **4.17** & 6.66 & 5.89 & 52.47 & 21.04 & 149.41 & 26.37 \\
 & MAD (mm) & **1.08** & 1.96 & 1.76 & 18.45 & 6.35 & 64.59 & 8.16 \\
\hline
\multirow{3}{*}{M\&Ms} & Dice & **0.87** & 0.84 & 0.86 & 0.79 & 0.08 & 0.69 & 0.38 \\
 & HD (mm) & **5.52** & 11.48 & 7.0 & 9.86 & 46.5 & 59.53 & 51.21 \\
 & MAD (mm) & **1.47** & 4.17 & 2.17 & 2.63 & 21.75 & 19.82 & 17.22 \\
\hline
\end{tabular}
\end{table}
Table 2: Effect of training on sparse annotated slices from different cardiac regions (A=apical slices, M=middle slices, B=basal slices).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & & \multicolumn{9}{c|}{Number of slices used for training} \\
\hline
Dataset & Metric & 1 & 2 & 4 & 6 & 8 & 10 & 14 & 16 & 20 \\
\hline
\multirow{3}{*}{ACDC} & Dice & 0.01 \(\pm\)0.01 & 0.28 \(\pm\)0.08 & 0.62 \(\pm\)0.14 & 0.77 \(\pm\)0.07 & 0.81 \(\pm\)0.07 & 0.87 \(\pm\)0.05 & 0.9 \(\pm\)0.02 & 0.9 \(\pm\)0.02 & **0.91** \(\pm\)0.02 \\
 & HD (mm) & 82.18 \(\pm\)18.83 & 24.79 \(\pm\)3.46 & 19.34 \(\pm\)7.77 & 10.83 \(\pm\)1.27 & 8.58 \(\pm\)0.9 & 6.88 \(\pm\)2.3 & 5 \(\pm\)1.75 & 4.78 \(\pm\)1.0 & **4.63** \(\pm\)1.6 \\
 & MAD (mm) & 33.09 \(\pm\)4.65 & 8.34 \(\pm\)1.21 & 6.18 \(\pm\)2.82 & 2.95 \(\pm\)0.15 & 2.39 \(\pm\)0.27 & 1.71 \(\pm\)0.43 & 1.3 \(\pm\)0.3 & 1.22 \(\pm\)0.22 & **1.2** \(\pm\)0.33 \\
\hline
\multirow{3}{*}{M\&Ms} & Dice & 0.01 \(\pm\)0.01 & 0.28 \(\pm\)0.1 & 0.68 \(\pm\)0.11 & 0.79 \(\pm\)0.08 & 0.83 \(\pm\)0.06 & 0.85 \(\pm\)0.04 & **0.86** \(\pm\)0.03 & - & - \\
 & HD (mm) & 76.06 \(\pm\)16.43 & 28.2 \(\pm\)6.05 & 14.1 \(\pm\)2.57 & 11.11 \(\pm\)3.33 & 7.58 \(\pm\)1.35 & 6.33 \(\pm\)0.76 & **5.64** \(\pm\)1.05 & - & - \\
 & MAD (mm) & 35.3 \(\pm\)3.47 & 10.31 \(\pm\)2.85 & 4.27 \(\pm\)0.68 & 3.82 \(\pm\)1.87 & 2.4 \(\pm\)0.76 & 1.67 \(\pm\)0.14 & **1.49** \(\pm\)0.2 & - & - \\
\hline
\end{tabular}
\end{table}
Table 3: Influence of training with randomly sampled and sparsely annotated slices from all three cardiac regions using 3D nnU-Net. Note that there are only 14 slices per volume for the M&Ms dataset.

### Sparse dataset vs sparse annotations

In this section, we investigate annotation strategies using different balances of reduced volume annotations and reduced slice annotations, while keeping the total number of annotated slices fixed. Table 4 shows the results for approximately 1400 slices annotated across both datasets.
For both networks, when keeping the total number of slices the same, better results are achieved when using more slices per volume. This is further observed in Tables 5 and 6, where we compare approximately 700 annotated slices per dataset. Again, we note that the best performance is achieved with more slices, even if a smaller number of volumes is annotated. We observe that, in general, the networks trained on ACDC perform better than those trained on M&Ms when using the same number of slices and volumes. Note that since the overall number of slices is quite similar for both datasets (3200 for ACDC, 3360 for M&Ms), the relative proportion for a given slice/volume trial is also similar. Finally, we note that ACDC seems to be more affected by using fewer volumes - that is, the drop in performance when halving the number of volumes (and keeping the number of slices fixed) is slightly larger compared to M&Ms. Despite this, we see the same overall pattern that using fewer slices leads to worse performance, even when using a severely restricted dataset. We note that using more than 48 volumes is sufficient to achieve a Dice score greater than 0.85, and using more than 80 volumes is comparable to using the full dataset (160 volumes in ACDC, 240 volumes in M&Ms). This corresponds to using half of all available volumes for ACDC (a total of 1600 slices), and one-third of available volumes for M&Ms (a total of 1120 slices). Further, experiments with sparse annotations demonstrate that using more than two-thirds of available slices yields results comparable to using the full set of available annotations. We observe that randomly sampling these slices from throughout all cardiac regions results in better performance than sub-sampling from particular regions. If two regions are sub-sampled, the middle region contributes the most to segmentation performance and allows better generalisation. The conclusion differs between the datasets when we sub-sample a single region. Nevertheless, as we expect, the apical region generalises (and performs) the worst due to differences in ventricular sizes compared to the middle and basal regions. Finally, we demonstrate the importance of using more slices relative to volumes. When we use the full set of available training volumes with a limited number of slices, we achieve only poor results. However, even when the number of volumes is reduced, good performance can still be achieved if there is a large number of slices to learn from. For both datasets, annotating upwards of 60% of the slices provides the best results. Therefore, we recommend annotating as many slices as possible in each volume instead of annotating more volumes with fewer slices. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Num slices & \multicolumn{2}{c|}{14} & \multicolumn{2}{c|}{12} & \multicolumn{2}{c|}{10} & \multicolumn{2}{c|}{9} & \multicolumn{2}{c|}{8} & \multicolumn{2}{c|}{6} \\ \hline Num cases & 100 & 50 & 120 & 60 & 144 & 77 & 160 & 80 & 192 & 96 & 240 & 120 \\ \hline Dice & **0.86** & 0.84 & 0.84 & 0.85 & 0.84 & 0.83 & 0.84 & 0.82 & 0.82 & 0.81 & 0.77 & 0.75 \\ HD (mm) & **6.1** & 7.14 & 6.72 & 6.3 & 7.07 & 6.92 & 6.93 & 7.79 & 8.17 & 7.77 & 9.28 & 12.11 \\ MAD (mm) & **1.58** & 1.79 & 1.86 & 1.67 & 1.93 & 1.83 & 1.91 & 2.22 & 2.34 & 2.15 & 2.75 & 3.76 \\ \hline \end{tabular} \end{table} Table 6: Influence of keeping the total number of annotated slices constant while reducing the number of training volumes. Trained on M&Ms with a 3D nnU-Net network. Standard deviation scores are removed for brevity. 
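To make the fixed-budget trade-off concrete, the small helper below (hypothetical names; a sketch only) enumerates how a fixed total of annotated slices can be split between slices per volume and number of volumes, as in the comparisons of Tables 5 and 6:

```python
def budget_configurations(total_slices: int, max_slices_per_volume: int):
    """Enumerate (slices per volume, number of volumes) pairs that spend
    roughly the same annotation budget, mirroring the constant-slice
    comparisons in Tables 5 and 6."""
    for s in range(max_slices_per_volume, 0, -1):
        yield s, total_slices // s

# ACDC-like setting: a ~700-slice budget with up to 20 slices per volume
for s, v in budget_configurations(700, 20):
    print(f"{s:2d} slices/volume -> {v:3d} volumes")
```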
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Num slices & \multicolumn{2}{c|}{20} & \multicolumn{2}{c|}{17} & \multicolumn{2}{c|}{14} & \multicolumn{2}{c|}{12} & \multicolumn{2}{c|}{10} & \multicolumn{2}{c|}{9} \\ \hline Num cases & 65 & 32 & 80 & 40 & 100 & 50 & 120 & 60 & 144 & 77 & 160 & 80 \\ \hline Dice & 0.87 & 0.7 & **0.89** & 0.84 & 0.87 & 0.87 & 0.88 & 0.87 & 0.87 & 0.85 & 0.83 & 0.78 \\ HD (mm) & 8 & 32.4 & **6.15** & 11.35 & 8.52 & 8.83 & 6.7 & 9.21 & 7.67 & 8.5 & 8.27 & 9.61 \\ MAD (mm) & 1.98 & 9.79 & **1.51** & 3.59 & 2.08 & 2.35 & 1.6 & 2.18 & 1.81 & 2.16 & 2.01 & 2.41 \\ \hline \end{tabular} \end{table} Table 5: Influence of keeping slices constant while reducing number of training volumes. Trained on ACDC with a 3D nnU-Net network. Standard deviation scores are removed for brevity. Future work will build on this baseline using state-of-the-art nnU-Net and compare different approaches for sparse annotations, such as transfer learning or semi-supervised learning, to evaluate the most appropriate strategy. Further, the study will be extended to include more datasets with different cardiac MRI views or modalities (e.g. ultrasound, CT).
2308.04826
WaveNeRF: Wavelet-based Generalizable Neural Radiance Fields
Neural Radiance Field (NeRF) has shown impressive performance in novel view synthesis via implicit scene representation. However, it usually suffers from poor scalability as it requires densely sampled images for each new scene. Several studies have attempted to mitigate this problem by integrating the Multi-View Stereo (MVS) technique into NeRF, but they still entail a cumbersome fine-tuning process for new scenes. Notably, the rendering quality drops severely without this fine-tuning process, and the errors mainly appear around the high-frequency features. In light of this observation, we design WaveNeRF, which integrates wavelet frequency decomposition into MVS and NeRF to achieve generalizable yet high-quality synthesis without any per-scene optimization. To preserve high-frequency information when generating 3D feature volumes, WaveNeRF builds Multi-View Stereo in the wavelet domain by integrating the discrete wavelet transform into the classical cascade MVS, which disentangles high-frequency information explicitly. With that, disentangled frequency features can be injected into classic NeRF via a novel hybrid neural renderer to yield faithful high-frequency details, and an intuitive frequency-guided sampling strategy can be designed to suppress artifacts around high-frequency regions. Extensive experiments over three widely studied benchmarks show that WaveNeRF achieves superior generalizable radiance field modeling when only given three images as input.
Muyu Xu, Fangneng Zhan, Jiahui Zhang, Yingchen Yu, Xiaoqin Zhang, Christian Theobalt, Ling Shao, Shijian Lu
2023-08-09T09:24:56Z
http://arxiv.org/abs/2308.04826v2
# WaveNeRF: Wavelet-based Generalizable Neural Radiance Fields ###### Abstract Neural Radiance Field (NeRF) has shown impressive performance in novel view synthesis via implicit scene representation. However, it usually suffers from poor scalability as it requires densely sampled images for each new scene. Several studies have attempted to mitigate this problem by integrating the Multi-View Stereo (MVS) technique into NeRF, but they still entail a cumbersome fine-tuning process for new scenes. Notably, the rendering quality drops severely without this fine-tuning process, and the errors mainly appear around the high-frequency features. In light of this observation, we design WaveNeRF, which integrates wavelet frequency decomposition into MVS and NeRF to achieve generalizable yet high-quality synthesis without any per-scene optimization. To preserve high-frequency information when generating 3D feature volumes, WaveNeRF builds Multi-View Stereo in the wavelet domain by integrating the discrete wavelet transform into the classical cascade MVS, which disentangles high-frequency information explicitly. With that, disentangled frequency features can be injected into classic NeRF via a novel hybrid neural renderer to yield faithful high-frequency details, and an intuitive frequency-guided sampling strategy can be designed to suppress artifacts around high-frequency regions. Extensive experiments over three widely studied benchmarks show that WaveNeRF achieves superior generalizable radiance field modeling when only given three images as input. ## 1 Introduction Rendering novel views from a set of posed scene images has been studied for years in the fields of computer vision and graphics. With the emergence of implicit neural representation, neural radiance field (NeRF) [25] and its variants [21, 23] have recently achieved very impressive performance in novel view synthesis with superb multi-view consistency. However, most existing works fall short in model scalability by requiring a per-scene optimization process with densely sampled multi-view images for training. To avoid the cumbersome process of training from scratch for new scenes, a popular line of generalizable NeRF [3, 37, 32, 34, 13] introduces a pipeline that first trains a base model on the training data and then conducts fine-tuning for each new scene, which improves the scalability and shortens the per-scene training process. Their base models often extract features from the source views and then inject the features into a neural radiance field. Several previous studies [37, 32] directly use CNN networks to extract features, while recent generalizable NeRF models [3, 34, 13] resort to the Multi-View Stereo (MVS) technique to warp 2D source feature maps into 3D feature planes, yielding better performance than merely using CNN networks. However, per-scene fine-tuning still entails a fair number of posed training images that are often challenging to collect in various real-world tasks. On the other hand, removing the per-scene fine-tuning will incur a significant performance drop with undesired artifacts and poor detail. Notably, we intriguingly observe that the rendering error mainly lies around image regions with rich high-frequency information, as illustrated in Fig. 1. 
The phenomenon of losing high-frequency detail is largely attributed to the fact that most existing generalizable NeRFs conduct down-sampling operations at the feature extraction stage of their pipeline, i.e., the CNN networks adopted in [37, 32] or the MVS module adopted in [3, 34, 13]. In light of the aforementioned observation, we present **Wave**let-based **Ne**ural **R**adiance **F**ields (**WaveNeRF**), which incorporates explicit high-frequency information into the training process and thus obviates the per-scene fine-tuning under the generalizable and few-shot setting. Specifically, while the MVS technique constructs 3D feature volumes that are used to model NeRF in the spatial domain, we further design a Wavelet Multi-View Stereo (WMVS) to incorporate scene wavelet coefficients into the MVS to achieve frequency-domain modeling. Distinct from other frequency transformations like the Fourier Transform, WaveNeRF employs the Wavelet Transform, which is coordinate invariant and preserves the relative spatial positions of pixels. This property is particularly advantageous in the context of MVS as it allows multiple input views to be warped in the direction of a reference view to form sweeping planes in both the spatial domain and the frequency domain within the same coordinate system. Apart from MVS, this property also enables building a frequency-based radiance field so that a designed Hybrid Neural Renderer (HNR) can leverage the information in both the spatial and frequency domains to boost the rendering quality of the appearance, especially around the high-frequency regions. In addition, WaveNeRF is also equipped with a Frequency-guided Sampling Strategy (FSS) which enables the model to focus on the regions with larger high-frequency coefficients. The rendering quality is clearly improved with FSS by sampling denser points around object surfaces. The contributions of this work can be summarized in three points. * _First_, we design a WMVS module that preserves high-frequency information effectively by incorporating wavelet frequency volumes while extracting geometric scene features. * _Second_, we design a HNR module that can merge the features from both the spatial domain and the frequency domain, yielding faithful high-frequency details in neural rendering. * _Third_, we develop FSS, which can guide the volume rendering to sample denser points around the object surfaces so that it can infer the appearance and geometry with higher quality. ## 2 Related Works ### Multi-View Stereo Multi-view stereo (MVS) is a method that involves using multiple images taken from various viewpoints to create a detailed 3D reconstruction of an object or scene. Over time, various conventional methods have been proposed and tested in this field [6, 16, 7, 29, 8, 27]. More recently, deep learning techniques have been integrated into the multi-view stereo process. One such technique is MVSNet [35], which extracts features from all input images and warps them onto a reference image to generate probabilistic planes with varying depth values. These planes are then combined to create a variance-based cost volume that accurately represents the specific scene. Although MVS methods have demonstrated promising performance, their large memory requirements, due to the 3D volume grid and operations, severely limit the resolution of input images and the subsequent development of deep learning-based MVS research. 
To address this issue, R-MVSNet [36] sequentially regularizes the cost volume with GRU, making MVSNet more scalable. In addition, cascade MVS models [5, 11] use a coarse-to-fine strategy to generate cost volumes of various scales and compute depth output accordingly, freeing up more memory space. MVS has been shown to be effective in inferring the geometry and occlusions of a scene [3, 13]. We follow the previous MVS techniques and further introduce the wavelet transform into this pipeline to achieve a higher quality of inference. ### Neural Radiance Field 3D scene reconstruction and novel view synthesis have been extensively studied for many years. Researchers have used various explicit representations of scene geometry such as 3D meshes [14, 22] and point clouds [17, 1]. However, NeRF [25] employs an implicit neural representation method that uses an MLP-based network to render novel views. NeRF has demonstrated excellent rendering performance and has been further extended to various computer vision tasks [15, 4, 26, 9, 18, 10, 28, 2, 30, 38, 19, 20]. Although all of these studies showcase the impressive strength of NeRF in specific tasks, they still follow the same training process as the original NeRF and require per-scene training to complete the corresponding task. Figure 1: The comparison between the absolute rendering errors (c) of GeoNeRF [13] and the high-frequency features of the ground truth (d). We can see that the errors mainly appear around the pixels with high-frequency features. To address this issue, several studies on the generalization of NeRF have shown some degree of success. Specifically, PixelNeRF [37] and IBRNet [32] both rely on the notion that aggregating multi-view features at each sampled point leads to better performance than using direct encoded RGB inputs. Another typical approach that achieves generalizable NeRF is using multi-view stereo (MVS) techniques. For instance, MVSNeRF [3], which is the first to combine MVSNet and NeRF, simply concatenates the cost volume in MVSNet with the 5D input in NeRF. More recent generalizable NeRFs, PointNeRF [34] and GeoNeRF [13], both use MVS techniques to obtain a coarse 3D representation, but PointNeRF uses point cloud growing to enhance the inference ability, while GeoNeRF uses attention-based transformer modules. Although some of the NeRF models are generalizable, they typically require a specific number of inputs, such as 10 source views in IBRNet [32]. In addition, almost all of them need per-scene optimization to achieve photorealistic outcomes. Per-scene optimization is effectively an additional training process that greatly impairs generalizability. It is worth noting that without this optimization process, the rendering quality of these existing models can drop significantly, with most errors occurring around high-frequency features. Based on this observation, we integrate wavelet frequency decomposition into NeRF to achieve generalizable yet high-quality synthesis without any per-scene optimization. We believe that this approach is much more realistic, as it mimics situations where intelligent vehicles have limited sensors and need to reconstruct 3D scenery immediately. ## 3 Method This section presents our novel wavelet-based generalizable NeRF, designed for synthesizing high-quality novel views of a scene from three-shot source views without any per-scene fine-tuning process. 
Inspired by the observation that the rendering errors of the previous models mainly gather around the high-frequency regions, we design a Wavelet Multi-view Stereo (WMVS) module to obtain feature volumes in both the spatial domain and the frequency domain so that the high-frequency information can be maintained and represented separately. Besides, since the renderer in prior studies is unable to directly disentangle the errors around high-frequency features, we implement a Hybrid Neural Renderer (HNR) that can adjust the rendered colors based on the high-frequency information obtained from WMVS. During this rendering process, we also notice that previous sampling strategies necessitate an additional sampling step based on the outcome of the initial sampling, or they simultaneously sample all the points at the expense of sampling quality. Therefore, to achieve higher sampling quality, where more samples lie around objects in the scene, in a one-round sampling process, we adopt a Frequency-guided Sampling Strategy (FSS) where the coordinates of the sampled points are determined by the distribution of the features in the frequency feature volume. The overall architecture of WaveNeRF is shown in Fig. 2. We elaborate on our designed WMVS, FSS, and HNR in Sections 3.1, 3.2, and 3.3, respectively. ### Wavelet Multi-view Stereo Since the Wavelet Transform can decompose an image into components with different scales, it naturally fits the pyramid structure of CasMVSNet [11]. Therefore, given a set of input source views \(\{I_{v}\}_{v=0}^{V}\) with the size of \(H\times W\), we design a Wavelet Multi-View Stereo (WMVS) module to construct cascaded spatial feature volumes as well as a high-frequency feature volume, following a similar approach to CasMVSNet, as shown in Fig. 2. We make several modifications to both the feature extraction process and the volume construction process of CasMVSNet. First, we utilize a level-2 Discrete Wavelet Transform (DWT) to obtain different frequency components, where \(w_{L}\) represents the low-frequency component and \(w_{H}^{(l)}\) represents the high-frequency components of level \(l\). The low-frequency component \(w_{L}\) has the smallest size (\(\frac{H}{4}\times\frac{W}{4}\)) and is directly used to generate the lowest level of semantic feature maps \(f_{s}^{(0)}\) via a CNN-based feature extractor. For each level of high-frequency components, it is infeasible to generate spatial features by naively adding different frequency components together due to the domain gap. We thus design an Inverse Wavelet Block (IWB) that simulates the inverse discrete wavelet transform by combining frequency features of the previous level with high-frequency features of the current level via dilated deconvolution to generate latent spatial feature maps \(f_{L}^{(l)}\). Then the latent spatial feature maps are used to generate semantic feature maps of the current level by CNN as below: \[f_{s}^{(l)}=\textbf{CNN}(f_{s}^{(l-1)},\textbf{IWB}(f_{L}^{(l-1)},w_{H}^{(l)})),\ \ l\in\{1,2\}. \tag{1}\] In addition, all the high-frequency features are eventually gathered to form the 2D compounded high-frequency components, which are used to generate frequency feature maps \(f_{w}\) by a CNN-based network. After having the spatial semantic feature maps and the wavelet feature maps, we follow the same approach as in CasMVSNet [11] to build sweep planes and spatial feature volumes \(P_{s}^{(l)}\) at three levels. 
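The decomposition that feeds this pipeline can be sketched as follows (a minimal Python example using PyWavelets; the Haar basis and the mapping of PyWavelets' coarse-to-fine coefficient order onto the paper's \(w_{H}^{(l)}\) indexing are our assumptions):

```python
import numpy as np
import pywt

def level2_dwt(image: np.ndarray):
    """Decompose an (H, W) image with a level-2 DWT.

    Returns w_L, the (H/4, W/4) low-frequency band fed to the level-0
    feature extractor, and a dict of high-frequency bands w_H[l]; each
    band is a (horizontal, vertical, diagonal) coefficient triple.
    """
    cA2, d2, d1 = pywt.wavedec2(image, wavelet="haar", level=2)
    w_L = cA2            # (H/4, W/4) approximation
    w_H = {
        1: d2,           # (H/4, W/4) details, used to reach the (H/2, W/2) level
        2: d1,           # (H/2, W/2) details, used to reach full resolution
    }
    return w_L, w_H
```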
Besides, thanks to the nice property of the Wavelet Transform that it does not affect the relative coordinates, we can follow the same manner to construct the high-frequency feature volume. Since high-frequency information is often sparsely distributed, it is sufficient to represent the high-frequency features in a relatively small volume. Here we choose to use the second coarsest level (\(l=1\)) to balance the depth range and the depth sampling precision, and construct a wavelet frequency feature volume \(P_{w}\) with the size of \(\frac{H}{2}\times\frac{W}{2}\). In a nutshell, given a set of input source views \(\{I_{v}\}_{v=0}^{V}\), our WMVS module generates 2D feature maps \(f_{s}^{(l)},f_{w}\) and their corresponding 3D feature volumes \(P_{s}^{(l)},P_{w}\) for subsequent modules as below: \[(f_{s}^{(l)},f_{w},P_{s}^{(l)},P_{w})=\textbf{WMVS}(\{I_{v}\}_{v=0}^{V}),\ \ l\in\{0,1,2\}. \tag{2}\] ### Frequency-guided Sampling Strategy After generating features from the WMVS module, we use the ray-casting approach to create new views. To cover the depth range, we sample \(N_{c}\) points uniformly along each camera ray at a novel camera pose. Many previous studies [21, 23, 37, 32] follow the classic NeRF [25], sampling \(N_{f}\) points based on the volume density distribution inferred by the \(N_{c}\) points to approximate the object surfaces. However, this coarse-to-fine sampling strategy requires training two NeRF networks at the same time. MVSNeRF [3] directly discards the fine sampling and claims that adding a fine sampling process cannot significantly improve the performance. GeoNeRF [13] first estimates a set of valid coarse sample points by checking whether the coordinates lie within the valid NDC (Normalized Device Coordinate) coordinate system and then randomly samples \(N_{f}\) points around these valid coarse points. Although GeoNeRF simultaneously samples a mixture of \(N_{c}+N_{f}\) points, it cannot ensure that the sampled points are near the objects. We propose a frequency-guided sampling strategy (FSS) (as shown in Fig. 3) based on the observation that high-frequency features often indicate valuable scene information. Our strategy first uses the coordinates of coarse sampling points to fetch corresponding high-frequency features from the wavelet feature volume \(P_{w}\). Then, we use these frequency features to create a probability density function \(p_{0}\) along the ray, which determines the distribution of the fine sampling points. Regions with higher wavelet feature values have a higher probability of being sampled in the fine sampling process, which yields better sampling quality. 
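A hedged sketch of this fine-sampling step, read as NeRF-style inverse-transform sampling with density weights replaced by frequency-feature magnitudes (function names and the exact weighting are assumptions):

```python
import torch

def frequency_guided_fine_samples(t_coarse, freq_feats, n_fine):
    """Draw N_f fine sample depths along a ray from a PDF induced by
    high-frequency feature magnitudes at the N_c coarse samples.

    t_coarse:   (N_c,) sorted depths of the coarse samples
    freq_feats: (N_c,) non-negative magnitudes fetched from the wavelet
                feature volume P_w at the coarse sample positions
    Returns (n_fine,) depths, denser where freq_feats is large.
    """
    weights = freq_feats + 1e-5                  # avoid a degenerate PDF
    pdf = weights / weights.sum()
    cdf = torch.cumsum(pdf, dim=0)
    cdf = torch.cat([torch.zeros(1), cdf])       # (N_c + 1,)

    u = torch.rand(n_fine)                       # uniform samples in [0, 1)
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, len(t_coarse))
    # linear interpolation inside the selected bins
    lo, hi = cdf[idx - 1], cdf[idx]
    frac = (u - lo) / torch.clamp(hi - lo, min=1e-8)
    t_lo = t_coarse[idx - 1]
    t_hi = t_coarse[torch.clamp(idx, max=len(t_coarse) - 1)]
    return t_lo + frac * (t_hi - t_lo)
```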
Figure 3: The illustration of our Frequency-guided Sampling Strategy (FSS). We utilize the distribution of the coarse sampling points in the frequency volume to determine the distribution of the fine sampling points. Areas having higher wavelet feature values are more likely to be sampled in the fine sampling process. Figure 2: The overview of the proposed WaveNeRF. With sparse input views, wavelet multi-view stereo (WMVS) is designed to produce the frequency feature map \(f_{w}\) and multi-level spatial feature maps \([f_{s}^{(0)},f_{s}^{(1)},f_{s}^{(2)}]\). Specifically, the input views are first divided into different frequency components with a level-2 discrete wavelet transform. The spatial and frequency features are then obtained via our designed Inverse Wavelet Blocks (IWB) and CNN-based feature extractors, and warped into corresponding 3D feature volumes \([P_{s}^{(0)},P_{s}^{(1)},P_{s}^{(2)},P_{w}]\). With 2D features and 3D volumes, a novel Frequency-guided Sampling Strategy (FSS) is introduced to yield more precise samples with spatial and frequency tokens. These tokens are fed into a subsequent Hybrid Neural Renderer (HNR) to infer the volume density, colors, and frequency coefficients. ### Hybrid Neural Renderer Since we have feature volumes \(P_{s}^{(l)},P_{w}\) in both the spatial domain and the frequency domain, and the coordinates of the sampled points from FSS, we can fetch the features from the feature volumes and represent them as sets of tokens. For a point \(x_{n}\) in both domains, we generate a view-independent (i.e., global) token \(t_{n,0}\) and \(V\) view-dependent tokens \(t_{n,v}\). We define \(t^{s}\) and \(t^{w}\) as tokens in the spatial domain and tokens in the wavelet frequency domain, respectively. For a sample \(n\), \(t_{n,0}^{s/w}\) could be considered as a global understanding of the scene at point \(x_{n}\), while \(t_{n,v}^{s/w}\) represents the view-dependent understanding of the scene. We then implement a Hybrid Neural Renderer (HNR) which integrates these tokens to estimate both the colors and the frequency coefficients of the rays. The overall structure of the HNR is shown in Fig. 4. We first adopt an Attention-Based Aggregator (ABA) from GeoNeRF [13] to refine the feature tokens. The refined view-independent tokens are used to estimate the volume density while the refined view-dependent tokens are utilized to predict the colors and frequency coefficients. Since the global information of wavelet high-frequency features is often sparse and we demand local high-frequency enhancement, we only reserve the view-independent tokens in the spatial domain for the subsequent volume density estimation. Hence, the output of ABA only contains one set of view-independent tokens \(\{t_{n,0}^{\prime}\}_{n=1}^{N}\) which have access to all necessary data to learn the geometry of the scene and estimate volume densities. These view-independent tokens are then regularized using an auto-encoder-style MLP network (AE-MLP) [13]. The AE-MLP network learns the global geometry along the ray using convolutional layers and predicts more coherent volume densities \(\sigma_{n}\). Notably, only the tokens in the frequency domain \(\{t_{n,v}^{\prime w}\}_{v=1}^{V}\) are used to predict the frequency coefficients \(\hat{f}_{n}\), while the color prediction utilizes all the view-dependent tokens. The prediction of color and frequency coefficients for each point relies on a weighted sum of the source view samples. The weight of each view, denoted as \(w_{n,v}^{s/w}\), is determined using an MLP-based module. To obtain the color and wavelet samples for each point \(x_{n}\), we project them onto the source images and the source wavelet frequency maps, resulting in the samples \(c_{n,v}\) and \(f_{n,v}\), respectively. We first estimate the wavelet coefficients via this weighted sum process. These wavelet coefficients form another set of weights through two linear layers, which are further used to adjust the color prediction based on the weighted sum of the color samples as: \[\hat{f}_{n}=\sum_{v=1}^{V}w_{n,v}^{w}f_{n,v}\;, \tag{3}\] \[\hat{c}_{n}=\Big(\sum_{v=1}^{V}w_{n,v}^{s}c_{n,v}\Big)*(\mathbf{LT}(\hat{f}_{n})+1). \tag{4}\] We argue that this design can increase the significance of the color samples around the surfaces of the objects and can reconstruct more details of the objects in the novel view. 
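Eqs. (3)-(4) can be sketched in a few lines (PyTorch; the tensor shapes and the form of the linear gate \(\mathbf{LT}\) are our assumptions):

```python
import torch

def hybrid_color(colors, freqs, w_s, w_w, linear_layers):
    """Per-point color adjusted by predicted wavelet coefficients.

    colors: (N, V, 3)  RGB samples c_{n,v} projected from the V source views
    freqs:  (N, V, C)  wavelet samples f_{n,v} from the source frequency maps
    w_s:    (N, V, 1)  spatial-domain view weights
    w_w:    (N, V, 1)  frequency-domain view weights
    linear_layers: maps the C-dim coefficient to a scalar gate (the "LT"
    in Eq. (4)); its exact architecture is an assumption.
    """
    f_hat = (w_w * freqs).sum(dim=1)           # Eq. (3): (N, C)
    base_color = (w_s * colors).sum(dim=1)     # weighted sum of color samples
    gate = linear_layers(f_hat)                # (N, 1)
    return base_color * (gate + 1.0)           # Eq. (4)

# e.g. linear_layers = torch.nn.Sequential(
#     torch.nn.Linear(C, C), torch.nn.ReLU(), torch.nn.Linear(C, 1))
```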
Once we have the prediction of the volume densities, colors, and frequency coefficients, the color and the wavelet coefficient of the camera ray at a novel pose can be estimated via the classic volume rendering technique in NeRF [25]. Besides the color and the wavelet coefficient, we also predict the depth value of each ray for the depth supervision (see supplementary materials for more details). The volume rendering can be represented as: \[\{\hat{c},\hat{f},\hat{d}\}=\sum_{n=1}^{N}\exp\Big(-\sum_{k=1}^{n-1}\sigma_{k}\Big)(1-\exp(-\sigma_{n}))\{\hat{c}_{n},\hat{f}_{n},z_{n}\}, \tag{5}\] where \(z_{n}\) is the depth of point \(x_{n}\) with respect to the novel pose. ### Loss Function Based on previous studies, we adopt the same primary color loss \(\mathcal{L}_{c}\) and depth loss \(\mathcal{L}_{D}\) as GeoNeRF [13]. For more details about these losses, please refer to the supplementary materials. In addition to these losses, we introduce two frequency losses on the predicted wavelet coefficients to supervise the training in the frequency domain. Figure 4: The overall structure of the Hybrid Neural Renderer (HNR). First, attention-based modules are employed to obtain refined tokens \(\{t_{n,v}^{\prime s/w}\}\) for each domain. These tokens are then sent to MLP-based modules introduced in GeoNeRF [13] to generate the volume density \(\sigma_{n}\), color \(\hat{c}_{n}\), and frequency coefficient \(\hat{f}_{n}\) for each point \(x_{n}\). The frequency coefficient \(\hat{f}_{n}\) is further used to adjust the color after passing through the linear layers. The base frequency loss function is similar to the color loss function and calculates the mean squared error between the predicted wavelet coefficients and the ground truth pixel wavelet coefficients as below: \[\mathcal{L}_{f_{b}}=\frac{1}{|R|}\sum_{r\in R}||\hat{f}(r)-f_{gt}(r)||^{2}, \tag{6}\] where \(R\) is the set of rays in each training batch and \(f_{gt}\) denotes the ground truth frequency coefficients. To improve learning around high-frequency features, we have also designed a Weighted Frequency Loss (WFL), which is a modified color loss. This loss amplifies the error around the high-frequency features based on the value of the wavelet coefficients in that region. It can be represented as: \[\mathcal{L}_{f_{w}}=\frac{1}{|R|}\sum_{r\in R}f_{gt}(r)||\hat{c}(r)-c_{gt}(r)||^{2}. \tag{7}\] Finally, by combining all the losses mentioned above, the complete loss function of our model is represented as: \[\mathcal{L}=\mathcal{L}_{c}+0.1\mathcal{L}_{f_{b}}+0.5\mathcal{L}_{f_{w}}+0.1\mathcal{L}_{D}. \tag{8}\] ## 4 Experiment **Dataset.** We have trained our generalizable network using the DTU dataset [12], the IBRNet dataset [32], and a real forward-facing dataset from LLFF [24]. For the partition of the DTU dataset, we follow the approach of PixelNeRF [37], resulting in 88 training scenes and 16 testing scenes while maintaining an image resolution of \(600\times 800\) as in GeoNeRF [13]. For depth supervision, we only use ground truth depths from MVSNet [35] for the DTU dataset. For samples from the forward-facing LLFF dataset and the IBRNet dataset, we use self-supervised depth supervision. Specifically, we used 35 scenes from LLFF and 67 scenes from IBRNet as in GeoNeRF. To evaluate our model, we test it on three datasets: DTU test data, Synthetic NeRF data [25], and LLFF Forward-Facing data. The DTU dataset contains 16 test scenes and the other two datasets both have 8 test scenes. 
We followed the same evaluation protocols as NeRF [25] for the synthetic dataset and the LLFF dataset, and the same protocol as MVSNeRF [3] for the DTU dataset. **Implementation details.** To fit the pyramid structure, we adopt a two-scale (J=2) wavelet transform for the WMVS module. Increasing the number of scales does not improve the rendering quality significantly, but it largely increases the difficulty of implementation due to the complicated padding operations. In contrast to the three different granularities (\(D_{s}=[8,32,48]\)) for the spatial sweep planes in WMVS, we uniformly sample 32 frequency sweep planes (\(D_{w}=32\)) from near to far because high-frequency features are usually sparsely distributed. We set the number of points in our sampling strategy to \(N_{c}=96\) and \(N_{f}=32\) on a ray for all scenes, and set the number of input views to \(V=3\) for both the training and evaluation process. For more implementation details, please refer to the supplementary. ### Experiment Results We evaluate our model and compare it with existing generalizable NeRF models, including PixelNeRF [37], MVSNeRF [3], PointNeRF [34], and GeoNeRF [13]. We quantitatively compare the models in terms of PSNR, SSIM [33], and LPIPS [39] as shown in Table 1, which demonstrates the superiority of our WaveNeRF model over previous generalizable models. Notably, for a fair comparison, we evaluate all methods under the same setting with only three input views, and do not quote the results reported in the original papers. Specifically, MVSNeRF [3] has a nearest-view evaluation mode that uses the three nearest source views for novel views, which actually imports more than three input views. We thus adopt its fixed-views evaluation mode that has three fixed source views. \begin{table} \begin{tabular}{l||c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{DTU [12]} & \multicolumn{3}{c|}{NeRF Synthetic [25]} & \multicolumn{3}{c}{LLFF [24]} \\ \cline{2-10} & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline PixelNeRF [37] & 19.31 & 0.789 & 0.382 & 7.390 & 0.658 & 0.411 & 11.24 & 0.486 & 0.671 \\ \hline MVSNeRF [3] & 20.68 & 0.875 & 0.243 & 16.70 & 0.845 & 0.278 & 20.07 & 0.726 & 0.318 \\ \hline PointNeRF [34] & 23.89 & 0.874 & 0.203 & 22.73 & 0.887 & 0.193 & N/A & N/A & N/A \\ \hline GeoNeRF [13] & 27.67 & 0.920 & 0.117 & 24.80 & 0.891 & 0.182 & 23.22 & 0.757 & 0.248 \\ \hline GeoNeRF* & 29.02 & 0.940 & 0.0864 & 25.83 & 0.907 & 0.137 & 24.31 & 0.793 & 0.213 \\ \hline WaveNeRF & 29.55 & 0.948 & 0.0749 & 26.12 & 0.918 & 0.113 & 24.28 & 0.794 & 0.212 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of our proposed WaveNeRF with existing generalizable NeRF models in terms of PSNR\(\uparrow\), SSIM\(\uparrow\), and LPIPS\(\downarrow\) metrics. The results in red are the best, the results in orange are the second best, and the third best ones are in yellow. ‘*’ denotes that the model (i.e., GeoNeRF) was trained based on a pre-trained Cascade MVSNet checkpoint, while our model is trained from scratch. When we train the GeoNeRF model from scratch using their training scripts, its performance degrades to the values shown in the row of GeoNeRF. Additionally, the pretrained checkpoints provided by GeoNeRF [13] are based on the pretrained weights from CasMVSNet [11], while our model is trained end-to-end. 
We thus train a GeoNeRF from scratch using their scripts and evaluate both the end-to-end version and the complete version. The results show that our model can outperform GeoNeRF even when the latter is trained based on the pretrained weights from CasMVSNet. In addition to the quantitative comparisons, we also provide qualitative comparisons of our model with existing methods on different datasets in Fig. 5. Our WaveNeRF model produces images that better preserve the details of the scene and contain fewer artifacts. ### Ablation Study We conducted several ablation studies to validate the effectiveness of our designed modules on three evaluation datasets (the DTU dataset [12], the NeRF synthetic dataset [25], and the LLFF dataset [24]). The evaluation of WaveNeRF includes the following variants: 1) the baseline model without any of our novel modules, 2) the baseline model + our WMVS module, 3) the baseline model + our WMVS module + our FSS sampling strategy, 4) the baseline model + all three of our proposed modules but without the WFL loss \(\mathcal{L}_{f_{w}}\), and 5) the complete version of our WaveNeRF model. Table 2 shows the quantitative results of the ablation study, indicating the effectiveness of our proposed modules. \begin{table} \begin{tabular}{l||c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Experiments} & \multicolumn{3}{c|}{DTU [12]} & \multicolumn{3}{c|}{NeRF Synthetic [25]} & \multicolumn{3}{c}{LLFF [24]} \\ \cline{2-10} & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline Baseline & 27.67 & 0.920 & 0.117 & 24.80 & 0.891 & 0.182 & 23.22 & 0.757 & 0.248 \\ \hline + WMVS & 27.97 & 0.922 & 0.113 & 24.63 & 0.887 & 0.183 & 23.23 & 0.762 & 0.244 \\ \hline + WMVS + FSS & 28.90 & 0.942 & 0.084 & 25.63 & 0.912 & 0.119 & 23.99 & 0.782 & 0.227 \\ \hline + WMVS + FSS + HNR & 29.16 & 0.942 & 0.083 & 25.89 & 0.916 & 0.118 & 24.02 & 0.795 & 0.206 \\ \hline + WMVS + FSS + HNR + WFL & **29.55** & **0.948** & **0.075** & **26.12** & **0.918** & **0.113** & **24.28** & **0.794** & **0.212** \\ \hline \end{tabular} \end{table} Table 2: The quantitative results of the ablation studies in terms of PSNR\(\uparrow\), SSIM\(\uparrow\), and LPIPS\(\downarrow\) metrics. The experiments are carried out on the DTU dataset, the NeRF Synthetic dataset, and the LLFF dataset. Please refer to Section 4.2 for the details of the design of our ablation studies. Figure 5: The qualitative results of our WaveNeRF and the comparison with PixelNeRF [37], MVSNeRF [3], and GeoNeRF [13]. We show the scenes from the LLFF dataset [24] (_horn_), the DTU dataset [12] (_scan40_), and the NeRF synthetic dataset [25] (_chair_). Our WaveNeRF model can preserve more details than the previous generalizable NeRFs. ### Evaluation of High-frequency Components To assess how well our model renders high-frequency features in images, we rely on a metric called HFIV [31]. This metric measures the proportion of high-frequency components (HF\({}_{c}\)) in an image, which is indicative of its high-frequency quality. To facilitate comparisons across our test data, we modify HFIV to calculate the difference between the HF\({}_{c}\) of the ground truth and the HF\({}_{c}\) of the rendered results. The smaller this difference, the better the performance of the model. We compare the HFIV of our WaveNeRF, GeoNeRF [13], and MVSNeRF [3] on the same three datasets as in the previous experiments. 
The quantitative results (see Table 3) indicate that our WaveNeRF model can reconstruct better high-frequency details than the previous generalizable NeRFs. ### Evaluation of the Frequency-Guided Sampling In the classic NeRF [25], the fine-sampled \(N_{f}\) points are selected based on a normalized weight distribution obtained by estimating the volume density of the coarse-sampled points, which allows sampling dense points around regions with visible content. To simplify this coarse-to-fine process, GeoNeRF [13] randomly samples fine points around the valid coarse points to calculate the color and the volume density of all points simultaneously. However, this random sampling strategy cannot ensure that the fine-sampled points lie around the surfaces of the objects, which motivates our frequency-guided sampling strategy. In this section, we evaluate the sampling quality of our frequency-guided strategy by comparing the distribution of the volume density of the sampled points from WaveNeRF, GeoNeRF [13], and MVSNeRF [3]. As shown in Fig. 6, we can observe that our WaveNeRF model can sample more points with high volume density values, which means our FSS strategy effectively guides the model to place more samples around the surfaces of the objects. ## 5 Limitation Our model is designed to be trained and evaluated using three-shot source views (V=3) on a single GPU with 16 GB memory. For cases with more input views, larger memory is required or the batch size should be decreased to accommodate the additional inputs. It is worth noting that our WMVS module is based on the MVS technique, which means that artifacts may appear if stereo reconstruction fails. The artifacts can manifest as noise in textureless regions or as view-dependent noisy clusters of floating points. ## 6 Conclusion In this paper, we present a new generalizable NeRF model that is capable of generating high-quality novel view images under the few-shot setting, without requiring per-scene optimization. Our proposed model constructs MVS volumes and NeRF in the wavelet frequency domain, where explicit frequency information can be incorporated to boost the rendering quality. Additionally, we utilize frequency features to guide the sampling in NeRF, yielding densely sampled points around objects. We demonstrate that our model outperforms existing models on three datasets: the DTU dataset [12], the NeRF synthetic dataset [25], and the LLFF real forward-facing dataset [24], each with three fixed input source views. ## 7 Acknowledgements This work is funded by the Ministry of Education Singapore, under the Tier-2 project scheme with a project number MOE-T2EP20220-0003. Fangneng Zhan and Christian Theobalt are funded by the ERC Consolidator Grant 4DRepLy (770784). Figure 6: Comparisons of the distribution of volume density of our WaveNeRF, GeoNeRF [13], and MVSNeRF [3] on the LLFF dataset [24] and the DTU dataset [12]. The horizontal axis represents the level of volume density, where larger levels indicate a higher possibility of being around objects. The vertical axis shows the number of sampled points, whose values are standardized to a standard normal distribution for better visualization. 
\begin{table} \begin{tabular}{l|c|c|c} \hline \hline Method & DTU & NeRF Synthetic & LLFF \\ \hline MVSNeRF [3] & 0.129 & 0.1910 & 0.241 \\ \hline GeoNeRF [13] & 0.103 & 0.0455 & 0.128 \\ \hline WaveNeRF & **0.0521** & **0.0362** & **0.115** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative comparisons of rendered high-frequency components among MVSNeRF [3], GeoNeRF [13], and our WaveNeRF. The metric used here is HFIV\(\downarrow\) which can measure the difference between two images on the high-frequency bands.
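For reference, the modified HFIV comparison reported in Table 3 can be sketched as below; our energy-based reading of HF\({}_{c}\) via a level-1 DWT is an assumption, and HFIV [31] gives the formal definition:

```python
import numpy as np
import pywt

def hf_proportion(image: np.ndarray) -> float:
    """Proportion of high-frequency energy (HF_c) in a grayscale image,
    measured here with a level-1 DWT (an assumed instantiation of HF_c)."""
    cA, (cH, cV, cD) = pywt.wavedec2(image, wavelet="haar", level=1)
    hf = sum(np.square(c).sum() for c in (cH, cV, cD))
    total = hf + np.square(cA).sum()
    return hf / total

def hfiv_difference(gt: np.ndarray, rendered: np.ndarray) -> float:
    """Modified HFIV: |HF_c(ground truth) - HF_c(render)|; smaller is better."""
    return abs(hf_proportion(gt) - hf_proportion(rendered))
```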
2301.09809
Low-Resource Compositional Semantic Parsing with Concept Pretraining
Semantic parsing plays a key role in digital voice assistants such as Alexa, Siri, and Google Assistant by mapping natural language to structured meaning representations. When we want to improve the capabilities of a voice assistant by adding a new domain, the underlying semantic parsing model needs to be retrained using thousands of annotated examples from the new domain, which is time-consuming and expensive. In this work, we present an architecture to perform such domain adaptation automatically, with only a small amount of metadata about the new domain and without any new training data (zero-shot) or with very few examples (few-shot). We use a base seq2seq (sequence-to-sequence) architecture and augment it with a concept encoder that encodes intent and slot tags from the new domain. We also introduce a novel decoder-focused approach to pretrain seq2seq models to be concept aware using Wikidata and use it to help our model learn important concepts and perform well in low-resource settings. We report few-shot and zero-shot results for compositional semantic parsing on the TOPv2 dataset and show that our model outperforms prior approaches in few-shot settings for the TOPv2 and SNIPS datasets.
Subendhu Rongali, Mukund Sridhar, Haidar Khan, Konstantine Arkoudas, Wael Hamza, Andrew McCallum
2023-01-24T04:27:27Z
http://arxiv.org/abs/2301.09809v2
# Low-Resource Compositional Semantic Parsing with Concept Pretraining ###### Abstract Semantic parsing plays a key role in digital voice assistants such as Alexa, Siri, and Google Assistant by mapping natural language to structured meaning representations. To extend the capabilities of a voice assistant for a new domain, the underlying semantic parsing model needs to be retrained using thousands of annotated examples from the new domain, which is time-consuming and expensive. In this work, we present an architecture to perform such _domain adaptation_ automatically, with only a small amount of metadata about the new domain and without any new training data (zero-shot) or with very few examples (few-shot). We use a base seq2seq (sequence-to-sequence) architecture and augment it with a _concept_ encoder that encodes intent and slot tags from the new domain. We also introduce a novel decoder-focused approach to pretrain seq2seq models to be concept aware using Wikidata. This pretraining helps our model learn important concepts and perform well in low-resource settings. We report few-shot and zero-shot results for compositional semantic parsing on the TOPv2 dataset and show that our model outperforms prior approaches in few-shot settings for the TOPv2 and SNIPS datasets. ## 1 Introduction Voice assistants such as Alexa, Siri, and Google Assistant often rely on semantic parsing to understand requests made by their users. The underlying semantic parsing model converts natural language user utterances into logical forms consisting of actions requested by the user (play music, check weather), called _intents_, and relevant entities in the request (which song? which location?), called _slots_. The model is built to process requests in a fixed set of domains, such as music, weather, shopping, and so on. With voice assistants increasingly pervading more aspects of daily life, systems need to be continuously updated to comprehend new intents and slots across an ever-growing number of domains. Current semantic parsing models are trained on large amounts of annotated data from a predetermined set of domains. Extending these models to learn new intents or slots typically involves collecting and annotating large amounts of new data. This process is expensive and time-consuming. To combat this problem, researchers have proposed semantic parsing models that can be efficiently trained with fewer examples (few-shot) from new domains Shrivastava et al. (2021); Mansimov and Zhang (2021); Ghoshal et al. (2020); Shin et al. (2021); Desai et al. (2021); Rongali et al. (2022); Shrivastava et al. (2022). While these methods facilitate few-shot learning, they have limitations. Some of them rely on hand-crafted knowledge such as intermediate grammars or logical form templates Shrivastava et al. (2022); Shin et al. (2021); Rongali et al. (2022). Others rely on very large pretrained language models, such as GPT-3, to perform in-context learning by appending test examples with instructional prompts Shin et al. (2021). In this work, we explore few-shot domain adaptation for semantic parsing without any additional hand-crafted knowledge apart from the intent and slot tag names, and with much smaller architectures that can perform efficient inference in practical production environments. We also explore zero-shot domain adaptation, when we have no annotated training data from a new domain. 
To that end, we propose Concept-Seq2Seq, a novel architecture based on a state-of-the-art semantic parsing model, Seq2Seq-Ptr Rongali et al. (2020), which uses seq2seq models and a pointer generator network to decode the target semantic parse. We augment this model with a _concept_ encoder that encodes intents and slots from the schema and uses those encodings to conditionally decode the semantic parse. Figure 1 shows the architecture of our proposed model. We train this model on annotated data from the given domains. During inference, we simply encode all intents and slots from the schema, including new, unseen ones, into the learned concept space, and decode the target parse. This model has the same time complexity as the original Seq2Seq-Ptr model but comes with the added benefit of now being able to effectively parse utterances from unseen domains without any additional effort. There have been a few zero-shot semantic parsing approaches proposed in the past, but they either covered only simple slot-filling style utterances Bapna et al. (2017); Lee and Jha (2019) or compositional utterances that also came with carefully crafted intermediate representations and context-free grammars. Our model is capable of performing zero-shot domain adaptation for compositional semantic parsing, producing meaning representations with nested intents and slots, but it also doesn't require any grammars, whose construction effort often exceeds the effort required to annotate a few examples. In few-shot scenarios, we fine-tune our zero-shot model checkpoints further on the small number of available examples. Due to the presence of the concept encoder in our architecture, we expect to receive better knowledge-transfer advantages by encoding intent and slot tags from new domains as opposed to initializing them as new tags. To further improve performance, we propose a novel decoder-focused pretraining scheme for Concept-Seq2Seq using an entity-centric processed version of Wikidata called WikiWiki Li et al. (2022), to help it better encode unseen concepts and parse effectively. We report the first zero-shot performance numbers for semantic parsing on the compositional TOPv2 dataset Chen et al. (2020) and show that Concept-Seq2Seq achieves commendable zero-shot performance on the flat-entity SNIPS dataset Coucke et al. (2018). We also evaluate in few-shot settings and show that we match or outperform previous state-of-the-art models while still being production-viable. In summary, our contributions are as follows. * We propose Concept-Seq2Seq, a bi-tower architecture with a seq2seq model and a concept encoder, that can perform few-shot and zero-shot domain adaptation for compositional semantic parsing without additional handcrafted knowledge. * We propose a novel decoder-focused pretraining scheme for Concept-Seq2Seq using Wikidata that helps it better encode unseen concepts and parse effectively. * We report few-shot and zero-shot semantic parsing results on the TOPv2 and SNIPS datasets and show that our model outperforms or matches previously proposed approaches on a variety of few-shot settings. Figure 1: The architecture of Concept-Seq2Seq for low resource domain adaptation. The concept encoder encodes descriptions of each of the concept tags into an embedding and incorporates them into the decoded parse. ## 2 Methodology In this section, we describe our proposed model, Concept-Seq2Seq, for low resource (few-shot and zero-shot) domain adaptation for semantic parsing. It is based on the Seq2Seq-Ptr model from Rongali et al. 
(2020), consisting of a sequence-to-sequence encoder-decoder component, augmented with a pointer generator network to constrain the target decoding vocabulary. Since our task at hand is to perform potential zero-shot semantic parsing with just descriptive metadata about the new domain, we modify the architecture of Seq2Seq-Ptr to incorporate information about new intents and slots from new domains by adding a concept encoder. Section 2.2 describes this architecture in detail. To help our model learn to parse utterances from unseen domains better, we also propose a novel pretraining scheme to incorporate general concept parsing knowledge into it. Section 2.3 describes this concept pretraining scheme. Finally, we describe Concept-Seq2Seq model specifics for few-shot and zero-shot settings in Section 2.4. Before we get to these sections, we first describe the source and target sequence formulation for the semantic parsing task below. ### Task Formulation Our model solves semantic parsing as a sequence-to-sequence task, where the source sequence is the utterance and the target sequence is a linearized representation of the semantic parse. Following Rongali et al. (2020), we modify the target sequence to only contain intent/slot tags or pointers to utterance tokens. An example source and target sequence from the TOPv2 dataset are given below. ``` Source: How far is the coffee shop Target: [IN:GET_DISTANCE @ptr_0 @ptr_1 @ptr_2 [SL:DESTINATION [IN:GET_RESTAURANT_LOCATION @ptr_3 [SL:TYPE_FOOD @ptr_4 SL:TYPE_FOOD] @ptr_5 IN:GET_RESTAURANT_LOCATION] SL:DESTINATION] IN:GET_DISTANCE] ``` Here, each \(@ptr_{i}\) points to the \(i\)-th token of the source utterance. The combined scores over pointer and concept tokens are used to compute the loss during training and to choose the next token to generate during inference. For the target token embeddings in the decoder, we use a set of special embeddings to represent the \(@ptr_{i}\) tokens and \([c_{1}\dots c_{m}]\) to represent the concept embeddings. Figure 1 shows this process in action on a toy example. The model is decoding the next token after \(\mathtt{SL}\):genre at step 5. To do this, the model computes the pointer attention scores \([a_{1}\dots a_{n}]\) (blue, left) and the concept token attention scores \([s_{1}\dots s_{n}]\) (green, right). The highest overall score is for the token \(@ptr_{2}\), corresponding to the word _country_ in the source sequence, so the next predicted token is _country_. 
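A minimal sketch of this joint pointer/concept scoring at one decoding step (our reading of the architecture; names and the dot-product scoring form are assumptions):

```python
import torch

def decode_step_scores(dec_state, enc_states, concept_embs):
    """Score source-token pointers and concept tags in a single softmax.

    dec_state:    (d,)    current decoder hidden state
    enc_states:   (n, d)  encoder outputs for the n utterance tokens
    concept_embs: (m, d)  concept-encoder embeddings of the m intent/slot
                          tags (e.g. BERT [CLS] vectors of their names)
    Returns log-probabilities over the joint vocabulary
    [@ptr_0 ... @ptr_{n-1}, c_1 ... c_m].
    """
    ptr_scores = enc_states @ dec_state        # (n,)  pointer scores a_i
    concept_scores = concept_embs @ dec_state  # (m,)  concept scores s_j
    logits = torch.cat([ptr_scores, concept_scores])
    return torch.log_softmax(logits, dim=0)

# argmax over the joint distribution picks either a source token to copy
# or an intent/slot tag to emit next.
```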
### Concept Pretraining Concept-Seq2Seq has the ability to incorporate new, unseen concepts while parsing using the concept encoder, and to transfer knowledge across similar concepts. In order to produce these unseen concepts or types, and to have our model be robust in low-data settings, it is important for our decoder to be type aware. Conventional seq2seq pretraining schemes such as Lewis et al. (2020); Raffel et al. (2020); Soltan et al. (2022) pretrain the decoder using a language modeling criterion. Li et al. (2022) extend the language modeling task to induce entity-type information by treating it as a question answering task. We pretrain the seq2seq model on the semantic parsing task using the WikiWiki Li et al. (2022) dataset. We explain how to achieve this by keeping an open-domain, extensible output space for the semantic parse. The WikiWiki dataset curates mentions, entities, and entity types from 10M Wikipedia documents using hyperlink information linking sub-spans of text in sentences to other Wikipedia pages. The hyperlink is considered as the mention, and the entity and the type information are extracted from the new page. For further details on this processing, please refer to Li et al. (2022). This dataset contains around 2M entities and 40K entity types. Each example in the WikiWiki dataset consists of a _context_, which is a paragraph from a wiki page, _mentions_, which are sub-spans of text that link to another page, _entities_, which correspond to each mention, and _entity types_, which describe the type of the entity. We extract individual sentences from this dataset and use them to train Concept-Seq2Seq to learn to encode a wide variety of concepts, using the entity type fields as descriptions, and to tag the relevant mentions in the sentence. Figure 2 shows an example sentence from this dataset and the different fields. The source and target sequences for pretraining, and the descriptions of the concept tags for this example, are given below. ``` Source: He is a member of the Soul Seekers Target: @ptr_0 @ptr_1 @ptr_2 @ptr_3 @ptr_4 [Q215380 @ptr_5 @ptr_6 @ptr_7 Q215380] Concept descriptions: [Q215380 = begin musical group, Q215380] = end musical group ``` Figure 2: An example sentence from the WikiWiki dataset with the associated mention, entity, and type fields. The full hyperlinked sub-span is extracted as the mention, and the entity and type are extracted from the target page. During training, we collect all the concept tokens within a training batch and use them to create in-batch negatives (the denominator of the softmax calculation) for the decoding task. We do this since it is extremely inefficient to encode all 40K \(\times\) 2 concept token descriptions (begin and end) from WikiWiki in every step. 
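A hedged sketch of this in-batch-negative objective (hypothetical names; only the K concept tags present in the batch enter the softmax denominator):

```python
import torch
import torch.nn.functional as F

def concept_loss_with_in_batch_negatives(dec_states, target_idx, batch_concepts):
    """Decoding loss computed over only the concept tags present in the batch.

    dec_states:     (T, d)  decoder states at positions that emit a concept
    target_idx:     (T,)    index of the gold concept within batch_concepts
    batch_concepts: (K, d)  encodings of the K distinct concept tags that
                            appear in this batch; the remaining ~80K WikiWiki
                            begin/end tags are never encoded, so they are
                            effectively excluded from the denominator.
    """
    logits = dec_states @ batch_concepts.T       # (T, K)
    return F.cross_entropy(logits, target_idx)
```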
### Few-shot and Zero-shot Specifics Concept-Seq2Seq is primarily designed to perform low resource domain adaptation for semantic parsing by effectively encoding the output space via a concept encoder. In the zero-shot setting, we deploy the following procedure to build our models. We first perform concept pretraining on Concept-Seq2Seq using WikiWiki example sequences. We take this checkpoint and train on a set of known domains to then obtain the zero-shot model. During inference, we encode all the intent and slot tags from the new unknown domain using the concept encoder of the obtained zero-shot model and set the appropriate decoder parameters to reduce the architecture to a simple encoder-decoder setting. In few-shot settings, we further fine-tune the zero-shot checkpoint on the available handful of training examples. Since we explore extremely low-resource settings (1, 5, 25 samples per intent/slot), we run the risk of over-fitting and instability during training. To account for these risks and to smooth training, we augment the fine-tuning loss at every step with the loss from a randomly sampled batch of training data from the known domains. We scale the loss from the random known-domain batch down using a multiplier before adding it to the loss. This scheme is akin to rehearsal Ratcliff (1990), a popular technique in domain adaptation. ## 3 Experimental Setup We evaluate Concept-Seq2Seq on few-shot and zero-shot domain adaptation using two popular English task-oriented semantic parsing datasets - TOPv2 Chen et al. (2020) and SNIPS Coucke et al. (2018). Both datasets have utterances grouped into multiple domains; TOPv2 has eight domains and SNIPS has seven intents from seven different areas, which we consider domains. TOPv2 is a large dataset consisting of 10k-20k training and 3k-7k test examples per domain. It also comprises compositional examples with nested intents and slots. We exclude the _unsupported_ utterances from the training and test sets in TOPv2 for the zero-shot experiments (we use the full sets in few-shot). _Unsupported_ utterances consist of utterances that belong to a domain but are not supported, which is impossible to learn in zero-shot. SNIPS is a smaller and simpler dataset with flat, disjoint slots. It has 2k training and 100 test examples per domain. For zero-shot, we use a leave-one-out approach where, given \(n\) domains, we train models on annotated data from \(n-1\) of them and evaluate on utterances from the left-out domain. For few-shot settings, where the model has access to a few annotated examples from the left-out domain, we further fine-tune using 1, 5, and 25 samples-per-intent/slot (SPIs), which we randomly sample from the training data of the left-out domain. We fine-tune Concept-Seq2Seq on three different randomly sampled training sets per domain per SPI setting and report the average performance score of the three runs. We use a transformer encoder, initialized from a _roberta-base_ checkpoint, for Concept-Seq2Seq. The decoder is a transformer decoder initialized from scratch; it contains 6 layers, 8 heads, and a hidden state size of 768. The concept encoder is also a transformer encoder and is initialized from a _bert-base-uncased_ checkpoint. We choose a BERT-based model here since it is pretrained to compute a vector for the whole sequence using the CLS token, which is what we need for encoding a concept consisting of a multi-word description. We also choose all _base_-size components to keep the overall model size small, and expect the relative improvements shown by our model to generalize. We train our zero-shot models using sequence cross entropy loss. We use the Adam optimizer with learning rate \(2e^{-5}\) and \(\epsilon=1e^{-8}\), warm-up proportion \(0.1\), weight decay \(0.01\), and batch size 128. The number of epochs is set to 100, and we evaluate after every epoch and early stop with a patience of 5. For the WikiWiki pretraining step, we use the same hyper-parameters but stop the model training after 2 epochs on the entire WikiWiki dataset. We did not perform explicit hyper-parameter tuning. For the few-shot experiments, we take the zero-shot model trained by excluding the given few-shot domain and fine-tune it on the small set of annotated examples for 1000 epochs, evaluating after every 25 epochs. All other parameters are set to the same values as in the initial zero-shot training. We use a multiplier of 0.1 for the augmented loss from the random known-domain data batch. 
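A minimal sketch of one such rehearsal-style fine-tuning step (the `model.loss` helper is hypothetical, standing in for the sequence cross-entropy computation):

```python
def rehearsal_step(model, fewshot_batch, known_batch, alpha=0.1):
    """One few-shot fine-tuning step with rehearsal: the loss on the few
    new-domain examples is augmented with a down-weighted loss on a random
    batch from the known domains (alpha = 0.1 in our experiments)."""
    loss = model.loss(fewshot_batch) + alpha * model.loss(known_batch)
    loss.backward()
    return loss
```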
To speed up evaluation during training, we use teacher-forced sequence accuracy as our validation metric, which doesn't require us to perform any beam search. During inference, we use beam search decoding with a beam size of 4. We report exact match (EM) accuracy for the few-shot experiments, meaning the entire predicted parse has to match exactly with the gold parse. For zero-shot, we report both EM and F1 score since the task is more difficult and the performance is generally lower. At these lower numbers, the F1 score, which awards partial credit to correctly tagged spans, provides a better picture of improvements than EM accuracy, which requires the entire predicted parse to be correct for credit. For comparison wherever applicable, we use prior state-of-the-art models as baselines. In addition, we also report the performance of a vanilla Seq2Seq-Ptr model without concept pretraining.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & **Alarm** & **Event** & **Timer** & **Weather** & **Alarm** & **Event** & **Timer** & **Weather** \\ \hline & \multicolumn{4}{c}{F1 score} & \multicolumn{4}{c}{EM Accuracy} \\ \hline Concept-Seq2Seq w/o pretraining & 62.53 & 30.25 & 55.68 & 51.23 & 45.94 & 0.00 & 0.51 & 8.62 \\ Concept-Seq2Seq & 71.00 & 70.47 & 58.48 & 66.29 & 53.64 & 20.21 & 3.86 & 26.63 \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot performance of Concept-Seq2Seq on domains in TOPv2. We observe notable scores in the alarm and weather domains and improvements across all the domains after the concept pretraining step.

## 4 Results and Discussion

In this section, we report and discuss the performance of Concept-Seq2Seq on zero-shot and few-shot domain adaptation for semantic parsing. We first briefly describe our findings in the zero-shot setting and then describe findings in a variety of few-shot scenarios.

### Zero-shot Domain Adaptation

We report the first zero-shot performance numbers for domain adaptation on the TOPv2 dataset; Table 1 contains these numbers. We observed that our model produced decent predictions on four of the eight domains in the dataset, which we document in the table. For the other domains, the scores were very low. Concept-Seq2Seq achieves good F1 and EM accuracy scores on the alarm domain (71.00% F1 and 53.64% EM). On the event and weather domains, the concept pretraining step helps it achieve decent EM scores (20.21% and 26.63%, respectively). On the timer domain, Concept-Seq2Seq achieves a fairly high F1 score (58.48%) but a very low EM score (3.86%). Upon manual examination, we found that this was because our model always skipped a certain tag. In the timer domain, there is a slot tag called SL:METHOD_TIMER which tags the kind of timer, such as _timer_ or _stopwatch_; our model never learns to tag these words with that slot. We believe this is probably due to the description being inadequate for performing the requisite task.

Overall, we believe the task at hand here is difficult due to the combination of the zero-shot setting and the presence of specific nesting/parsing rules in a compositional semantic parsing task. While a good amount of information can be gleaned from the intent and slot names, our model has no access to any new kinds of tagging rules, since it has no annotated data or any descriptions of those rules within the concept descriptions. The descriptions themselves are sometimes inadequate, as described with the timer domain above.
We simply use the descriptions from the dataset, and they weren't really designed to describe the entity being tagged. We leave exploration into better descriptions and incorporating parsing rules without explicit annotations for future work.

We also evaluated our model on the SNIPS dataset to compare Concept-Seq2Seq to prior zero-shot approaches for flat slot-filling style datasets. We created a strong baseline by applying recent NLP advancements, such as pretrained transformers and attention mechanisms, to the slot-filling style zero-shot model proposed by Bapna et al. (2017) and Lee and Jha (2019). Table 2 compares the performance of Concept-Seq2Seq to this baseline. We observe that our model matches or outperforms the slot-filling baseline on most domains while also being adaptable to compositional datasets.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & **Music** & **Book** & **Creative** & **Weather** & **Restaurant** & **Playlist** & **Screening** \\ \hline \multicolumn{8}{c}{F1 score} \\ \hline Slot-filling (Lee and Jha, 2019; Bapna et al., 2017) & 31.20 & **34.13** & 86.21 & 65.64 & 51.40 & **59.96** & 44.50 \\ Concept-Seq2Seq & **50.00** & 30.26 & **88.75** & **74.58** & **57.78** & 57.11 & **45.78** \\ \hline \multicolumn{8}{c}{EM Accuracy} \\ \hline Slot-filling (Lee and Jha, 2019; Bapna et al., 2017) & 11.00 & 1.00 & 69.00 & 37.00 & 12.00 & **21.00** & 24.00 \\ Concept-Seq2Seq & **20.00** & **2.00** & **69.00** & **42.00** & **13.00** & 19.00 & **26.00** \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot performance of Concept-Seq2Seq on domains in SNIPS. Our model matches or outperforms the slot-filling style baseline on most domains.

### Few-shot Domain Adaptation

We evaluated Concept-Seq2Seq in few-shot settings of 1, 5, and 25 samples per intent/slot. Table 3 reports the EM accuracy scores of Concept-Seq2Seq and other recent baselines on TOPv2. We also report the performance of a Concept-Seq2Seq model fully trained on all the training data, for reference and to show that the architecture of Concept-Seq2Seq is competitive with other state-of-the-art methods in the full-resource setting. We evaluated with a range of SPIs to allow for comparison with a wide range of models focused on both extremely low-resource (1, 5 SPIs) and medium low-resource (25 SPIs) settings. We report numbers for the baselines from their original papers, so they are missing for some domains.

As shown in the table, Concept-Seq2Seq outperforms the vanilla Seq2Seq-Ptr, Inventory Desai et al. (2021), and Retrieve-and-Fill (RAF) Shrivastava et al. (2022) models on most domains in the 1 SPIs setting. RAF scores are very close, and the approach outperforms our model on two domains, but it uses additional hand-crafted information, such as handmade descriptions and examples for intents and slots, as well as an intermediate scenario bank to retrieve templates from; Concept-Seq2Seq simply works off of the existing information in the dataset. In the 5 SPIs setting, it again outperforms the vanilla Seq2Seq-Ptr and Inventory models on most domains and on average. Inventory is a model similar to ours in which the lexical information from intents and slots is used to help better transfer knowledge in the low-resource setting. However, this information is prepended to the input sequence, which might cause input size issues for large inventories.
In the slightly higher resource setting of 25 SPIs, Concept-Seq2Seq beats a vanilla Seq2Seq-Ptr model and matches the performance of RINE Mansimov and Zhang (2021), which is reported on two domains. Across all the SPI settings, we see a noticeable drop in the performance of Concept-Seq2Seq without the Wikiwiki concept pretraining. This shows the effectiveness of the pretraining step in helping the model generalize better to unseen concepts and domains.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & **Alarm** & **Event** & **Messaging** & **Music** & **Navigation** & **Reminder** & **Timer** & **Weather** \\ \hline \multicolumn{9}{c}{Few-shot 1 SPIs} \\ \hline Seq2Seq-Ptr & 20.41 & 31.85 & 38.12 & 25.58 & 19.96 & 23.66 & 16.62 & 47.24 \\ Inventory Desai et al. (2021) & 62.13 & 46.57 & **46.54** & 23.00 & 21.16 & 28.58 & 28.92 & 54.53 \\ RAF Shrivastava et al. (2022) & 62.71 & - & - & 35.47 & - & - & **55.06** & **61.05** \\ Concept-Seq2Seq w/o pretraining & 61.72 & 44.28 & 34.24 & 20.66 & 20.82 & 35.39 & 44.75 & 52.24 \\ Concept-Seq2Seq & **64.71** & **54.42** & 46.13 & **36.30** & **30.00** & **36.93** & 53.44 & 54.68 \\ \hline \multicolumn{9}{c}{Few-shot 5 SPIs} \\ \hline Seq2Seq-Ptr & 45.50 & 38.31 & 52.79 & 48.75 & 43.38 & 36.37 & 54.79 & 49.94 \\ Inventory Desai et al. (2021) & 71.81 & 58.87 & **63.72** & **53.59** & 42.59 & 48.88 & 55.54 & 65.09 \\ Concept-Seq2Seq w/o pretraining & 71.32 & 53.73 & 51.52 & 45.96 & 50.71 & 50.83 & 58.89 & 66.65 \\ Concept-Seq2Seq & **74.17** & **61.72** & 61.20 & 51.24 & **56.76** & **54.36** & **63.13** & **68.54** \\ \hline \multicolumn{9}{c}{Few-shot 25 SPIs} \\ \hline Seq2Seq-Ptr & - & - & - & - & - & 55.7 & - & 71.6 \\ RINE Mansimov and Zhang (2021) & - & - & - & - & - & **68.71** & - & 74.53 \\ Concept-Seq2Seq w/o pretraining & 78.16 & 68.21 & 75.28 & 65.54 & 67.67 & 67.92 & 70.72 & 74.30 \\ Concept-Seq2Seq & **79.87** & **72.96** & **80.45** & **67.91** & **70.94** & 67.76 & **72.41** & **76.44** \\ \hline \multicolumn{9}{c}{Reference - Fully trained} \\ \hline Concept-Seq2Seq & 88.07 & 83.23 & 93.11 & 79.47 & 81.63 & 79.57 & 77.33 & 90.73 \\ \hline \hline \end{tabular} \end{table} Table 3: EM Accuracy scores of various models in few-shot settings on TOPv2. We see that our model outperforms prior approaches on many domains and settings, most notably in the 1 SPIs setting.

To wrap up our evaluation, we also report the performance of Concept-Seq2Seq on SNIPS in the few-shot settings described above. Table 4 shows these numbers. We can see that by training with just 25 SPIs we almost catch up to a fully trained model on most domains of this dataset, except music. We believe the music domain probably requires many more samples to effectively identify the diverse set of entities in the domain.

Overall, we find Concept-Seq2Seq to be a very promising approach which achieves high performance scores in low-resource domain adaptation. It is capable of doing this in both compositional and flat semantic parsing settings, without any additional hand-crafted information apart from the little documentation in the dataset, and with the memory and inference latency footprint of a vanilla Seq2Seq-Ptr model.

## 5 Related Work

Zero-shot domain adaptation for task-oriented semantic parsing has been previously explored for simple flat queries with single intents and disjoint, non-overlapping slots. Bapna et al. (2017) and Lee and Jha (2019) encode the lexical tag features and create a token-tagging schema to create the final semantic parses.
Yu et al. (2021) solve the task using a retrieve-and-fill mechanism. Our baseline model for simple queries is based on these approaches.

For complex utterances with nested structures, zero-shot semantic parsing has been explored using intermediate, concept-agnostic logical forms (Herzig and Berant, 2018; Dong and Lapata, 2018; Reddy et al., 2017) or natural language canonical forms (Wu et al., 2021). These approaches apply to semantic parsing datasets which have context-free grammars and specified rules, such as database or knowledge graph queries. The effort to craft these grammars for task-oriented semantic parsing in a voice assistant setting could quite possibly be greater than annotating utterances.

A more relevant class of approaches for this work are ones that solve task-oriented semantic parsing for complex utterances in a few-shot setting using lexical tag features. Shrivastava et al. (2021) and Mansimov and Zhang (2021) modify the seq2seq architecture from Rongali et al. (2020) to perform non-autoregressive style decoding and show that their models perform better in a few-shot setting. Ghoshal et al. (2020) use adaptive label smoothing, a model-agnostic technique. Shin et al. (2021) proposed a prompting-style approach where custom instructional prompts, filled with a handful of annotated examples and an unsolved utterance, are fed as input to GPT-3 to directly produce a semantic parse. Their approach is extremely slow and cannot be easily adapted into a zero-shot framework. Shrivastava et al. (2022) explore a retrieve-and-fill style approach where they retrieve the best _scenario_, an intermediate logical form consisting of the semantic frame and abstracted-out tags, from a scenario bank of all supported semantic parses. Their approach is contingent on the availability of this scenario bank, which could possibly entail more effort than annotating utterances.

Mueller et al. (2022) and Desai et al. (2021) use lexical features from intent and slot names to create an _inventory_ and use it as input to train semantic parsers for new domains. Mueller et al. (2022) also pretrain their model to improve generalizability but only evaluate it on an intent classification task. Desai et al. (2021) evaluate their model on full sequences, and our model is similar to theirs. However, we use our inventory to create custom decoder embeddings in a seq2seq model, which removes the input size issues that their model will encounter with large inventories. We also pretrain our model with Wikidata and evaluate it in a completely zero-shot setting, in addition to few-shot. Zhao et al. (2022) is another recent question-answering-based approach that uses lexical features from the intent and slot tags by using them as context and posing questions, but it has a similar input size issue with large inventories.

## 6 Conclusion

We propose a model called Concept-Seq2Seq to perform low-resource domain adaptation for compositional semantic parsing. Our model is built on the Seq2Seq-Ptr framework and is augmented with a concept encoder to transfer knowledge and encode unseen intents and slots from new domains through their text definitions. We also propose a novel concept pretraining scheme to incorporate general concept knowledge into our model using an entity-centric Wikipedia dataset called Wikiwiki. We evaluate our model in zero-shot and multiple few-shot settings on Facebook TOPv2 and SNIPS datasets.
We show that our model is capable of performing zero-shot domain adaptation on some domains of the TOPv2 dataset and beats a strong slot-filling baseline on the SNIPS dataset. In few-shot, over multiple dataset sizes of 1, 5, and 25 SPIs, we show that our model outperforms many strong prior models on TOPv2. Using the SNIPS dataset, we also demonstrate how our model catches up to a fully-trained semantic parsing model using just 25 SPIs on most domains. Our model is capable of low-resource domain adaptation in both compositional and flat parsing settings, without additional hand-crafted information, and with the inference behavior of a vanilla Seq2Seq-Ptr model.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Music**} & \multicolumn{2}{c}{**Book**} & \multicolumn{2}{c}{**Creative**} & \multicolumn{2}{c}{**Weather**} & \multicolumn{2}{c}{**Restaurant**} & \multicolumn{2}{c}{**Playlist**} & \multicolumn{2}{c}{**Screening**} \\ \hline & F1 & EM & F1 & EM & F1 & EM & F1 & EM & F1 & EM & F1 & EM & F1 & EM \\ \hline 1 SPIs & 67.10 & 39.00 & 86.74 & 61.67 & 92.39 & 80.33 & 88.14 & 71.00 & 89.64 & 72.00 & 78.31 & 47.00 & 79.19 & 61.00 \\ 5 SPIs & 84.33 & 68.00 & 96.83 & 89.33 & 92.58 & 84.33 & 94.64 & 86.33 & 93.27 & 81.33 & 87.74 & 68.33 & 94.16 & 87.67 \\ 25 SPIs & 85.90 & 72.67 & 99.32 & 97.33 & 98.10 & 95.33 & 98.27 & 94.67 & 96.38 & 89.33 & 91.73 & 80.67 & 95.02 & 89.33 \\ Fully-trained & 90.78 & 83.00 & 99.18 & 97.00 & 100.00 & 100.00 & 98.55 & 96.00 & 96.89 & 90.00 & 94.53 & 87.00 & 98.35 & 97.00 \\ \hline \hline \end{tabular} \end{table} Table 4: Few-shot performance of Concept-Seq2Seq on domains in SNIPS. Our 25 SPIs model almost catches up to a fully trained model. Numbers are an average of three runs with different random samples of SPIs.

## Acknowledgements

This work was supported in part by Amazon Alexa AI, in part by the Chan Zuckerberg Initiative, in part by IBM Research AI through the AI Horizons Network, and in part by the National Science Foundation (NSF) grants IIS-1763618 and IIS-1955567. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.

## Limitations

Our models were trained on GPUs with at least 20GB of on-board memory, since, in addition to the traditional encoder and decoder components in a seq2seq model, we also train a concept encoder, which is around the same size as the encoder. This rules out popular GPUs such as the 1080 Ti and 2080 Ti, unless parameter freezing or other tricks are employed during training. During inference, however, once the concept encoder is trained, we can simply pre-encode the target tags, and the model reduces to the size and performance of a traditional Seq2Seq-Ptr model.

We also report all results on models (ours and the baselines) with _base_-size components such as _roberta-base_. We do this since these models are more likely to be used in production than the much bigger _large_-size models. Results and comparisons with _large_-size models are missing from this work (we expect the trends shown to generalize), and we leave this to future work.

Finally, to simulate our low-resource experiments, we randomly sample a few examples from the existing training datasets. While this is useful for experimentation, it doesn't truly mimic a real low-resource workflow, where these few examples could be carefully crafted by developers to ensure better semantic coverage in terms of the language of the utterances.
This work doesn't include any analysis on the influence of the content of the few selected examples; it just focuses on their number.
2303.16188
Symmetric Rank-$k$ Methods
This paper proposes a novel class of block quasi-Newton methods for convex optimization which we call symmetric rank-$k$ (SR-$k$) methods. Each iteration of SR-$k$ incorporates curvature information with $k$ Hessian-vector products obtained from a greedy or random strategy. We prove that SR-$k$ methods have a local superlinear convergence rate of $\mathcal{O}\big((1-k/d)^{t(t-1)/2}\big)$ for minimizing smooth and strongly convex functions, where $d$ is the problem dimension and $t$ is the iteration counter. This is the first explicit superlinear convergence rate for block quasi-Newton methods, and it successfully explains why block quasi-Newton methods converge faster than ordinary quasi-Newton methods in practice. We also leverage the idea of SR-$k$ methods to study the block BFGS and block DFP methods, showing their superior convergence rates.
Chengchang Liu, Cheng Chen, Luo Luo
2023-03-28T17:53:06Z
http://arxiv.org/abs/2303.16188v6
# Symmetric Rank-\(k\) Methods ###### Abstract This paper proposes a novel class of block quasi-Newton methods for convex optimization which we call symmetric rank-\(k\) (SR-\(k\)) methods. Each iteration of SR-\(k\) incorporates curvature information with \(k\) Hessian-vector products obtained from a greedy or random strategy. We prove that SR-\(k\) methods have a local superlinear convergence rate of \(\mathcal{O}\big{(}(1-k/d)^{t(t-1)/2}\big{)}\) for minimizing smooth and strongly self-concordant functions, where \(d\) is the problem dimension and \(t\) is the iteration counter. This is the first explicit superlinear convergence rate for block quasi-Newton methods, and it successfully explains why block quasi-Newton methods converge faster than standard quasi-Newton methods in practice. ## 1 Introduction We study quasi-Newton methods for solving the minimization problem \[\min_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x}), \tag{1}\] where \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is smooth and strongly self-concordant. Quasi-Newton methods [2, 3, 4, 6, 8, 33, 36] are widely recognized for their fast convergence rates and efficient updates, which has attracted growing attention in many fields such as statistics [1, 16, 37], economics [20, 24] and machine learning [12, 15, 19, 22, 23]. Unlike standard Newton methods, which need to compute the Hessian and its inverse, quasi-Newton methods move along a descent direction via the scheme \[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\mathbf{G}_{t}^{-1}\nabla f(\mathbf{x}_{t}),\] where \(\mathbf{G}_{t}\in\mathbb{R}^{d\times d}\) is an estimator of the Hessian \(\nabla^{2}f(\mathbf{x}_{t})\). The most popular ways to construct the Hessian estimator are the Broyden family updates, including the Davidon-Fletcher-Powell (DFP) method [8, 10], the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [3, 4, 33], and the symmetric rank 1 (SR1) method [2, 8]. The classical quasi-Newton methods with Broyden family updates [3, 4] find the Hessian estimator \(\mathbf{G}_{t+1}\) for the next round by the secant equation \[\mathbf{G}_{t+1}(\mathbf{x}_{t+1}-\mathbf{x}_{t})=\nabla f(\mathbf{x}_{t+1}) -\nabla f(\mathbf{x}_{t}). \tag{2}\] These methods have been proven to exhibit local superlinear convergence in the 1970s [5, 9, 28], and their non-asymptotic superlinear rates were established in recent years [17, 30, 31, 35]. For example, Rodomanov and Nesterov [31] showed that the classical BFGS method enjoys a local superlinear rate of \(\mathcal{O}\big{(}(d\varkappa/t)^{t/2}\big{)}\), which was later improved to \(\mathcal{O}\big{(}(\exp(d\ln(\varkappa)/t)-1)^{t/2}\big{)}\)[30], and Ye et al. [35] showed that the classical SR1 method converges with a local rate of \(\mathcal{O}\big{(}(d\ln(\varkappa)/t)^{t/2}\big{)}\), where \(\varkappa\) is the condition number of the objective. Some recent works [13, 29] proposed new types of quasi-Newton methods, which construct the Hessian estimator by the following equation \[\mathbf{G}_{t+1}\mathbf{u}_{t}=\nabla^{2}f(\mathbf{x}_{t+1})\mathbf{u}_{t}, \tag{3}\] where \(\mathbf{u}_{t}\in\mathbb{R}^{d}\) is chosen by greedy or random strategies. Rodomanov and Nesterov [29] established the local superlinear rate of \(\mathcal{O}\big{(}(1-1/(\varkappa d))^{t(t-1)/2}\big{)}\) for greedy quasi-Newton methods with Broyden family updates. Later, Lin et al. [21] provided the condition-number-free superlinear rate of \(\mathcal{O}\big{(}(1-1/d)^{t(t-1)/2}\big{)}\) for greedy and random quasi-Newton methods with specific BFGS and SR1 updates.
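As a small numerical illustration of the two conditions above (a sketch of ours, not taken from the paper), the standard BFGS formula enforces the secant equation (2); on a quadratic, the gradient difference is exactly a Hessian-vector product, so condition (3) coincides with (2) in this case:

```python
import numpy as np

# Illustration (ours): the standard BFGS update satisfies the secant
# equation (2). For the quadratic f(x) = 0.5 * x.T @ A @ x, the gradient
# difference y = A @ s is a Hessian-vector product, as in condition (3).

def bfgs_update(G, s, y):
    """Standard BFGS update of the Hessian estimator G, so that G_plus @ s == y."""
    Gs = G @ s
    return G - np.outer(Gs, Gs) / (s @ Gs) + np.outer(y, y) / (y @ s)

rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)            # Hessian of the quadratic
G = 10.0 * np.eye(d)               # initial estimator
s = rng.standard_normal(d)         # step x_{t+1} - x_t
y = A @ s                          # gradient difference = Hessian-vector product
G_plus = bfgs_update(G, s, y)
print(np.allclose(G_plus @ s, y))  # secant equation (2) holds: True
```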
Block quasi-Newton methods construct the Hessian estimator along multiple directions per iteration. The study of these methods dates back to the 1980s. Schnabel [32] proposed the first block BFGS method by extending equation (2) to the multiple secant equations \[\mathbf{G}_{t+1}(\mathbf{x}_{t+1}-\mathbf{x}_{t+1-j})=\nabla f(\mathbf{x}_{t+ 1})-\nabla f(\mathbf{x}_{t+1-j})\] for \(j=1,\cdots,k\). Although block quasi-Newton methods usually have better empirical performance than classical ones [11, 13, 14, 18, 27, 32], their theoretical guarantees remained a mystery until Gao and Goldfarb [11] proved that the block BFGS method has asymptotic local superlinear convergence. On the other hand, Gower and Richtarik [13], Gower et al. [14], and Kovalev et al. [18] introduced randomized block BFGS by generalizing condition (3) to \[\mathbf{G}_{t+1}\mathbf{U}_{t}=\nabla^{2}f(\mathbf{x}_{t+1})\mathbf{U}_{t},\] where \(\mathbf{U}_{t}\in\mathbb{R}^{d\times k}\) is some random matrix. Empirical studies show that randomized block BFGS performs well on real-world applications. Kovalev et al. [18] showed that the randomized block BFGS method also has asymptotic local superlinear convergence, but its advantage over vanilla BFGS methods is still unclear in theory. The known results cannot explain why block quasi-Newton methods enjoy faster convergence behavior than vanilla quasi-Newton methods in practice. This naturally leads to the following question: _Can we design a block quasi-Newton method with an explicit superior convergence rate?_ In this paper, we give an affirmative answer to the above question by proposing symmetric rank-\(k\) (SR-\(k\)) methods. The construction of Hessian estimators in SR-\(k\) methods is based on generalizing the idea of symmetric rank 1 (SR1) methods [2, 8, 35] and the equation of the form (3). We provide random and greedy strategies to determine \(\mathbf{U}_{t}\) for SR-\(k\). Both of these strategies lead to the explicit local superlinear convergence rate of \(\mathcal{O}\big{(}(1-k/d)^{t(t-1)/2}\big{)}\), where \(k\) is the number of directions used to approximate the Hessian per iteration. For \(k=1\), our convergence rate reduces to that of greedy and random SR1 methods [21]. For \(k\geq 2\), it is clear that the convergence rate of SR-\(k\) methods is better than that of existing greedy and random quasi-Newton methods [21, 29]. We also follow the design of SR-\(k\) to propose a variant of the randomized block BFGS method [13, 14, 18], resulting in an explicit superlinear convergence rate of \(\mathcal{O}\big{(}(1-k/(\varkappa d))^{t(t-1)/2}\big{)}.\) We compare the proposed methods with existing quasi-Newton methods for minimizing strongly convex functions in Table 1. The remainder of this paper is organized as follows. In Section 2, we introduce the notation and the preliminaries used throughout this paper. In Section 3, we introduce the SR-\(k\) update from the viewpoint of matrix approximation. In Section 4, we propose the quasi-Newton methods with SR-\(k\) updates for minimizing strongly self-concordant functions and provide their superior local superlinear convergence rates. In Section 5, we propose a variant of the randomized block BFGS method with an explicit local superlinear convergence rate. In Section 6, we conduct numerical experiments to demonstrate the superior performance of the proposed methods. Finally, we conclude our work in Section 7. ## 2 Preliminaries We use \(\{\mathbf{e}_{1},\cdots,\mathbf{e}_{d}\}\) to denote the standard basis of \(\mathbb{R}^{d}\) and let \(\mathbf{I}_{d}\in\mathbb{R}^{d\times d}\) be the identity matrix.
We denote the trace of a square matrix by \(\operatorname{tr}\left(\cdot\right)\). We use \(\|\cdot\|\) to denote the spectral norm of a matrix and the Euclidean norm of a vector, respectively. Given a positive definite matrix \(\mathbf{A}\in\mathbb{R}^{d\times d}\), we denote the corresponding weighted norm as \(\|\mathbf{x}\|_{\mathbf{A}}\triangleq(\mathbf{x}^{\top}\mathbf{A}\mathbf{x}) ^{1/2}\) for \(\mathbf{x}\in\mathbb{R}^{d}\). We use the notation \(\|\mathbf{x}\|_{\mathbf{z}}\) as shorthand for \(\|\mathbf{x}\|_{\nabla^{2}f(\mathbf{z})}\) for a positive definite Hessian \(\nabla^{2}f(\mathbf{z})\), if there is no ambiguity about the reference function \(f(\cdot)\). We also define \[\mathbf{E}_{k}(\mathbf{A})\triangleq[\mathbf{e}_{i_{1}};\cdots;\mathbf{e}_{i_ {k}}]\in\mathbb{R}^{d\times k}, \tag{4}\] where \(i_{1},\ldots,i_{k}\) are the indices of the largest \(k\) entries in the diagonal of \(\mathbf{A}\). Throughout this paper, we suppose the objective in problem (1) satisfies the following assumptions. **Assumption 2.1**.: We assume the objective function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is \(L\)-smooth, i.e., there exists some constant \(L\geq 0\) such that \(\left\|\nabla f(\mathbf{x})-\nabla f(\mathbf{y})\right\|_{2}\leq L\left\| \mathbf{x}-\mathbf{y}\right\|_{2}\) for any \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\). **Assumption 2.2**.: We assume the objective function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is \(\mu\)-strongly-convex, i.e., there exists some constant \(\mu>0\) such that \[f(\lambda\mathbf{x}+(1-\lambda)\mathbf{y})\leq\lambda f(\mathbf{x})+(1- \lambda)f(\mathbf{y})-\frac{\lambda(1-\lambda)\mu}{2}\left\|\mathbf{x}- \mathbf{y}\right\|_{2}^{2}\] for any \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\) and \(\lambda\in[0,1]\). We define the condition number as \(\varkappa\triangleq L/\mu\). The following proposition shows the objective function has bounded Hessian under Assumptions 2.1 and 2.2. **Proposition 2.3**.: _Suppose the objective function \(f:\mathbb{R}^{d}\to\mathbb{R}\) satisfies Assumptions 2.1 and 2.2, then it holds_ \[\mu\mathbf{I}_{d}\preceq\nabla^{2}f(\mathbf{x})\preceq L\mathbf{I}_{d} \tag{5}\] _for any \(\mathbf{x}\in\mathbb{R}^{d}\)._ We also impose the assumption of strong self-concordance [21, 29] as follows. **Assumption 2.4**.: We assume the objective function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is \(M\)-strongly self-concordant, i.e., there exists some constant \(M>0\) such that \[\nabla^{2}f(\mathbf{y})-\nabla^{2}f(\mathbf{x})\preceq M\|\mathbf{y}- \mathbf{x}\|_{\mathbf{z}}\nabla^{2}f(\mathbf{w}), \tag{6}\] for any \(\mathbf{x},\mathbf{y},\mathbf{w},\mathbf{z}\in\mathbb{R}^{d}\).
\begin{table} \begin{tabular}{c c c} \hline \hline **Method** & **Rank** & \(\mathbb{E}\left[\lambda_{t+1}/\lambda_{t}\right]\) \\ \hline Newton [25, 26] & \(d\) & \(\mathcal{O}(\lambda_{t})\) \\ \hline Classical Quasi-Newton [17, 30, 31, 35] & 1 or 2 & \(\mathcal{O}\left(1/t\right)\) \\ \hline Greedy/Randomized Broyden Family [21, 29] & 1 or 2 & \(\mathcal{O}\big{(}(1-1/(\varkappa d))^{t}\big{)}\) \\ \hline Greedy/Randomized BFGS [21] & 1 or 2 & \(\mathcal{O}\big{(}(1-1/d)^{t}\big{)}\) \\ \hline Greedy/Randomized SR1 [21] & 1 or 2 & \(\mathcal{O}\big{(}(1-1/d)^{t}\big{)}\) \\ \hline Multi-Secant Block-BFGS [11, 32] & \(k\in[d]\) & implicit \\ \hline Randomized Block-BFGS (v1) [13, 18] & \(k\in[d]\) & implicit \\ \hline Randomized Block-BFGS (v2), Algorithm 3 & \(k\in[d]\) & \(\mathcal{O}\big{(}(1-k/(\varkappa d))^{t}\big{)}\) \\ \hline SR-\(k\), Algorithm 1 & \(k\in[d]\) & \(\begin{cases}\mathcal{O}\big{(}(1-k/d)^{t}\big{)},&k\in[d-1]\\ \mathcal{O}\big{(}\lambda_{t}\big{)},&k=d\end{cases}\) \\ \hline \hline \end{tabular} \end{table} Table 1: We summarize the properties of quasi-Newton methods for convex optimization. A strongly convex function with Lipschitz-continuous Hessian is strongly self-concordant. **Proposition 2.5**.: _Suppose the objective function \(f:\mathbb{R}^{d}\to\mathbb{R}\) satisfies Assumption 2.2 and its Hessian is \(L_{2}\)-Lipschitz continuous, i.e., we have \(\|\nabla^{2}f(\mathbf{x})-\nabla^{2}f(\mathbf{y})\|\leq L_{2}\|\mathbf{x}- \mathbf{y}\|\) for all \(\mathbf{x}\), \(\mathbf{y}\in\mathbb{R}^{d}\); then \(f\) is \(M\)-strongly self-concordant with \(M=L_{2}/\mu^{3/2}\)._ ## 3 Symmetric Rank-\(k\) Updates We propose the symmetric rank-\(k\) (SR-\(k\)) update as follows. **Definition 3.1** (SR-\(k\) Update).: Let \(\mathbf{A}\in\mathbb{R}^{d\times d}\) and \(\mathbf{G}\in\mathbb{R}^{d\times d}\) be two positive-definite matrices with \(\mathbf{A}\preceq\mathbf{G}\). For any full rank matrix \(\mathbf{U}\in\mathbb{R}^{d\times k}\) with \(k\leq d\), we define SR-\(k(\mathbf{G},\mathbf{A},\mathbf{U})\triangleq\mathbf{G}\) if \(\mathbf{GU}=\mathbf{A}\mathbf{U}\). Otherwise, we define \[\text{SR-}k(\mathbf{G},\mathbf{A},\mathbf{U})\triangleq\mathbf{G}-(\mathbf{G }-\mathbf{A})\mathbf{U}(\mathbf{U}^{\top}(\mathbf{G}-\mathbf{A})\mathbf{U})^{ -1}\mathbf{U}^{\top}(\mathbf{G}-\mathbf{A}). \tag{7}\] We provide two strategies to select \(\mathbf{U}\in\mathbb{R}^{d\times k}\) for the SR-\(k\) update: 1. For the randomized strategy, we sample each entry of \(\mathbf{U}\) according to \(\mathcal{N}(0,1)\) independently. 2. For the greedy strategy, we construct \(\mathbf{U}=\mathbf{E}_{k}(\mathbf{G}-\mathbf{A})\), where \(\mathbf{E}_{k}(\cdot)\) follows the notation of (4). For \(k=1\), SR-\(k\) updates with the above two strategies reduce to randomized or greedy SR1 updates [21, 31].
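To make Definition 3.1 concrete, the following NumPy sketch (an illustration only, not an optimized implementation) applies one SR-\(k\) update with the greedy choice of \(\mathbf{U}\) and previews the trace decay quantified in Theorem 3.3 below:

```python
import numpy as np

# A minimal NumPy sketch (illustration only) of the SR-k update (7)
# with the greedy strategy for selecting U.

def sr_k(G, A, U):
    """One SR-k update: returns G if (G - A) U = 0, otherwise applies (7)."""
    R = (G - A) @ U
    if np.allclose(R, 0.0):
        return G.copy()
    return G - R @ np.linalg.solve(U.T @ R, R.T)

def greedy_U(G, A, k):
    """E_k(G - A): basis vectors at the k largest diagonal entries of G - A."""
    idx = np.argsort(np.diag(G - A))[::-1][:k]
    return np.eye(G.shape[0])[:, idx]

rng = np.random.default_rng(0)
d, k = 20, 5
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)                    # target matrix (positive definite)
G = A + np.diag(rng.uniform(0.1, 2.0, d))  # estimator with G >= A
G_plus = sr_k(G, A, greedy_U(G, A, k))
print(np.trace(G_plus - A) / np.trace(G - A), "<=", 1 - k / d)  # trace decay
```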
The remainder of this section shows that the multiple directions in \(\mathbf{U}\in\mathbb{R}^{d\times k}\) provably give the SR-\(k\) update an advantage over the SR1 update in estimating the target matrix \(\mathbf{A}\). First, we provide the following lemma to show that the output \(\mathbf{G}_{+}\) of the SR-\(k\) update does not increase the deviation from \(\mathbf{A}\), similar to ordinary Broyden family updates [29].

**Lemma 3.2**.: _For any positive-definite matrices \(\mathbf{A}\in\mathbb{R}^{d\times d}\) and \(\mathbf{G}\in\mathbb{R}^{d\times d}\) with \(\mathbf{A}\preceq\mathbf{G}\preceq\eta\mathbf{A}\) for some \(\eta\geq 1\), we let \(\mathbf{G}_{+}=\text{SR-}k(\mathbf{G},\mathbf{A},\mathbf{U})\) for some full rank matrix \(\mathbf{U}\in\mathbb{R}^{d\times k}\). Then it holds that_ \[\mathbf{A}\preceq\mathbf{G}_{+}\preceq\eta\mathbf{A}. \tag{8}\] Then we introduce the quantity [21, 35] \[\tau_{\mathbf{A}}(\mathbf{G})\triangleq\text{tr}\left(\mathbf{G}-\mathbf{A}\right) \tag{9}\] to characterize the difference between \(\mathbf{A}\) and \(\mathbf{G}\). We can prove that SR-\(k\) updates with randomized or greedy strategies enjoy an explicitly faster convergence rate than SR1 updates for estimating \(\mathbf{A}\). **Theorem 3.3**.: _Let_ \[\mathbf{G}_{+}=\text{SR-}k(\mathbf{G},\mathbf{A},\mathbf{U}) \tag{10}\] _with \(\mathbf{G}\succeq\mathbf{A}\in\mathbb{R}^{d\times d}\) and select \(\mathbf{U}\in\mathbb{R}^{d\times k}\) by one of the following strategies:_ 1. _Sample each entry of_ \(\mathbf{U}\) _according to_ \(\mathcal{N}(0,1)\) _independently._ 2. _Construct_ \(\mathbf{U}=\mathbf{E}_{k}(\mathbf{G}-\mathbf{A})\)_._ _Then, we have_ \[\mathbb{E}\left[\tau_{\mathbf{A}}(\mathbf{G}_{+})\right]\leq\left(1-\frac{k}{d }\right)\tau_{\mathbf{A}}(\mathbf{G}). \tag{11}\] The term \((1-k/d)\) in inequality (11) reveals the advantage of the block-type update in SR-\(k\) methods, since a larger \(k\) leads to faster decay of \(\tau_{\mathbf{A}}(\mathbf{G}_{+})\). As a comparison, the results of randomized or greedy SR1 updates [21] match the special case of Theorem 3.3 with \(k=1\). ## 4 Minimization of Strongly Self-Concordant Functions We propose SR-\(k\) methods for minimizing strongly self-concordant functions in Algorithm 1, where \(M>0\) follows the notation in Assumption 2.4. Then we provide the convergence analysis for SR-\(k\) methods and show their superiority over existing quasi-Newton methods. The convergence of SR-\(k\) methods (Algorithm 1) is measured by the local gradient norm [29] \[\lambda(\mathbf{x})\triangleq\sqrt{\nabla f(\mathbf{x})^{\top}( \nabla^{2}f(\mathbf{x}))^{-1}\nabla f(\mathbf{x})}. \tag{12}\] The theoretical analysis starts from the following result for quasi-Newton iterations. **Lemma 4.1** (Lemma 4.3 of Rodomanov and Nesterov [29]).: _Suppose that the twice differentiable function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is strongly self-concordant with constant \(M>0\) and the positive definite matrix \(\mathbf{G}_{t}\in\mathbb{R}^{d\times d}\) satisfies_ \[\nabla^{2}f(\mathbf{x}_{t})\preceq\mathbf{G}_{t}\preceq\eta_{t }\nabla^{2}f(\mathbf{x}_{t}) \tag{13}\] _for some \(\eta_{t}\geq 1\). Then the update formula_ \[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\mathbf{G}_{t}^{-1}\nabla f( \mathbf{x}_{t}) \tag{14}\] _satisfies_ \[\|\mathbf{x}_{t+1}-\mathbf{x}_{t}\|_{\mathbf{x}_{t}}\leq\lambda (\mathbf{x}_{t})\quad\text{and}\quad\lambda(\mathbf{x}_{t+1})\leq\left(1- \frac{1}{\eta_{t}}\right)\lambda(\mathbf{x}_{t})+\frac{M}{2}(\lambda(\mathbf{x }_{t}))^{2}+\frac{M^{2}}{4\eta_{t}}(\lambda(\mathbf{x}_{t}))^{3}. \tag{15}\] Applying Lemma 4.1 with fixed \(\eta_{t}=3\eta_{0}/2\) and Lemma 3.2, we can establish the linear convergence rate of SR-\(k\) methods.
**Theorem 4.2**.: _Under Assumptions 2.1, 2.2 and 2.4, we run Algorithm 1 with initial \(\mathbf{x}_{0}\) and \(\mathbf{G}_{0}\) such that_ \[\lambda(\mathbf{x}_{0})\leq\frac{\ln(3/2)}{4\eta_{0}M}\qquad \text{and}\qquad\nabla^{2}f(\mathbf{x}_{0})\preceq\mathbf{G}_{0}\preceq\eta _{0}\nabla^{2}f(\mathbf{x}_{0})\] _for some \(\eta_{0}\geq 1\). Then it holds that_ \[\nabla^{2}f(\mathbf{x}_{t})\preceq\mathbf{G}_{t}\preceq\frac{3 \eta_{0}}{2}\nabla^{2}f(\mathbf{x}_{t})\qquad\text{and}\qquad\lambda(\mathbf{ x}_{t})\leq\left(1-\frac{1}{2\eta_{0}}\right)^{t}\lambda(\mathbf{x}_{0}). \tag{16}\] Note that the choice of \(\eta_{t}\) in inequality (15) is very important for guaranteeing the convergence rate of the quasi-Newton method. Specifically, we can obtain a superlinear rate for iteration (14) if there exists some \(\eta_{t}\geq 1\) that converges to \(1\). For example, the randomized and greedy SR1 methods [21] correspond to some \(\eta_{t}\) such that \[\mathbb{E}[\eta_{t}-1]\leq\mathcal{O}\bigg{(}\bigg{(}1-\frac{1}{d} \bigg{)}^{t}\,\bigg{)}.\] As shown in Theorem 3.3, the proposed SR-\(k\) updates are superior for matrix approximation. So it is natural to construct some \(\eta_{t}\geq 1\) for SR-\(k\) methods (Algorithm 1) such that \[\mathbb{E}[\eta_{t}-1]\leq\mathcal{O}\bigg{(}\bigg{(}1-\frac{k}{ d}\bigg{)}^{t}\,\bigg{)}.\] Based on the above intuition, we derive the local superlinear convergence rate for SR-\(k\) methods, which is explicitly sharper than that of existing randomized and greedy quasi-Newton methods [21, 29]. **Theorem 4.3**.: _Under Assumptions 2.1, 2.2 and 2.4, if we run Algorithm 1 with \(k<d\) and set the initial \(\mathbf{x}_{0}\) and \(\mathbf{G}_{0}\) such that_ \[\lambda(\mathbf{x}_{0})\leq\frac{\ln 2}{2}\cdot\frac{(d-k)}{M \eta_{0}d^{2}\varkappa}\qquad\text{and}\qquad\nabla^{2}f(\mathbf{x}_{0})\preceq \mathbf{G}_{0}\preceq\eta_{0}\nabla^{2}f(\mathbf{x}_{0}) \tag{17}\] _for some \(\eta_{0}\geq 1\). Then we have_ \[\mathbb{E}\left[\frac{\lambda(\mathbf{x}_{t+1})}{\lambda( \mathbf{x}_{t})}\right]\leq 2d\varkappa\eta_{0}\left(1-\frac{k}{d}\right)^{t}, \tag{18}\] _which naturally indicates the following two-stage convergence:_ * _For the SR-_\(k\) _method with randomized update, we have_ \[\lambda(\mathbf{x}_{t_{0}+t})\leq\bigg{(}1-\frac{k}{d+k}\bigg{)} ^{t(t-1)/2}\cdot\bigg{(}\frac{1}{2}\bigg{)}^{t}\cdot\bigg{(}1-\frac{1}{2\eta_{ 0}}\bigg{)}^{t_{0}}\,\lambda(\mathbf{x}_{0}),\] _with probability at least_ \(1-\delta\) _for some_ \(\delta\in(0,1)\)_, where_ \(t_{0}=\mathcal{O}(d\ln(\eta_{0}\varkappa d/\delta)/k)\)_._ * _For the SR-_\(k\) _method with greedy update, we have_ \[\lambda(\mathbf{x}_{t_{0}+t})\leq\bigg{(}1-\frac{k}{d}\bigg{)}^{ t(t-1)/2}\cdot\bigg{(}\frac{1}{2}\bigg{)}^{t}\cdot\bigg{(}1-\frac{1}{2\eta_{ 0}}\bigg{)}^{t_{0}}\,\lambda(\mathbf{x}_{0}),\] _where_ \(t_{0}=\mathcal{O}(d\ln(\eta_{0}\varkappa d)/k)\)_._ Additionally, SR-\(k\) methods with \(k=d\) have a local quadratic convergence rate. **Corollary 4.4**.: _Under Assumptions 2.1, 2.2 and 2.4, we run Algorithm 1 with \(k=d\) and set the initial \(\mathbf{x}_{0}\) and \(\mathbf{G}_{0}\) such that_ \[\lambda(\mathbf{x}_{0})\leq\frac{\ln(3/2)}{4M\eta_{0}}\quad\text{ and}\quad\nabla^{2}f(\mathbf{x}_{0})\preceq\mathbf{G}_{0}\preceq\eta_{0}\nabla^{2}f( \mathbf{x}_{0}). \tag{19}\] _Then we have_ \[\mathbb{E}\left[\lambda(\mathbf{x}_{t+1})\right]\leq M(\lambda( \mathbf{x}_{t}))^{2} \tag{20}\] _for any \(t\geq 1\)._
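The listing of Algorithm 1 is not reproduced in this extraction. As a rough illustration of its iteration template (a sketch of ours under the simplifying assumption of a quadratic objective, for which one can take \(M=0\) so that any \(M\)-dependent correction vanishes), consider:

```python
import numpy as np

# Illustrative SR-k iteration on a quadratic f(x) = 0.5 * x.T @ H @ x with
# exact Hessian H. This sketch omits the self-concordance correction of
# Algorithm 1, which is unnecessary when M = 0 (quadratic case).

def sr_k(G, A, U):  # the SR-k update (7), repeated from the earlier sketch
    R = (G - A) @ U
    return G.copy() if np.allclose(R, 0.0) else G - R @ np.linalg.solve(U.T @ R, R.T)

rng = np.random.default_rng(1)
d, k = 20, 5
B = rng.standard_normal((d, d))
H = B @ B.T + np.eye(d)                                # fixed Hessian
x = rng.standard_normal(d)
G = np.linalg.norm(H, 2) * np.eye(d)                   # G_0 = L * I >= H
for t in range(10):
    x = x - np.linalg.solve(G, H @ x)                  # quasi-Newton step (14)
    U = np.eye(d)[:, np.argsort(np.diag(G - H))[-k:]]  # greedy E_k(G - H)
    G = sr_k(G, H, U)
    print(t, np.linalg.norm(H @ x))                    # gradient norm decays
```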
```
1: Input: \(\mathbf{x}_{0}\), \(\mathbf{G}_{0}\) and \(k\).
2: for \(t=0,1,\dots\)
3:   \(\mathbf{x}_{+}=\mathbf{x}_{t}-\mathbf{G}_{t}^{-1}\nabla f(\mathbf{x}_{t})\)
4:   \(\mathbf{x}_{t+1}=\arg\min_{\mathbf{x}\in\left\{\mathbf{x}_{t},\mathbf{x}_{+} \right\}}f(\mathbf{x})\)
5:   Construct \(\mathbf{U}_{t}\) by \(\left[\mathbf{U}_{t}\right]_{ij}\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,1)\)
6:   \(\mathbf{G}_{t+1}=\mathrm{BlockBFGS}(\mathbf{G}_{t},\nabla^{2}f(\mathbf{x}_{t}), \mathbf{U}_{t})\)
7: endfor
```
**Algorithm 2** Randomized Block BFGS Method (v1)

## 5 Improved Results for Block BFGS

In this section, we present the non-asymptotic superlinear convergence rate of the randomized block BFGS method [13, 14], following the idea of SR-\(k\). The block BFGS update [13, 14, 32] is defined as follows. **Definition 5.1**.: Let \(\mathbf{A}\in\mathbb{R}^{d\times d}\) and \(\mathbf{G}\in\mathbb{R}^{d\times d}\) be two positive-definite symmetric matrices with \(\mathbf{A}\preceq\mathbf{G}\). For any full rank matrix \(\mathbf{U}\in\mathbb{R}^{d\times k}\) with \(k\leq d\), we define \(\mathrm{BlockBFGS}(\mathbf{G},\mathbf{A},\mathbf{U})\triangleq\mathbf{G}\) if \(\mathbf{G}\mathbf{U}=\mathbf{A}\mathbf{U}\). Otherwise, we define \[\mathrm{BlockBFGS}(\mathbf{G},\mathbf{A},\mathbf{U})\triangleq\mathbf{G}- \mathbf{G}\mathbf{U}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)^{-1} \mathbf{U}^{\top}\mathbf{G}+\mathbf{A}\mathbf{U}\left(\mathbf{U}^{\top} \mathbf{A}\mathbf{U}\right)^{-1}\mathbf{U}^{\top}\mathbf{A}. \tag{21}\] Gower et al. [14] and Kovalev et al. [18] proposed the randomized block BFGS method (Algorithm 2) by constructing the Hessian estimator with formula (21) and showed that it has an asymptotic local superlinear convergence rate. To achieve an explicit superlinear convergence rate, we first provide some properties of the randomized block BFGS update, analogous to their counterparts for SR-\(k\) updates. First, we observe that the randomized block BFGS update also has non-increasing deviation from the target matrix. **Lemma 5.2**.: _For any positive-definite matrices \(\mathbf{A}\in\mathbb{R}^{d\times d}\) and \(\mathbf{G}\in\mathbb{R}^{d\times d}\) with \(\mathbf{A}\preceq\mathbf{G}\preceq\eta\mathbf{A}\) for some \(\eta\geq 1\), we let \(\mathbf{G}_{+}=\mathrm{BlockBFGS}(\mathbf{G},\mathbf{A},\mathbf{U})\) for some full rank matrix \(\mathbf{U}\in\mathbb{R}^{d\times k}\). Then, it holds that_ \[\mathbf{A}\preceq\mathbf{G}_{+}\preceq\eta\mathbf{A}. \tag{22}\] Then we introduce the quantity [29] \[\sigma_{\mathbf{A}}(\mathbf{G})\triangleq\mathrm{tr}\left(\mathbf{A}^{-1}( \mathbf{G}-\mathbf{A})\right), \tag{23}\] to measure the difference between two positive definite matrices. We show that the randomized block BFGS update converges to the target matrix at a faster rate than the ordinary randomized BFGS update [21, 29]. **Theorem 5.3**.: _Consider the block BFGS update_ \[\mathbf{G}_{+}=\mathrm{BlockBFGS}(\mathbf{G},\mathbf{A},\mathbf{U}), \tag{24}\] _where \(\mathbf{G}\succeq\mathbf{A}\in\mathbb{R}^{d\times d}\). Suppose \(\mu\mathbf{I}_{d}\preceq\mathbf{A}\preceq L\mathbf{I}_{d}\) and each entry of \(\mathbf{U}\in\mathbb{R}^{d\times k}\) is sampled independently according to \(\mathcal{N}(0,1)\). Then, we have_ \[\mathbb{E}\left[\sigma_{\mathbf{A}}(\mathbf{G}_{+})\right]\leq\left(1-\frac{k} {d\varkappa}\right)\sigma_{\mathbf{A}}(\mathbf{G}). \tag{25}\] We propose a variant of the randomized block BFGS method in Algorithm 3.
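For reference, the following is a direct NumPy transcription (a sketch of ours, not the experiment code) of the block BFGS update (21), together with a check that it enforces \(\mathbf{G}_{+}\mathbf{U}=\mathbf{A}\mathbf{U}\):

```python
import numpy as np

# Sketch of the block BFGS update (21). The resulting estimator satisfies
# G_plus @ U == A @ U, i.e., condition (3) along the k sampled directions.

def block_bfgs(G, A, U):
    GU, AU = G @ U, A @ U
    if np.allclose(GU, AU):
        return G.copy()
    return (G - GU @ np.linalg.solve(U.T @ GU, GU.T)
              + AU @ np.linalg.solve(U.T @ AU, AU.T))

rng = np.random.default_rng(2)
d, k = 10, 3
B = rng.standard_normal((d, d))
A = B @ B.T + np.eye(d)
G = A + np.eye(d)                      # G >= A
U = rng.standard_normal((d, k))        # randomized strategy: i.i.d. N(0, 1)
G_plus = block_bfgs(G, A, U)
print(np.allclose(G_plus @ U, A @ U))  # True
```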
Based on the observation in Theorem 5.3, we establish its explicit superlinear convergence rate as follows.

**Theorem 5.4**.: _Under Assumptions 2.1, 2.2 and 2.4, we run Algorithm 3 and set the initial \(\mathbf{x}_{0}\) and \(\mathbf{G}_{0}\) such that_ \[\lambda(\mathbf{x}_{0})\leq\frac{\ln 2}{4}\cdot\frac{1}{M\eta_{0}d}\qquad\text{ and } \qquad\nabla^{2}f(\mathbf{x}_{0})\preceq\mathbf{G}_{0}\preceq\eta_{0}\nabla^{2}f( \mathbf{x}_{0}), \tag{26}\] _for some \(\eta_{0}\geq 1\). Then we have_ \[\mathbb{E}\left[\frac{\lambda(\mathbf{x}_{t+1})}{\lambda(\mathbf{x}_{t})} \right]\leq 2d\eta_{0}\left(1-\frac{k}{d\varkappa}\right)^{t}.\] _Remark 5.5_.: For \(k=1\), Theorems 5.3 and 5.4 match the results of ordinary randomized BFGS methods [21]. ## 6 Numerical Experiments We conduct experiments on the model of regularized logistic regression, which can be formulated as \[\min_{\mathbf{x}\in\mathbb{R}^{d}}f(\mathbf{x})\triangleq\frac{1}{n}\sum_{i= 1}^{n}\ln(1+\exp(-b_{i}\mathbf{a}_{i}^{\top}\mathbf{x}))+\frac{\gamma}{2}\| \mathbf{x}\|^{2}, \tag{27}\] where \(\{\mathbf{a}_{i},b_{i}\}_{i=1}^{n}\) is the training set with \(\mathbf{a}_{i}\in\mathbb{R}^{d}\), \(b_{i}\in\{-1,+1\}\), and \(\gamma>0\) is the regularization hyperparameter. We refer to SR-\(k\) methods (Algorithm 1) with randomized and greedy strategies as RaSR-\(k\) and GrSR-\(k\) respectively. The corresponding SR1 methods with randomized and greedy strategies are referred to as RaSR1 and GrSR1 [21, Algorithm 4] respectively. We also refer to randomized block BFGS (Algorithm 2 [13, 14]) and its variant (Algorithm 3) as BlockBFGSv1 and BlockBFGSv2. We compare the proposed RaSR-\(k\), GrSR-\(k\), and BlockBFGSv2 with the baseline methods on problem (27). For all methods, we select the parameters \(\mathbf{G}_{0}\) and \(M\) from \(\{\mathbf{I}_{d},10\cdot\mathbf{I}_{d},10^{2}\cdot\mathbf{I}_{d},10^{3}\cdot \mathbf{I}_{d},10^{4}\cdot\mathbf{I}_{d}\}\) and \(\{2,20,200,2000\}\) respectively. We evaluate the performance of all methods on three real-world datasets: “a9a”, “w8a”, and “madelon”. We conduct our experiments on a PC with Apple M1 and implement all algorithms in Python 3.8.12.

Figure 1: We demonstrate “#iteration vs. \(\|\nabla f(\mathbf{x})\|_{2}\)” and “running time (s) vs. \(\|\nabla f(\mathbf{x})\|_{2}\)” on datasets “a9a”, “w8a” and “madelon”, where we take \(k=5\) for all of the block quasi-Newton methods.

We present the results of "iteration numbers vs. gradient norm" and "running time (second) vs. gradient norm" in Figure 1 and Figure 2, corresponding to the settings \(k=5\) and \(k=10\) for the block quasi-Newton methods RaSR-\(k\), GrSR-\(k\), BlockBFGSv1, and BlockBFGSv2. We observe that the proposed SR-\(k\) methods (RaSR-\(k\) and GrSR-\(k\)) always significantly outperform the baselines. ## 7 Conclusion In this paper, we have proposed symmetric rank-\(k\) (SR-\(k\)) methods for convex optimization. We have proved that SR-\(k\) methods enjoy the explicit local superlinear convergence rate of \(\mathcal{O}\left((1-k/d)^{t(t-1)/2}\right)\). Our result successfully reveals the advantage of block-type updates in quasi-Newton methods, building a bridge between the theories of ordinary quasi-Newton methods and the standard Newton method. As a byproduct, we also provide the convergence rate of \(\mathcal{O}\left((1-k/(\varkappa d))^{t(t-1)/2}\right)\) for the randomized block BFGS method. In future work, it would be interesting to establish the global convergence of SR-\(k\) methods and study the convergence of limited-memory block quasi-Newton methods.
## Appendix A The Proofs in Section 3

We provide the proofs for the properties of SR-\(k\) updates shown in Section 3. We focus on the case of \(\mathbf{GU}\neq\mathbf{AU}\), since the results are obvious for \(\mathbf{GU}=\mathbf{AU}\).

Figure 2: We demonstrate “#iteration vs. \(\|\nabla f(\mathbf{x})\|_{2}\)” and “running time (s) vs. \(\|\nabla f(\mathbf{x})\|_{2}\)” on datasets “a9a”, “w8a” and “madelon”, where we take \(k=10\) for all of the block quasi-Newton methods.

### The Proof of Lemma 3.2

Proof.: Define \(\mathbf{R}=\mathbf{G}-\mathbf{A}\succeq\mathbf{0}\). According to the update rule, we have \[\begin{split}\mathbf{G}_{+}-\mathbf{A}&=\mathbf{R}-\mathbf{R}\mathbf{U}(\mathbf{U}^{\top}\mathbf{R}\mathbf{U})^{-1}\mathbf{U}^{\top}\mathbf{R}\\ &=\left(\mathbf{I}_{d}-\mathbf{R}\mathbf{U}(\mathbf{U}^{\top}\mathbf{R}\mathbf{U})^{-1}\mathbf{U}^{\top}\right)\mathbf{R}\left(\mathbf{I}_{d}-\mathbf{U}(\mathbf{U}^{\top}\mathbf{R}\mathbf{U})^{-1}\mathbf{U}^{\top}\mathbf{R}\right)\succeq\mathbf{0},\end{split}\] which means \[\mathbf{G}_{+}\succeq\mathbf{A}.\] The condition \(\mathbf{G}\preceq\eta\mathbf{A}\) means \[\mathbf{G}_{+}\preceq\eta\mathbf{A}-\underbrace{\mathbf{R}\mathbf{U}(\mathbf{U}^{\top}\mathbf{R}\mathbf{U})^{-1}\mathbf{U}^{\top}\mathbf{R}}_{\succeq\mathbf{0}}\preceq\eta\mathbf{A},\] which finishes the proof.

### The Proof of Theorem 3.3

We first provide several lemmas on random matrices and the trace of positive definite matrices.

**Lemma A.1**.: _Let \(\mathbf{U}\in\mathbb{R}^{d\times k}\) be a random matrix whose entries are independent and identically distributed according to \(\mathcal{N}(0,1)\); then it holds that_ \[\mathbb{E}\left[\mathbf{U}(\mathbf{U}^{\top}\mathbf{U})^{-1} \mathbf{U}^{\top}\right]=\frac{k}{d}\mathbf{I}_{d}. \tag{28}\] Proof.: We use \(\mathcal{V}_{d,k}\) to denote the Stiefel manifold, which is the set of all \(d\times k\) column orthogonal matrices. We denote \(\mathcal{P}_{k,d-k}\) as the set of all \(d\times d\) orthogonal projection matrices of rank \(k\). According to Theorem 2.2.1 (iii) of Chikuse [7], the random matrix \[\mathbf{Z}=\mathbf{U}(\mathbf{U}^{\top}\mathbf{U})^{-1/2}\] is uniformly distributed on the Stiefel manifold \(\mathcal{V}_{d,k}\). Applying Theorem 2.2.2 (iii) of Chikuse [7], the random matrix \[\mathbf{P}=\mathbf{Z}\mathbf{Z}^{\top}=\mathbf{U}(\mathbf{U}^{ \top}\mathbf{U})^{-1}\mathbf{U}^{\top}\] is uniformly distributed on \(\mathcal{P}_{k,d-k}\). Combining the above results with Theorem 2.2.2 (i) of Chikuse [7] on \(\mathbf{P}\) gives \[\mathbb{E}[\mathbf{P}]=\frac{k}{d}\mathbf{I}_{d}.\] _Remark A.2_.: The above proof requires knowledge of statistics on manifolds. For readers who are not familiar with this, we also present an elementary proof of Lemma A.1 by induction in Appendix A.3. **Lemma A.3**.: _For a positive semi-definite matrix \(\mathbf{S}\in\mathbb{R}^{d\times d}\) and a column orthonormal matrix \(\mathbf{Q}\in\mathbb{R}^{d\times k}\), we have_ \[\operatorname{tr}\left(\mathbf{Q}^{\top}\mathbf{S}\mathbf{Q} \right)\leq\operatorname{tr}\left(\mathbf{S}\right).
\tag{29}\] Proof.: Since the matrix \(\mathbf{Q}\) is column orthonormal, we have \[\mathbf{Q}\mathbf{Q}^{\top}=\mathbf{Q}(\mathbf{Q}^{\top}\mathbf{ Q})^{-1}\mathbf{Q}^{\top}\preceq\mathbf{I}_{d}.\] According to Lemma C.1, we have \[\operatorname{tr}\left(\mathbf{Q}^{\top}\mathbf{S}\mathbf{Q} \right)=\operatorname{tr}\left(\mathbf{S}\mathbf{Q}\mathbf{Q}^{\top}\right) \leq\operatorname{tr}\left(\mathbf{S}\right).\] **Lemma A.4**.: _For a positive semi-definite matrix \(\mathbf{B}\in\mathbb{R}^{d\times d}\) and a full rank matrix \(\mathbf{U}\in\mathbb{R}^{d\times k}\) with \(d\geq k\), it holds that_ \[\operatorname{tr}\left(\mathbf{B}\mathbf{U}\left(\mathbf{U}^{ \top}\mathbf{B}\mathbf{U}\right)^{-1}\mathbf{U}^{\top}\mathbf{B}\right)\geq \operatorname{tr}\left(\mathbf{U}\left(\mathbf{U}^{\top}\mathbf{U}\right)^{-1 }\mathbf{U}^{\top}\mathbf{B}\right). \tag{30}\] Proof.: We denote the SVD of \(\mathbf{U}\) as \(\mathbf{U}=\mathbf{Q}\mathbf{\Sigma}\mathbf{V}^{\top}\), where \(\mathbf{Q}\in\mathbb{R}^{d\times k}\), \(\mathbf{V}\in\mathbb{R}^{k\times k}\) are (column) orthogonal and \(\mathbf{\Sigma}\in\mathbb{R}^{k\times k}\) is diagonal. We have \[\mathrm{tr}\left(\mathbf{U}\left(\mathbf{U}^{\top}\mathbf{U} \right)^{-1}\mathbf{U}^{\top}\mathbf{B}\right)=\mathrm{tr}\left(\mathbf{Q} \mathbf{\Sigma}\mathbf{V}^{\top}\left(\mathbf{V}\mathbf{\Sigma}^{2}\mathbf{V}^{ \top}\right)^{-1}\mathbf{V}\mathbf{\Sigma}\mathbf{Q}^{\top}\mathbf{B}\right)= \mathrm{tr}\left(\mathbf{Q}\mathbf{Q}^{\top}\mathbf{B}\right),\] and \[\begin{split}\mathrm{tr}\left(\mathbf{B}\mathbf{U}\left(\mathbf{U}^{\top} \mathbf{B}\mathbf{U}\right)^{-1}\mathbf{U}^{\top}\mathbf{B}\right) &=\mathrm{tr}\left(\mathbf{B}\mathbf{Q}\mathbf{\Sigma}\mathbf{V} ^{\top}\left(\mathbf{V}\mathbf{\Sigma}\mathbf{Q}^{\top}\mathbf{B}\mathbf{Q} \mathbf{\Sigma}\mathbf{V}^{\top}\right)^{-1}\mathbf{V}\mathbf{\Sigma} \mathbf{Q}^{\top}\mathbf{B}\right)\\ &=\mathrm{tr}\left(\mathbf{B}\mathbf{Q}\left(\mathbf{Q}^{\top} \mathbf{B}\mathbf{Q}\right)^{-1}\mathbf{Q}^{\top}\mathbf{B}\right)\\ &\overset{(29)}{\geq}\mathrm{tr}\left(\mathbf{Q}^{\top}\mathbf{B}\mathbf{Q} \left(\mathbf{Q}^{\top}\mathbf{B}\mathbf{Q}\right)^{-1}\mathbf{Q}^{\top} \mathbf{B}\mathbf{Q}\right)=\mathrm{tr}\left(\mathbf{Q}^{\top}\mathbf{B} \mathbf{Q}\right)=\mathrm{tr}\left(\mathbf{Q}\mathbf{Q}^{\top}\mathbf{B}\right),\end{split}\] where the inequality applies (29) with \(\mathbf{S}=\mathbf{B}\mathbf{Q}(\mathbf{Q}^{\top}\mathbf{B}\mathbf{Q})^{-1} \mathbf{Q}^{\top}\mathbf{B}\). Combining the two displays above completes the proof of (30).
Now we are ready to prove Theorem 3.3.

Proof.: For the randomized strategy, Lemma A.1 implies that for any positive semi-definite matrix \(\mathbf{B}\in\mathbb{R}^{d\times d}\), \[\mathbb{E}\left[\operatorname{tr}\left(\mathbf{U}(\mathbf{U}^{\top}\mathbf{U })^{-1}\mathbf{U}^{\top}\mathbf{B}\right)\right]=\operatorname{tr}\left( \mathbb{E}\left[\mathbf{U}(\mathbf{U}^{\top}\mathbf{U})^{-1}\mathbf{U}^{\top }\right]\mathbf{B}\right)=\frac{k}{d}\operatorname{tr}\left(\mathbf{B} \right).\] For the greedy strategy, the construction \(\mathbf{U}=\mathbf{E}_{k}(\mathbf{B})\) satisfies \(\mathbf{U}^{\top}\mathbf{U}=\mathbf{I}_{k}\), and denoting the diagonal entries of \(\mathbf{B}\) in non-increasing order by \(b_{1}\geq b_{2}\geq\cdots\geq b_{d}\geq 0\), we have \[b_{1}+b_{2}+\cdots+b_{k}\geq\frac{k}{d}\sum_{i=1}^{d}b_{i}. \tag{31}\]
Then we have \[\begin{split}\operatorname{tr}\left((\mathbf{U}^{\top}\mathbf{U})^{-1} \mathbf{U}^{\top}\mathbf{B}\mathbf{U}\right)&=\operatorname{tr} \left(\mathbf{I}_{k}\mathbf{U}^{\top}\mathbf{B}\mathbf{U}\right)=\operatorname{ tr}\left(\mathbf{U}^{\top}\mathbf{B}\mathbf{U}\right)\\ &=\sum_{p=1}^{k}\mathbf{u}_{p}^{\top}\mathbf{B}\mathbf{u}_{p}\\ &=b_{1}+b_{2}+\cdots+b_{k}\\ &\overset{(31)}{\geq}\frac{k}{d}\sum_{i=1}^{d}b_{i}= \frac{k}{d}\operatorname{tr}\left(\mathbf{B}\right),\end{split} \tag{32}\] where \(\mathbf{u}_{p}\) is the \(p\)-th column of \(\mathbf{U}\). Setting \(\mathbf{B}=\mathbf{G}-\mathbf{A}\) and applying Lemma A.4, we have \[\begin{split}\operatorname{tr}\left(\mathbf{G}_{+}-\mathbf{A} \right)&=\operatorname{tr}\left(\mathbf{B}\right)-\operatorname {tr}\left((\mathbf{U}^{\top}\mathbf{B}\mathbf{U})^{-1}(\mathbf{U}^{\top}\mathbf{B}^{ 2}\mathbf{U})\right)\\ &\overset{(30)}{\leq}\operatorname{tr}\left(\mathbf{ B}\right)-\operatorname{tr}\left((\mathbf{U}^{\top}\mathbf{U})^{-1}\mathbf{U}^{ \top}\mathbf{B}\mathbf{U}\right)\\ &\leq\left(1-\frac{k}{d}\right) \operatorname{tr}\left(\mathbf{B}\right)=\left(1-\frac{k}{d}\right) \operatorname{tr}\left(\mathbf{G}-\mathbf{A}\right),\end{split}\] where the last inequality holds deterministically for the greedy strategy by (32) and in expectation for the randomized strategy by Lemma A.1, which proves (11).

### An Elementary Proof of Lemma A.1

We first provide the following lemma for the multivariate normal distribution. **Lemma A.5**.: _Assume \(\mathbf{P}\in\mathbb{R}^{d\times k}\) is column orthonormal \((k\leq d)\) and \(\mathbf{p}\sim\mathcal{N}(\mathbf{0},\mathbf{P}\mathbf{P}^{\top})\) is a \(d\)-dimensional multivariate normal distributed vector. Then we have_ \[\mathbb{E}\left[\frac{\mathbf{p}\mathbf{p}^{\top}}{\mathbf{p}^{\top}\mathbf{p }}\right]=\frac{1}{k}\mathbf{P}\mathbf{P}^{\top}.\] Proof.: The distribution \(\mathbf{p}\sim\mathcal{N}(\mathbf{0},\mathbf{P}\mathbf{P}^{\top})\) implies there exists a \(k\)-dimensional multivariate normal distributed vector \(\mathbf{p}_{1}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{k})\) such that \(\mathbf{p}=\mathbf{P}\mathbf{p}_{1}\).
Thus we have
\[\begin{split}\mathbb{E}\left[\frac{\mathbf{p}\mathbf{p}^{\top}}{\mathbf{p}^{\top}\mathbf{p}}\right]&=\mathbb{E}\left[\frac{(\mathbf{P}\mathbf{p}_{1})(\mathbf{P}\mathbf{p}_{1})^{\top}}{(\mathbf{P}\mathbf{p}_{1})^{\top}(\mathbf{P}\mathbf{p}_{1})}\right]\\ &=\mathbb{E}\left[\frac{\mathbf{P}\mathbf{p}_{1}\mathbf{p}_{1}^{\top}\mathbf{P}^{\top}}{\mathbf{p}_{1}^{\top}\mathbf{p}_{1}}\right]\\ &=\mathbf{P}\,\mathbb{E}\left[\frac{\mathbf{p}_{1}\mathbf{p}_{1}^{\top}}{\mathbf{p}_{1}^{\top}\mathbf{p}_{1}}\right]\mathbf{P}^{\top}\\ &=\frac{1}{k}\mathbf{P}\mathbf{P}^{\top}.\end{split}\]

Then we provide an elementary proof of Lemma A.1.

Proof.: We prove inequality (28) by induction on \(k\). The induction base \(k=1\) is easily verified. Now we assume
\[\mathbb{E}\left[\mathbf{U}(\mathbf{U}^{\top}\mathbf{U})^{-1}\mathbf{U}^{\top}\right]=\frac{k}{d}\mathbf{I}_{d}\]
holds for any \(\mathbf{U}\in\mathbb{R}^{d\times k}\) whose entries are independently distributed according to \(\mathcal{N}(0,1)\). We define the random matrix
\[\bar{\mathbf{U}}=\begin{bmatrix}\mathbf{U}&\mathbf{q}\end{bmatrix}\in\mathbb{R}^{d\times(k+1)},\]
where \(\mathbf{q}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\) is independent of \(\mathbf{U}\). Then we have
\[\bar{\mathbf{U}}(\bar{\mathbf{U}}^{\top}\bar{\mathbf{U}})^{-1}\bar{\mathbf{U}}^{\top}=\begin{bmatrix}\mathbf{U}&\mathbf{q}\end{bmatrix}\left(\begin{bmatrix}\mathbf{U}^{\top}\\ \mathbf{q}^{\top}\end{bmatrix}\begin{bmatrix}\mathbf{U}&\mathbf{q}\end{bmatrix}\right)^{-1}\begin{bmatrix}\mathbf{U}^{\top}\\ \mathbf{q}^{\top}\end{bmatrix}=\mathbf{A}+\frac{(\mathbf{I}_{d}-\mathbf{A})\mathbf{q}\mathbf{q}^{\top}(\mathbf{I}_{d}-\mathbf{A})}{\mathbf{q}^{\top}(\mathbf{I}_{d}-\mathbf{A})\mathbf{q}},\]
where \(\mathbf{A}=\mathbf{U}(\mathbf{U}^{\top}\mathbf{U})^{-1}\mathbf{U}^{\top}\). Since the rank of the projection matrix \(\mathbf{I}_{d}-\mathbf{A}\) is \(d-k\), we have \(\mathbf{I}_{d}-\mathbf{A}=\mathbf{Q}\mathbf{Q}^{\top}\) for some column orthonormal matrix \(\mathbf{Q}\in\mathbb{R}^{d\times(d-k)}\). Thus, we achieve
\[\begin{split}\mathbb{E}[\bar{\mathbf{U}}(\bar{\mathbf{U}}^{\top}\bar{\mathbf{U}})^{-1}\bar{\mathbf{U}}^{\top}]&=\frac{k}{d}\mathbf{I}_{d}+\mathbb{E}_{\mathbf{U}}\left[\mathbb{E}_{\mathbf{q}}\left[\frac{(\mathbf{I}_{d}-\mathbf{A})\mathbf{q}\mathbf{q}^{\top}(\mathbf{I}_{d}-\mathbf{A})}{\mathbf{q}^{\top}(\mathbf{I}_{d}-\mathbf{A})\mathbf{q}}\,\middle|\,\mathbf{U}\right]\right]\\ &=\frac{k}{d}\mathbf{I}_{d}+\mathbb{E}_{\mathbf{U}}\left[\mathbb{E}_{\mathbf{q}}\left[\frac{(\mathbf{Q}\mathbf{Q}^{\top}\mathbf{q})(\mathbf{q}^{\top}\mathbf{Q}\mathbf{Q}^{\top})}{(\mathbf{q}^{\top}\mathbf{Q}\mathbf{Q}^{\top})(\mathbf{Q}\mathbf{Q}^{\top}\mathbf{q})}\,\middle|\,\mathbf{U}\right]\right]\\ &=\frac{k}{d}\mathbf{I}_{d}+\frac{1}{d-k}\mathbb{E}_{\mathbf{U}}[\mathbf{Q}\mathbf{Q}^{\top}]\\ &=\frac{k}{d}\mathbf{I}_{d}+\frac{1}{d-k}\mathbb{E}_{\mathbf{U}}[\mathbf{I}_{d}-\mathbf{A}]\\ &=\frac{k}{d}\mathbf{I}_{d}+\frac{1}{d-k}\cdot\frac{d-k}{d}\mathbf{I}_{d}\\ &=\frac{k+1}{d}\mathbf{I}_{d},\end{split}\]
which completes the induction. In the above derivation, the second equality is due to Lemma A.5 and the fact \(\mathbf{Q}\mathbf{Q}^{\top}\mathbf{q}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}\mathbf{Q}^{\top})\) for given \(\mathbf{Q}\); the third equality comes from the inductive hypothesis.

## Appendix B The Proof of Section 4

We provide the proofs for the results of SR-\(k\) methods shown in Section 4.
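Before proceeding, the projection identity at the heart of Lemma A.1 can be probed numerically. The following is a minimal sketch (ours; the dimensions and the sample count are illustrative choices, not values from the paper):

```
import numpy as np

# Monte-Carlo check of Lemma A.1: for U with i.i.d. N(0,1) entries,
# E[U (U^T U)^{-1} U^T] = (k/d) I_d.
rng = np.random.default_rng(0)
d, k, n_samples = 6, 2, 100_000

acc = np.zeros((d, d))
for _ in range(n_samples):
    U = rng.standard_normal((d, k))
    # Orthogonal projector onto the column space of U.
    acc += U @ np.linalg.inv(U.T @ U) @ U.T

print(np.round(acc / n_samples, 3))  # ~ (k/d) * I_d = 0.333 * I_6
```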
### Auxiliary Lemmas

We first provide some auxiliary lemmas which will be used in our later proofs.

**Lemma B.1**.: _Let \(\{\lambda_{t}\}\) and \(\{\delta_{t}\}\) be two non-negative random sequences that satisfy_
\[\lambda_{t+1}\leq(1+m\lambda_{t})^{2}(\delta_{t}+b\lambda_{t})\lambda_{t},\qquad\lambda_{t}\leq\left(1-\frac{1}{\beta}\right)^{t}\lambda_{0},\qquad\delta_{0}+a\lambda_{0}\leq s \tag{33}\]
_and_
\[\mathbb{E}_{t}\left[\delta_{t+1}\right]\leq\left(1-\frac{1}{\alpha}\right)(1+m\lambda_{t})^{2}(\delta_{t}+c\lambda_{t}), \tag{34}\]
_for some \(b,c,m,s,\beta\geq 0\) and \(\alpha>1\), where \(a=\max\{b,c\}>1\) and \(\mathbb{E}_{t}[\,\cdot\,]\triangleq\mathbb{E}[\,\cdot\,|\delta_{0},\cdots,\delta_{t},\lambda_{0},\cdots,\lambda_{t}]\). If \(\lambda_{0}\) is sufficiently small such that_
\[\lambda_{0}\leq\frac{\ln 2}{\beta(2m+a(\alpha/(\alpha-1)))}, \tag{35}\]
_then it holds that_
\[\mathbb{E}\left[\frac{\lambda_{t+1}}{\lambda_{t}}\right]\leq\left(1-\frac{1}{\alpha}\right)^{t}2s.\]

Proof.: We denote
\[\theta_{t}\triangleq\delta_{t}+a\lambda_{t}. \tag{36}\]
Since the index \(t+1\geq 1\), we have
\[\mathbb{E}_{t}[\delta_{t+1}]\stackrel{\eqref{eq:2011}}{\leq}\left(1-\frac{1}{\alpha}\right)(1+m\lambda_{t})^{2}(\delta_{t}+a\lambda_{t})\leq\left(1-\frac{1}{\alpha}\right)\mathrm{e}^{2m\lambda_{t}}\theta_{t}\quad\text{and}\quad\lambda_{t+1}\stackrel{\eqref{eq:2011}}{\leq}\mathrm{e}^{2m\lambda_{t}}\theta_{t}\lambda_{t}. \tag{37}\]
Then it holds that
\[\begin{split}\mathbb{E}_{t}[\theta_{t+1}]&\stackrel{\eqref{eq:2011}}{\leq}\left(1-\frac{1}{\alpha}\right)\left(1+\frac{\alpha a}{\alpha-1}\lambda_{t}\right)\mathrm{e}^{2m\lambda_{t}}\theta_{t}\leq\left(1-\frac{1}{\alpha}\right)\mathrm{e}^{(2m+a\alpha/(\alpha-1))\lambda_{t}}\theta_{t}\\ &=\left(1-\frac{1}{\alpha}\right)\mathrm{e}^{m^{\prime}\lambda_{t}}\theta_{t}\stackrel{\eqref{eq:2011}}{\leq}\left(1-\frac{1}{\alpha}\right)\mathrm{e}^{m^{\prime}(1-1/\beta)^{t}\lambda_{0}}\theta_{t},\end{split} \tag{38}\]
where \(m^{\prime}=2m+a\alpha/(\alpha-1)\). Taking expectation on both sides of (38), we have
\[\mathbb{E}[\theta_{t+1}]\leq\left(1-\frac{1}{\alpha}\right)\mathrm{e}^{m^{\prime}(1-1/\beta)^{t}\lambda_{0}}\mathbb{E}[\theta_{t}], \tag{39}\]
where we use the fact \(\mathbb{E}[\mathbb{E}_{t}[\delta_{t+1}]]=\mathbb{E}[\delta_{t+1}]\). Therefore, we have
\[\mathbb{E}\left[\frac{\lambda_{t+1}}{\lambda_{t}}\right]\stackrel{\eqref{eq:2011}}{\leq}\mathbb{E}[\mathrm{e}^{2m\lambda_{t}}\theta_{t}]\leq\mathrm{e}^{m^{\prime}(1-1/\beta)^{t}\lambda_{0}}\mathbb{E}\left[\theta_{t}\right]\stackrel{\eqref{eq:2011}}{\leq}\left(1-\frac{1}{\alpha}\right)\mathrm{e}^{(m^{\prime}(1-1/\beta)^{t}+m^{\prime}(1-1/\beta)^{t-1})\lambda_{0}}\mathbb{E}\left[\theta_{t-1}\right]\leq\cdots\leq\left(1-\frac{1}{\alpha}\right)^{t}\mathrm{e}^{m^{\prime}\sum_{p=0}^{t}(1-1/\beta)^{p}\lambda_{0}}\theta_{0}\stackrel{\eqref{eq:2011}}{\leq}\left(1-\frac{1}{\alpha}\right)^{t}2s.\]

**Lemma B.2** (Following Rodomanov and Nesterov [29, Theorem 4.7] and Lin et al.
[21, Theorem 23]).: _Let \(\{\lambda_{t}\}\) and \(\{\tilde{\eta}_{t}\}\) be two positive sequences with \(\tilde{\eta}_{t}\geq 1\) that satisfy_
\[\lambda_{t+1}\leq\left(1-\frac{1}{\tilde{\eta}_{t}}\right)\lambda_{t}+\frac{m_{1}}{2}\lambda_{t}^{2}+\frac{m_{1}^{2}}{4\tilde{\eta}_{t}}\lambda_{t}^{3}\quad\text{and}\quad\tilde{\eta}_{t+1}\leq(1+m_{2}\lambda_{t})^{2}\tilde{\eta}_{t}, \tag{40}\]
_for some \(m_{1}\) and \(m_{2}>0\). If_
\[m\lambda_{0}\leq\frac{\ln(3/2)}{4\tilde{\eta}_{0}}, \tag{41}\]
_where \(m\triangleq\max\{m_{1},m_{2}\}\), then it holds that_
\[\tilde{\eta}_{t}\leq\mathrm{e}^{2m\sum_{i=0}^{t-1}\lambda_{i}}\tilde{\eta}_{0}\leq\frac{3\tilde{\eta}_{0}}{2} \tag{42}\]
_and_
\[\lambda_{t}\leq\left(1-\frac{1}{2\tilde{\eta}_{0}}\right)^{t}\lambda_{0}. \tag{43}\]

Proof.: We prove the results (42) and (43) by induction. In the case of \(t=0\), inequalities (42) and (43) are satisfied naturally. Now we suppose inequalities (42) and (43) hold for \(t=0,\ldots,t^{\prime}\); then we have
\[m\sum_{i=0}^{t^{\prime}}\lambda_{i}\overset{(43)}{\leq}m\lambda_{0}\sum_{i=0}^{t^{\prime}}\left(1-\frac{1}{2\tilde{\eta}_{0}}\right)^{i}\leq 2\tilde{\eta}_{0}m\lambda_{0}\overset{(41)}{\leq}1. \tag{44}\]
In the case of \(t=t^{\prime}+1\) we have
\[\frac{1}{\tilde{\eta}_{t^{\prime}}}-\frac{m_{1}\lambda_{t^{\prime}}}{2}-\frac{m_{1}^{2}\lambda_{t^{\prime}}^{2}}{4\tilde{\eta}_{t^{\prime}}}\overset{(44)}{\geq}\frac{1}{\tilde{\eta}_{t^{\prime}}}-m_{1}\lambda_{t^{\prime}}\overset{(41),(42),(43)}{\geq}\frac{2}{3\tilde{\eta}_{0}}-\frac{\ln(3/2)}{4\tilde{\eta}_{0}}\geq\frac{1}{2\tilde{\eta}_{0}},\]
so that (40) yields \(\lambda_{t^{\prime}+1}\leq\left(1-\frac{1}{2\tilde{\eta}_{0}}\right)\lambda_{t^{\prime}}\), i.e., (43) holds for \(t=t^{\prime}+1\). Moreover, iterating the second relation of (40) and using (44), we obtain \(\tilde{\eta}_{t^{\prime}+1}\leq\mathrm{e}^{2m\sum_{i=0}^{t^{\prime}}\lambda_{i}}\tilde{\eta}_{0}\leq\mathrm{e}^{\ln(3/2)}\tilde{\eta}_{0}=\frac{3\tilde{\eta}_{0}}{2}\), i.e., (42) holds as well. This completes the induction.

### The Proof of Theorem 4.2

Proof.: We denote \(\lambda_{t}\triangleq\lambda(\mathbf{x}_{t})\) and \(\tilde{\eta}_{t}\triangleq\min_{\nabla^{2}f(\mathbf{x}_{t})\preceq\eta\mathbf{G}_{t}}\eta\), which means
\[\nabla^{2}f(\mathbf{x}_{t})\preceq\mathbf{G}_{t}\preceq\tilde{\eta}_{t}\nabla^{2}f(\mathbf{x}_{t}).\]
According to Lemma 4.1, we have
\[\lambda_{t+1}\leq\left(1-\frac{1}{\tilde{\eta}_{t}}\right)\lambda_{t}+\frac{M}{2}\lambda_{t}^{2}+\frac{M^{2}}{4\tilde{\eta}_{t}}\lambda_{t}^{3}.\]
According to Lemma B.3, we have
\[\nabla^{2}f(\mathbf{x}_{t+1})\preceq\tilde{\mathbf{G}}_{t}\preceq(1+Mr_{t})^{2}\tilde{\eta}_{t}\nabla^{2}f(\mathbf{x}_{t}).\]
According to Lemma 3.2, we have
\[\nabla^{2}f(\mathbf{x}_{t+1})\overset{\eqref{eq:1}}{\preceq}\mathbf{G}_{t+1}\overset{\eqref{eq:1}}{\preceq}(1+Mr_{t})^{2}\tilde{\eta}_{t}\nabla^{2}f(\mathbf{x}_{t+1})\overset{\eqref{eq:1}}{\preceq}(1+M\lambda_{t})^{2}\tilde{\eta}_{t}\nabla^{2}f(\mathbf{x}_{t+1}),\]
which means
\[\tilde{\eta}_{t+1}\leq(1+M\lambda_{t})^{2}\tilde{\eta}_{t}.\]
Hence, the sequences \(\{\tilde{\eta}_{t}\}\) and \(\{\lambda_{t}\}\) satisfy the conditions of Lemma B.2 with \(m_{1}=m_{2}=M\); then we obtain
\[\nabla^{2}f(\mathbf{x}_{t})\preceq\tilde{\mathbf{G}}_{t}\preceq\frac{3\tilde{\eta}_{0}}{2}\nabla^{2}f(\mathbf{x}_{t})\preceq\frac{3\eta_{0}}{2}\nabla^{2}f(\mathbf{x}_{t})\]
and
\[\lambda(\mathbf{x}_{t})\leq\left(1-\frac{1}{2\tilde{\eta}_{0}}\right)^{t}\lambda(\mathbf{x}_{0})\leq\left(1-\frac{1}{2\eta_{0}}\right)^{t}\lambda(\mathbf{x}_{0}).\]

### The Proof of Theorem 4.3

Proof.: Denote \(g_{t}=\operatorname{tr}\big(\mathbf{G}_{t}-\nabla^{2}f(\mathbf{x}_{t})\big)/\operatorname{tr}\big(\nabla^{2}f(\mathbf{x}_{t})\big)\), \(\delta_{t}=d\varkappa g_{t}\), \(\lambda_{t}=\lambda(\mathbf{x}_{t})\) and \(\mathbb{E}_{t}[\cdot]\triangleq\mathbb{E}[\cdot\,|\,\mathbf{U}_{0},\cdots,\mathbf{U}_{t-1}]\).
From Theorem 3.3, we have
\[\mathbb{E}_{t}\left[\operatorname{tr}\big(\mathbf{G}_{t+1}-\nabla^{2}f(\mathbf{x}_{t+1})\big)\right]\leq\left(1-\frac{k}{d}\right)\operatorname{tr}\left(\tilde{\mathbf{G}}_{t}-\nabla^{2}f(\mathbf{x}_{t+1})\right). \tag{51}\]
From Lemma B.3, we have
\[\mathbb{E}_{t}[\tau_{\nabla^{2}f(\mathbf{x}_{t+1})}(\tilde{\mathbf{G}}_{t})]\overset{\eqref{eq:1}}{\leq}(1+Mr_{t})^{2}(g_{t}+2Mr_{t})\operatorname{tr}\left(\nabla^{2}f(\mathbf{x}_{t+1})\right),\]
which means
\[\mathbb{E}_{t}[\delta_{t+1}]\overset{\eqref{eq:1}}{\leq}\left(1-\frac{k}{d}\right)(1+M\lambda_{t})^{2}(\delta_{t}+2\varkappa dM\lambda_{t}). \tag{52}\]
The initial condition (17) means the results of Theorem 4.2 hold, that is,
\[\lambda_{t}\leq\left(1-\frac{1}{2\eta_{0}}\right)^{t}\lambda_{0}\qquad\text{and}\qquad\nabla^{2}f(\mathbf{x}_{t})\preceq\mathbf{G}_{t}. \tag{53}\]
According to Lemma B.4 and the definition of \(\delta_{t}\), we have
\[\nabla^{2}f(\mathbf{x}_{t})\overset{\eqref{eq:1}}{\preceq}\mathbf{G}_{t}\overset{\eqref{eq:1}}{\preceq}(1+\delta_{t})\nabla^{2}f(\mathbf{x}_{t}).\]
According to Lemma 4.1, we have
\[\lambda_{t+1}\stackrel{\eqref{eq:15}}{\leq}\left(1-\frac{1}{1+\delta_{t}}\right)\lambda_{t}+\frac{M}{\eta_{t}}\lambda_{t}^{2}+\frac{M^{2}}{4\eta_{t}}\lambda_{t}^{3}\leq\left(1+\frac{M\lambda_{t}}{2}\right)\frac{\delta_{t}+M\lambda_{t}/2}{1+\delta_{t}}\lambda_{t}\leq(1+M\lambda_{t})^{2}\left(\delta_{t}+\frac{M}{2}\lambda_{t}\right)\lambda_{t}.\]
According to Lemma B.5 and the initial condition (17), we have
\[\delta_{0}=d\varkappa g_{0}\stackrel{\eqref{eq:15}}{\leq}(\eta_{0}-1)d\varkappa\quad\text{and}\quad\theta_{0}=\delta_{0}+2d\varkappa M\lambda_{0}\stackrel{\eqref{eq:17}}{\leq}\eta_{0}d\varkappa. \tag{54}\]
Hence, the random sequences \(\{\lambda_{t}\}\) and \(\{\delta_{t}\}\) satisfy the conditions of Lemma B.1 with
\[m=M,\quad b=\frac{M}{2},\quad c=2\varkappa dM,\quad\alpha=\frac{d}{k},\quad\beta=2\eta_{0}\quad\text{and}\quad s=\eta_{0}d\varkappa,\]
which means we can obtain inequality (18). Now, we prove the two-stage convergence of SR-\(k\) methods.

1. For the SR-\(k\) method with randomized strategy \(\left[\mathbf{U}_{t}\right]_{ij}\stackrel{\text{i.i.d}}{\sim}\mathcal{N}(0,1)\), we apply Lemma B.6 with \(\alpha=d/k\) and \(a=2d\varkappa\eta_{0}\) to obtain that
\[\frac{\lambda_{t+1}}{\lambda_{t}}\leq\frac{2d^{2}\varkappa\eta_{0}}{k\delta}\left(1-\frac{k}{d+k}\right)^{t} \tag{55}\]
holds for all \(t\) with probability at least \(1-\delta\). Take \(t_{0}=\mathcal{O}(d\ln(\eta_{0}\varkappa d)/k)\), which satisfies
\[\frac{2d^{2}\varkappa\eta_{0}}{k\delta}\left(1-\frac{k}{d+k}\right)^{t_{0}}\leq\frac{1}{2}; \tag{56}\]
together with the linear rate (16), we have
\[\lambda_{t+t_{0}}\stackrel{(55)}{\leq}\left(1-\frac{k}{d+k}\right)^{t+t_{0}}\frac{2d^{2}\varkappa\eta_{0}}{k\delta}\lambda_{t+t_{0}-1}\stackrel{(56)}{\leq}\left(1-\frac{k}{d+k}\right)^{t}\frac{1}{2}\lambda_{t+t_{0}-1}\leq\cdots\leq\left(1-\frac{k}{d+k}\right)^{t(t-1)/2}\left(\frac{1}{2}\right)^{t}\lambda_{t_{0}}\stackrel{(16)}{\leq}\left(1-\frac{k}{d+k}\right)^{t(t-1)/2}\left(\frac{1}{2}\right)^{t}\left(1-\frac{1}{2\eta_{0}}\right)^{t_{0}}\lambda_{0}\]
with probability at least \(1-\delta\).
2.
For the SR-\(k\) method with greedy strategy \(\mathbf{U}_{t}=\mathbf{E}_{k}(\tilde{\mathbf{G}}_{t}-\nabla^{2}f(\mathbf{x}_{t+1}))\), we choose \(t_{0}=\mathcal{O}\left(d\ln(\eta_{0}\varkappa d)/k\right)\) such that
\[\left(1-\frac{k}{d}\right)^{t_{0}}2d\varkappa\eta_{0}\leq\frac{1}{2}; \tag{57}\]
together with the linear rate (16), we have
\[\lambda_{t+t_{0}}\stackrel{(18)}{\leq}\left(1-\frac{k}{d}\right)^{t+t_{0}}2d\varkappa\eta_{0}\lambda_{t+t_{0}-1}\stackrel{(57)}{\leq}\left(1-\frac{k}{d}\right)^{t}\frac{1}{2}\lambda_{t+t_{0}-1}\leq\cdots\leq\left(1-\frac{k}{d}\right)^{t(t-1)/2}\left(\frac{1}{2}\right)^{t}\lambda_{t_{0}}\stackrel{(16)}{\leq}\left(1-\frac{k}{d}\right)^{t(t-1)/2}\left(\frac{1}{2}\right)^{t}\left(1-\frac{1}{2\eta_{0}}\right)^{t_{0}}\lambda_{0}.\]

### The Proof of Corollary 4.4

Proof.: According to Theorem 4.3 and the definition of \(\tau_{\nabla^{2}f(\mathbf{x}_{t+1})}(\mathbf{G}_{t+1})\), the claimed bound on \(\mathbb{E}[\tau_{\nabla^{2}f(\mathbf{x}_{t+1})}(\mathbf{G}_{t+1})]\) follows.

## Appendix C The Proof of Section 5

We provide the proofs for the results of SR-\(k\) methods shown in Section 5.

### Auxiliary Lemmas

**Lemma C.1**.: _Let \(\mathbf{A},\mathbf{G}\in\mathbb{R}^{d\times d}\) be positive definite and let \(\mathbf{U}\in\mathbb{R}^{d\times k}\) have full column rank. Then it holds that_
\[\operatorname{tr}\left(\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)^{-1}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{A}^{-1}\mathbf{G}\mathbf{U}\right)\right)\geq\operatorname{tr}\left(\left(\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\right)^{-1}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)\right).\]

Proof.: Let \(\mathbf{P}=\mathbf{A}^{1/2}\mathbf{U}\), so that \(\mathbf{U}\left(\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\right)^{-1}\mathbf{U}^{\top}=\mathbf{A}^{-1/2}\mathbf{P}\left(\mathbf{P}^{\top}\mathbf{P}\right)^{-1}\mathbf{P}^{\top}\mathbf{A}^{-1/2}\). Then we have
\[\mathbf{U}^{\top}\mathbf{G}\left(\mathbf{A}^{-1}-\mathbf{U}\left(\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\right)^{-1}\mathbf{U}^{\top}\right)\mathbf{G}\mathbf{U}=\mathbf{U}^{\top}\mathbf{G}\mathbf{A}^{-1/2}\underbrace{\left(\mathbf{I}_{d}-\mathbf{P}\left(\mathbf{P}^{\top}\mathbf{P}\right)^{-1}\mathbf{P}^{\top}\right)}_{\succeq\mathbf{0}}\mathbf{A}^{-1/2}\mathbf{G}\mathbf{U}\succeq\mathbf{0}.\]
So we have
\[\begin{split}&\operatorname{tr}\left(\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)^{-1}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{A}^{-1}\mathbf{G}\mathbf{U}\right)\right)-\operatorname{tr}\left(\left(\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\right)^{-1}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)\right)\\ =&\operatorname{tr}\left(\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)^{-1}\left(\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{A}^{-1}\mathbf{G}\mathbf{U}\right)-\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)\left(\mathbf{U}^{\top}\mathbf{A}\mathbf{U}\right)^{-1}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)\right)\right)\\ =&\operatorname{tr}\left(\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)^{-1/2}\mathbf{U}^{\top}\mathbf{G}\left(\mathbf{A}^{-1}-\mathbf{U}(\mathbf{U}^{\top}\mathbf{A}\mathbf{U})^{-1}\mathbf{U}^{\top}\right)\mathbf{G}\mathbf{U}\left(\mathbf{U}^{\top}\mathbf{G}\mathbf{U}\right)^{-1/2}\right)\\ \geq&\ 0.\end{split}\]
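The inequality of Lemma C.1 is easy to probe numerically; below is a minimal sketch (ours) with random positive definite matrices, where the dimensions are illustrative choices:

```
import numpy as np

# Numerical probe of the trace inequality in Lemma C.1 for random
# positive definite A, G and a random full column rank U.
rng = np.random.default_rng(1)
d, k = 8, 3

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)  # well-conditioned SPD matrix

for _ in range(1000):
    A, G = random_spd(d), random_spd(d)
    U = rng.standard_normal((d, k))
    lhs = np.trace(np.linalg.solve(U.T @ G @ U, U.T @ G @ np.linalg.inv(A) @ G @ U))
    rhs = np.trace(np.linalg.solve(U.T @ A @ U, U.T @ G @ U))
    assert lhs >= rhs - 1e-9
print("trace inequality held on all trials")
```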
**Lemma C.2**.: _Under the setting of Theorem 5.4, Algorithm 3 holds that_
\[\lambda(\mathbf{x}_{t})\leq\left(1-\frac{1}{2\eta_{0}}\right)^{t}\lambda(\mathbf{x}_{0})\quad\text{and}\quad\nabla^{2}f(\mathbf{x}_{t})\preceq\mathbf{G}_{t}\preceq\frac{3\eta_{0}}{2}\nabla^{2}f(\mathbf{x}_{t}) \tag{68}\]
_for all \(t\geq 0\)._

Proof.: We can obtain this result by following the proof of Theorem 4.2 and replacing the use of Lemma 3.2 with Lemma 5.2.

Now we prove Theorem 5.4.

Proof.: We denote \(\delta_{t}\triangleq\operatorname{tr}\left((\mathbf{G}_{t}-\nabla^{2}f(\mathbf{x}_{t}))(\nabla^{2}f(\mathbf{x}_{t}))^{-1}\right)\) and \(\lambda_{t}\triangleq\lambda(\mathbf{x}_{t})\). The initial condition means we have the results of Lemma C.2. According to Lemma B.4, we have
\[\nabla^{2}f(\mathbf{x}_{t})\overset{\eqref{eq:c_t}}{\preceq}\mathbf{G}_{t}\preceq(1+\delta_{t})\nabla^{2}f(\mathbf{x}_{t}).\]
Using Theorem 5.3, we have
\[\mathbb{E}_{t}[\delta_{t+1}]=\mathbb{E}_{t}[\sigma_{\nabla^{2}f(\mathbf{x}_{t+1})}(\mathbf{G}_{t+1})]\overset{\eqref{eq:c_t}}{\leq}\left(1-\frac{k}{d\varkappa}\right)\sigma_{\nabla^{2}f(\mathbf{x}_{t+1})}(\tilde{\mathbf{G}}_{t}).
\tag{69}\]
Using Lemma B.3, we have
\[\sigma_{\nabla^{2}f(\mathbf{x}_{t+1})}(\tilde{\mathbf{G}}_{t})\overset{\eqref{eq:c_t}}{\leq}(1+Mr_{t})^{2}(\delta_{t}+2dMr_{t}). \tag{70}\]
Thus, we obtain the following result:
\[\mathbb{E}_{t}[\delta_{t+1}]\overset{\eqref{eq:c_t}}{\leq}\left(1-\frac{k}{d\varkappa}\right)(1+M\lambda_{t})^{2}(\delta_{t}+2dM\lambda_{t}).\]
According to Lemma 4.1, we have
\[\lambda_{t+1}\overset{\eqref{eq:c_t}}{\leq}\left(1+\frac{M\lambda_{t}}{2}\right)\frac{\delta_{t}+M\lambda_{t}/2}{1+\delta_{t}}\lambda_{t}\leq(1+M\lambda_{t})^{2}(\delta_{t}+2dM\lambda_{t})\lambda_{t}.\]
According to Lemma C.2, we have
\[\lambda_{t}\leq\left(1-\frac{1}{2\eta_{0}}\right)^{t}\lambda_{0}.\]
According to Lemma B.5 and the initial condition (26), we have
\[\delta_{0}=\operatorname{tr}\left((\mathbf{G}_{0}-\nabla^{2}f(\mathbf{x}_{0}))(\nabla^{2}f(\mathbf{x}_{0}))^{-1}\right)\overset{\eqref{eq:c_t}}{\leq}d(\eta_{0}-1)\quad\text{and}\quad\theta_{0}=\delta_{0}+2dM\lambda_{0}\overset{\eqref{eq:c_t}}{\leq}d\eta_{0}.\]
Hence, the random sequences \(\{\lambda_{t}\}\) and \(\{\delta_{t}\}\) satisfy the conditions of Lemma B.1 with
\[m=M,\quad b=2dM,\quad c=2dM,\quad\alpha=\frac{d\varkappa}{k},\quad\beta=2\eta_{0}\quad\text{and}\quad s=\eta_{0}d,\]
which means we have proved Theorem 5.4.

## Appendix D Extension for Solving Nonlinear Equations

In this section, we apply SR-\(k\) methods to solve the nonlinear equations
\[\mathbf{F}(\mathbf{z})=\mathbf{0}, \tag{71}\]
where \(\mathbf{F}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is a differentiable vector-valued function. We use \(\mathbf{J}(\mathbf{z})\) to represent the Jacobian of \(\mathbf{F}(\cdot)\) at \(\mathbf{z}\in\mathbb{R}^{d}\) and impose the following assumptions.

**Assumption D.1**.: We assume the vector-valued function \(\mathbf{F}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is differentiable and its Jacobian is \(\tilde{L}_{2}\)-Lipschitz continuous, i.e., there exists some \(\tilde{L}_{2}\geq 0\) such that
\[\|\mathbf{J}(\mathbf{z})-\mathbf{J}(\mathbf{z}^{\prime})\|\leq\tilde{L}_{2}\|\mathbf{z}-\mathbf{z}^{\prime}\| \tag{72}\]
for any \(\mathbf{z},\mathbf{z}^{\prime}\in\mathbb{R}^{d}\).

**Assumption D.2**.: We assume that equation (71) has a solution \(\mathbf{z}^{*}\) such that \(\mathbf{J}(\mathbf{z}^{*})\) is non-degenerate.

According to Assumption D.2, we denote
\[\tilde{\mu}\triangleq\frac{\sigma_{\min}(\mathbf{J}(\mathbf{z}^{*}))}{\sqrt{2}},\qquad\tilde{L}\triangleq 2\sigma_{\max}(\mathbf{J}(\mathbf{z}^{*}))\qquad\text{and}\qquad\tilde{\varkappa}\triangleq\frac{\tilde{L}}{\tilde{\mu}},\]
where \(\sigma_{\min}(\cdot)\) and \(\sigma_{\max}(\cdot)\) are the smallest and the largest singular values of a given matrix, respectively. We present SR-\(k\) methods for solving nonlinear equations in Algorithm 4.

```
1:Input:\(\mathbf{H}_{0}\), \(M\) and \(k\).
2:for\(t=0,1\dots\)
3:\(\mathbf{z}_{t+1}=\mathbf{z}_{t}-\mathbf{H}_{t}^{-1}\mathbf{J}(\mathbf{z}_{t})^{\top}\mathbf{F}(\mathbf{z}_{t})\)
4:\(r_{t}=\|\mathbf{z}_{t+1}-\mathbf{z}_{t}\|_{2}\)
5:\(\tilde{\mathbf{H}}_{t}=(1+Mr_{t})\mathbf{H}_{t}\)
6: construct \(\mathbf{U}_{t}\) by \(\left[\mathbf{U}_{t}\right]_{ij}\overset{\mathrm{i.i.d}}{\sim}\mathcal{N}(0,1)\)
7:\(\mathbf{H}_{t+1}=\text{SR-}k(\tilde{\mathbf{H}}_{t},\mathbf{J}(\mathbf{z}_{t+1})^{\top}\mathbf{J}(\mathbf{z}_{t+1}),\mathbf{U}_{t})\)
8:endfor
```
**Algorithm 4** Symmetric Rank-\(k\) Method for Nonlinear Equation
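For readers who prefer code, the following is a minimal Python sketch of Algorithm 4. Here we take SR-\(k\)(\(\mathbf{G},\mathbf{A},\mathbf{U}\)) to be the block symmetric rank-\(k\) update \(\mathbf{G}-\mathbf{B}\mathbf{U}(\mathbf{U}^{\top}\mathbf{B}\mathbf{U})^{-1}\mathbf{U}^{\top}\mathbf{B}\) with \(\mathbf{B}=\mathbf{G}-\mathbf{A}\), which is the form consistent with the trace identity of Lemma A.4; this assumed form and the test problem are ours, not taken from the paper:

```
import numpy as np

def sr_k(G, A, U, eps=1e-12):
    # Assumed block SR1-type update: G_+ = G - B U (U^T B U)^{-1} U^T B,
    # where B = G - A. Skipped when U^T B U is numerically singular.
    B = G - A
    BU = B @ U
    M = U.T @ BU
    if np.linalg.cond(M) > 1 / eps:
        return G
    return G - BU @ np.linalg.solve(M, BU.T)

def algorithm4(F, J, z0, H0, M_const, k, iters=50, seed=0):
    # Sketch of Algorithm 4 (SR-k method for nonlinear equations).
    rng = np.random.default_rng(seed)
    z, H = z0.copy(), H0.copy()
    for _ in range(iters):
        z_new = z - np.linalg.solve(H, J(z).T @ F(z))   # line 3
        r = np.linalg.norm(z_new - z)                   # line 4
        H_tilde = (1 + M_const * r) * H                 # line 5
        U = rng.standard_normal((z.size, k))            # line 6
        Jn = J(z_new)
        H = sr_k(H_tilde, Jn.T @ Jn, U)                 # line 7
        z = z_new
    return z

# Illustrative usage on F(z) = z + 0.1*z**3 (root at the origin).
F = lambda z: z + 0.1 * z**3
J = lambda z: np.eye(z.size) + 0.3 * np.diag(z**2)
z = algorithm4(F, J, z0=np.ones(5), H0=10 * np.eye(5), M_const=1.0, k=2)
print(np.linalg.norm(F(z)))  # residual ||F(z)||_2 after 50 iterations
```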
The design of this algorithm is inspired by the recent work of Liu and Luo [22], which applies quasi-Newton methods to estimate the information of a non-degenerate indefinite matrix through its square. We use the Euclidean norm \(\tilde{\lambda}(\mathbf{z})\triangleq\left\|\mathbf{F}(\mathbf{z})\right\|_{2}\) to measure the convergence of our algorithm. The advantage of block updates in the SR-\(k\) methods results in faster superlinear convergence than the methods of Liu and Luo [22]. Following the analysis of SR-\(k\) methods for convex optimization, we obtain the results for solving nonlinear equations as follows.

**Theorem D.3**.: _Under Assumptions D.1 and D.2, we run Algorithm 4 with \(k<d\), \(\tilde{M}=2\tilde{\varkappa}^{2}\tilde{L}_{2}/\tilde{L}\) and set the initial \(\mathbf{z}_{0}\) and \(\mathbf{H}_{0}\) such that_
\[\tilde{\lambda}(\mathbf{z}_{0})\leq\frac{\ln 2}{8}\cdot\frac{(d-k)\tilde{\mu}}{\tilde{M}\eta_{0}d^{2}\tilde{\varkappa}^{2}}\qquad\text{and}\qquad\mathbf{J}(\mathbf{z}_{0})^{\top}\mathbf{J}(\mathbf{z}_{0})\preceq\mathbf{H}_{0}\preceq\eta_{0}\mathbf{J}(\mathbf{z}_{0})^{\top}\mathbf{J}(\mathbf{z}_{0})\]
_for some \(\eta_{0}\geq 1\). Then we have_
2303.00932
On curvature related geometric properties of Hayward black hole spacetime
This paper is devoted to the study of curvature properties of Hayward black hole (briefly, HBH) spacetime, which is a solution of Einstein field equations (briefly, EFE) having non-vanishing cosmological constant. We have proved that the HBH spacetime is an Einstein manifold of level $2$, $2$-quasi Einstein, generalized quasi-Einstein and Roter type manifold. Also, it is shown that the nature of the HBH spacetime is pseudosymmetric and it obeys several types of pseudosymmetries, such as, pseudosymmetry due to concircular, conformal and conharmonic curvature (i.e., $F\cdot F=\mathcal{L}Q(g,F)$ for $F=W,C, K$ with a smooth scalar function $ \mathcal{L} $), and it also possesses the relation $R\cdot R-\mathcal{L} Q(g,C)=Q(S,R)$. It is engrossing to mention that the nature of energy momentum tensor of the HBH spacetime is pseudosymmetric. On the basis of curvature related properties, we have made a comparison among Reissner-Nordstr\"om spacetime, interior black hole spacetime and HBH spacetime. Also, it is shown that the HBH spacetime admits an almost $\eta$-Ricci soliton as well as an almost $\eta$-Ricci-Yamabe soliton. Finally, an elegant comparative study is delineated between the HBH spacetime and the point-like global monopole spacetime with respect to different kinds of symmetry, such as, motion, curvature collineation, curvature inheritance etc.
Absos Ali Shaikh, Shyamal Kumar Hui, Biswa Ranjan Datta, Mousumi Sarkar
2023-02-23T15:02:59Z
http://arxiv.org/abs/2303.00932v1
# On curvature related geometric properties of Hayward black hole spacetime

###### Abstract.

This paper is devoted to the study of curvature properties of Hayward black hole (briefly, HBH) spacetime, which is a solution of Einstein field equations (briefly, EFE) having non-vanishing cosmological constant. We have proved that the HBH spacetime is an Einstein manifold of level 2, 2-quasi Einstein, generalized quasi-Einstein and Roter type manifold. Also, it is shown that the nature of the HBH spacetime is pseudosymmetric and it obeys several types of pseudosymmetries, such as, pseudosymmetry due to concircular, conformal and conharmonic curvature (i.e., \(F\cdot F=\mathcal{L}Q(g,F)\) for \(F=W,C,K\) with a smooth scalar function \(\mathcal{L}\)), and it also possesses the relation \(R\cdot R-\mathcal{L}Q(g,C)=Q(S,R)\). It is engrossing to mention that the nature of energy momentum tensor of the HBH spacetime is pseudosymmetric. On the basis of curvature related properties, we have made a comparison among Reissner-Nordstrom spacetime, interior black hole spacetime and HBH spacetime. Also, it is shown that the HBH spacetime admits an almost \(\eta\)-Ricci soliton as well as an almost \(\eta\)-Ricci-Yamabe soliton. Finally, an elegant comparative study is delineated between the HBH spacetime and the point-like global monopole spacetime with respect to different kinds of symmetry, such as, motion, curvature collineation, curvature inheritance etc.

Key words and phrases: Hayward metric, Einstein field equation, pseudosymmetric type curvature condition, Weyl conformal curvature tensor, Roter type manifold, Ein(2)

2020 Mathematics Subject Classification: 53B20, 53B30, 53B50, 53C15, 53C25, 53C35, 83C15

## 1. **Introduction**

Let us consider a semi-Riemannian manifold \(M\) of dimension \(n\geq 3\) such that \(\nabla\) is the Levi-Civita connection of the semi-Riemannian metric \(g\) with signature \((t,n-t)\), \(0\leq t\leq n\), and \(R\), \(S\), \(\kappa\) are respectively the Riemann, Ricci and scalar curvature of \(M\). A connected 4-dimensional manifold \(M\) with Lorentzian signature (1,3) or (3,1) is a spacetime. Curvature carries enormous significance in determining the shape of a space. In fact, the geometry of a space can be described explicitly by curvature; for instance, the relation \(\nabla R=0\) defines the notion of locally symmetric manifolds (see, Cartan [24]). Cartan [25] introduced the notion of semisymmetric manifolds defined as \(R\cdot R=0\) (see also, [122, 123, 124]), and the concept of pseudosymmetric manifolds was introduced by Adamow and Deszcz [18], known as Deszcz pseudosymmetric spaces. A large number of physicists and mathematicians investigated the concept of locally symmetric manifolds and introduced several generalized notions of symmetry, such as, recurrent manifolds by Ruse [80, 81, 82] (see also [129]), different kinds of generalized notions of recurrent manifolds by Shaikh et al. [108, 109, 110, 111, 106, 112, 113], curvature 2-forms of recurrent manifolds by Besse [22, 69, 74, 75, 76], pseudosymmetric manifolds by Chaki [26, 27], weakly symmetric manifolds by Tamassy and Binh [126, 127] etc. Haesen and Verstraelen [59, 60, 61] exhibited the geometrical and physical significance of various pseudosymmetries.
We mention that the Deszcz pseudosymmetry has achieved great importance during the last four decades due to its applications in the study of general relativity and cosmology, as numerous spacetimes (see, [84, 85, 86, 20, 47, 84, 88]) have been found to be pseudosymmetric. It is noteworthy to mention that pseudosymmetries in the sense of Deszcz and Chaki are not equivalent (see, [97]). In 1982, during the study of compact 3-dimensional manifolds with positive Ricci curvature, Hamilton [63] established a process of evolving a Riemannian metric over time, called Ricci flow. The self-similar solutions of the Ricci flow are known as Ricci solitons, which are natural generalizations of Einstein metrics [22, 23, 83, 117]. The notion of Ricci soliton has been generalized in different ways, e.g., almost Ricci soliton, \(\eta\)-Ricci soliton, almost \(\eta\)-Ricci soliton etc. If the Ricci curvature \(S\) and the metric tensor \(g\) of a Riemannian manifold \(M\) realize
\[\frac{1}{2}\pounds_{\xi}g+S-\mu g=0\]
for a constant \(\mu\), then \(M\) is said to be a Ricci soliton, where \(\pounds_{\xi}\) is the Lie derivative in the direction of the soliton vector field \(\xi\). It is expanding, steady or shrinking according as \(\mu<0\), \(\mu=0\) or \(\mu>0\), respectively. It is called an almost Ricci soliton [78] if \(\mu\) is a non-constant smooth function. We mention that if the corresponding soliton vector field \(\xi\) of a Ricci soliton is Killing, then the Ricci soliton turns into an Einstein manifold. Again, if a non-zero 1-form \(\eta\) on \(M\) satisfies the relation
\[\frac{1}{2}\pounds_{\xi}g+S-\mu g+\lambda(\eta\otimes\eta)=0,\]
\(\mu,\lambda\) being constants, then \(M\) is called an \(\eta\)-Ricci soliton [16]. The \(\eta\)-Ricci soliton is said to be an almost \(\eta\)-Ricci soliton [15] if \(\mu,\lambda\) are allowed to be smooth functions. On the other hand, simultaneously with the notion of Ricci flow, Hamilton [64] introduced the notion of Yamabe flow. Recently, as a scalar combination of Ricci and Yamabe flow, Guler and Crasmareanu [58] established a new geometric flow, which is called Ricci-Yamabe flow, and Ricci-Yamabe (resp., Yamabe) solitons are the self-similar solutions of Ricci-Yamabe (resp., Yamabe) flow. If in a Riemannian manifold \(M\) the Ricci curvature \(S\) and the metric tensor \(g\) realize the relation
\[\frac{1}{2}\pounds_{\xi}g+\alpha_{1}S+\left(\mu-\frac{1}{2}\alpha_{2}\kappa\right)g=0,\]
with the constants \(\alpha_{1}\), \(\alpha_{2}\), \(\mu\), scalar curvature \(\kappa\) and the soliton vector field \(\xi\), then \(M\) is called a Ricci-Yamabe soliton [118]. We note that if \(\alpha_{1}=0,\alpha_{2}=2\) (resp., \(\alpha_{1}=1,\alpha_{2}=0\)), then it turns into a Yamabe soliton (resp., Ricci soliton). In addition, if \(\alpha_{1}\), \(\alpha_{2}\), \(\mu\) are allowed to be non-constant smooth functions, then \(M\) is known as an almost Ricci-Yamabe soliton [118]. Again, if there is a non-zero 1-form \(\eta\) satisfying
\[\frac{1}{2}\pounds_{\xi}g+\alpha_{1}S+\left(\mu-\frac{1}{2}\alpha_{2}\kappa\right)g+\lambda\eta\otimes\eta=0,\]
with the constants \(\alpha_{1}\), \(\alpha_{2}\), \(\mu\), \(\lambda\), then \(M\) is called an \(\eta\)-Ricci-Yamabe soliton [118]. If the constants \(\alpha_{1}\), \(\alpha_{2}\), \(\mu\), \(\lambda\) are allowed to be non-constant smooth functions, then \(M\) is called an almost \(\eta\)-Ricci-Yamabe soliton [118].
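As a quick worked illustration of how these notions interlock (a special case we spell out here, not a result quoted from the literature): if the soliton vector field \(\xi\) is Killing and \(\lambda=0\), then the defining relation of an \(\eta\)-Ricci-Yamabe soliton with \(\alpha_{1}=1\) reduces to
\[\pounds_{\xi}g=0,\ \lambda=0,\ \alpha_{1}=1\ \Longrightarrow\ S=\left(\frac{\alpha_{2}\kappa}{2}-\mu\right)g,\]
so the metric is Einstein; taking \(\alpha_{2}=0\) recovers the classical fact, noted above, that a Ricci soliton with Killing \(\xi\) is an Einstein manifold.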
Plenty of research papers (see, [1, 13, 14, 89] and the references therein) on Ricci solitons, Yamabe solitons and their generalizations have appeared during the last three decades, and nowadays this is a vibrant topic of research in differential geometry. To construct the gravitational potential, one can impose symmetry conditions on the EFE, and hence geometrical symmetries play a crucial role in the theory of general relativity. A geometric quantity is preserved along a vector field if the Lie derivative of the corresponding tensor with respect to that vector field vanishes, and such vanishing Lie derivatives describe the geometrical symmetries. The notions of motion, curvature collineation, Ricci collineation etc. are examples of such symmetries. Katzin et al. [66, 67] rigorously investigated the role of curvature collineation in general relativity. In 1992, Duggal [17] introduced the notion of curvature inheritance, generalizing the concept of curvature collineation for the (1,3)-type curvature tensor. During the last three decades, plenty of articles (see, [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 90, 13, 14, 15, 16, 17, 18, 19, 96]) have appeared in the literature regarding the investigation of such kinds of symmetries. Recently, during the investigation of geometric properties of Robinson-Trautman spacetime, Shaikh and Datta [96] introduced the concept of generalized curvature inheritance, which is a generalization of curvature collineation as well as curvature inheritance for the (0,4)-type curvature tensor. We note that the notions of curvature inheritance for the (1,3)-type curvature tensor and for the (0,4)-type curvature tensor are not equivalent [96]. In this paper, we have checked that the HBH spacetime does not admit any of the curvature related symmetries. Finally, a worthy comparison between the HBH spacetime and the point-like global monopole spacetime in terms of such symmetries is exhibited.

In 2006, Hayward [65] modeled the famous exact regular black hole metric, which is a solution of the EFE in spherical symmetry and describes a simple, singularity-free black hole spacetime in general relativity. The line element of HBH spacetime, in spherical coordinates \((t,r,\theta,\phi)\), is given by
\[ds^{2}=-\left(1-\frac{2mr^{2}}{r^{3}+2mb^{2}}\right)dt^{2}+\left(1-\frac{2mr^{2}}{r^{3}+2mb^{2}}\right)^{-1}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \tag{1.1}\]
where the parameters \(m\) and \(b\) represent mass and length-scale respectively. The metric (1.1) is non-singular, because if \(r\rightarrow\infty\), the metric function approaches \(1-\frac{2m}{r}\), and if \(r\to 0\), it approaches unity smoothly. The metric involves the least number of free parameters (\(b\) only) with the properties (1) Schwarzschild asymptotic behavior at large radii and (2) regularity at the center, such that \(F(r)=1-\frac{2mr^{2}}{r^{3}+2mb^{2}}\to 1+O(r^{2})\). Hence it is minimal. The importance of the Hayward spacetime (1.1) is realized from several studies: Chiba and Kimura [29] obtained the equations of timelike and null geodesics of a particle in HBH spacetime, and Maluf and Neves [70] studied certain thermodynamic quantities, like Hawking temperature, entropy and heat capacity, of HBH spacetime. The stability of the thin-shell wormholes constructed by the HBH spacetime was studied by Halilsoy et al. [62]. However, several curvature properties of HBH spacetime are yet to be investigated.
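The two limiting behaviours of the metric function \(F(r)\) noted above are easy to verify symbolically; the following is a small sketch (ours; the symbol names are our choices):

```
import sympy as sp

# Symbolic check of the limiting behaviour of the Hayward metric function
# F(r) = 1 - 2*m*r**2/(r**3 + 2*m*b**2).
r, m, b = sp.symbols('r m b', positive=True)
F = 1 - 2*m*r**2 / (r**3 + 2*m*b**2)

# Large radii: F(r) -> 1 - 2m/r (Schwarzschild behaviour).
print(sp.series(F, r, sp.oo, 2))
# Near the center: F(r) = 1 - r**2/b**2 + O(r**5), i.e. 1 + O(r**2).
print(sp.series(F, r, 0, 4))
```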
The purpose of this article is to determine several geometric properties of the HBH spacetime. It is found that the HBH spacetime is neither semisymmetric nor Ricci generalized pseudosymmetric, but it is pseudosymmetric and satisfies several pseudosymmetric type curvature conditions, such as, pseudosymmetry due to concircular, conharmonic and conformal curvature tensors. Also, we have exhibited that both \(Q(g,C)\) and \(Q(S,C)\) are linearly dependent on the difference \((C\cdot R-R\cdot C)\). It is also proved that the HBH spacetime is an Einstein manifold of level 2, 2-quasi Einstein, generalized quasi-Einstein and Roter type manifold. Moreover, the nature of the stress energy momentum tensor of HBH spacetime is pseudosymmetric.

The article is organized as follows: we discuss some definitions of geometric structures in Section 2, which are essential throughout the paper to investigate the geometric properties of HBH spacetime. Section 3 deals with the study of the HBH spacetime and obtains some interesting results. In Section 4, certain geometric properties of the energy momentum tensor of HBH spacetime are determined. Based on the curvature properties of HBH spacetime, a worthy comparison with the point-like global monopole spacetime and Reissner-Nordstrom spacetime is exhibited in Section 5. Section 6 is concerned with the nature of Ricci soliton and Ricci-Yamabe soliton admitted by the HBH spacetime. Section 7 is devoted to a comparative study between the HBH spacetime and the point-like global monopole spacetime with respect to different kinds of symmetry, such as, motion, curvature collineation and curvature inheritance.

## 2. **Preliminaries**

This section consists of rudimentary facts about various geometric structures, Ricci solitons and symmetries (such as, motion, curvature collineation (also, curvature inheritance) for the (1,3)-type curvature tensor and for the (0,4)-type curvature tensor, Ricci collineation and Ricci inheritance), which are necessary for investigating the geometric structures on HBH spacetime.

The Kulkarni-Nomizu product \(A\wedge U\) of two (0,2)-type symmetric tensors \(A\) and \(U\) is defined as ([38, 55, 57, 68]):
\[(A\wedge U)_{pq\mu\nu}=A_{p\nu}U_{q\mu}-A_{p\mu}U_{q\nu}+A_{q\mu}U_{p\nu}-A_{q\nu}U_{p\mu}.\]
For \(j=1,2,3,4\) we will consider \(\varpi,\varpi_{j}\in\chi(M)\), the Lie algebra of all smooth vector fields, throughout the paper.
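For concreteness, the Kulkarni-Nomizu product above is straightforward to implement for component arrays; a minimal sketch (ours, using numpy index conventions) is:

```
import numpy as np

def kulkarni_nomizu(A, U):
    # Kulkarni-Nomizu product of two symmetric (0,2)-tensors given as (n, n)
    # arrays, returning the (0,4)-tensor with components
    # (A ^ U)_{pqmn} = A_{pn}U_{qm} - A_{pm}U_{qn} + A_{qm}U_{pn} - A_{qn}U_{pm}.
    return (np.einsum('pn,qm->pqmn', A, U) - np.einsum('pm,qn->pqmn', A, U)
            + np.einsum('qm,pn->pqmn', A, U) - np.einsum('qn,pm->pqmn', A, U))

# Example: (g ^ g)_{0101} = -2*g_{00}*g_{11} for a diagonal Lorentzian metric.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
gg = kulkarni_nomizu(g, g)
print(gg[0, 1, 0, 1])  # -2*(-1)*(1) = 2
```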
For a symmetric \((0,2)\)-type tensor \(Z\), the endomorphism \(\varpi_{1}\wedge_{Z}\varpi_{2}\) can be defined as ([30, 31, 38])
\[(\varpi_{1}\wedge_{Z}\varpi_{2})v=Z(\varpi_{2},v)\varpi_{1}-Z(\varpi_{1},v)\varpi_{2}.\]
Now, the endomorphisms \(\mathcal{R},\mathcal{W},\mathcal{C},\mathcal{K}\) and \(\mathcal{P}\) can be defined on \(M\) as follows ([46, 93, 103, 105]):
\[\mathcal{R}(\varpi_{1},\varpi_{2})=[\nabla_{\varpi_{1}},\nabla_{\varpi_{2}}]-\nabla_{[\varpi_{1},\varpi_{2}]},\]
\[\mathcal{W}(\varpi_{1},\varpi_{2})=\mathcal{R}(\varpi_{1},\varpi_{2})-\frac{\kappa}{n(n-1)}\varpi_{1}\wedge_{g}\varpi_{2},\]
\[\mathcal{C}(\varpi_{1},\varpi_{2})=\mathcal{R}(\varpi_{1},\varpi_{2})-\frac{1}{n-2}\left(\varpi_{1}\wedge_{g}\mathcal{J}\varpi_{2}+\mathcal{J}\varpi_{1}\wedge_{g}\varpi_{2}-\frac{\kappa}{n-1}\varpi_{1}\wedge_{g}\varpi_{2}\right),\]
\[\mathcal{K}(\varpi_{1},\varpi_{2})=\mathcal{R}(\varpi_{1},\varpi_{2})-\frac{1}{n-2}\left(\varpi_{1}\wedge_{g}\mathcal{J}\varpi_{2}+\mathcal{J}\varpi_{1}\wedge_{g}\varpi_{2}\right),\]
\[\mathcal{P}(\varpi_{1},\varpi_{2})=\mathcal{R}(\varpi_{1},\varpi_{2})-\frac{1}{n-1}\varpi_{1}\wedge_{S}\varpi_{2},\]
where \(\mathcal{J}\) is the Ricci operator defined as \(S(\varpi_{1},\varpi_{2})=g(\varpi_{1},\mathcal{J}\varpi_{2})\). Now, we define the \((0,4)\)-type tensor field \(T\) corresponding to the endomorphism \(\mathscr{T}(\varpi_{1},\varpi_{2})\) on \(M\) as
\[T(\varpi_{1},\varpi_{2},\varpi_{3},\varpi_{4})=g(\mathscr{T}(\varpi_{1},\varpi_{2})\varpi_{3},\varpi_{4}).\]
If the endomorphism \(\mathscr{T}\) is replaced by \(\mathcal{R}\) (resp., \(\mathcal{W}\), \(\mathcal{K}\), \(\mathcal{C}\) and \(\mathcal{P}\)) in the above, we obtain the Riemann (resp., concircular, conharmonic, conformal and projective) curvature tensor \(R\) (resp., \(W\), \(K\), \(C\) and \(P\)) of type \((0,4)\). These tensors are locally given by
\[R_{pq\mu\nu}=g_{p\alpha}(\partial_{\nu}\Gamma^{\alpha}_{q\mu}-\partial_{\mu}\Gamma^{\alpha}_{q\nu}+\Gamma^{\beta}_{q\mu}\Gamma^{\alpha}_{\beta\nu}-\Gamma^{\beta}_{q\nu}\Gamma^{\alpha}_{\beta\mu}),\]
\[C_{pq\mu\nu}=R_{pq\mu\nu}+\frac{\kappa}{2(n-1)(n-2)}(g\wedge g)_{pq\mu\nu}-\frac{1}{n-2}(g\wedge S)_{pq\mu\nu},\]
\[W_{pq\mu\nu}=R_{pq\mu\nu}-\frac{\kappa}{2n(n-1)}(g\wedge g)_{pq\mu\nu},\]
\[K_{pq\mu\nu}=R_{pq\mu\nu}-\frac{1}{n-2}(g\wedge S)_{pq\mu\nu}\quad\text{and}\quad P_{pq\mu\nu}=R_{pq\mu\nu}-\frac{1}{n-1}(g_{p\nu}S_{q\mu}-g_{q\nu}S_{p\mu}),\]
where \(\partial_{\alpha}=\frac{\partial}{\partial x^{\alpha}}\) and \(\Gamma^{\alpha}_{q\mu}\) denotes the Christoffel symbols of the 2nd kind. Let \(H\) be a \((0,k)\)-type \((k\geq 1)\) tensor on \(M\). Then the \((0,k+2)\)-type tensor \(T\cdot H\) is given by [36, 45, 100]
\[(T\cdot H)_{q_{1}q_{2}\cdots q_{k}\nu\mu}=-g^{pr}[T_{\mu\nu q_{1}r}H_{pq_{2}\cdots q_{k}}+\cdots+T_{\mu\nu q_{k}r}H_{q_{1}q_{2}\cdots p}].\]
Again, for a symmetric \((0,2)\)-type tensor field \(Z\), the Tachibana tensor \(Q(Z,H)\) of type \((0,k+2)\) is obtained as follows ([43, 97, 125]):
\[Q(Z,H)_{q_{1}q_{2}\cdots q_{k}\nu\mu}=Z_{\mu q_{1}}H_{\nu q_{2}\cdots q_{k}}+\cdots+Z_{\mu q_{k}}H_{q_{1}q_{2}\cdots\nu}-Z_{\nu q_{1}}H_{\mu q_{2}\cdots q_{k}}-\cdots-Z_{\nu q_{k}}H_{q_{1}q_{2}\cdots\mu}.\]

**Definition 2.1**.: _([18, 25, 32, 33, 39, 40, 87, 100, 104, 122, 123, 124]) Let \(M\) be a semi-Riemannian manifold. \(M\) is called an \(H\)-semisymmetric type manifold due to \(T\) if \(M\) possesses the relation \(T\cdot H=0\)._
_Further, \(M\) is said to be an \(H\)-pseudosymmetric type manifold due to \(T\) if the relation \(T\cdot H=\mathcal{L}_{H}Q(Z,H)\) holds for a smooth function \(\mathcal{L}_{H}\) on \(\{x\in M:Q(Z,H)\neq 0\text{ at }x\}\) (i.e., the tensors \(T\cdot H\) and \(Q(Z,H)\) are linearly dependent)._

In the above definition, if we replace \(T=R\) and \(H=R\) (resp., \(P\), \(K\), \(W\), \(C\) and \(S\)), then the \(H\)-semisymmetric type manifold due to \(T\) turns into a semisymmetric (resp., projectively, conharmonically, concircularly, conformally, Ricci semisymmetric) manifold, and if \(T=R\), \(H=R\) and \(Z=g\) (resp., \(S\)), then the \(H\)-pseudosymmetric type manifold due to \(T\) becomes a Deszcz pseudosymmetric (resp., Ricci generalized pseudosymmetric) manifold. Also, if we replace \(T=W\), \(C\), \(P\) and \(K\), then we obtain several pseudosymmetric type curvature conditions.

**Definition 2.2**.: _([40, 41, 44, 83, 99, 105]) \(M\) is called a quasi-Einstein (resp., Einstein and \(2\)-quasi-Einstein) manifold if for a scalar \(\alpha\) the rank of \((S-\alpha g)\) is \(1\) (resp., \(0\) and \(2\)). In particular, for \(\alpha=0\) the quasi-Einstein manifold turns into a Ricci simple one. A generalized quasi-Einstein manifold (in the sense of Chaki [28]) is defined as_
\[S=\alpha g+\beta\Pi\otimes\Pi+\gamma(\Pi\otimes\phi+\phi\otimes\Pi),\]
_where \(\alpha\), \(\beta\) and \(\gamma\) are scalars and \(\Pi\), \(\phi\) are \(1\)-forms._

It may be mentioned that Robertson-Walker spacetime [19, 77, 120] is quasi-Einstein, Kaigorodov spacetime [95] is Einstein, Kantowski-Sachs spacetime [93] and Som-Raychaudhuri spacetime [102] are \(2\)-quasi-Einstein, while the Vaidya metric [107], Godel spacetime [46] and Morris-Thorne spacetime [51] are Ricci simple manifolds.

**Definition 2.3**.: _If the Ricci tensor \(S\) of a semi-Riemannian manifold \(M\) satisfies the relation_
\[(\nabla_{\varpi_{1}}S)(\varpi_{2},\varpi_{3})+(\nabla_{\varpi_{2}}S)(\varpi_{3},\varpi_{1})+(\nabla_{\varpi_{3}}S)(\varpi_{1},\varpi_{2})=0\]
_(resp., \((\nabla_{\varpi_{1}}S)(\varpi_{2},\varpi_{3})=(\nabla_{\varpi_{2}}S)(\varpi_{1},\varpi_{3})\)), then it is known as a cyclic parallel Ricci tensor (see, [54, 92, 114, 115]) (resp., Codazzi type Ricci tensor (see [53, 119]))._

It may be noted that the Ricci tensor of the \((t-z)\)-type plane wave spacetime [50] is of Codazzi type, and a cyclic parallel Ricci tensor has been found in Godel spacetime [46].

**Definition 2.4**.: _([22, 100, 105]) A semi-Riemannian manifold \(M\) is an Einstein manifold of level \(4\) (resp., \(3\) and \(2\)) if it satisfies_
\[\vartheta_{1}g+\vartheta_{2}S+\vartheta_{3}S^{2}+\vartheta_{4}S^{3}+S^{4}=0\]
_(resp., \(\vartheta_{5}g+\vartheta_{6}S+\vartheta_{7}S^{2}+S^{3}=0\) and \(\vartheta_{8}g+\vartheta_{9}S+S^{2}=0\)), where \(\vartheta_{i}\) \((1\leq i\leq 9)\) are smooth functions on \(M\)._

We mention that Vaidya-Bonner spacetime [94] and Lifshitz spacetime [116] are Ein(3) while Siklos spacetime [95] and Nariai spacetime [87] are Ein(2) manifolds.

**Definition 2.5**.: _If the Riemann tensor \(R\) can be written in the form_
\[R=S^{2}\wedge(\varsigma_{6}S^{2})+S\wedge(\varsigma_{4}S+\varsigma_{5}S^{2})+g\wedge(\varsigma_{1}g+\varsigma_{2}S+\varsigma_{3}S^{2})\]
_for some scalars \(\varsigma_{i}\) \((1\leq i\leq 6)\), then \(M\) is called a generalized Roter type manifold [34, 37, 41, 42, 101, 105]._
_Further, \(M\) is known as a Roter type manifold [34, 35, 44, 48, 56] if \(R\) is linearly dependent on \(g\wedge g\), \(g\wedge S\) and \(S\wedge S\) (i.e., \(\varsigma_{3}=\varsigma_{5}=\varsigma_{6}=0\))._

It may be noted that Nariai spacetime [87], Melvin magnetic metric [86] as well as Robinson-Trautman spacetime [84] are Roter type, while Lifshitz metric [116] and Vaidya-Bonner metric [94] are generalized Roter type manifolds.

**Definition 2.6**.: _([126, 127]) A weakly \(T\)-symmetric manifold \(M\) is defined by the equation_
\[(\nabla_{\varpi}T)(\varpi_{1},\varpi_{2},\varpi_{3},\varpi_{4}) = \Pi(\varpi)\otimes T(\varpi_{1},\varpi_{2},\varpi_{3},\varpi_{4})+\Omega_{1}(\varpi_{1})\otimes T(\varpi,\varpi_{2},\varpi_{3},\varpi_{4})+\Omega_{1}(\varpi_{2})\otimes T(\varpi_{1},\varpi,\varpi_{3},\varpi_{4})+\Omega_{2}(\varpi_{3})\otimes T(\varpi_{1},\varpi_{2},\varpi,\varpi_{4})+\Omega_{2}(\varpi_{4})\otimes T(\varpi_{1},\varpi_{2},\varpi_{3},\varpi),\]
_where \(\Pi\), \(\Omega_{1}\), \(\Omega_{2}\) are \(1\)-forms on \(M\). In particular, \(M\) reduces to a recurrent [80, 81, 129] (resp., Chaki pseudosymmetric [26, 27]) manifold if \(\Omega_{1}{=}\Omega_{2}{=}0\) (resp., \(\Omega_{1}{=}\Omega_{2}=\Pi/2\))._

**Definition 2.7**.: _([31, 37, 71, 72, 73]) Let \(T\) be a \((0,4)\)-type tensor field and \(\mathcal{Z}\) be the endomorphism corresponding to a tensor \(Z\) of type \((0,2)\) on \(M\). Then, the tensor \(Z\) is said to be \(T\)-compatible if \(M\) admits_
\[\underset{\varpi_{1},\varpi_{2},\varpi_{3}}{\mathcal{S}}T(\mathcal{Z}\varpi_{1},\varpi,\varpi_{2},\varpi_{3})=0,\]
_where the cyclic sum over \(\varpi_{1}\), \(\varpi_{2}\) and \(\varpi_{3}\) is denoted by \(\mathcal{S}\). Again, \(T\)-compatibility of a \(1\)-form \(\zeta\) is defined by the \(T\)-compatibility of \(\zeta\otimes\zeta\)._

In the above definition, if we replace \(T\) by \(R\) (resp., \(P\), \(K\), \(W\) and \(C\)), then we obtain Riemann (resp., projective, conharmonic, concircular and conformal) compatibility of \(Z\).

**Definition 2.8**.: _Let \(T\) be a \((0,4)\)-type tensor on \(M\) and \(\mathcal{Z}\) be the endomorphism corresponding to a \((0,2)\)-type tensor \(Z\) on \(M\). If \(M\) possesses the relation_
\[\underset{\varpi_{1},\varpi_{2},\varpi_{3}}{\mathcal{S}}(\nabla_{\varpi_{1}}T)(\varpi_{2},\varpi_{3},\varpi_{4},\varpi_{5})=\underset{\varpi_{1},\varpi_{2},\varpi_{3}}{\mathcal{S}}\Sigma(\varpi_{1})T(\varpi_{2},\varpi_{3},\varpi_{4},\varpi_{5})\]
_for a \(1\)-form \(\Sigma\), then the curvature \(2\)-forms \(\Omega_{(T)l}^{m}\) [69] are recurrent [74, 75, 76]. Further, the \(1\)-forms \(\Lambda_{(Z)l}\) [121] are recurrent if_
\[(\nabla_{\varpi_{1}}Z)(\varpi_{2},\varpi)-(\nabla_{\varpi_{2}}Z)(\varpi_{1},\varpi)=\Sigma(\varpi_{1})Z(\varpi_{2},\varpi)-\Sigma(\varpi_{2})Z(\varpi_{1},\varpi)\]
_holds on \(M\) for a \(1\)-form \(\Sigma\)._

**Definition 2.9**.: _([79, 128]) Let \(T\) be a (0,4)-type tensor on \(M\) and \(L(M)\) be the set of all 1-forms \(\Pi\) on \(M\) satisfying_
\[\underset{\varpi_{1},\varpi_{2},\varpi_{3}}{\mathcal{S}}\Pi(\varpi_{1})\otimes T(\varpi_{2},\varpi_{3},\varpi_{4},\varpi_{5})=0\]
_with \(\dim L(M)\geq 1\)._
_Then \(M\) is called a \(T\)-space by Venzi._

Several notions of geometrical symmetries, such as, motion, curvature collineation (also, curvature inheritance) for the (1,3)-type curvature tensor and for the (0,4)-type curvature tensor, Ricci collineation and Ricci inheritance, all of which originate from the Lie derivatives of different tensors, need to be reviewed for the study of symmetry in the HBH spacetime.

**Definition 2.10**.: _A manifold \(M\) admits motion with respect to some vector field \(\xi\) if \(\pounds_{\xi}g=0\). The vector field \(\xi\) is also called Killing._

Katzin et al. [66, 67], in 1969, introduced the concept of curvature collineation for the (1,3)-type curvature tensor by the vanishing of the Lie derivative of the (1,3)-type Riemann curvature tensor with respect to some vector field. Again, in 1992, by introducing the notion of curvature inheritance for the (1,3)-type curvature tensor, Duggal [17] generalized the concept of curvature collineation.

**Definition 2.11**.: _([17]) A semi-Riemannian manifold \(M\) admits curvature inheritance for the (1,3)-type curvature tensor \(\widetilde{R}\) if \(M\) satisfies_
\[\pounds_{\xi}\widetilde{R}=\lambda\widetilde{R}\]
_for a non-Killing vector field \(\xi\), where \(\lambda\) is a scalar function and the (1,3)-type curvature tensor \(\widetilde{R}\) is related to the (0,4)-type curvature tensor \(R\) by \(R(v_{1},v_{2},v_{3},v_{4})=g(\widetilde{R}(v_{1},v_{2})v_{3},v_{4})\). In particular, if \(\lambda=0\), then it turns into curvature collineation [66, 67] for the (1,3)-type curvature tensor \(\widetilde{R}\) (i.e., \(\pounds_{\xi}\widetilde{R}=0\))._

**Definition 2.12**.: _([17]) A semi-Riemannian manifold \(M\) realizes Ricci inheritance if for some vector field \(\xi\) and for some scalar function \(\lambda\), \(M\) possesses the relation_
\[\pounds_{\xi}S=\lambda S.\]
_Further, if \(\lambda=0\), it transforms into Ricci collineation (i.e., \(\pounds_{\xi}S=0\))._

Recently, generalizing the notion of curvature inheritance for the (0,4)-type curvature tensor \(R\), Shaikh and Datta [96] introduced the concept of generalized curvature inheritance for the (0,4)-type curvature tensor \(R\), which is given as follows:

**Definition 2.13**.: _([96]) A semi-Riemannian manifold \(M\) admits generalized curvature inheritance for the (0,4)-type curvature tensor \(R\) if there is a non-Killing vector field \(\xi\) which satisfies the relation_
\[\pounds_{\xi}R=\lambda R+\lambda_{1}g\wedge g+\lambda_{2}g\wedge S+\lambda_{3}S\wedge S,\]
_where \(\lambda,\lambda_{1},\lambda_{2},\lambda_{3}\) are scalar functions. In particular, if \(\lambda_{i}=0\) for \(i=1,2,3\), then \(M\) admits curvature inheritance for the (0,4)-type curvature tensor \(R\). Further, if \(\lambda=0=\lambda_{i}\) for \(i=1,2,3\), then it becomes curvature collineation for the (0,4)-type curvature tensor \(R\)._

## 3.
**Hayward black hole spacetime admitting geometric structures** In coordinates \((t,r,\theta,\phi)\), the metric tensor of HBH spacetime is given by: \[g=\left(\begin{array}{cccc}-(1-\frac{2mr^{2}}{r^{3}+2mb^{2}})&0&0&0\\ 0&(1-\frac{2mr^{2}}{r^{3}+2mb^{2}})^{-1}&0&0\\ 0&0&r^{2}&0\\ 0&0&0&r^{2}\sin^{2}\theta\end{array}\right).\] Now, the components of the metric \(g\) are \[g_{11}=-\left(1-\frac{2mr^{2}}{r^{3}+2mb^{2}}\right),\;g_{22}= \left(1-\frac{2mr^{2}}{r^{3}+2mb^{2}}\right)^{-1},\] \[g_{33}=r^{2},\;g_{44}=r^{2}\sin^{2}\theta,\;g_{ij}=0,\;\text{ otherwise}.\] Let \(B=2b^{2}m+r^{2}(r-2m)\), \(B_{1}=2b^{2}m+r^{3}\), \(B_{2}=4b^{2}m-r^{3}\), \(B_{3}=b^{2}m-r^{3}\) and \(B_{4}=10b^{2}m-r^{3}\). The non-vanishing components of the Christoffel symbols \((\Gamma_{ij}^{h})\) of 2nd kind are calculated as given below: \[\Gamma_{11}^{2}=-\frac{mrBB_{2}}{B_{1}^{3}},\;\;\Gamma_{12}^{1}=- \frac{mrB_{2}}{BB_{1}}=-\Gamma_{22}^{2},\] \[\Gamma_{23}^{3}=\frac{1}{r}=\Gamma_{24}^{4},\;\;\Gamma_{33}^{2}=- r+\frac{2mr^{3}}{B_{1}},\] \[\Gamma_{34}^{4}=\cot\theta,\;\;\Gamma_{44}^{2}=-\frac{rB\sin^{2} \theta}{B_{1}},\] \[\Gamma_{44}^{3}=-\cos\theta\sin\theta.\] The non-vanishing components of the Riemann-curvature \((R_{abcd})\) and Ricci tensor \((S_{ab})\) and the scalar curvature \(\kappa\) are calculated as given below: \[R_{1212}=-\frac{2m(2b^{2}m(2b^{2}m-7r^{3})+r^{6})}{B_{1}^{3}},\;\; R_{1313}=\frac{mr^{2}BB_{2}}{B_{1}^{3}}=\frac{1}{\sin^{2}\theta}R_{1414},\] \[R_{2323}=\frac{mr^{2}B_{2}}{BB_{1}}=\frac{1}{\sin^{2}\theta}R_{2 424},\;\;R_{3434}=\frac{2mr^{4}\sin^{2}\theta}{B_{1}};\] \[S_{11}=\frac{24b^{2}m^{2}BB_{3}}{B_{1}^{4}},\;\;S_{22}=\frac{24b^{2}m^{2}(-b^{2 }m+r^{3})}{BB_{1}^{2}},\] \[S_{33}=-\frac{12b^{2}m^{2}r^{2}}{B_{1}^{2}},\;\;S_{44}=\sin^{2} \theta S_{33};\] \[\kappa=\frac{24b^{2}m^{2}(r^{3}-4b^{2}m)}{(2b^{2}m+r^{3})^{3}}.\] From the above calculation, one can obtain the following: **Proposition 3.1**.: _The HBH spacetime is neither Einstein nor quasi-Einstein but \((i)\) it is \(2\)-quasi-Einstein for \(\alpha=-\frac{12b^{2}m^{2}}{B_{1}^{2}}\) and \((ii)\) for \(\alpha=-\frac{12b^{2}m^{2}}{B_{1}^{2}}\), \(\beta=1\), \(\gamma=1\), \(\Pi\)=\(\left\{-\frac{B}{B_{1}},1,0,0\right\}\) and \(\phi\)= \(\left\{\frac{36b^{2}m^{2}r^{3}+B_{1}^{2}B_{1}^{2}}{2B_{1}^{3}},\frac{18b^{2}m^{ 2}r^{3}}{B_{1}^{2}B_{1}^{2}}-\frac{1}{2},0,0\right\}\), it is generalized quasi-Einstein in the sense of Chaki._ Let \(\mathcal{K}^{1}=(g\wedge g)\), \(\mathcal{K}^{2}=(g\wedge S)\) and \(\mathcal{K}^{3}=(S\wedge S).\) Then the components other than zero of \(\mathcal{K}^{1}\), \(\mathcal{K}^{2}\) and \(\mathcal{K}^{3}\) are calculated as given below: \[\mathcal{K}^{1}_{1212}=2,\mathcal{K}^{1}_{1313}=\frac{2r^{2}B}{B_ {1}}=\frac{1}{\sin^{2}\theta}\mathcal{K}^{1}_{1414},\] \[\mathcal{K}^{1}_{2323}=-\frac{2r^{2}B_{1}}{B}=\frac{1}{\sin^{2} \theta}\mathcal{K}^{1}_{2424},\ \ \mathcal{K}^{1}_{3434}=-2r^{4}\sin^{2}\theta;\] \[\mathcal{K}^{2}_{1212}=-\frac{48b^{2}m^{2}B_{2}}{B_{1}^{3}},\ \ \mathcal{K}^{2}_{1313}=-\frac{12b^{2}m^{2}r^{2}BB_{2}}{B_{1}^{2}}=\frac{1}{\sin ^{2}\theta}\mathcal{K}^{2}_{1414},\] \[\mathcal{K}^{2}_{2323}=\frac{12b^{2}m^{2}r^{2}B_{2}}{BB_{1}^{2}}= \frac{1}{\sin^{2}\theta}\mathcal{K}^{2}_{2424},\ \ \mathcal{K}^{2}_{3434}=\frac{24b^{2}m^{2}r^{4}\sin^{2}\theta}{B_{1}^{2}};\] \[\mathcal{K}^{3}_{1212}=\frac{1152b^{4}m^{4}B_{3}^{2}}{B_{1}^{6}},\ \ \mathcal{K}^{3}_{1313}=\frac{576b^{4}m^{4}r^{2}B_{3}B}{B_{1}^{6}}=\frac{1}{\sin ^{2}\theta}\mathcal{K}^{3}_{1414},\] 
\[\mathcal{K}^{3}_{2323}=-\frac{576b^{4}m^{4}r^{2}B_{3}}{B_{1}^{4}B }=\frac{1}{\sin^{2}\theta}\mathcal{K}^{3}_{2424},\ \ \mathcal{K}^{3}_{3434}=-\frac{288b^{4}m^{4}r^{4}\sin^{2}\theta}{B_{1}^{4}}.\] From the above calculation, it follows that \(S\wedge S\), \(g\wedge S\), \(g\wedge g\), and \(R\) are linearly dependent in HBH spacetime, and hence the Riemann tensor \(R\) can be explicitly given as follows: \[R=\varsigma_{1}\mathcal{K}^{1}+\varsigma_{2}\mathcal{K}^{2}+ \varsigma_{3}\mathcal{K}^{3} \tag{3.1}\] where \(\varsigma_{1}=m(\frac{2}{3r^{3}}-\frac{1}{B_{1}})\), \(\varsigma_{2}=\frac{1}{36}(10+\frac{16b^{2}m}{r^{3}}+\frac{r^{3}}{b^{2}m})\) and \(\varsigma_{3}=\frac{B_{2}B_{1}^{3}}{432b^{4}m^{3}r^{3}}.\) On contraction the relation (3.1) entails \[S^{2}+\vartheta_{1}S+\vartheta_{2}g=0 \tag{3.2}\] where \(\vartheta_{1}=\frac{12b^{2}m^{2}B_{2}}{B_{1}^{3}}\) and \(\vartheta_{2}=\frac{288b^{4}m^{4}B_{3}}{B_{1}^{5}}.\) From the relation (3.1) and (3.2), we can state the following: **Proposition 3.2**.: _The HBH spacetime is neither Ein\((3)\) nor generalized Roter type but it fulfills \((i)\) Roter type and \((ii)\) Einstein manifold of level \(2.\)_ The non-vanishing components \(C_{abcd}\) of the conformal curvature tensor \(C\) (upto symmetry) are calculated and given as below: \[C_{1212}=\frac{2mr^{3}B_{2}}{B_{1}^{3}},\ \ C_{1313}=-\frac{mr^{5}B_{ 2}B}{B_{1}^{4}}=\frac{1}{\sin^{2}\theta}C_{1414},\] \[C_{2323}=\frac{mr^{5}B_{2}}{B_{1}^{2}B}=\frac{1}{\sin^{2}\theta }C_{2424},\ \ C_{3434}=-\frac{2mr^{7}B_{2}\sin^{2}\theta}{B_{1}^{3}}.\] If \(\mathscr{D}_{abcd,f}=\nabla_{f}R_{abcd}\) and \(\mathscr{F}_{abcd,f}=\nabla_{f}C_{abcd},\) then the components other than zero of \(\nabla R\) and \(\nabla C\) are obtained as follows: \[\mathscr{D}_{1212,2}=\frac{6m(40b^{4}m^{2}r^{2}-32b^{2}mr^{5}+r^ {8})}{B_{1}^{4}},\ \ \mathscr{D}_{1213,3}=\frac{3mr^{4}B_{4}B}{B_{1}^{4}}=\mathscr{D}_{1313,2},\] \[\mathscr{D}_{1214,4}=-\frac{3mr^{4}B_{4}B\sin^{2}\theta}{B_{1}^{ 2}}=\mathscr{D}_{1414,2},\ \ \mathscr{D}_{2323,2}=-\frac{3mr^{4}B_{4}}{B_{1}^{4}B}=\frac{1}{\sin^{2}\theta} \mathscr{D}_{2424,2},\] \[\mathscr{D}_{2334,4}=\frac{3mr^{6}\sin^{2}\theta}{B_{1}^{2}}=- \mathscr{D}_{2434,3}=-\frac{1}{2}\mathscr{D}_{3434,2};\] \[\mathscr{F}_{1212,2}=\frac{6mr^{2}(8b^{4}m^{2}-12b^{2}mr^{3}+r^{6})}{B_{1}^{4}}=- \frac{1}{r^{4}\sin^{2}\theta}\mathscr{F}_{3434,2},\] \[\mathscr{F}_{1213,3}=-\frac{3mr^{4}BB_{2}}{B_{1}^{4}}=\frac{1}{\sin^{2}\theta} \mathscr{F}_{1214,4},\ \ \mathscr{F}_{1313,2}=-\frac{3mr^{4}(8b^{4}m^{2}-12b^{2}mr^{3}+r^{6})B}{B_{1}^{ 5}}=\frac{1}{\sin^{2}\theta}\mathscr{F}_{1414,2},\] \[\mathscr{F}_{2323,2}=\frac{3mr^{4}(8b^{4}m^{2}-12b^{2}mr^{3}+r^{6})}{B_{1}^{3}B }=\frac{1}{\sin^{2}\theta}\mathscr{F}_{242,2},\ \ \mathscr{F}_{2334,4}=-\frac{3mr^{6}B_{2}\sin^{2}\theta}{B_{1}^{3}}=- \mathscr{F}_{2434,3}.\] From the above components we get the following proposition: **Proposition 3.3**.: _The HBH spacetime is not conformally recurrent but its \((i)\) conformal \(2\)-form are recurrent for the \(1\)-forms \(\{0,-\frac{6b^{2}m(8b^{2}m-5r^{3})}{8b^{4}m^{2}r+2b^{2}mr^{4}-r^{7}},0,0\}\) and \((ii)\) the general form of \(R\)-compatible tensor and \(C\)-compatible tensor are given by_ \[\left(\begin{array}{cccc}\mathscr{Z}_{11}&\mathscr{Z}_{12}&0&0\\ \mathscr{Z}_{12}&\mathscr{Z}_{22}&0&0\\ 0&0&\mathscr{Z}_{33}&\mathscr{Z}_{34}\\ 0&0&\mathscr{Z}_{34}&\mathscr{Z}_{44}\end{array}\right)\] _where \(\mathscr{Z}_{ij}\) are arbitrary scalars._ Let \(\mathcal{M}^{1}=R\cdot R\), \(\mathcal{M}^{2}=R\cdot C\), 
\(\mathcal{M}^{3}=C\cdot R\), \(\mathscr{P}^{1}=Q(g,R)\), \(\mathscr{P}^{2}=Q(S,R)\), \(\mathscr{P}^{3}=Q(g,C)\) and \(\mathscr{P}^{4}=Q(S,C)\). Then the components of \(\mathcal{M}^{1}\), \(\mathcal{M}^{2}\), \(\mathcal{M}^{3}\), \(\mathscr{P}^{1}\), \(\mathscr{P}^{2}\), \(\mathscr{P}^{3}\) and \(\mathscr{P}^{4}\), which do not vanish, are given upto symmetry as follows: \[\mathcal{M}^{1}_{1223,13}=-\frac{3m^{2}r^{5}(40b^{4}m^{2}-14b^{2} mr^{3}+r^{6})}{B_{1}^{5}}=-\mathcal{M}^{1}_{1213,23},\ \ \mathcal{M}^{1}_{1434,13}=-\frac{3m^{2}r^{7}BB_{2}\sin^{2}\theta}{B_{1}^{5}}=- \mathcal{M}^{1}_{1334,14},\] \[\mathcal{M}^{1}_{1224,14}=-\frac{3m^{2}r^{5}(40b^{4}m^{2}-14b^{2} mr^{3}+r^{6})\sin^{2}\theta}{B_{1}^{5}}=-\mathcal{M}^{1}_{1214,24},\ \ \mathcal{M}^{1}_{2434,23}=-\frac{3m^{2}r^{7}B_{2}\sin^{2}\theta}{B_{1}^{3}B}=- \mathcal{M}^{1}_{2334,24};\] \[\mathcal{M}^{2}_{1223,13}=-\frac{3m^{2}r^{5}B_{2}^{2}}{B_{1}^{5}}=-\mathcal{M} ^{2}_{1213,23},\ \ \mathcal{M}^{2}_{1434,13}=\frac{3m^{2}r^{7}BB_{2}\sin^{2}\theta}{B_{1}^{5}}=- \mathcal{M}^{2}_{1334,14},\] \[\mathcal{M}^{2}_{1224,14}=-\frac{3m^{2}r^{5}B_{2}^{2}\sin^{2}\theta}{B_{1}^{5}} =-\mathcal{M}^{2}_{1214,24},\ \ \mathcal{M}^{2}_{2434,23}=\frac{3m^{2}r^{7}B_{2}^{2}\sin^{2}\theta}{B_{1}^{5}}=- \mathcal{M}^{2}_{2334,24};\] \[\mathcal{M}^{3}_{1223,13}=-\frac{3m^{2}r^{8}(40b^{4}m^{2}-14b^{2} mr^{3}+r^{6})}{B_{1}^{5}}=-\mathcal{M}^{3}_{1213,23},\] \[\mathcal{M}^{3}_{1434,13}=-\frac{3m^{2}r^{10}BB_{2}\sin^{2}\theta }{B_{1}^{5}}=-\mathcal{M}^{3}_{1334,14},\] \[\mathcal{M}^{3}_{1224,14}=-\frac{3m^{2}r^{8}(40b^{4}m^{2}-14b^{2} m^{3}+r^{6})\sin^{2}\theta}{B_{1}^{5}}=-\mathcal{M}^{3}_{1214,24},\] \[\mathcal{M}^{3}_{2434,23}=-\frac{3m^{2}r^{10}B_{2}\sin^{2}\theta }{B_{1}^{4}B}=-\mathcal{M}^{3}_{2334,24};\] \[\mathscr{P}^{1}_{1223,13}=\frac{3mr^{5}B_{4}}{B_{1}^{3}}=-\mathscr{P}^{1}_{1213,23},\ \ \mathcal{P}^{1}_{1434,13}=\frac{3mr^{7}B\sin^{2}\theta}{B_{1}^{3}}=- \mathscr{P}^{1}_{1334,14},\] \[\mathscr{P}^{1}_{1224,14}=\frac{3mr^{5}B_{4}\sin^{2}\theta}{B_{1}^{3}}=- \mathscr{P}^{1}_{1214,24},\ \ \mathcal{P}^{1}_{2434,23}=\frac{3mr^{7}\sin^{2}\theta}{B_{1}B}=- \mathscr{P}^{1}_{2334,24};\] \[\mathscr{P}^{2}_{1223,13}=-\frac{216b^{4}m^{4}r^{5}}{B_{1}^{5}}=- \mathscr{P}^{2}_{1213,23},\ \ \mathscr{P}^{2}_{1434,13}=-\frac{36b^{2}m^{3}r^{7}B\sin^{2}\theta}{B_{1}^{5}}=- \mathscr{P}^{2}_{1334,14},\] \[\mathscr{P}^{2}_{1224,14}=-\frac{216b^{4}m^{4}r^{5}\sin^{2}\theta} {B_{1}^{5}}=-\mathscr{P}^{2}_{1214,24},\ \ \mathscr{P}^{2}_{2434,23}=\frac{36b^{2}m^{3}r^{7}\sin^{2}\theta}{B_{1}^{3}B}=- \mathscr{P}^{2}_{2334,24};\] \[\mathscr{P}^{3}_{1223,13}=\frac{3mr^{5}B_{2}}{B_{1}^{3}}=-\mathscr{P }^{3}_{1213,23},\ \ \mathscr{P}^{3}_{1434,13}=-\frac{3mr^{7}B_{2}\sin^{2}\theta}{B_{1}^{4}}=- \mathscr{P}^{3}_{1334,14},\] \[\mathscr{P}^{3}_{1224,14}=\frac{3mr^{5}B_{2}\sin^{2}\theta}{B_{1}^ {3}}=-\mathscr{P}^{3}_{1214,24},\ \ \mathscr{P}^{3}_{2434,23}=-\frac{3mr^{7}B_{2}\sin^{2}\theta}{B_{1}^{ 3}B}=-\mathscr{P}^{3}_{2334,24};\] \[\mathscr{P}^{4}_{1223,13}=-\frac{72b^{4}m^{4}r^{5}B_{2}}{B_{1}^{6}}= -\mathscr{P}^{4}_{1213,23},\] \[\mathscr{P}^{4}_{1434,13}=-\frac{36b^{2}m^{3}r^{7}(2b^{2}m^{-r^{3} })B_{2}B\sin^{2}\theta}{B_{1}^{5}}=-\mathscr{P}^{4}_{1334,14},\] \[\mathscr{P}^{4}_{1224,14}=-\frac{72b^{4}m^{4}r^{5}B_{2}\sin^{2} \theta}{B_{1}^{6}}=-\mathscr{P}^{4}_{1214,24},\] \[\mathscr{P}^{4}_{2434,23}=\frac{36b^{2}m^{3}r^{7}(8b^{4}m^{2-6}b^{ 2}mr^{3}+r^{6})\sin^{2}\theta}{B_{1}^{5}B}=-\mathscr{P}^{4}_{2334,24}.\] From the above components we get the following: **Proposition 3.4**.: _The 
HBH spacetime is not Ricci generalized pseudosymmetric but it is pseudosymmetric and realizes several pseudosymmetric type curvature relations:_ 1. \(R\cdot R=-\frac{mB_{2}}{B_{1}^{2}}Q(g,R)\) _and hence_ \(R\cdot C=-\frac{mB_{2}}{B_{1}^{2}}Q(g,C)\)_,_ 2. \(C\cdot R=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,R)\) _and hence_ \(C\cdot C=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,C),\)__ 3. \(R\cdot R-\frac{m(16b^{2}m-r^{3})}{B_{2}B_{1}}Q(g,C)=Q(S,R),\)__ 4. \(C\cdot R-R\cdot C=\mathcal{L}_{1}\ Q(g,R)+\mathcal{L}_{2}\ Q(S,R),\) _where_ \(\mathcal{L}_{1}=-\frac{8b^{2}m^{2}B_{2}}{(16b^{2}m-r^{3})B_{1}^{2}}\) _and_ \(\mathcal{L}_{2}=\frac{16b^{4}m^{2}-8b^{2}mr^{3}+r^{6}}{(r^{3}-16b^{2}m)B_{1}},\)__ 5. \(C\cdot R-R\cdot C=\mathcal{L}_{3}\ Q(g,C)+\mathcal{L}_{4}\ Q(S,C),\) _where_ \(\mathcal{L}_{3}=\frac{8b^{2}m^{2}B_{2}}{B_{1}^{3}}\) _and_ \(\mathcal{L}_{4}=1\)_._ The non-vanishing components \(P_{abcd}\) of the projective curvature tensor \(P\) (upto symmetry) of the HBH spacetime are calculated as follows: \[\begin{array}{ll}P_{1212}=\frac{2mr^{3}B_{4}}{B_{1}^{3}}=-P_{1221},\ \ P_{1313}=-\frac{mr^{5}B_{4}B}{B_{1}^{4}}=\frac{1}{\sin^{2}\theta}P_{1414}, \\ P_{1331}=-\frac{mr^{5}B}{B_{1}^{4}}=\frac{1}{\sin^{2}\theta}P_{1441},\ \ P_{2323}=\frac{m^{5}B_{4}}{B_{1}^{4}B}=\frac{1}{\sin^{2}\theta}P_{2424}, \\ P_{2332}=\frac{mr^{5}}{B_{1}B}=\frac{1}{\sin^{2}\theta}P_{2442},\ \ P_{3434}=\frac{2mr^{7}\sin^{2}\theta}{B_{1}^{2}}=-P_{3443}.\end{array}\] Let \(\mathcal{M}^{4}=P\cdot S\) and \(\mathscr{P}^{5}=Q(g,S)\). Then the non-vanishing components of the tensor \(\mathcal{M}^{4}\) and \(\mathscr{P}^{5}\) are obtained as follows: \[\begin{array}{ll}\mathcal{M}^{4}_{13,13}=-\frac{36b^{2}m^{3}r^{5}B_{2}B}{B_{ 1}^{6}}=-\mathcal{M}^{4}_{13,31},&\mathcal{M}^{4}_{14,14}=-\frac{36b^{2}m^{3}r^ {5}B_{2}B\sin^{2}\theta}{B_{1}^{6}}=-\mathcal{M}^{4}_{14,41},\\ \mathcal{M}^{4}_{23,23}=\frac{36b^{2}m^{3}r^{5}B_{2}}{BB_{1}^{4}}=-\mathcal{M} ^{4}_{23,32},&\mathcal{M}^{4}_{24,24}=\frac{36b^{2}m^{3}r^{5}B_{2}\sin^{2} \theta}{BB_{1}^{4}}=-\mathcal{M}^{4}_{24,42};\end{array}\] \[\mathscr{P}^{5}_{1313}=\frac{36b^{2}m^{2}r^{5}B}{B_{1}^{4}}=\frac{1}{\sin^{2} \theta}\mathscr{P}^{5}_{1414},\ \ \ \mathscr{P}^{5}_{2323}=-\frac{36b^{2}m^{2}r^{5}}{B_{1}^{2}B}=\frac{1}{\sin^{2} \theta}\mathscr{P}^{5}_{2424}.\.\] From the above components we get the following: **Proposition 3.5**.: _The HBH spacetime fulfills the curvature conditions \((i)\)\(R\cdot P=-\frac{mB_{2}}{B_{1}^{2}}Q(g,P)\)\((ii)\)\(P\cdot S=-\frac{mB_{2}}{B_{1}^{2}}Q(g,S)\) and \((iii)\) the general form of \(P\)-compatible tensor is given by_ \[\left(\begin{array}{cccc}\mathscr{Z}_{11}&\mathscr{Z}_{12}&0&0\\ \mathscr{Z}_{12}&\mathscr{Z}_{22}&0&0\\ 0&0&\mathscr{Z}_{33}&\mathscr{Z}_{34}\\ 0&0&\mathscr{Z}_{34}&\mathscr{Z}_{44}\end{array}\right)\] _where \(\mathscr{Z}_{ij}\) are arbitrary scalars._ The non-vanishing components of the concircular curvature tensor \(W\) and conharmonic curvature tensor \(K\) (upto symmetry) are computed as follows: \[\begin{array}{l}W_{1212}=\frac{2mr^{3}(13b^{2}m-r^{3})}{B_{1}^{3}},\ \ W_{1313}=-\frac{mr^{5}B_{2}B}{B_{1}^{4}}=\frac{1}{\sin^{2}\theta}W_{1414}, \\ W_{2323}=\frac{mr^{5}B_{2}}{B_{1}^{2}B}=\frac{1}{\sin^{2}\theta}W_{2424},\ \ W_{3434}= \frac{2mr^{7}(5b^{2}m+r^{3})\sin^{2}\theta}{B_{1}^{4}};\\ \\ K_{1212}=\frac{2mB_{2}}{B_{1}^{4}},\ \ K_{1313}=-\frac{mr^{2}B_{2}^{2}B}{B_{1}^{ 2}}=\frac{1}{\sin^{2}\theta}K_{1414},\\ K_{2323}=\frac{mr^{2}B_{2}^{2}}{B_{1}^{2}B}=\frac{1}{\sin^{2}\theta}K_{2424}, \ \ 
K_{3434}=-\frac{2mr^{4}B_{2}\sin^{2}\theta}{B_{1}^{2}}.\end{array}.\] If \(\mathcal{M}^{5}=W\cdot R\) and \(\mathcal{M}^{6}=K\cdot R\), then the non-vanishing components of the tensors \(\mathcal{M}^{5}\) and \(\mathcal{M}^{6}\) are given as below: \[\begin{array}{l}\mathcal{M}^{5}_{1223,13}=-\frac{3m^{2}r^{8}(40b^{4}m^{2}-1 4b^{2}mr^{3}+r^{6})}{B_{1}^{6}}=-\mathcal{M}^{5}_{1213,23},\\ \mathcal{M}^{5}_{1434,13}=-\frac{3m^{2}r^{10}BB_{2}\sin^{2}\theta}{B_{1}^{6}} =-\mathcal{M}^{5}_{1334,14},\\ \mathcal{M}^{5}_{1224,14}=-\frac{3m^{2}r^{8}(40b^{4}m^{2}-14b^{2}mr^{3}+r^{6}) \sin^{2}\theta}{B_{1}^{6}}=-\mathcal{M}^{5}_{1214,24},\\ \mathcal{M}^{5}_{2434,23}=-\frac{3m^{2}r^{10}B_{2}\sin^{2}\theta}{B_{1}^{1}B}=- \mathcal{M}^{5}_{2334,24}\end{array}\] \[\begin{array}{l}\mathcal{M}^{6}_{1223,13}=\frac{3m^{2}r^{5}B_{4}B_{2}^{2}}{B _{1}^{6}}=-\mathcal{M}^{6}_{1213,23},\ \ \mathcal{M}^{6}_{1434,13}=-\frac{3m^{2}r^{7}B_{2}^{2}B\sin^{2}\theta}{B_{1}^{6}} =-\mathcal{M}^{6}_{1334,14},\\ \mathcal{M}^{6}_{1224,14}=\frac{3m^{2}r^{5}B_{4}B_{2}^{2}\sin^{2}\theta}{B_{1} ^{6}}=-\mathcal{M}^{6}_{1214,24},\ \ \mathcal{M}^{6}_{2434,23}=\frac{3m^{2}r^{7}B_{2}^{2}\sin^{2}\theta}{B_{1}^{4}B} =-\mathcal{M}^{6}_{2334,24}.\end{array}\] From the above components we get the following: **Proposition 3.6**.: _The HBH spacetime fulfills the curvature conditions_ \[(i)\ W\cdot R=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,R),\] \((ii)\)\(K\cdot R=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,R)\) and \((iii)\) the general form of \(W\)and \(K\)-compatible tensor are given by_ \[\left(\begin{array}{cccc}\mathscr{Z}_{11}&\mathscr{Z}_{12}&0&0\\ \mathscr{Z}_{12}&\mathscr{Z}_{22}&0&0\\ 0&0&\mathscr{Z}_{33}&\mathscr{Z}_{34}\\ 0&0&\mathscr{Z}_{34}&\mathscr{Z}_{44}\end{array}\right)\] _where \(\mathscr{Z}_{ij}\) are arbitrary scalars._ From the above propositions (3.1)-(3.6), we can conclude the curvature restricted geometric properties of HBH spacetime as follows: **Theorem 3.1**.: _The HBH spacetime admits the following curvature properties:_ 1. \(R\cdot R=-\frac{mB_{2}}{B_{1}^{2}}Q(g,R).\) _Hence_ \(R\cdot S=-\frac{mB_{2}}{B_{1}^{2}}Q(g,S)\)_,_ \(R\cdot C=-\frac{mB_{2}}{B_{1}^{2}}Q(g,C)\)_,_ \(R\cdot P=-\frac{mB_{2}}{B_{1}^{2}}Q(g,P)\)_,_ \(R\cdot W=-\frac{mB_{2}}{B_{1}^{2}}Q(g,W)\) _and_ \(R\cdot K=-\frac{mB_{2}}{B_{1}^{2}}Q(g,K)\)_;_ 2. \(C\cdot R=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,R).\) _Hence_ \(C\cdot S=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,S)\)_,_ \(C\cdot C=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,C)\)_,_ \(C\cdot P=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,P)\)_,_ \(C\cdot W=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,W)\) _and_ \(C\cdot K=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,K)\)_;_ 3. \(W\cdot R=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,R).\) _Hence_ \(W\cdot S=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,S)\)_,_ \(W\cdot C=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,C)\)_,_ \(W\cdot P=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,P)\)_,_ \(W\cdot W=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,W)\) _and_ \(W\cdot K=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,K)\)_;_ 4. \(K\cdot R=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,R)\) _. Hence_ \(K\cdot S=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,S)\)_,_ \(K\cdot C=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,C)\)_,_ \(K\cdot P=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,P)\)_,_ \(K\cdot W=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,W)\) _and_ \(K\cdot K=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,K)\)_;_ 5. _it satisfies the pseudosymmetric type curvature conditions_ \(R\cdot R-\mathcal{L}Q(g,C)=Q(S,R)\)_, where_ \(\mathcal{L}=\frac{m(16b^{2}m-r^{3})}{B_{2}B_{1}}\)_;_ 6. 
_the tensor_ \(C\cdot R-R\cdot C\) _depends linearly on the tensors_ \(Q(g,C)\)_,_ \(Q(S,C)\)_,_ \(Q(g,R)\) _and_ \(Q(S,R)\)_,_ 7. _it is Ricci pseudosymmetric due to projective curvature i.e.,_ \(P\cdot S=-\frac{mB_{2}}{B_{1}^{2}}Q(g,S)\) _is satisfied,_ 8. _its conformal_ \(2\)_-forms are recurrent for the for_ \(1\)_-forms_ \(\{0,-\frac{6b^{2}m(8b^{2}m-5r^{3})}{8b^{4}m^{2}r+2b^{2}mr^{4}-r^{7}},0,0\},\)__ 9. _it is a Roter type spacetime,_ 10. _it is_ \(\text{Ein}(2)\) _spacetime as it possesses_ \(S^{2}+\psi_{1}S+\psi_{2}g=0\) _for_ \(\psi_{1}=\frac{12b^{2}m^{2}B_{2}}{B_{1}^{3}}\) _and_ \(\psi_{2}=\frac{288b^{4}m^{4}B_{3}}{B_{1}^{5}},\)__ 11. _it is_ \(2\)_-quasi-Einstein as_ \(\alpha=-\frac{12b^{2}m^{2}}{B_{1}^{2}},\)__ _;_ * _for_ \(\alpha=-\frac{12b^{2}m^{2}}{B_{1}^{2}}\)_,_ \(\beta=1\)_,_ \(\gamma=1\)_,_ \(\phi\)_=_ \(\left\{\frac{36b^{2}m^{2}r^{3}+B_{1}^{2}B}{2B_{1}^{3}},\frac{18b^{2}m^{2}r^{3}} {B_{1}^{2}B}-\frac{1}{2},0,0\right\}\) _and_ \(\Pi\)_=_\(\left\{-\frac{B}{B_{1}},1,0,0\right\}\)_, the HBH spacetime is generalized quasi-Einstein in the sense of Chaki,_ * _the general form of_ \(R\)_,_ \(C\)_,_ \(P\)_,_ \(W\) _and_ \(K\)_-compatible tensors in HBH spacetime are given by_ \[\left(\begin{array}{cccc}\mathscr{Z}_{11}&\mathscr{Z}_{12}&0&0\\ \mathscr{Z}_{12}&\mathscr{Z}_{22}&0&0\\ 0&0&\mathscr{Z}_{33}&\mathscr{Z}_{34}\\ 0&0&\mathscr{Z}_{34}&\mathscr{Z}_{44}\end{array}\right)\] _where_ \(\mathscr{Z}_{ij}\) _are arbitrary scalars,_ * _its Ricci tensor is compatible for_ \(C\)_,_ \(P\)_,_ \(R\)_,_ \(K\) _and_ \(W\)_._ **Remark 3.1**.: The HBH spacetime does not admit the following geometric structures: * \(\nabla P\neq 0\) and hence \(\nabla R\neq 0\), \(\nabla C\neq 0\), \(\nabla K\neq 0\) and \(\nabla W\neq 0\), * for any 1-form \(\Pi\), \(\nabla P\neq\Pi\otimes P\) and hence it is not recurrent for \(P\), \(R\), \(W\), \(K\) and \(C\), * it does not satisfy the semi-symmetric type condition \(R\cdot H=0\) where \(H=P,K,W,C,S\), * it is not Ricci generalized pseudosymmetric, * it does not realize \(P\cdot R=\mathcal{L}Q(g,R)\) for any smooth function \(\mathcal{L}\). Hence it is neither \(P\cdot W=\mathcal{L}Q(g,W)\), \(P\cdot K=\mathcal{L}Q(g,K)\) nor \(P\cdot C=\mathcal{L}Q(g,C)\), * it is not \(T\)-space by Venzi for \(T=C,R,P,W,K\), * it is neither Einstein nor quasi-Einstein, * the curvature 2-forms for \(R\), \(K\), \(W\) and \(P\) are not recurrent, * the Ricci tensor of HBH spacetime is neither cyclic parallel nor Codazzi type, * the HBH spacetime is neither weakly symmetric nor Chaki pseudosymmetric for \(P\), \(W\), \(K\), \(R\) and \(C\). ## 4. **Energy momentum tensor of Hayward black hole spacetime** In Einstein field equation (briefly, EFE), the energy momentum tensor \(T^{EM}\) in terms of curvature restrictions is presented as \(T^{EM}=\frac{1}{\nu}[S-\frac{\kappa}{2}g+\Lambda g]\), where \(\Lambda\) is the cosmological constant, \(\nu=\frac{8\pi G}{c^{4}}\) (\(G\) being the Newton's gravitational constant and c being the speed of light in vacuum). 
The components other than zero of the energy momentum tensor \(T^{EM}_{ab}\) are given below: \[T^{EM}_{11}=-\frac{3b^{2}m^{2}P^{2}B}{2B_{1}^{3}},\ \ T^{EM}_{22}=-\frac{3b^{2}m^{2}}{2B_{1}B},\] \[T^{EM}_{33}=\frac{3b^{2}m^{2}r^{2}B_{3}}{B_{1}^{3}}=\frac{1}{\sin^ {2}\theta}T^{EM}_{44}.\] The non-vanishing components of the covariant derivative of energy momentum tensor are calculated as follows: \[T^{EM}_{11,2}=\frac{9b^{2}m^{2}r^{2}B}{B_{1}^{4}},\ \ T^{EM}_{22,2}=-\frac{9b^{2}m^{2}r^{2}}{B _{1}^{2}B},\] \[T^{EM}_{23,3}=\frac{9b^{2}m^{2}r^{4}}{2B_{1}^{3}}=\frac{1}{\sin^ {2}\theta}T^{EM}_{24,4},\] \[T^{EM}_{33,2}=\frac{9b^{2}m^{2}r^{4}(-5b^{2}m+2r^{3})}{B_{1}^{4} }=\frac{1}{\sin^{2}\theta}T^{EM}_{44,2}.\] Let \(\mathcal{V}^{1}=R\cdot T^{EM}\), \(\mathcal{V}^{2}=C\cdot T^{EM}\), \(\mathcal{V}^{3}=W\cdot T^{EM}\), \(\mathcal{V}^{4}=K\cdot T^{EM}\), and \(\mathcal{U}^{1}=Q(g,T^{EM})\). Then the non-vanishing components of the tensors \(\mathcal{V}^{1}\), \(\mathcal{V}^{2}\), \(\mathcal{V}^{3}\), \(\mathcal{V}^{4}\) and \(\mathcal{U}^{1}\) are obtained as follows: \[\mathcal{V}^{1}_{1313}=-\frac{9b^{2}m^{3}r^{5}B_{2}B}{2B_{1}^{4}}=\frac{1}{\sin ^{2}\theta}\mathcal{V}^{1}_{1414},\ \ \mathcal{V}^{1}_{2323}=\frac{9b^{2}m^{3}r^{5}B_{2}}{2B_{1}^{4}B}=\frac{1}{\sin^ {2}\theta}\mathcal{V}^{1}_{2424};\] \[\mathcal{V}^{2}_{1313}=-\frac{9b^{2}m^{3}r^{8}B_{2}B}{2B_{1}^{4}}=\frac{1}{\sin ^{2}\theta}\mathcal{V}^{2}_{1414},\ \ \mathcal{V}^{2}_{2323}=\frac{9b^{2}m^{3}r^{8}B_{2}}{2B_{1}^{5}B}=\frac{1}{\sin^ {2}\theta}\mathcal{V}^{2}_{2424};\] \[\mathcal{V}^{3}_{1313}=-\frac{9b^{2}m^{3}r^{8}B_{2}B}{2B_{1}^{4}}=\frac{1}{\sin ^{2}\theta}\mathcal{V}^{3}_{1414},\ \ \mathcal{V}^{3}_{2323}=\frac{9b^{2}m^{3}r^{8}B_{2}}{2B_{1}^{5}B}=\frac{1}{\sin^ {2}\theta}\mathcal{V}^{3}_{2424};\] \[\mathcal{V}^{4}_{1313}=-\frac{9b^{2}m^{3}r^{5}B_{2}^{2}B}{2B_{1}^{4}}=\frac{1}{ \sin^{2}\theta}\mathcal{U}^{4}_{1414},\ \ \mathcal{V}^{4}_{2323}=-\frac{9b^{2}m^{3}r^{5}B_{2}^{2}}{2B_{1}^{5}B}=\frac{1}{ \sin^{2}\theta}\mathcal{V}^{4}_{2424};\] \[\mathcal{U}^{1}_{1313}=\frac{9b^{2}m^{2}r^{5}B}{2B_{1}^{4}}=\frac{1}{\sin^{2} \theta}\mathcal{U}^{1}_{1414},\ \ \mathcal{U}^{1}_{2323}=-\frac{9b^{2}m^{2}r^{5}}{2B_{1}^{2}B}=\frac{1}{\sin^{2} \theta}\mathcal{U}^{1}_{2424}.\] From the above components we get the following theorem: **Theorem 4.1**.: _The energy momentum tensor of the HBH spacetime admits the following geometric properties:_ 1. \(R\cdot T^{EM}=-\frac{mB_{2}}{B_{1}^{2}}Q(g,T^{EM})\) _i.e., the nature of the energy momentum tensor is pseudosymmetric,_ 2. \(C\cdot T^{EM}=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,T^{EM})\)_,_ 3. \(W\cdot T^{EM}=-\frac{mr^{3}B_{2}}{B_{1}^{3}}Q(g,T^{EM}),\)__ 4. \(K\cdot T^{EM}=\frac{mB_{2}^{2}}{B_{1}^{3}}Q(g,T^{EM})\) _and_ 5. _the energy momentum tensor is Riemann compatible, projective compatible, conharmonic compatible, concircular compatible and conformal compatible._ . **Hayward black hole spacetime Vs interior black hole spacetime and Reissner-Nordstrom spacetime** The interior black hole spacetime [49, 98] is a spherically symmetric non-static solution of EFE. Physically, it describes the empty spacetime in the exterior region of a black hole. A comparison between HBH spacetime and interior black hole spacetime in terms of their curvature properties is delineated as follows: **Similarities:** 1. both the spacetimes are pseudosymmetric, 2. both the spacetimes are pseudosymmetric due to conharmonic, concircular as well as conformal curvature, 3. 
both the spacetimes are Einstein manifolds of level 2 and 2-quasi Einstein manifolds, 4. both the spacetimes are Roter type, 5. Ricci tensor is Riemann compatible as well as Weyl compatible. Again, the exterior gravitational field of a non-rotating charged body can be described by Reissner-Nordstrom spacetime [68], which is a spherically symmetric solution of EFE having cosmological constant zero. This solution is more general than the Schwarzschild solution of EFE as the Reissner-Nordstrom solution admits non-vanishing charges. An elegant comparison between HBH spacetime and Reissner-Nordstrom spacetime based on the curvature properties is described as follows: **Dissimilarities:** 1. the conharmonic 2-forms of Reissner-Nordstrom spacetime are recurrent while HBH spacetime does not admit such recurrence, 2. HBH spacetime does not vanish scalar curvature while for the Reissner-Nordstrom spacetime the scalar curvature vanishes. However, the HBH spacetime and the Reissner-Nordstrom spacetime have the following similar properties: 1. both spacetimes are Roter type, 2. both the spacetimes are Einstein manifolds of level 2, 3. both are pseudosymmetric as well as pseudosymmetric due to Weyl conformal tensor, 4. conformal 2-forms for both the spacetimes are recurrent, 5. both are 2-quasi-Einstein manifold, 6. Ricci tensor of both the spacetimes are Riemann compatible as well as Weyl compatible. ## 6. **Ricci soliton and symmetries on Hayward black hole spacetime** Let \(\mathcal{K}(M)\) be the set of all Killing vector fields on \(M\). Then \(\mathcal{K}(M)\) is a Lie subalgebra of the Lie algebra \(\chi(M)\) of all smooth vector fields on \(M\) and \(\mathcal{K}(M)\) contains at most \(n(n+1)/2\) linearly independent Killing vector fields, and if \(\mathcal{K}(M)\) consists of exactly \(n(n+1)/2\) linearly independent Killing vector fields, then \(M\) is known as a maximally symmetric space. We mention that \(M\) is a maximally symmetric space if \(M\) is of constant scalar curvature. We note that the scalar curvature \(\kappa\) of HBH spacetime is not constant as shown in Section 3 by \(\kappa=\frac{24b^{2}m^{2}(r^{3}-4b^{2}m)}{(2b^{2}m+r^{3})^{3}}\) and hence it is not maximally symmetric. 
Now, we investigate some Killing and non-Killing vector fields on HBH spacetime given as follows: **Proposition 6.1**.: _The vector fields \(\frac{\partial}{\partial t}\) and \(\frac{\partial}{\partial\phi}\) on the HBH spacetime are Killing (i.e., \(\pounds_{\frac{\partial}{\partial t}}g=0=\pounds_{\frac{\partial}{\partial\phi}}g\))._ **Corollary 6.1**.: _For each real number \(\lambda_{1}\) and \(\lambda_{2}\), the vector field \(\lambda_{1}\frac{\partial}{\partial t}+\lambda_{2}\frac{\partial}{\partial \phi}\) on the HBH spacetime is also Killing._ The vector field \(\frac{\partial}{\partial r}\) is non-Killing, and if \(\mathcal{A}=\pounds_{\frac{\partial}{\partial r}}g\), then the non-zero components of \(\mathcal{A}\) are calculated as follows: \[\mathcal{A}_{11}=\frac{2mrB_{2}}{B_{1}^{2}},\quad\mathcal{A}_{22}=\frac{2mrB_ {2}}{B^{2}},\] \[\mathcal{A}_{33}=2r,\quad\mathcal{A}_{44}=2r\sin^{2}\theta.\] Therefore, for the non-Killing vector field \(\frac{\partial}{\partial r}\) and the 1-form \(\eta=(0,1,0,0)\), the HBH spacetime possesses the following relation: \[\pounds_{\frac{\partial}{\partial r}}g+2\sigma_{1}S+2\sigma_{2}g-2\sigma_{3} \eta\otimes\eta=0,\] where \(\sigma_{1},\sigma_{2},\sigma_{3}\) are given by \[\left.\begin{aligned} \sigma_{1}&=\frac{B_{1}^{2}(4b^{4}m^{2}+4b ^{2}mr^{3}-3mr^{5}+r^{6})}{36b^{2}m^{2}r^{4}B},\\ \sigma_{2}&=\frac{4b^{4}m^{2}-2b^{2}mr^{3}+3mr^{5}-2r ^{6}}{3r^{4}B},\\ \sigma_{3}&=\frac{2mrB_{2}}{B^{2}}.\end{aligned}\right\} \tag{6.1}\] This leads to the following: **Theorem 6.1**.: _The HBH spacetime realizes almost \(\eta\)-Ricci-Yamabe soliton for the non-Killing soliton vector field \(\frac{\partial}{\partial r}\) and the 1-form \(\eta=(0,1,0,0)\) provided \((2b^{2}m-2mr^{2}+r^{3})\neq 0\), i.e., for the soliton vector field \(\xi=\frac{\partial}{\partial r}\), the HBH spacetime possesses_ \[\frac{1}{2}\pounds_{\xi}g+\sigma_{1}S+\left(\lambda-\frac{1}{2}\sigma_{4} \kappa\right)g+\sigma_{3}\eta\otimes\eta=0,\] _where \(\sigma_{4}=2\), \(\lambda=\sigma_{2}+\kappa\), and \(\sigma_{1},\sigma_{2},\sigma_{3}\) are given in (6.1)._ **Theorem 6.2**.: _If \((2b^{2}m+r^{3})^{2}(4b^{4}m^{2}+4b^{2}mr^{3}-3mr^{5}+r^{6})=36b^{2}m^{2}r^{4}(2b^{2} m-2mr^{2}+r^{3})\) with \((2b^{2}m-2mr^{2}+r^{3})\neq 0\), then for the soliton vector field \(\frac{\partial}{\partial r}\), the HBH spacetime admits an almost \(\eta\)-Ricci soliton with the \(1\)-form \(\eta=(0,1,0,0)\), i.e., for the vector field \(\xi=\frac{\partial}{\partial r}\), the HBH spacetime realizes_ \[\frac{1}{2}\pounds_{\xi}g+S+\sigma_{2}g-\sigma_{3}\eta\otimes\eta=0.\] _where \(\sigma_{2}\), \(\sigma_{3}\) is given in (6.1)._ Let \(\mathcal{E}=\pounds_{\frac{\partial}{\partial r}}S\), \(\mathcal{G}=\pounds_{\frac{\partial}{\partial r}}\widetilde{R}\) and \(\mathcal{H}=\pounds_{\frac{\partial}{\partial r}}R\). 
Then the non-vanishing components of \(\mathcal{E}\), \(\mathcal{G}\) and \(\mathcal{H}\) are computed as follows: \[\mathcal{E}_{11}=-\frac{24b^{2}m^{2}r^{2}\{2(7m-3r)r^{6}+b^{2}mr^ {3}(3r-40m)+2b^{4}m^{2}(4m+15r)\}}{B_{1}^{5}},\] \[\mathcal{E}_{22}=-\frac{24b^{2}m^{2}r\{2b^{4}m^{2}(4m-15r)+b^{2}m (20m-3r)r^{3}+2r^{6}(3r-5m)\}}{B_{1}^{3}B^{2}},\] \[\mathcal{E}_{33}=-\frac{48b^{2}m^{2}rB_{3}}{B_{1}^{3}}=\frac{1}{ \sin^{2}\theta}\mathcal{E}_{44}\] \[\mathcal{G}_{212}^{1}=\frac{2mr\left\{16b^{6}m^{3}(2m-15r)+r^{9}(4m-3r)+24b^{4 }m^{2}r^{3}(5m+3r)+6b^{2}mr^{6}(15r-26m)\right\}}{B_{1}^{5}}\] \[\mathcal{G}_{313}^{1}=\frac{mr(16b^{4}m^{2}-26b^{2}mr^{3}+r^{6})}{B_{1}^{3}}=- \mathcal{G}_{331}^{1}=-\frac{1}{\sin^{2}\theta}\mathcal{G}_{441}^{1}=\mathcal{ G}_{323}^{2}\] \[=-\mathcal{G}_{332}^{1}=\frac{1}{\sin^{2}\theta}\mathcal{G}_{424}^{2}=-\frac{1 }{\sin^{2}\theta}\mathcal{G}_{442}^{2},\] \[\mathcal{G}_{112}^{2}=\frac{2mr\left\{(8m-3r)r^{9}+72b^{4}m^{2}r^{3}(5m+r)+6b^ {2}mr^{6}(15r-38m)-16b^{6}m^{3}(2m+15r)\right\}}{B_{1}^{5}}=-\mathcal{G}_{121} ^{2},\] \[\mathcal{G}_{113}^{3}=-\frac{mr\left\{(8m-3r)r^{6}+4b^{2}mr^{3}(6r-19m)+4b^{4} m^{2}(8m+15r)\right\}}{B_{1}^{4}}=-\mathcal{G}_{131}^{3}=\mathcal{G}_{114}^{4} =-\mathcal{G}_{141}^{4},\] \[\mathcal{G}_{223}^{3}=\frac{mr\left\{(4m-3r)r^{6}+4b^{2}mr^{3}(6r-11m)+4b^{4} m^{2}(15r-8m)\right\}}{B_{1}^{2}B^{2}}=-\mathcal{G}_{232}^{3}=\mathcal{G}_{224}^{4 }=-\mathcal{G}_{242}^{4},\] \[\mathcal{G}_{434}^{3}=\frac{2mrB_{2}\sin^{2}\theta}{B_{1}^{2}}=-\mathcal{G}_{4 33}^{4}=-\sin^{2}\theta\mathcal{G}_{334}^{4}=\sin^{2}\theta\mathcal{G}_{343}^{ 4}.\] \[\mathcal{H}_{1212}=\frac{6m(40b^{4}m^{2}r^{2}-32b^{2}mr^{5}+r^{8})}{B_{1}^{4}}= -\mathcal{H}_{1221}=\mathcal{H}_{2121},\] \[\mathcal{H}_{1313}=\frac{mr\left\{-32b^{6}m^{3}+(4m-r)r^{8}+4b^{2}mr^{3}(6r-17m )+4b^{4}m^{2}r^{2}(16m+9r)\right\}}{B_{1}^{5}}=\mathcal{H}_{3131}=-\mathcal{H} _{1331}\] \[=\frac{1}{\sin^{2}\theta}\mathcal{H}_{4141}=\frac{1}{\sin^{2}\theta} \mathcal{H}_{1414}=-\frac{1}{\sin^{2}\theta}\mathcal{H}_{1441},\] \[\mathcal{H}_{2323}=\frac{mr\left\{32b^{6}m^{3}-36b^{4}m^{2}r^{3}+12b^{2}m(3m-2r) r^{5}+r^{9}\right\}}{B_{1}^{2}B^{2}}=-\mathcal{H}_{2332}=\frac{1}{\sin^{2} \theta}\mathcal{H}_{2424}\] \[=-\frac{1}{\sin^{2}\theta}\mathcal{H}_{2442}=\mathcal{H}_{3232}=\frac{1}{\sin^{ 2}\theta}\mathcal{H}_{4242},\] \[\mathcal{H}_{3434}=\frac{2mr^{3}(8b^{2}m+r^{3})\sin^{2}\theta}{B_{1}^{2}}=- \mathcal{H}_{3443}=\mathcal{H}_{4343}.\] If \(\mathcal{M}=\pounds_{\frac{\partial}{\partial\theta}}g\), \(\mathcal{N}=\pounds_{\frac{\partial}{\partial\theta}}S\), \(\mathcal{Q}=\pounds_{\frac{\partial}{\partial\theta}}\widetilde{R}\) and \(\mathcal{O}=\pounds_{\frac{\partial}{\partial\theta}}R\), then the non-zero components of \(\mathcal{M}\), \(\mathcal{N}\), \(\mathcal{Q}\) and \(\mathcal{O}\) are given as follows: \[\mathcal{Q}_{414}^{1}=\frac{mr^{2}B_{2}\sin 2\theta}{B_{1}^{2}}=-\mathcal{Q}_{441}^{1}= \mathcal{Q}_{424}^{2}=-\mathcal{Q}_{42}^{2},\ \ \mathcal{Q}_{434}^{3}=\frac{2mr^{2}\sin 2\theta}{B_{1}}=-\mathcal{Q}_{443}^{3},\] \[\mathcal{O}_{1414}=\frac{mr^{2}B\sin 2\theta}{B_{1}^{3}}=-\mathcal{O}_{1441 }=\frac{1}{B_{2}}\mathcal{O}_{4141},\ \ \mathcal{O}_{2424}=\frac{mr^{2}B_{2}\sin 2 \theta}{B_{1}B}=-\mathcal{O}_{2442}=\mathcal{O}_{4242},\] \[\mathcal{O}_{3434}=\frac{2mr^{4}\sin 2\theta}{B_{1}}=-\mathcal{O}_{3443 }=\mathcal{O}_{1343}.\] From the above calculation of the Lie derivative of various curvature tensors it can be easily checked that with respect to the non-Killing vector fields \(\frac{\partial}{\partial 
r}\), \(\frac{\partial}{\partial\theta}\) and \(\lambda_{1}\frac{\partial}{\partial r}+\lambda_{2}\frac{\partial}{\partial \theta}\) (\(\lambda_{1},\lambda_{2}\) being real numbers), the HBH spacetime admits 1. neither Ricci collineation nor Ricci inheritance, 2. neither curvature collineation for (1,3)-type curvature tensor nor curvature collineation for (0,4)-type curvature tensor, 3. neither curvature inheritance for (1,3)-type curvature tensor nor curvature inheritance for (0,4)-type curvature tensor. ## 7. **Hayward black hole spacetime Vs point-like global monopole spacetime** The point-like global monopole spacetime [85, 21] is a static and spherically symmetric solution of EFE. It is a heavy object characterized by divergent mass and spherically symmetry, and against polar as well as spherical perturbation it is expected to be stable. A comparative study between the HBH spacetime and the point-like global monopole spacetime with respect to various kind of symmetries and Ricci soliton is given as follows: **Similarities:** 1. both the spacetimes admits motion for the vector fields \(\frac{\partial}{\partial t}\) and \(\frac{\partial}{\partial\phi}\), i.e., the vector fields \(\frac{\partial}{\partial t}\) and \(\frac{\partial}{\partial\phi}\) are Killing in both the spacetimes, 2. the vector fields \(\frac{\partial}{\partial r}\) and \(\frac{\partial}{\partial\theta}\) are non-Killing in both the spacetimes, 3. with respect to the non-Killing vector field \(\frac{\partial}{\partial\theta}\), both the spacetimes realize neither curvature collineation nor curvature inheritance for (1,3)-type curvature tensor, 4. with respect to the non-Killing vector field \(\frac{\partial}{\partial\theta}\), both the spacetimes possess neither Ricci collineation nor Ricci inheritance. Nevertheless, they have the following dissimilar properties: **Dissimilarities:** 1. with respect to the non-Killing vector field \(\frac{\partial}{\partial r}\), the point-like global monopole spacetime admits Ricci collineation as well as curvature collineation for (1,3)-type curvature tensor, whereas HBH spacetime does not admit such collineations, 2. for the non-Killing vector fields \(\frac{\partial}{\partial r}\), \(\frac{\partial}{\partial\theta}\) and \(\lambda_{1}\frac{\partial}{\partial r}+\lambda_{2}\frac{\partial}{\partial \theta}\) (\(\lambda_{1},\lambda_{2}\) being real numbers), the point-like global monopole spacetime possesses curvature inheritance for the (0,4)-type curvature tensor, but HBH spacetime does not realize such inheritance, 3. with respect to the soliton vector field \(\frac{\partial}{\partial r}\), the HBH spacetime admits both the almost \(\eta\)-Ricci soliton and almost \(\eta\)-Ricci-Yamabe soliton for the 1-form \(\eta=(0,1,0,0)\), but the point-like global monopole spacetime realizes neither almost \(\eta\)-Ricci soliton nor almost \(\eta\)-Ricci-Yamabe soliton with respect to the non-Killing vector field \(\frac{\partial}{\partial r}\). ## 8. **Acknowledgment** The third author is grateful to the Council of Scientific and Industrial Research (CSIR File No.: 09/025(0253)/2018-EMR-I), Govt. of India, for the award of SRF (Senior Research Fellowship). The fourth author greatly acknowledges to The University Grants Commission, Government of India for the award of Senior Research Fellow. All the algebraic computations of Section 3 to 7 are performed by a program in Wolfram Mathematica developed by the first author (A. A. Shaikh).
2308.13382
Prompting Visual-Language Models for Dynamic Facial Expression Recognition
This paper presents a novel visual-language model called DFER-CLIP, which is based on the CLIP model and designed for in-the-wild Dynamic Facial Expression Recognition (DFER). Specifically, the proposed DFER-CLIP consists of a visual part and a textual part. For the visual part, based on the CLIP image encoder, a temporal model consisting of several Transformer encoders is introduced for extracting temporal facial expression features, and the final feature embedding is obtained as a learnable "class" token. For the textual part, we use as inputs textual descriptions of the facial behaviour that is related to the classes (facial expressions) that we are interested in recognising -- those descriptions are generated using large language models, like ChatGPT. This, in contrast to works that use only the class names and more accurately captures the relationship between them. Alongside the textual description, we introduce a learnable token which helps the model learn relevant context information for each expression during training. Extensive experiments demonstrate the effectiveness of the proposed method and show that our DFER-CLIP also achieves state-of-the-art results compared with the current supervised DFER methods on the DFEW, FERV39k, and MAFW benchmarks. Code is publicly available at https://github.com/zengqunzhao/DFER-CLIP.
Zengqun Zhao, Ioannis Patras
2023-08-25T13:52:05Z
http://arxiv.org/abs/2308.13382v2
# Prompting Visual-Language Models for Dynamic Facial Expression Recognition ###### Abstract This paper presents a novel visual-language model called DFER-CLIP, which is based on the CLIP model and designed for in-the-wild Dynamic Facial Expression Recognition (DFER). Specifically, the proposed DFER-CLIP consists of a visual part and a textual part. For the visual part, based on the CLIP image encoder, a temporal model consisting of several Transformer encoders is introduced for extracting temporal facial expression features, and the final feature embedding is obtained as a learnable "class" token. For the textual part, we use as inputs textual descriptions of the facial behaviour that is related to the classes (facial expressions) that we are interested in recognising - those descriptions are generated using large language models, like ChatGPT. This, in contrast to works that use only the class names and more accurately captures the relationship between them. Alongside the textual description, we introduce a learnable token which helps the model learn relevant context information for each expression during training. Extensive experiments demonstrate the effectiveness of the proposed method and show that our DFER-CLIP also achieves state-of-the-art results compared with the current supervised DFER methods on the DFEW, FERV39k, and MAFW benchmarks. Code is publicly available at [https://github.com/zengqunzhao/DFER-CLIP](https://github.com/zengqunzhao/DFER-CLIP). ## 1 Introduction Facial expression is an important aspect of daily conversation and communication []. Because of its application in various fields, such as human-computer interaction (HCI) [], driving assistance [], and mental health [], facial expression recognition (FER) attracts increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines []. When using a discrete emotion model [], FER aims to classify an image or video sequence into one of several basic emotions, i.e., neutral, happiness, sadness, surprise, fear, disgust, and anger. Traditional facial expression recognition approaches have mainly focused on static images or video frames, which do not capture the temporal dynamics of facial expressions [], []. However, the recognition of dynamic facial expressions involves capturing the changes in facial movements over time, which can provide more information for accurate emotion recognition. Therefore, the recognition of dynamic facial expressions has become an increasingly important research area within the field of computer vision and affective computing [], [].
2305.02273
Linear seesaw mechanism from dark sector
We propose a minimal model where a dark sector seeds neutrino mass generation radiatively within the linear seesaw mechanism. Neutrino masses are calculable, since tree-level contributions are forbidden by symmetry. They arise from spontaneous lepton number violation by a small Higgs triplet vacuum expectation value. Lepton flavour violating processes e.g. $\mu \to e\gamma$ can be sizeable, despite the tiny neutrino masses. We comment also on dark-matter and collider implications.
A. E. Cárcamo Hernández, Vishnudath K. N., José W. F. Valle
2023-05-03T17:06:59Z
http://arxiv.org/abs/2305.02273v3
# Linear seesaw mechanism from dark sector ###### Abstract We propose a minimal model where a dark sector seeds neutrino mass generation radiatively within the linear seesaw mechanism. Neutrino masses are calculable, since tree-level contributions are forbidden by symmetry. They arise from spontaneous lepton number violation by a small Higgs triplet vacuum expectation value. Lepton flavour violating processes e.g. \(\mu\to e\gamma\) can be sizeable, despite the tiny neutrino masses. We comment also on dark-matter and collider implications. ## I Introduction Two solid indications for new physics beyond the Standard Model (SM) are the existence of neutrino masses [1; 2] and dark matter [3]. There are many ways to induce neutrino masses, and at the moment we do not know which one is nature's choice. There are also many options to add new electrically neutral fermions and/or scalars to the SM so as to provide a viable dark matter (DM) candidate. Typically the latter is made stable through the imposition of an adequate "dark parity" symmetry. In most of these SM extensions there is no relation between dark matter and neutrino mass generation. There has been a recent suggestion that neutrino mass generation proceeds _a la seesaw_ within the \(\mathrm{SU(3)_{c}\otimes SU(2)_{L}\otimes U(1)_{Y}}\) framework [4; 5], seeded by a dark sector [6; 7; 8; 9; 10; 11]1. This way neutrino mass generation becomes intimately connected with dark matter physics. Footnote 1: Low-scale seesaw schemes have been investigated extensively in recent years [12; 13; 14]. In particular, radiative constructions implementing, e.g., supersymmetry, extended gauge and/or family symmetries have been proposed, see [15; 16; 17; 18; 19; 20; 21; 22; 23]. In this paper we suggest that neutrino masses are seeded by a dark sector within the context of the linear seesaw mechanism [24; 25; 26]. Several SM extensions implementing the linear seesaw mechanism have been used recently to approach the flavor problem [27; 28; 29]. Here we focus on the dark matter issue, proposing a dark-seeded linear seesaw mechanism, as an alternative to the dark-seeded extension of the inverse seesaw [30; 31]. This new realization makes use of the simplest template structure shared by all low-scale seesaw schemes, and employs a very simple dark sector. The latter consists of one SM doublet and a singlet dark scalar, while for dark fermions we employ three SM singlet two-component Majorana fermions. The Higgs sector contains, besides the SM doublet, a complex isotriplet involved in seeding neutrino mass generation. Many phenomenological implications of our proposal are also expected in generic linear seesaw setups [32; 33]. Besides these, we have a Weakly Interacting Massive Particle (WIMP) dark-matter candidate, that can be identified with the lightest electrically neutral dark particle (LDP). Rather than being associated to supersymmetry [34], WIMP dark-matter emerges here as a neutrino mass mediator, in a manner distinct from scotogenic approaches [35; 36], and also inequivalent to the dark inverse-seesaw realization [10]. We examine the most salient phenomenological implications of the dark linear seesaw mechanism, concerning charged lepton flavor violation, and comment also on dark matter and collider physics implications. The model Our proposed model can be seen as a minimal extension of the inert doublet model where the linear seesaw mechanism producing the tiny neutrino masses is implemented at the one-loop level, seeded by the dark sector. 
The SM lepton sector is enlarged by the inclusion of the neutral leptons \(N_{i}^{c}\) and \(S_{i}\) (\(i=1,2,3\)), characteristic of low-scale seesaw schemes. The dark sector contains three copies of SM singlet two-component Majorana fermions \(F_{i}\), plus a SM doublet dark scalar \(\eta\), and a dark gauge singlet \(\xi\). These dark scalars \(\eta\) and \(\xi\) and the dark fermions \(F_{i}\) seed linear-seesaw neutrino mass generation as seen in Fig 1. The \(\mathrm{SU(3)_{c}\otimes SU(2)_{L}\otimes U(1)_{Y}}\) gauge symmetry is supplemented by the inclusion of the global \(U\left(1\right)_{\mathcal{L}}\) lepton number symmetry, which spontaneously breaks to a preserved \(\mathcal{Z}_{2}\) symmetry. This remnant symmetry ensures the stability of the dark matter candidate as well as the radiative nature of neutrino mass generation through the linear seesaw mechanism, The scalar sector of our model also requires Higgs bosons to drive spontaneous breaking of the gauge and global symmetries. Besides the SM doublet \(\Phi\), we include a complex scalar isotriplet \(\Xi\) whose vacuum expectation value (VEV) is restricted by precision electroweak measurements, i.e. the \(\rho\) parameter [37]. The leptons and scalars of the model and their transformation properties under the \(\mathrm{SU(3)_{c}\otimes SU(2)_{L}\otimes U(1)_{Y}}\) gauge symmetry and the global lepton number symmetry are given in Table. 1. Notice that the leptons have the conventional lepton number assignment characteristic of low-scale seesaw schemes. Together with the dark scalars, the new Majorana neutral fermions \(F_{i}\) play a key role in seeding non-zero neutrino masses. Except for the SM scalar doublet \(\Phi\), all scalars carry non-zero lepton number. The relevant neutrino Yukawa couplings and mass terms invariant under these symmetries are, \[-\mathcal{L}_{Y}^{(\nu)}= \sum_{i,j=1}^{3}Y_{ij}^{(\Phi)}L_{i}^{T}CN_{j}^{c}\Phi+\sum_{i,j= 1}^{3}Y_{ij}^{(\eta)}L_{i}^{T}CF_{j}\eta+\sum_{i,j=1}^{3}Y_{ij}^{(\xi)}S_{i}^{T }CF_{j}\xi\] \[+\sum_{i=1}^{3}\left(m_{F}\right)_{i}F_{i}^{T}CF_{i}+\sum_{i,j=1} ^{3}M_{ij}N_{i}^{cT}CS_{j}+\sum_{i,j=1}^{3}Y^{\prime}_{ij}^{(\xi)}F_{i}^{T}CN^{ c}{}_{j}\xi^{*}+H.c. \tag{1}\] The scalar potential contains, \[\mathcal{V}_{(s)}= -\mu_{\Phi}^{2}(\Phi^{\dagger}\Phi)-\mu_{\Xi}^{2}Tr(\Xi^{\dagger} \Xi)+\mu_{\eta}^{2}(\eta^{\dagger}\eta)+\mu_{\xi}^{2}(\xi^{*}\xi)\ \ +A_{\Phi}(\Phi^{\dagger}\Xi\Phi+\Phi^{\dagger}\Xi^{ \dagger}\Phi)\] \[+\lambda_{1}(\Phi^{\dagger}\Phi)^{2}+\lambda_{2}(\eta^{\dagger} \eta)^{2}+\lambda_{3}(\xi^{*}\xi)^{2}+\lambda_{4}\left[Tr(\Xi^{\dagger}\Xi) \right]^{2}+\lambda_{5}Tr\left([\Xi^{\dagger}\Xi)^{2}\right]\] \[+\lambda_{6}(\Phi^{\dagger}\Phi)(\eta^{\dagger}\eta)+\lambda_{7}( \Phi^{\dagger}\eta)(\eta^{\dagger}\Phi)+\lambda_{8}(\Phi^{\dagger}\Phi)Tr(\Xi ^{\dagger}\Xi)+\lambda_{9}\Phi^{\dagger}\Xi\Xi^{\dagger}\Phi\] \[+\lambda_{10}(\Phi^{\dagger}\Phi)(\xi^{*}\xi)+\lambda_{11}(\eta^ {\dagger}\eta)Tr(\Xi^{\dagger}\Xi)+\lambda_{12}\eta^{\dagger}\Xi\Xi^{ \dagger}\eta+\lambda_{13}(\eta^{\dagger}\eta)(\xi^{*}\xi)\] \[+\lambda_{14}(\xi^{*}\xi)Tr(\Xi^{\dagger}\Xi)+\lambda_{15}\left( \eta^{\dagger}\Xi^{*}\Phi\xi^{*}+h.c\right). \tag{2}\] The \(\mathrm{U(1)_{\mathcal{L}}}\) symmetry is broken by the VEV of the neutral part of \(\Xi\). The presence of the trilinear term \(A_{\Phi}\) in Eq. (2) also breaks the global \(\mathrm{U(1)_{\mathcal{L}}}\) symmetry of Eqn. (1), explicitly but softly. 
Dark matter stability is ensured by the remnant unbroken \(\mathcal{Z}_{2}\) symmetry preserved after the breaking of the \(U\left(1\right)_{\mathcal{L}}\) symmetry. To ensure this we require that the \(\mathcal{Z}_{2}\)-odd scalars \(\eta\) and \(\xi\) do not acquire vacuum expectation values. \begin{table} \begin{tabular}{|c||c|c|c|c||c||c||c||c|} \hline & \(L_{i}\) & \(l_{i}^{c}\) & \(N_{i}^{c}\) & \(S_{i}\) & \(\Phi\) & \(F_{i}\) & \(\Xi\) & \(\eta\) & \(\xi\) \\ \hline \hline \(SU(2)_{L}\times U(1)_{Y}\) & \((2,-\frac{1}{2})\) & \((1,1)\) & \((1,0)\) & \((1,0)\) & \((2,\frac{1}{2})\) & \((1,0)\) & \((3,0)\) & \((2,\frac{1}{2})\) & \((1,0)\) \\ \(U\left(1\right)_{\mathcal{L}}\) & \(1\) & \(-1\) & \(-1\) & \(1\) & \(0\) & \(0\) & \(2\) & \(-1\) & \(-1\) \\ \hline \end{tabular} \end{table} Table 1: Fields and their quantum numbers. All fermions come in three copies, \(i=1,2,3\). The scalar fields \(\Phi\), \(\Xi\), \(\eta\) and \(\xi\) can be written as follows, \[\Phi =\begin{pmatrix}\phi^{+}\\ \frac{v_{\Phi}+\phi_{\Phi}^{0}+i\Xi_{\Phi}^{0}}{\sqrt{2}}\end{pmatrix}, \eta =\begin{pmatrix}\eta^{+}\\ \frac{\eta_{0}^{0}+i\eta_{0}^{0}}{\sqrt{2}}\end{pmatrix},\] \[\Xi =\begin{pmatrix}\frac{v_{\Xi}+\Xi_{\Phi}^{0}+i\Xi_{\Phi}^{0}}{ \sqrt{2}}&\Xi_{\Phi}^{+}\\ \Xi_{2}&-\frac{v_{\Xi}+\Xi_{\Phi}^{0}+i\Xi_{\Phi}^{0}}{\sqrt{2}}\end{pmatrix}, \xi =\frac{\xi_{R}+i\xi_{I}}{\sqrt{2}}.\] We have two charged Higgs scalars \(\Xi_{1}^{\pm}\) and \(\Xi_{2}^{\pm}\) with mass-squared given as, \[m_{\Xi_{1,2}^{\pm}}^{2}=\frac{\sqrt{2}A_{\Phi}(v_{\Phi}^{2}+4v_{ \Xi}^{2})\mp\sqrt{32A_{\Phi}^{2}v_{\Xi}^{4}+v_{\Xi}^{2}v_{\Xi}^{2}(v_{\Phi}^{ 2}+8v_{\Xi}^{2})\lambda_{9}^{2}}}{4v_{\Xi}}, \tag{3}\] and a charged dark scalar \(\eta^{\pm}\) with mass-squared given as, \[m_{\eta^{\pm}}^{2}=\frac{1}{2}v_{\Xi}^{2}(2\lambda_{11}+\lambda_{1 2})+\frac{1}{2}v_{\Phi}^{2}\lambda_{6}+\mu_{\eta}^{2}. \tag{4}\] We also note that, in the presence of the cubic term \(A_{\Phi}\) the two charged components of the triplet scalar will have an adequate mass-squared term. Electroweak symmetry breaking is driven mainly by the VEV of \(\Phi\). The resulting mass squared matrices for the CP-even neutral Higgs scalars are given as, \[M_{\phi_{R}^{0}}^{2}\equiv_{\eta}^{0}=\begin{pmatrix}2\lambda_{1 }v_{\Phi}^{2}&v_{\Phi}(-\sqrt{2}A_{\Phi}+v_{\Xi}(2\lambda_{8}+\lambda_{9}))\\ v_{\Phi}(-\sqrt{2}A_{\Phi}+v_{\Xi}(2\lambda_{8}+\lambda_{9}))&\frac{A_{\Phi}v_ {\Xi}^{2}}{\sqrt{2}v_{\Xi}}+4v_{\Xi}^{2}(2\lambda_{4}+\lambda_{5})\end{pmatrix}, \tag{5}\] while the corresponding neutral dark scalar mass squared matrices are given as, \[M_{\eta_{R}^{0}}^{2}\ \xi_{R}=\begin{pmatrix}\frac{v_{\Xi}^{2}}{2}(2 \lambda_{11}+\lambda_{12})+\frac{v_{\Xi}^{2}}{2}(\lambda_{6}+\lambda_{7})+\mu _{\eta}^{2}&-\frac{1}{2}\lambda_{15}v_{\Phi}v_{\Xi}\\ -\frac{1}{2}\lambda_{15}v_{\Phi}v_{\Xi}&\frac{1}{2}\lambda_{10}v_{\Phi}^{2}+ \lambda_{14}v_{\Xi}^{2}+\mu_{\xi}^{2}\end{pmatrix}, \tag{6}\] \[M_{\eta_{R}^{0}}^{2}\ \xi_{I}=\begin{pmatrix}\frac{v_{\Xi}^{2}}{2}(2 \lambda_{11}+\lambda_{12})+\frac{v_{\Xi}^{2}}{2}(\lambda_{6}+\lambda_{7})+\mu _{\eta}^{2}&\frac{1}{2}\lambda_{15}v_{\Phi}v_{\Xi}\\ \frac{1}{2}\lambda_{15}v_{\Phi}v_{\Xi}&\frac{1}{2}\lambda_{10}v_{\Phi}^{2}+ \lambda_{14}v_{\Xi}^{2}+\mu_{\xi}^{2}\end{pmatrix}. \tag{7}\] There is also a CP-odd scalar coming from the imaginary part of the neutral component of \(\Xi\), whose mass is given as, \[m_{\Xi_{i}^{0}}=\frac{A_{\Phi}v_{\Phi}^{2}}{\sqrt{2}v_{2}}. 
\tag{8}\] Thus, one sees that the physical scalar spectrum includes four CP-even scalars: two neutral Higgs \(H_{1}\) and \(H_{2}\) arising from the mixing of \(\Xi_{R}^{0}\) and \(\phi_{R}^{0}\), and containing the 125 GeV SM Higgs boson [38; 39], plus two dark neutral scalars \(D_{1}\) and \(D_{2}\) arising from the mixing of \(\eta_{R}^{0}\) and \(\xi_{R}\). Moreover, we have two dark neutral CP-odd scalars, \(D_{A_{1}}\) and \(D_{A_{2}}\) arising from the mixing of \(\eta_{I}^{0}\) and \(\xi_{I}\) and another CP-odd scalar corresponding to \(\Xi^{0}{}_{I}\). The doublet-singlet mixing angles in these matrices are expected to be naturally small, thanks to the limit \(v_{\Xi}\lesssim 3\)GeV from the \(\rho\) parameter. Note also that, thanks to the cubic lepton number soft-breaking term \(A_{\Phi}\) present in Eq. (2), all physical scalars are massive. This avoids the existence of a Majoron [5; 40], a physical Nambu-Goldstone boson associated to spontaneous lepton number violation. This gets an adequately large mass from the explicit \(A_{\Phi}\)-induced breaking of lepton number. An alternative full-fledged Majoron scheme can also be implemented, along the lines of Ref. [41]. However we do not pursue such extension here, as it is not essential for the neutrino mass generation. We now turn to the radiative neutrino mass generation via linear seesaw mechanism. The lepton Yukawa interactions yield the following neutrino mass terms, \[-\mathcal{L}_{mass}^{(\nu)}=\frac{1}{2}\left(\begin{array}{cc}\nu^{T}&N^{c }T&S^{T}\end{array}\right)M_{\nu}C\left(\begin{array}{c}\nu\\ N^{c}\\ S\end{array}\right)+H.c., \tag{9}\] (10) Here the submatrix \(M\) is a bare mass, and \(m_{D}\) is generated at tree-level after electroweak symmetry breaking, \[m_{D}=Y^{(\Phi)}\frac{v_{\Phi}}{\sqrt{2}}. \tag{11}\] In contrast, the small entry \(\varepsilon\) arises radiatively, mediated by the one-loop level exchange of the dark fermions and scalars. The one-loop level Feynman diagram is given in Fig. 1 and the resulting submatrix \(\varepsilon\) is given as, \[\varepsilon_{ij}=\sum_{k=1}^{3}\frac{Y_{ik}^{(\eta)}Y_{jk}^{(\xi)}M_{F_{k}}}{1 6\pi^{2}}\left\{\left[f\left(m_{D_{1}}^{2},m_{F_{k}}^{2}\right)-f\left(m_{D_{2} }^{2},m_{F_{k}}^{2}\right)\right]\sin 2\theta_{D}-\left[f\left(m_{D_{A_{1}}}^{2},m_{F_{k }}^{2}\right)-f\left(m_{D_{A_{2}}}^{2},m_{F_{k}}^{2}\right)\right]\sin 2\theta_{D_{A }}\right\}, \tag{12}\] where \(f\left(m_{1},m_{2}\right)\) is the function defined as, \[f\left(m_{1},m_{2}\right)=\frac{m_{1}^{2}}{m_{1}^{2}-m_{2}^{2}}\ln\left(\frac {m_{1}^{2}}{m_{2}^{2}}\right). \tag{13}\] Here \(m_{D_{1}}\) and \(m_{D_{2}}\) are the masses of the physical CP even dark scalars, whereas \(m_{D_{A_{1}}}\) and \(m_{D_{A_{2}}}\) are those that of the dark pseudoscalars. Their mixing matrices are defined as, \[\left(\begin{array}{c}D_{1}\\ D_{2}\end{array}\right)=\left(\begin{array}{cc}\cos\theta_{D}&\sin\theta_{D} \\ -\sin\theta_{D}&\cos\theta_{D}\end{array}\right)\left(\begin{array}{c}\eta_{ R}\\ \xi_{R}\end{array}\right),\hskip 28.452756pt\left(\begin{array}{c}D_{A_{1}}\\ D_{A_{2}}\end{array}\right)=\left(\begin{array}{cc}\cos\theta_{D_{A}}&\sin \theta_{D_{A}}\\ -\sin\theta_{D_{A}}&\cos\theta_{D_{A}}\end{array}\right)\left(\begin{array}[] {c}\eta_{I}\\ \xi_{I}\end{array}\right). \tag{14}\] where the small doublet-singlet mixing angles \(\theta_{D}\) and \(\theta_{D_{A}}\) are obtained by diagonalizing Eqns. (6) and (7), respectively, from which one sees \(\theta_{D_{A}}=-\theta_{A}\). 
Given a non-zero submatrix \(\varepsilon\), the light active neutrino masses arise from the linear seesaw mechanism [24; 25; 26], so that the resulting active-neutrino mass matrix has the form, \[m_{\rm light}=-\left[m_{D}M^{-1}\varepsilon^{T}+\varepsilon M^{-1}m_{D}^{T} \right]. \tag{15}\] One sees that spontaneous lepton number violation through \(v_{\Xi}\) provides a radiative seed for light neutrino mass generation that proceeds _a la seesaw_. The smallness of the light neutrino masses is ascribed to the smallness of loop-suppressed \(\varepsilon\) as well as the small, but not necessarily negligible, \(\frac{m_{D}}{M}\) ratio. It is worth mentioning that small neutrino masses are symmetry-protected, making the model natural in t'Hooft's sense. On the other hand, as in all low-scale seesaw models, our heavy mediator neutrino sector consists of three pairs of quasi-Dirac [42; 43; 44] heavy neutrinos, whose mass matrices are given as, \[M_{N^{-}}=-M-m_{D}^{T}m_{D}M^{-1}+\frac{1}{2}\left[m_{D}M^{-1} \varepsilon^{T}+\varepsilon M^{-1}m_{D}^{T}\right], \tag{16}\] \[M_{N^{+}}=M+m_{D}^{T}m_{D}M^{-1}+\frac{1}{2}\left[m_{D}M^{-1} \varepsilon^{T}+\varepsilon M^{-1}m_{D}^{T}\right]. \tag{17}\] Figure 1: Feynman diagram for neutrino mass generation in dark linear seesaw mechanism (plus symmetrization). Notice that their mass-splitting persists even as \(\varepsilon\to 0\), a fact that may have interesting phenomenological implications. ## III Phenomenology ### Charged lepton flavor violation In this section we discuss the implications of our model for charged lepton flavor violation (cLFV). In particular, we focus on the radiative decays \(\ell_{i}\to\ell_{j}\gamma\), the most sensitive of which is the decay \(\mu\to e\gamma\). A key conceptual feature of low-scale seesaw, such as our proposed dark linear seesaw-scheme, is that leptonic flavour and CP can be violated even in the limit of massless neutrinos [45; 46; 47; 48]. That means that the cLFV rates are not suppressed by the small neutrino masses, and can therefore be sizeable. In the same spirit as [49] one can give an approximate expression for extracting the "Dirac" submatrix in terms of the measured oscillation parameters as follows [50; 51], \[m_{D}=U_{lep}\sqrt{m_{\nu}}A^{T}\sqrt{m_{\nu}}U_{lep}^{T}\left( \varepsilon^{T}\right)^{-1}M^{T},\qquad\text{ with }\quad A=\left(\begin{array}{ccc}\frac{1}{2}&a&b\\ -a&\frac{1}{2}&c\\ -b&-c&\frac{1}{2}\end{array}\right), \tag{18}\] where \(a\), \(b\) and \(c\) are taken to be real numbers, \(m_{\nu}=\text{diag}\left(m_{1},m_{2},m_{3}\right)\) is given by the light neutrino masses, and \(U_{lep}\) is approximately the lepton mixing matrix determined in oscillation experiments [52]. We assume the basis in which the charged lepton mass matrix is diagonal. The \(\mu\to e\gamma\) decay amplitude involves the Feynman diagrams in Fig. 2. There are two types of contributions, i.e. the charged-current (CC) contribution (left diagram) and a contribution arising from the dark sector (right diagram). In order to determine the CC contribution the key ingredient is the full lepton mixing matrix, which has a rectangular form [4; 5]. Such rectangular mixing matrix describes not only the CC couplings of the light neutrinos, which gives a sizeable contribution due to the effective unitarity violation, of order \(\left(\frac{m_{D}}{M}\right)^{2}\), but also the heavy mediator neutrino admixture in the left-handed CC weak interaction, of order \(\left(\frac{m_{D}}{M}\right)\). 
We find that the CC light-neutrino contribution to the \(\mu\to e\gamma\) decay can be sizeable, thanks to effective unitarity violation of the relevant coupling sub-matrix. Moreover, one has potentially large contributions also due to the exchange of the six sub-dominantly coupled heavy quasi-Dirac states, which can lie at the TeV scale, or even lower. In our dark linear seesaw, charged lepton flavor violation can also be mediated by the charged scalar \(\eta^{\pm}\) and the dark fermions \(F_{i}\) through the couplings \(Y^{(\eta)}\). This second contribution is especially interesting as the same dark sector Yukawa couplings \(Y^{(\eta)}\) generating neutrino masses radiatively via the linear seesaw can also give rise to charged lepton flavor violation. The Feynman diagrams for these two contributions are shown in Fig. 2.

Figure 2: Feynman diagrams that contribute to \(\ell_{i}\to\ell_{j}\gamma\) processes. The left diagram shows the charged-current contribution, whereas the one on the right shows the dark-sector contribution.

The total branching ratio for the process \(\mu\to e\gamma\) thus takes the form [46; 53; 54], \[Br\left(\mu\to e\gamma\right)=\frac{3\left(4\pi\right)\alpha_{em}}{4G_{F}^{2}}\left|\sqrt{\frac{\alpha_{W}^{2}s_{W}}{m_{W}^{4}}}\sum_{k=1}^{9}K_{2k}^{*}K_{1k}G_{F}\left(\frac{M_{k}^{2}}{m_{W}^{2}}\right)+\sum_{k=1}^{3}\frac{Y_{2k}^{(\eta)*}Y_{1k}^{(\eta)}}{2m_{\eta^{\pm}}^{2}}G_{\eta^{\pm}}\left(\frac{m_{F_{k}}^{2}}{m_{\eta^{\pm}}^{2}}\right)\right|^{2}, \tag{19}\] with, \[G_{F}\left(x\right)=\frac{10-43x+78x^{2}-49x^{3}+18x^{3}\ln x+4x^{4}}{12\left(1-x\right)^{4}}, \tag{20}\] \[G_{\eta^{\pm}}\left(x\right)=\frac{1-6x+3x^{2}+2x^{3}-6x^{2}\ln x}{6\left(1-x\right)^{4}}. \tag{21}\] In Eq. (19), the matrix \(K\) is the \(3\times 9\) rectangular mixing matrix describing the CC weak interaction; it includes the exchange of the three light active neutrinos, with \(k=1,2,3\), as well as of the six mediators, with \(k=4,5,\ldots,9\). As mentioned earlier, these form three quasi-Dirac heavy-neutrino pairs. The complete form of the lepton mixing matrix \(K\) is given by: \[K=\left(K_{L},K_{H}\right), \tag{22}\] where \(K_{L}\) and \(K_{H}\) are \(3\times 3\) and \(3\times 6\) matrices, respectively. These submatrices take the form: \[K_{L}=\left(1_{3\times 3}-\frac{1}{2}m_{D}\left(M^{-1}\right)^{T}M^{-1}m_{D}^{\dagger}\right)U_{lep}=\left(1_{3\times 3}-\frac{1}{2}VV^{\dagger}\right)U_{lep}\,, \tag{23}\] \[K_{H}=\left(-\frac{i}{\sqrt{2}}V,\frac{1}{\sqrt{2}}V\right),\qquad V=m_{D}\left(M^{-1}\right)^{T}. \tag{24}\] In Fig. 3, we present the correlations of \(Br(\mu\to e\gamma)\) against \(\text{Tr}[Y^{(\Phi)}{Y^{(\Phi)}}^{\dagger}]\) (left) and \(\text{Tr}[Y^{(\eta)}{Y^{(\eta)}}^{\dagger}]\) (right). In order to optimize our parameter scan to generate these figures, ensuring that only viable solutions consistent with neutrino oscillation data are included, it is useful to use the analytical approximation in Eq. (18). Note, however, that in presenting the numerical results we use the exact expressions for the diagonalization matrices.

\begin{table} \begin{tabular}{|c||c|c|c|c|c||c|c|} \hline Parameters & \(Y_{ij}^{(\eta)}\) & \(Y_{ij}^{(\xi)}=y\delta_{ij}\) & \(m_{F_{i}}\) & \(M_{ij}=M_{N}\delta_{ij}\) & \(m_{D_{1},D_{2},D_{A_{1}},D_{A_{2}},\eta^{\pm}}\) & \(\theta_{D}\) & \(a,b,c\) \\ \hline Range & \([10^{-10},4\pi]\) & \([10^{-16},4\pi]\) & \([200,5000]\) GeV & \([200,5000]\) GeV & \([200,5000]\) GeV & \(0.01\) & \([-20,20]\) \\ \hline \end{tabular} \end{table} Table 2: The sampling region used in generating the plots of Fig. 3.
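The loop functions of Eqs. (20)-(21) and the dark-sector piece of Eq. (19) translate directly into code. The sketch below evaluates only the dark-sector term (the CC term requires the full rectangular matrix \(K\) of Eqs. (22)-(24)); the couplings and masses used are illustrative.

```python
import numpy as np

def G_F_loop(x):
    # Loop function of Eq. (20)
    return (10 - 43*x + 78*x**2 - 49*x**3 + 18*x**3*np.log(x) + 4*x**4) / (12*(1 - x)**4)

def G_eta(x):
    # Loop function of Eq. (21)
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2*np.log(x)) / (6*(1 - x)**4)

alpha_em, G_Fermi = 1/137.036, 1.1664e-5   # fine-structure constant, Fermi constant (GeV^-2)

def Br_dark(Y_eta, m_F, m_eta):
    # Dark-sector term of Eq. (19): charged scalar eta^+- and dark fermions F_k in the loop;
    # rows 0 and 1 of Y_eta are the electron and muon couplings (real couplings assumed)
    amp = sum(Y_eta[1, k]*Y_eta[0, k] / (2*m_eta**2) * G_eta(m_F[k]**2/m_eta**2)
              for k in range(3))
    return 3*(4*np.pi)*alpha_em / (4*G_Fermi**2) * abs(amp)**2

rng = np.random.default_rng(2)
print(Br_dark(Y_eta=0.1*rng.random((3, 3)), m_F=np.array([500., 800., 1200.]), m_eta=700.))
```

With \(\mathcal{O}(0.1)\) Yukawas and sub-TeV dark masses the output lands near the MEG sensitivity window, consistent with the correlations shown in Fig. 3.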
In generating Fig. 3, the neutrino oscillation parameters are varied in their \(3\sigma\) ranges [52], the parameters \(a\), \(b\) and \(c\) are varied in the range \([-20,20]\) and the couplings \(Y_{ij}^{(\eta)}\) are varied up to \(4\pi\). For simplicity we took the heavy neutrinos as degenerate, varying their masses in the range \([200,5000]\) GeV. Concerning the dark sector parameters, \(Y_{ij}^{(\xi)}\) is taken as \(y\delta_{ij}\) with \(y\) varied up to \(4\pi\). The masses of the dark fermions \(F_{i}\) and the scalar masses are varied in the range \([200,5000]\) GeV, while the scalar mixing angle \(\theta_{D}\) is fixed to be \(0.01\), implying \(\theta_{D_{A}}=-0.01\). The sampling region is summarized in Table 2. In Fig. 3, the cyan and the blue points show the CC and the dark-sector contributions to \(Br(\mu\to e\gamma)\), respectively. The horizontal pink-shaded region corresponds to the current bound [55], \(Br(\mu\to e\gamma)<4.2\times 10^{-13}\), as obtained from the MEG experiment, whereas the black line corresponds to the projected future sensitivity of \(6\times 10^{-14}\) for MEG-II [56; 57]. From the expression for the branching ratio, one can see that the CC contribution depends on \(Y^{(\Phi)}\), whereas the dark sector contribution depends only on \(Y^{(\eta)}\). This correlation can also be seen from Fig. 3. We find that even if the CC contribution is low (in the regions of small \(Y^{(\Phi)}\)), for large values of \(Y^{(\eta)}\) the dark sector contribution can take values as large as the existing limit. Part of this parameter space will be probed by MEG-II.

### Dark matter phenomenology

In this section we discuss the implications of our model for dark matter. Due to the remnant \(\mathcal{Z}_{2}\) symmetry arising from the spontaneous breaking of the global \(U\left(1\right)_{\mathcal{L}}\) lepton number symmetry, our model has a stable dark matter candidate, which we call the lightest dark particle (LDP). We start by considering a scenario in which the dark matter candidate is fermionic, i.e. one of the heavy Majorana fermions \(F_{i}\) (\(i=1,2,3\)). This can annihilate into a pair of SM active neutrinos via the \(t\)-channel exchange of the CP-even and CP-odd parts of the neutral components of the dark scalar doublet \(\eta\), as shown in the Feynman diagram of Fig. 4.

Figure 4: Annihilation of a pair of fermionic dark matter candidates into a pair of active neutrinos.

In this case, the thermally-averaged annihilation cross section is given by [58], \[\left\langle\sigma v\right\rangle\simeq\frac{9\left(Y_{11}^{(\eta)}\right)^{4}}{32\pi}\frac{m_{F}^{2}\left(2m_{F}^{2}+m_{D_{1}}^{2}+m_{D_{A_{1}}}^{2}\right)}{\left(m_{F}^{2}+m_{D_{1}}^{2}\right)^{2}\left(m_{F}^{2}+m_{D_{A_{1}}}^{2}\right)^{2}}, \tag{25}\] where we have assumed \(F_{1}\) to be the lightest among the \(F_{i}\) and write \(m_{F}\equiv m_{F_{1}}\). Here \(Y_{11}^{(\eta)}\) is the Yukawa coupling with the dark scalar doublet \(\eta\). From the previous relation, we find the following estimate for the DM relic abundance [37], \[\frac{\Omega_{DM}h^{2}}{0.12}=\frac{0.1\,\text{pb}}{0.12\left\langle\sigma v\right\rangle}=\frac{0.1\,\text{pb}}{0.12}\left[\frac{9\left(Y_{11}^{(\eta)}\right)^{4}}{32\pi}\frac{m_{F}^{2}\left(2m_{F}^{2}+m_{D_{1}}^{2}+m_{D_{A_{1}}}^{2}\right)}{\left(m_{F}^{2}+m_{D_{1}}^{2}\right)^{2}\left(m_{F}^{2}+m_{D_{A_{1}}}^{2}\right)^{2}}\right]^{-1}, \tag{26}\] which in turn can reproduce the observed DM relic abundance of [59], \[\Omega_{DM}h^{2}=0.1198. \tag{27}\]
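Eqs. (25)-(27) combine into a quick numerical estimate. The sketch below uses the conversion \(1\,\text{GeV}^{-2}\simeq 0.3894\times 10^{9}\,\text{pb}\); the mass values are illustrative only.

```python
import numpy as np

GEV2_TO_PB = 0.3894e9   # 1 GeV^-2 expressed in picobarns

def sigma_v(Y11, mF, mD1, mDA1):
    # Thermally averaged cross section of Eq. (25), returned in GeV^-2
    return (9*Y11**4/(32*np.pi)) * mF**2*(2*mF**2 + mD1**2 + mDA1**2) \
           / ((mF**2 + mD1**2)**2 * (mF**2 + mDA1**2)**2)

def omega_h2(Y11, mF, mD1, mDA1):
    # Relic abundance estimate of Eq. (26): Omega h^2 = 0.1 pb / <sigma v>
    return 0.1 / (sigma_v(Y11, mF, mD1, mDA1) * GEV2_TO_PB)

# The relic abundance falls as Y11^-4, so O(1) couplings are needed for these masses
for Y11 in (1.0, 2.0, 3.0, 3.6):
    print(Y11, omega_h2(Y11, mF=200., mD1=210., mDA1=220.))
```

Scanning the coupling in this way quickly locates the value of \(Y_{11}^{(\eta)}\) that matches Eq. (27) for a given mass spectrum.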
An alternative interesting scenario is to have one of the neutral scalars of the dark sector as the LDP and hence the DM candidate. Scenarios with a scalar DM candidate have been well studied within the framework of generalized scotogenic or inert doublet models [60; 61; 62; 63; 64; 65; 66; 67]. This DM possibility can arise in our present model by assuming that the LDP is the lightest among the neutral scalar particles \(D_{1}\), \(D_{2}\), \(D_{A_{1}}\), \(D_{A_{2}}\), and lighter than the heavy neutral Majorana fermions \(F_{i}\). In the following discussion, we consider \(D_{A_{1}}\) to be the DM candidate, which is mainly the imaginary part of the neutral component of the dark doublet \(\eta\). We assume that the main annihilation channels for the DM candidate lead to a pair of SM particles, as shown in Fig. 5. For instance, the annihilation into the triplet scalars can be neglected by assuming \(\lambda_{11}\) and \(\lambda_{12}\) to be very small.

Figure 5: Feynman diagrams contributing to dark-matter annihilation into a pair of SM particles.

Fig. 5 includes the annihilation of a pair of scalar DM particles into (i) a pair of SM fermions or bosons via the \(s\)-channel exchange of the SM Higgs boson \(H_{1}\) (first diagram of Fig. 5), (ii) a pair of SM Higgs bosons via the \(t\)-channel exchange of the dark scalar (second diagram of Fig. 5) and (iii) a pair of \(W/Z/H_{1}\) via the contact interactions (third and fourth diagrams of Fig. 5). The corresponding annihilation cross sections are given as [68], \[v_{rel}\sigma\left(D_{A_{1}}D_{A_{1}}\to WW\right)=\frac{\lambda^{2}}{32\pi}\frac{s\left(1+\frac{12m_{W}^{4}}{s^{2}}-\frac{4m_{W}^{2}}{s}\right)}{\left(s-m_{H_{1}}^{2}\right)^{2}+m_{H_{1}}^{2}\Gamma_{H_{1}}^{2}}\sqrt{1-\frac{4m_{W}^{2}}{s}}, \tag{28}\] \[v_{rel}\sigma\left(D_{A_{1}}D_{A_{1}}\to ZZ\right)=\frac{\lambda^{2}}{64\pi}\frac{s\left(1+\frac{12m_{Z}^{4}}{s^{2}}-\frac{4m_{Z}^{2}}{s}\right)}{\left(s-m_{H_{1}}^{2}\right)^{2}+m_{H_{1}}^{2}\Gamma_{H_{1}}^{2}}\sqrt{1-\frac{4m_{Z}^{2}}{s}}, \tag{29}\] \[v_{rel}\sigma\left(D_{A_{1}}D_{A_{1}}\to q\overline{q}\right)=\frac{N_{c}\lambda^{2}m_{q}^{2}}{16\pi}\frac{\sqrt{\left(1-\frac{4m_{q}^{2}}{s}\right)^{3}}}{\left(s-m_{H_{1}}^{2}\right)^{2}+m_{H_{1}}^{2}\Gamma_{H_{1}}^{2}}, \tag{30}\] \[v_{rel}\sigma\left(D_{A_{1}}D_{A_{1}}\to H_{1}H_{1}\right)=\frac{\lambda^{2}}{64\pi s}\left(1+\frac{3m_{H_{1}}^{2}}{s-m_{H_{1}}^{2}}-\frac{2\lambda v^{2}}{s-2m_{H_{1}}^{2}}\right)^{2}\sqrt{1-\frac{4m_{H_{1}}^{2}}{s}}, \tag{31}\] where \(\sqrt{s}\) is the centre-of-mass energy, \(N_{c}=3\) stands for the color factor, \(m_{H_{1}}=125.7\) GeV and \(\Gamma_{H_{1}}\) is the total decay width of the SM Higgs boson, equal to 4.1 MeV. Here \(\lambda\) is the quartic scalar coupling multiplying the \(H_{1}^{2}D_{A_{1}}^{2}\) interaction. This arises from the \(\lambda_{6}(\Phi^{\dagger}\Phi)(\eta^{\dagger}\eta)\) and \(\lambda_{7}(\Phi^{\dagger}\eta)(\eta^{\dagger}\Phi)\) terms of the potential in Eq. (2) and is given as \(\lambda=\frac{1}{4}\left(\lambda_{6}+\lambda_{7}\right)\).
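These cross sections are a direct transcription exercise. A minimal sketch of Eqs. (28)-(31) follows, with the SM inputs quoted above; note that the dimensionful quantities are all in GeV, so the cross sections come out in GeV\(^{-2}\).

```python
import numpy as np

M_W, M_Z, M_H, GAMMA_H, VEV = 80.38, 91.19, 125.7, 4.1e-3, 246.0   # GeV

def bw(s):
    # Squared denominator of the s-channel Higgs propagator
    return (s - M_H**2)**2 + M_H**2*GAMMA_H**2

def sv_WW(lam, s):   # Eq. (28)
    return lam**2/(32*np.pi) * s*(1 + 12*M_W**4/s**2 - 4*M_W**2/s)/bw(s) * np.sqrt(1 - 4*M_W**2/s)

def sv_ZZ(lam, s):   # Eq. (29)
    return lam**2/(64*np.pi) * s*(1 + 12*M_Z**4/s**2 - 4*M_Z**2/s)/bw(s) * np.sqrt(1 - 4*M_Z**2/s)

def sv_qq(lam, s, mq, Nc=3):   # Eq. (30)
    return Nc*lam**2*mq**2/(16*np.pi) * (1 - 4*mq**2/s)**1.5 / bw(s)

def sv_HH(lam, s):   # Eq. (31)
    return lam**2/(64*np.pi*s) * (1 + 3*M_H**2/(s - M_H**2)
                                  - 2*lam*VEV**2/(s - 2*M_H**2))**2 * np.sqrt(1 - 4*M_H**2/s)

s = (2.1*300.0)**2   # slightly above threshold for a 300 GeV DM candidate (illustrative)
print(sv_WW(0.1, s), sv_ZZ(0.1, s), sv_qq(0.1, s, mq=173.0), sv_HH(0.1, s))
```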
From this, the relic dark matter density in the present Universe is estimated as follows (cf. Refs. [37; 69]), \[\Omega h^{2}=\frac{0.1\ \text{pb}}{\langle\sigma v\rangle},\qquad\langle\sigma v\rangle=\frac{A}{n_{eq}^{2}}, \tag{32}\] where \(\langle\sigma v\rangle\) is the thermally averaged annihilation cross section, \(A\) is the total annihilation rate per unit volume at temperature \(T\) and \(n_{eq}\) is the equilibrium value of the particle density, which are given as [69], \[A=\frac{T}{32\pi^{4}}\int_{4m_{D_{A_{1}}}^{2}}^{\infty}\sum_{p=W,Z,t,b,h}g_{p}^{2}\,\frac{s\sqrt{s-4m_{D_{A_{1}}}^{2}}}{2}\,v_{rel}\sigma\left(D_{A_{1}}D_{A_{1}}\to\text{SM}\,\text{SM}\right)K_{1}\left(\frac{\sqrt{s}}{T}\right)ds,\] \[n_{eq}=\frac{T}{2\pi^{2}}\sum_{p=W,Z,t,b,h}g_{p}\,m_{D_{A_{1}}}^{2}\,K_{2}\left(\frac{m_{D_{A_{1}}}}{T}\right), \tag{33}\] with \(K_{1}\) and \(K_{2}\) being the modified Bessel functions of the second kind of order 1 and 2, respectively [69]. For the relic density calculation, we take \(T=m_{D_{A_{1}}}/20\) as in Ref. [69], which corresponds to a typical freeze-out temperature. The DM relic density thus determined should match the required value as in Eq. (27).
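Putting Eqs. (32)-(33) together reduces the relic density to a one-dimensional integral weighted by the Bessel function \(K_{1}\). The sketch below keeps only the \(WW\) channel of Eq. (28) and sets the degeneracy factors \(g_{p}\) to 1, both of which are simplifying assumptions made purely for illustration.

```python
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

M_W, M_H, GAMMA_H = 80.38, 125.7, 4.1e-3   # GeV

def sv_WW(lam, s):   # Eq. (28), WW channel only
    return lam**2/(32*np.pi) * s*(1 + 12*M_W**4/s**2 - 4*M_W**2/s) \
           / ((s - M_H**2)**2 + M_H**2*GAMMA_H**2) * np.sqrt(1 - 4*M_W**2/s)

def omega_h2(m_dm, lam):
    # Eqs. (32)-(33), single channel, g_p = 1 (assumptions), T = m_dm/20 as in the text
    T = m_dm/20.0
    s0 = max(4*m_dm**2, 4*M_W**2)                      # kinematic threshold
    integrand = lambda s: 0.5*s*np.sqrt(max(s - 4*m_dm**2, 0.0)) \
                          * sv_WW(lam, s) * kn(1, np.sqrt(s)/T)
    A, _ = quad(integrand, s0, 4*s0, limit=200)        # K_1 cuts the integral off quickly
    A *= T/(32*np.pi**4)
    n_eq = T/(2*np.pi**2) * m_dm**2 * kn(2, m_dm/T)
    return 0.1 / (A/n_eq**2 * 0.3894e9)                # <sigma v> in GeV^-2 -> pb

print(omega_h2(m_dm=300.0, lam=0.1))
```

A full treatment sums all open channels with their proper degeneracy factors; the single-channel version above only indicates the order of magnitude and the scaling with \(\lambda\).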
In Fig. 6, we show the allowed parameter space in the \(m_{D_{A_{1}}}-\lambda\) plane that produces the correct relic abundance. The pink band is disfavored by perturbativity. In calculating the relic abundance, we have considered the annihilation of the DM into \(WW\), \(ZZ\), \(H_{1}H_{1}\), \(t\overline{t}\) and \(b\overline{b}\), which are the dominant channels. From this figure, one sees that as the mass of the DM particle increases, the values of the quartic couplings required to obtain the correct relic density also increase, a well-known feature of the WIMP dark matter scenario.

Figure 6: Allowed parameter space in the \(m_{D_{A_{1}}}-\lambda\) plane that reproduces the correct DM relic density, where \(m_{D_{A_{1}}}\) is the DM mass and \(\lambda\) is the effective quartic coupling of the DM particle to the SM Higgs (see text for details). The pink band is disfavored by perturbativity.

We now comment on WIMP dark matter direct detection in this case. Such a scalar DM candidate would scatter off a nuclear target through Higgs boson exchange in the \(t\)-channel. This gives rise to a direct Higgs-portal dark-matter detection mechanism that can be used to probe the coupling parameter characterizing the \(H_{1}^{2}D_{A_{1}}^{2}\) interaction. Below we comment on dark matter detection at colliders.

### Collider experiments

In this subsection we briefly discuss the collider signatures associated with the dark matter candidates in our model. Due to the remnant \(\mathcal{Z}_{2}\) symmetry, the scalar and fermionic dark matter candidates will be produced in pairs. Because of the small mixing between \(\eta_{R}^{0}\) and \(\xi_{R}^{0}\), as well as between \(\eta_{I}^{0}\) and \(\xi_{I}^{0}\), we can take them approximately as the mass eigenstates. The neutral components of the dark doublet can be produced in pairs via the Drell-Yan mechanism mediated by the \(Z\)-boson, or through vector boson fusion. Detailed studies of collider signatures arising from pair production of neutral components of the dark doublet via vector boson fusion are provided in [70]. Fig. 7 displays the total cross section for the pair production of \(\eta_{R}^{0}\) and \(\eta_{I}^{0}\) via the Drell-Yan mechanism at a proton-proton collider for \(\sqrt{s}=14\) TeV (red line) and \(\sqrt{s}=100\) TeV (blue line) as a function of the CP-odd dark scalar mass \(m_{D_{A_{1}}}\), taken to vary in the range from \(500\) GeV up to \(1.0\) TeV. Here the mass of the CP-even dark scalar, \(m_{D_{1}}\), has been set equal to \(1\) TeV.

Figure 7: Total cross section for the CP-even and CP-odd scalar dark-matter production via the Drell-Yan mechanism at a proton-proton collider for \(\sqrt{s}=14\) TeV (red line) and \(\sqrt{s}=100\) TeV (blue line) as a function of the CP-odd dark scalar mass \(m_{D_{A_{1}}}\).

As shown in Fig. 7, the total cross section for the CP-even and CP-odd dark scalar production at the LHC via the Drell-Yan mechanism reaches values of the order of \(10^{-5}\) pb for \(m_{D_{A_{1}}}\) equal to \(0.5\) TeV, and decreases as \(m_{D_{A_{1}}}\) takes larger values. The total Drell-Yan production cross section increases by two orders of magnitude when one considers a 100 TeV proton-proton collider. In this case, the cross section reaches a value as high as \(3\times 10^{-3}\) pb when the CP-odd dark scalar mass is set to \(0.5\) TeV. For the case of the fermionic DM candidate, the pair production of the charged components of the dark doublet through the Drell-Yan mechanism and their subsequent decays can give rise to a signature with opposite-sign dileptons plus missing energy in the final state. The observation of an excess of events in this opposite-sign dilepton final-state configuration with respect to the SM background could provide support for this model at the LHC. A detailed study of collider signatures lies beyond the scope of the present work and will be taken up elsewhere.

## IV Summary and conclusions

In this work we have proposed a minimal model where neutrino mass generation arises at the one-loop level within the linear seesaw mechanism. Lepton number violation is seeded by a dark sector involving three Majorana fermions and two types of dark scalars, one isodoublet and the other isosinglet under the \(\mathrm{SU(3)_{c}\otimes SU(2)_{L}\otimes U(1)_{Y}}\) symmetry. The small neutrino masses arise from the spontaneous lepton number violation by a small Higgs triplet vacuum expectation value and involve the interplay of the one-loop dark sector seed with the linear seesaw mechanism. Our multiplet choice prevents the appearance of unwanted tree-level mass terms that could contribute to neutrino masses, making them genuinely calculable. We have studied the predicted rates for charged lepton flavour violation, Figs. 2 and 3, and briefly discussed the prospects for testing our framework with the results of current and future lepton flavour violation searches. We have commented also on the WIMP dark-matter phenomenology of our model, Figs. 4, 5 and 6, focusing on the case in which the lightest dark particle (LDP) is the lightest neutral scalar arising from the dark sector. Finally, we have made some comments on possible collider implications, Fig. 7. However, these would require dedicated scrutiny outside the scope of this paper.

**Note added** As this work was being completed we came to know that A. Batra, H. Camara and F. R. Joaquim have come up with an alternative realization of the same idea [71]. Prompted by a discussion with them, we noticed and corrected an inconsistency in the first version of our paper. We stress that, thanks to their different multiplet structures, the phenomenology of the two proposals is quite different.

## Acknowledgments

AECH has received funding from Chilean grants ANID-Chile FONDECYT 1210378, ANID PIA/APOYO AFB220004, ANID Millennium Program code ICN2019_044. V.K.N. is supported by ANID-Chile Fondecyt Postdoctoral grant 3220005. The work of J.V. is supported by the Spanish grants PID2020-113775GB-I00 (AEI/10.13039/501100011033) and Prometeo CIPROM/2021/054 (Generalitat Valenciana).
2304.03250
Exceptional hypersurfaces of transfer matrices of finite-range lattice models and their consequences on quantum transport properties
We investigate the emergence and corresponding nature of exceptional points located on exceptional hyper-surfaces of non-hermitian transfer matrices for finite-range one-dimensional lattice models. We unravel the non-trivial role of these exceptional points in determining the system size scaling of electrical conductance in non-equilibrium steady state. We observe that the band edges of the system always correspond to the transfer matrix exceptional points. Interestingly, albeit the lower band edge always occurs at wave-vector $k=0$, the upper band edge may or may not correspond to $k=\pi$. Nonetheless, in all the cases, the system exhibits universal subdiffusive transport for conductance at every band edge with scaling $N^{-b}$ with scaling exponent $b= 2$. However, for cases when the upper band edge is not located at $k=\pi$, the conductance features interesting oscillations with overall $N^{-2}$ scaling. Our work further reveals that this setup is uniquely suited to systematically generate higher order transfer matrix exceptional points at upper band edge when one considers finite range hoppings beyond nearest neighbour. Additional exceptional points other than those at band edges are shown to occur, although interestingly, these do not give rise to anomalous transport.
Madhumita Saha, Manas Kulkarni, Bijay Kumar Agarwalla
2023-04-06T17:34:38Z
http://arxiv.org/abs/2304.03250v2
Exceptional hyper-surfaces of transfer matrices of finite-range lattice models and their consequences on quantum transport properties

###### Abstract

We investigate the emergence and corresponding nature of exceptional points located on exceptional hyper-surfaces of non-hermitian transfer matrices for finite-range one-dimensional lattice models. We unravel the non-trivial role of these exceptional points in determining the system size scaling of electrical conductance in non-equilibrium steady state. We observe that the band edges of the system always correspond to the transfer matrix exceptional points. Interestingly, albeit the lower band edge always occurs at wave-vector \(k=0\), the upper band edge may or may not correspond to \(k=\pi\). Nonetheless, in all the cases, the system exhibits universal subdiffusive transport for conductance at every band edge with scaling \(N^{-b}\) with scaling exponent \(b=2\). However, for cases when the upper band edge is not located at \(k=\pi\), the conductance features interesting oscillations with overall \(N^{-2}\) scaling. Our work further reveals that this setup is uniquely suited to systematically generate higher order transfer matrix exceptional points at the upper band edge when one considers finite-range hoppings beyond nearest neighbour. Additional exceptional points other than those at band edges are shown to occur, although interestingly, these do not give rise to anomalous transport.

## I Introduction

Understanding the emergence of exceptional points and exceptional surfaces in non-hermitian Hamiltonian systems is an active and rapidly growing area of research [1; 2; 3; 4; 5; 6; 7; 8; 9]. Typically, these exceptional points are extremely sensitive to external perturbations and are therefore useful for potential applications in cavity quantum electrodynamics, spectral filtering, sensing, lasing, and thermal imaging [4; 7; 8; 9]. Moreover, exceptional hyper-surfaces, i.e., hyper-surfaces hosting exceptional points, are more beneficial than a discrete exceptional point. This is because, in realistic setups, tuning and stabilizing a system to a discrete exceptional point, especially in a large parameter space, is highly challenging and often impossible [1; 2; 3; 4]. Similar to non-hermitian Hamiltonians, for one-dimensional (1D) nearest neighbour tight-binding systems the underlying non-hermitian transfer matrix of the lattice is known to have exceptional points at the band edges [10]. However, exceptional hyper-surfaces (i.e., higher-dimensional ones) of transfer matrices for such lattice systems have not been reported earlier. Interestingly, beyond the nearest neighbor hopping model, due to the increased dimensionality of the transfer matrix, there is a strong possibility of the emergence of exceptional hyper-surfaces and thereby higher-order exceptional points. One of the main aims of this work is to unravel the nature of transfer matrices for the finite-range hopping model (involving \(n\) neighbors, where \(n\) does not scale with system size \(N\)). Understanding non-equilibrium steady-state transport properties in low-dimensional lattice systems is another important area of research [11; 12; 13; 14; 15]. This is crucial both from a fundamental perspective as well as from a technological point of view. A deep understanding of transport behaviour is paramount to realise efficient quantum devices [16; 17; 18; 19].
The study of quantum transport in low-dimensional systems is interesting as it often shows deviations from the normal diffusive behaviour, or standard Ohm's law/Fourier's law, which is one of the reasons why low-dimensional systems have been of fundamental interest [11; 12; 13]. Representative examples of low-dimensional systems include 1D and 2D systems with random and quasi-periodic disorder [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. For the random disorder case, Anderson localization occurs in 1D and 2D, which reflects the exponential localization of all the single-particle states [20; 21; 22]. As a consequence of exponentially localized single-particle states, the transport decays exponentially as a function of system size \(N\). Recall that, in the absence of disorder, transport is independent of system size (ballistic transport) [11; 15]. Therefore, both clean and disordered systems show deviations from normal diffusive behaviour akin to Ohm's law. Quasi-periodic disordered systems in low dimensions are known to show unusual and rich transport properties. The study of transport properties in low-dimensional quasi-periodic systems gained a lot of attention because of very successful experimental realizations in various platforms [32; 33; 34; 35; 36; 37; 38; 39]. These systems often show anomalous transport in different cases [28; 29; 31; 40; 41; 42; 43]. Note that in anomalous transport the conductance scales as \(\mathcal{G}\sim N^{-b}\) with \(0<b\neq 1\); \(b=1\) is the limit of diffusive transport, \(b>1\) refers to subdiffusive transport, whereas \(0<b<1\) refers to superdiffusive transport. Though such quasi-periodic disordered systems often show anomalous transport, its microscopic origin is far from fully understood. Interestingly, in a recent work, it was shown that for a 1D nearest neighbor tight-binding fermionic lattice with periodic on-site potential, the conductance displays subdiffusive scaling at the band edges of the system. The origin of this effect was shown to be connected to the presence of exceptional points corresponding to the non-hermitian transfer matrices of the lattice [10]. Moreover, such a subdiffusive scaling at the band edges was also observed in long-range lattice systems with power-law hopping (involving \(n\) neighbors, where \(n\) scales with system size \(N\)) [44], where the transfer matrix approach is not well suited. The understanding behind this effect for long-range systems is still lacking. To bridge the gap between nearest neighbor hopping systems and long-range hopping systems, an investigation of the transport properties of the finite-range hopping model and their connection with the underlying transfer matrices is crucial. In this work, we provide an in-depth understanding of the emergence of exceptional hyper-surfaces of transfer matrices in the finite-range hopping model and their impact on non-equilibrium steady-state (NESS) quantum transport properties. Our main findings can be summarized as follows:

1. We establish the non-trivial connection between the NESS conductance and the underlying \(2n\times 2n\) dimensional non-hermitian transfer matrix for finite-range lattice models.

2. We always find the appearance of exceptional points of various orders at the band edges of the lattice, which crucially depends on \(n\). It is important to note that by "points", we also mean "hyper-surfaces" in the more general sense.
We unravel the non-trivial role played by these exceptional points in determining the universal system size scaling of the NESS conductance with scaling exponent \(b=2\). We further demonstrate that the value of the scaling exponent is remarkably robust to the order of the exceptional point.

3. We find that for the finite-range hopping model (\(n>1\)) the location of the upper band edge does not always correspond to \(k=\pi\). In such cases, we observe interesting oscillation features in the conductance with overall \(N^{-2}\) scaling.

The plan of the paper is as follows: In Section II, we provide the lattice Hamiltonian and dispersion relation for the finite-range hopping model. In Section III, we discuss the open-system transport properties. First, we discuss the non-equilibrium steady-state (NESS) conductance in detail (Section III.1). To calculate the conductance, it is important to compute the Green's function, which can be obtained using the transfer matrix approach. Thus, in Section III.2 we discuss the connection between transfer matrices and the NESS conductance, which involves the exponents of the transfer matrices. In Section IV, we discuss the details of the eigenvalues (Section IV.1), eigenvectors (Section IV.2) and exponents (Section IV.3) of the transfer matrices. In Section V, we first provide the results for the transfer matrix properties of the finite-range hopping model with \(n=2\), followed by the scaling of the NESS conductance (Section V.1). Next, in Section V.2 we provide generalizations of some results beyond \(n=2\). Then, in Section V.3 we discuss the robustness of some results. Finally, in Section VI, we conclude and discuss future directions. In Appendix A we provide detailed calculations establishing the connection between the transfer matrix and the conductance. In Appendix B, we show the nature of the transfer matrix eigenvalues analytically for \(n=2\).

## II Lattice Hamiltonian and dispersion relation

In this section, we introduce the tight-binding Hamiltonian for the finite-range hopping model and provide the details of the dispersion relation. The Hamiltonian for our set-up is given as, \[\hat{H}=-\sum_{i=1}^{N}\sum_{m=1}^{n}t_{m}\hat{c}_{i}^{\dagger}\hat{c}_{i+m}+\text{h.c.} \tag{1}\] Here \(\hat{c}_{i}^{\dagger}(\hat{c}_{i})\) is the fermionic creation (annihilation) operator and \(t_{m}\) is the hopping strength to the \(m\)-th neighbor. We consider a lattice of size \(N\), with \(n\) being the number of neighbors to the left and to the right of a particular lattice site (where present). In the thermodynamic limit, the dispersion relation for this set-up is given as, \[\omega(k)=-2\sum_{m=1}^{n}t_{m}\cos mk. \tag{2}\] We immediately notice from Eq. 2 that the minimum value of \(\omega(k)\), which corresponds to the lower band edge, always occurs at wave-vector value \(k=0\). The value of the energy at the lower band edge (\(k=0\)) is, \[\omega(k=0)=-2\sum_{m=1}^{n}t_{m}. \tag{3}\] Interestingly, the maximum value of \(\omega(k)\), which corresponds to the upper band edge, may or may not occur at \(k=\pi\), and crucially depends on the range of hopping \(n\) and the strengths of hopping \(t_{m}\). We now find a condition which decides whether or not \(k=\pi\) is an upper band edge. For that purpose, we use Eq. 2 and demand (negative second derivative implying a maximum at \(k=\pi\)), \[\frac{d^{2}\omega(k)}{dk^{2}}\Big{|}_{k=\pi}=2\sum_{m=1}^{n}(-1)^{m}m^{2}t_{m}<0. \tag{4}\]
Thus, from Eq. 4 we immediately obtain the condition for getting the upper band edge at \(k=\pi\), which is given as, \[\sum_{m\in\text{even}}m^{2}t_{m}<\sum_{m\in\text{odd}}m^{2}t_{m}. \tag{5}\] Note that the sum in the above Eq. 5 runs over the hopping range only. With this condition (Eq. 5) being satisfied and using Eq. 2, we see that the value of the upper band edge energy is given as, \[\omega(k=\pi)=2\sum_{m=1}^{n}(-1)^{m+1}t_{m}. \tag{6}\] Now, for the condition, \[\sum_{m\in\mathrm{even}}m^{2}t_{m}>\sum_{m\in\mathrm{odd}}m^{2}t_{m}, \tag{7}\] the upper band edge will occur at some different \(k=k_{1}\neq\pi\). An interesting situation appears when, \[\sum_{m\in\mathrm{even}}m^{2}t_{m}=\sum_{m\in\mathrm{odd}}m^{2}t_{m}, \tag{8}\] in which case the second derivative in Eq. 4 vanishes. Hence one needs to look at the higher order derivatives to conclude about the upper band edge. Now, as the third derivative at \(k=\pi\) is always zero, we look at the fourth order derivative, which is given as, \[\frac{d^{4}\omega(k)}{dk^{4}}\Big{|}_{k=\pi}=2\sum_{m=1}^{n}(-1)^{m+1}m^{4}t_{m}. \tag{9}\] Interestingly, for an even number of hoppings (\(n\) even), along with the condition in Eq. 8, we find that \[\frac{d^{4}\omega(k)}{dk^{4}}\Big{|}_{k=\pi}<0, \tag{10}\] which implies a local maximum at \(k=\pi\). Thus, in such a scenario the \(\omega(k)\) value given in Eq. 6 is the upper band edge. However, for an odd number of hoppings (\(n\) odd), the condition in Eq. 8 implies \[\frac{d^{4}\omega(k)}{dk^{4}}\Big{|}_{k=\pi}>0, \tag{11}\] ensuring a local minimum. But, since \(k=0\) always corresponds to the global minimum (lower band edge), in this situation \(k=\pi\) will not correspond to any band edge, and therefore the upper band edge will occur at some different \(k=k_{1}\) where \(k_{1}\neq\pi\). In summary, the above analysis points out that for the finite-range hopping model, \(k=0\) always corresponds to the lower band edge. But \(k=\pi\) may or may not correspond to the upper band edge, depending crucially on the conditions given in Eqs. 5, 7 and 8. In what follows, we will see interesting consequences of this fact in the NESS transport properties. It is important to note that for the nearest neighbor hopping model, i.e., \(n=1\), the upper band edge is always at \(k=\pi\), which is also clear from Eqs. 2 and 4.
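These band-edge conditions are simple to check numerically from Eq. 2. A minimal sketch for \(n=2\) follows, scanning the three representative cases \(t_{2}<t_{1}/4\), \(t_{2}=t_{1}/4\) and \(t_{2}>t_{1}/4\) (the hopping values mirror those used later in Figs. 2 and 3):

```python
import numpy as np

def omega(k, t):
    # Dispersion relation of Eq. (2); t = [t_1, ..., t_n]
    return -2*sum(tm*np.cos((m + 1)*k) for m, tm in enumerate(t))

def band_edges(t, nk=200001):
    # Brute-force scan of the Brillouin zone for the minimum and maximum of omega(k)
    k = np.linspace(0.0, np.pi, nk)
    w = omega(k, t)
    return w.min(), w.max(), k[np.argmax(w)]

# For n = 2 with t1 = 1, the upper edge sits at k = pi iff 4*t2 < t1 (cf. Eq. (5))
for t2 in (0.15, 0.25, 0.4):
    lo, hi, k_up = band_edges([1.0, t2])
    print(f"t2 = {t2}: lower edge {lo:.4f} at k = 0, "
          f"upper edge {hi:.4f} at k = {k_up:.4f} (pi = {np.pi:.4f})")
```

For \(t_{2}=0.4\) the scan indeed locates the upper edge at \(k=\cos^{-1}(-t_{1}/4t_{2})\) rather than at \(k=\pi\), in line with the analysis above.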
## III Open quantum system transport properties

### Non-equilibrium steady state conductance

In this section, we are interested in computing the NESS conductance when the finite-range hopping lattice chain is connected to two fermionic baths at its two ends, i.e., at site 1 and site \(N\). The baths are modelled by an infinite number of fermionic modes, and the associated spectral functions are denoted by \(\mathcal{J}_{1}(\omega)\) and \(\mathcal{J}_{N}(\omega)\), respectively. At the initial time \(t=0\), both baths are kept at zero temperature (\(\beta=\infty\)) but at slightly different chemical potentials \(\mu\) and \(\mu-\delta\mu\), respectively. The finite lattice system can, however, be in any arbitrary initial state. Note that if the bandwidth of the baths is larger than the bandwidth of the system, the lattice system usually reaches a unique NESS in the long-time limit. In this study, we are interested in the linear response regime and the NESS conductance. Using the non-equilibrium Green's function (NEGF) formalism [45; 46; 47; 48; 49; 50], we can write down the NESS conductance as [50] \[\mathcal{G}(\mu)=\frac{1}{2\pi}\mathcal{J}_{1}(\mu)\mathcal{J}_{N}(\mu)|\mathbf{G}_{1N}(\mu)|^{2}. \tag{12}\] Here \(\mathbf{G}(\mu)\) is the \(N\times N\) retarded NEGF matrix and is given as \[\mathbf{G}(\mu)=\Big{[}\mu\,\mathbb{I}-\mathbf{H}-\mathbf{\Sigma}_{1}-\mathbf{\Sigma}_{N}\Big{]}^{-1}, \tag{13}\] where \(\mathbf{H}\) is the \(N\times N\) single-particle lattice Hamiltonian matrix corresponding to \(\hat{H}\) in Eq. 1, \(\mathbb{I}\) is an \(N\times N\) identity matrix, and \(\mathbf{\Sigma}_{1}\) and \(\mathbf{\Sigma}_{N}\) are the diagonal \(N\times N\) self-energy matrices for the left and right baths, with non-zero entries only at \(\left(\mathbf{\Sigma}_{1}\right)_{11}\) and \(\left(\mathbf{\Sigma}_{N}\right)_{NN}\). Thus, following Eq. 12, to infer the scaling property of the conductance \(\mathcal{G}(\mu)\) with system size \(N\) we need to investigate the system size scaling only of \(\mathbf{G}_{1N}(\mu)\), as the spectral functions, being properties of the baths, are independent of \(N\). Moreover, as the baths are attached at the two ends of the lattice, the scaling of \(\mathbf{G}_{1N}(\mu)\) with \(N\) is directly governed by the scaling of the bare part of the retarded Green's function \(\mathbf{g}_{1N}(\mu)\) [44], defined as \[\mathbf{g}(\mu)=\big{[}\mu\,\mathbb{I}-\mathbf{H}\big{]}^{-1}. \tag{14}\] In Section III.2, we focus on the calculation of \(\mathbf{g}(\mu)\) for the finite-range model by introducing the transfer matrix approach.

### Connection between retarded bare Green's function and transfer matrix

In this section, to facilitate further discussion, we first provide the details of the transfer matrix for the finite-range lattice model with hopping range \(n\) [51; 52]. To construct the transfer matrix, we write the discrete version of the time-independent Schrödinger equation \(\hat{H}|\psi\rangle=\omega|\psi\rangle\) as, \[\omega\psi_{\ell}=-t_{1}\psi_{\ell+1}-t_{1}\psi_{\ell-1}-t_{2}\psi_{\ell+2}-t_{2}\psi_{\ell-2}+\ldots-t_{n}\psi_{\ell+n}-t_{n}\psi_{\ell-n}, \tag{15}\] where \(\psi_{\ell}\) is the amplitude of the wave-function at the \(\ell\)-th site. We can rewrite Eq. 15 as, \[\psi_{\ell+n}=-\frac{\omega}{t_{n}}\psi_{\ell}-\frac{t_{1}}{t_{n}}\Big{[}\psi_{\ell-1}+\psi_{\ell+1}\Big{]}-\frac{t_{2}}{t_{n}}\Big{[}\psi_{\ell-2}+\psi_{\ell+2}\Big{]}-\ldots-\psi_{\ell-n}. \tag{16}\] Following Eq. 16, we can write how the amplitudes of the wave-function at the \((\ell+n)\), \((\ell+n-1)\), \(\ldots\), \((\ell-n+2)\), \((\ell-n+1)\)-th sites are connected with those at the \((\ell+n-1)\), \((\ell+n-2)\), \(\ldots\), \((\ell-n+1)\), \((\ell-n)\)-th sites via a \(2n\times 2n\) transfer matrix \(\mathbf{T}^{(\ell)}(\omega)\), given as, \[\begin{pmatrix}\psi_{\ell+n}\\ \psi_{\ell+n-1}\\ \vdots\\ \psi_{\ell-n+3}\\ \psi_{\ell-n+2}\\ \psi_{\ell-n+1}\end{pmatrix}=\begin{pmatrix}-\frac{t_{n-1}}{t_{n}}&-\frac{t_{n-2}}{t_{n}}&\ldots&-\frac{\omega}{t_{n}}&\ldots&-\frac{t_{n-2}}{t_{n}}&-\frac{t_{n-1}}{t_{n}}&-1\\ 1&0&0&\ldots&\ldots&0&0&0\\ 0&1&\ldots&0&\ldots&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\ldots&\ldots&\ldots&\ldots&0&0\\ 0&0&\ldots&\ldots&\ldots&\ldots&1&0\end{pmatrix}\begin{pmatrix}\psi_{\ell+n-1}\\ \psi_{\ell+n-2}\\ \vdots\\ \psi_{\ell-n+2}\\ \psi_{\ell-n+1}\\ \psi_{\ell-n}\end{pmatrix}=\mathbf{T}^{(\ell)}(\omega)\begin{pmatrix}\psi_{\ell+n-1}\\ \psi_{\ell+n-2}\\ \vdots\\ \psi_{\ell-n+2}\\ \psi_{\ell-n+1}\\ \psi_{\ell-n}\end{pmatrix}.
\tag{17}\] It is clear from the above equation that the transfer matrix of the lattice \(\mathbf{T}^{(\ell)}(\omega)\) connects the amplitudes of the single-particle wave-function between the \((\ell+n)\)-th and the \((\ell-n)\)-th sites, relating blocks of \(2n\) consecutive amplitudes at a time. As we are dealing with a clean system, the transfer matrix \(\mathbf{T}^{(\ell)}(\omega)\) is independent of the site \(\ell\). Thus, we write the transfer matrix as \(\mathbf{T}(\omega)\) instead of \(\mathbf{T}^{(\ell)}(\omega)\). By defining, \[a(|m|)=\omega/t_{n},\quad|m|=0,\] \[a(|m|)=t_{|m|}/t_{n},\quad|m|<n\;\&\;|m|\neq 0,\] \[a(|m|)=1,\quad|m|=n,\quad\text{and}\] \[a(|m|)=0,\quad|m|>n, \tag{18}\] we can write the transfer matrix compactly as \[\mathbf{T}(\omega)=\begin{pmatrix}-a(n-1)&-a(n-2)&\ldots&-a(0)&\ldots&-a(n-2)&-a(n-1)&-1\\ 1&0&\ldots&\ldots&\ldots&\ldots&0&0\\ 0&1&\ldots&\ldots&\ldots&\ldots&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\ldots&\ldots&\ldots&\ldots&0&0\\ 0&0&\ldots&\ldots&\ldots&\ldots&1&0\end{pmatrix}. \tag{19}\]
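The structure of Eq. 19 (a companion-like matrix with the \(a(|m|)\) coefficients in the first row and ones on the first subdiagonal) is easy to build programmatically. A minimal numpy sketch for arbitrary \(n\), with illustrative hopping values:

```python
import numpy as np

def transfer_matrix(omega_val, t):
    # 2n x 2n transfer matrix of Eq. (19), with a(|m|) defined in Eq. (18); t = [t_1, ..., t_n]
    n = len(t)
    def a(m):
        m = abs(m)
        if m == 0:
            return omega_val / t[-1]
        if m < n:
            return t[m - 1] / t[-1]
        return 1.0 if m == n else 0.0
    T = np.zeros((2*n, 2*n))
    T[0, :] = [-a(n - 1 - j) for j in range(2*n)]   # first row of Eq. (19)
    T[1:, :-1] = np.eye(2*n - 1)                    # ones on the first subdiagonal
    return T

T = transfer_matrix(0.5, [1.0, 0.4])                # n = 2 case, cf. Eq. (48) below
print(T)
print(np.isclose(np.linalg.det(T), 1.0))            # det T = 1: eigenvalues pair as (lambda, 1/lambda)
```

The unit determinant confirms the reciprocal pairing of eigenvalues exploited throughout Section IV.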
We now establish the connection between the bare Green's function defined in Eq. 14 and the transfer matrix of the lattice introduced in Eq. 19. We rescale the single-particle Hamiltonian \(\mathbf{H}\) by \(t_{n}\), where we recall that \(t_{n}\) corresponds to the hopping strength of a particular site to its furthest neighbour, as allowed by the model. With this rescaling, we can write down \(\mathbf{g}\) as \[\mathbf{g}(\mu)=\mathbf{M}(\mu)^{-1}/t_{n}, \tag{20}\] where \[\mathbf{M}(\mu)=\frac{1}{t_{n}}\Big{[}\mu\mathbb{I}-\mathbf{H}\Big{]} \tag{21}\] is a symmetric banded Toeplitz matrix, as is \(\mathbf{H}\) [53; 54; 55]. More explicitly, the \((i,j)\)-th matrix element of \(\mathbf{M}(\mu)\) is given as \[\langle i|\mathbf{M}(\mu)|j\rangle=a(|m|) \tag{22}\] with \(m=j-i\) and \(a(0)=\mu/t_{n}\). As shown in Appendix A, it turns out that one can write the matrix elements of \(\mathbf{M}(\mu)^{-1}\) given in Eq. 21 in terms of the transfer matrix \(\mathbf{T}(\mu)\) given in Eq. 19 as [53], \[\langle i|\mathbf{M}(\mu)^{-1}|j\rangle=\begin{cases}\sum\limits_{m=1}^{n}\left\langle n|\mathbf{T}(\mu)^{-i}|n+m\right\rangle\langle m|\mathbf{M}(\mu)^{-1}|j\rangle,&\text{if}\;\;j>i\\ \sum\limits_{m=1}^{n}\left\langle n|\mathbf{T}(\mu)^{-i}|n+m\right\rangle\langle m|\mathbf{M}(\mu)^{-1}|j\rangle-\left\langle n|\mathbf{T}(\mu)^{-(i-j+1)}|1\right\rangle,&\text{if}\;j\leq i\end{cases} \tag{23}\] with \(i,j=1,2,\cdots N\). Any matrix element of \(\mathbf{M}(\mu)^{-1}\) in Eq. 23 involves the information of \(\langle m|\mathbf{M}(\mu)^{-1}|j\rangle\) with \(m=1,2\ldots n\). To determine these unknown matrix elements we use the following relation (see Appendix A for the details) \[\sum\limits_{m=1}^{n}\langle s+n|\mathbf{T}(\mu)^{-N}|n+m\rangle\left\langle m|\mathbf{M}(\mu)^{-1}|j\right\rangle-\left\langle s+n|\mathbf{T}(\mu)^{-(N-j+1)}|1\right\rangle=0, \tag{24}\] where \(s=1,2,3\ldots n\). Therefore, using Eqs. 23 and 24, one can determine all the matrix elements of the bare Green's function \(\mathbf{g}(\mu)\). Note that for the conductance calculation we only need the component \(\mathbf{g}_{1N}(\mu)\), which can be directly calculated using Eq. 24. Furthermore, it is worth noting that Eq. 24 involves different powers of the transfer matrix \(\mathbf{T}(\mu)\), which can be calculated by knowing the eigenspectra of the matrix. In Section IV, we provide the relevant details on the eigenvalues and eigenvectors of the transfer matrices considered here.

## IV Transfer matrix properties

### Transfer matrix eigenvalues and their relation with the lattice dispersion

In this section, we discuss the eigenvalues of the non-hermitian transfer matrix \(\mathbf{T}(\mu)\) given in Eq. 19 and their connection with the lattice dispersion relation. The characteristic equation for \(\mathbf{T}(\mu)\) turns out to be, \[\sum_{r=0}^{2n}a(|n-r|)\lambda^{r}=0, \tag{25}\] where we recall that \(a(|m|)\) is given in Eq. 18 and \(\lambda\) denotes the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\). We substitute \(r-n=r^{\prime}\) and rewrite Eq. 25 as, \[\sum_{r^{\prime}=-n}^{n}a(|r^{\prime}|)\lambda^{n+r^{\prime}}=0. \tag{26}\] We write the eigenvalues in the form \[\lambda=e^{i\theta},\quad\text{where}\;\;\theta\in\mathbb{C}. \tag{27}\] The characteristic equation in Eq. 26 then takes the form, \[e^{in\theta}F(\theta)=0, \tag{28}\] where we introduce, \[F(\theta)=\left[2\sum_{r^{\prime}=1}^{n-1}a(r^{\prime})\cos r^{\prime}\theta+2\cos n\theta+a(0)\right]. \tag{29}\] Since \(e^{in\theta}\neq 0\) in Eq. 28, the eigenvalue spectrum of the transfer matrix \(\mathbf{T}(\mu)\) is obtained from the solution of \[F(\theta)=0. \tag{30}\] Note that the function \(F(\theta)\) defined in Eq. 29 is an even function of \(\theta\). Therefore the eigenvalues of the transfer matrix (Eq.
27) always appear in the form \(e^{i\theta}\) and \(e^{-i\theta}\). Interestingly, if we associate the variable \(\theta\) in Eq. 30 with the lattice wave-vector \(k\) (\(-\pi\leq k\leq\pi\)), then Eq. 30 reads \[F(k)=0, \tag{31}\] which remarkably is the dispersion relation of the finite-range hopping model in Eq. 2, with \(\omega(k)\) replaced by \(\mu\). For example, for the nearest neighbour hopping model, i.e., \(n=1\), Eq. 31 reads, \[F(k)=2\cos k+\frac{\mu}{t_{1}}=0,\quad\text{for}\;\;n=1, \tag{32}\] which gives \[\mu=-2t_{1}\cos k,\qquad-\pi\leq k\leq\pi\ \ \text{for}\ \ n=1, \tag{33}\] and therefore matches the dispersion relation given in Eq. 2. The detailed discussion of the finite-range hopping model with \(n=2\) is given in Section V.

### Eigenvectors of the transfer matrix \(\mathbf{T}(\mu)\)

In this section, we provide details about the left and right eigenvectors of the non-hermitian transfer matrix \(\mathbf{T}(\mu)\). Given the \(2n\times 2n\) transfer matrix \(\mathbf{T}(\mu)\) with determinant 1, the eigenvalues are \(\lambda_{k}\) and \(\lambda_{k}^{-1}\) with \(k=1,2,\ldots n\). Given an eigenvalue \(\lambda_{k}\), the corresponding left and right eigenvectors of \(\mathbf{T}(\mu)\) satisfy the following equations, \[\langle\mathbf{\phi}(\lambda_{k})|\mathbf{T}(\mu)=\lambda_{k}\langle\mathbf{\phi}(\lambda_{k})|,\] \[\mathbf{T}(\mu)\,|\mathbf{\psi}(\lambda_{k})\rangle=\lambda_{k}\,|\mathbf{\psi}(\lambda_{k})\rangle. \tag{34}\] More explicitly, the left and the right eigenvectors of the transfer matrix corresponding to a given eigenvalue \(\lambda_{k}\) can be written in vector form as, \[|\mathbf{\phi}(\lambda_{k})\rangle=\begin{pmatrix}\phi_{1}(\lambda_{k})\\ \phi_{2}(\lambda_{k})\\ \vdots\\ \phi_{2n}(\lambda_{k})\end{pmatrix},\quad\text{and}\quad|\mathbf{\psi}(\lambda_{k})\rangle=\begin{pmatrix}\psi_{1}(\lambda_{k})\\ \psi_{2}(\lambda_{k})\\ \vdots\\ \psi_{2n}(\lambda_{k})\end{pmatrix}. \tag{35}\] Given the transfer matrix \(\mathbf{T}(\mu)\) in Eq. 19, the components of the left eigenvector can be obtained as, \[\phi_{j}(\lambda_{k})=\sum_{r=0}^{2n-j}a(|n-r|)\,\lambda_{k}^{r+j},\ \ \ \ j=1,2,\ldots 2n. \tag{36}\] Interestingly, Eq. 36 with \(j=0\) reproduces the characteristic polynomial of \(\mathbf{T}(\mu)\) and therefore matches Eq. 25. The similarity transformation \(\mathbf{S}\), which diagonalizes the transfer matrix \(\mathbf{T}(\mu)\) to its diagonal form \[\mathbf{D}=\text{diag}[\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\ldots\lambda_{1}^{-1},\lambda_{2}^{-1},\ldots,\lambda_{n}^{-1}], \tag{37}\] can be written using the left eigenvectors as, \[\mathbf{S}=\begin{pmatrix}\phi_{1}(\lambda_{1})&\phi_{2}(\lambda_{1})&\ldots&\phi_{2n}(\lambda_{1})\\ \phi_{1}(\lambda_{2})&\phi_{2}(\lambda_{2})&\ldots&\phi_{2n}(\lambda_{2})\\ \vdots&\vdots&\vdots&\vdots\\ \phi_{1}(\lambda_{n})&\phi_{2}(\lambda_{n})&\ldots&\phi_{2n}(\lambda_{n})\\ \phi_{1}(\lambda_{1}^{-1})&\phi_{2}(\lambda_{1}^{-1})&\ldots&\phi_{2n}(\lambda_{1}^{-1})\\ \phi_{1}(\lambda_{2}^{-1})&\phi_{2}(\lambda_{2}^{-1})&\ldots&\phi_{2n}(\lambda_{2}^{-1})\\ \vdots&\vdots&\vdots&\vdots\\ \phi_{1}(\lambda_{n}^{-1})&\phi_{2}(\lambda_{n}^{-1})&\ldots&\phi_{2n}(\lambda_{n}^{-1})\end{pmatrix}. \tag{38}\] Using the similarity transformation, we can write, \[\mathbf{S}\,\mathbf{T}(\mu)\,\mathbf{S}^{-1}=\mathbf{D}. \tag{39}\]
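As a numerical sanity check of Eqs. 36-39, the sketch below builds \(\mathbf{S}\) from the left-eigenvector components of Eq. 36 for the \(n=2\) transfer matrix (cf. Eq. 48 below) at a generic point inside the band, verifies Eq. 39, and uses the decomposition to evaluate the inverse powers of \(\mathbf{T}\) entering Eqs. 23-24; the parameter values are illustrative.

```python
import numpy as np

t1, t2, mu = 1.0, 0.4, 0.5                   # generic point inside the band (illustrative)
a = {0: mu/t2, 1: t1/t2, 2: 1.0}             # a(|m|) of Eq. (18) for n = 2
T = np.array([[-a[1], -a[0], -a[1], -1.0],   # n = 2 transfer matrix, cf. Eq. (48)
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

def phi(lam, j, n=2):
    # Left-eigenvector components of Eq. (36), j = 1, ..., 2n
    return sum(a[abs(n - r)] * lam**(r + j) for r in range(2*n - j + 1))

lams = np.linalg.eigvals(T)
S = np.array([[phi(lam, j) for j in range(1, 5)] for lam in lams])
print(np.allclose(S @ T @ np.linalg.inv(S), np.diag(lams)))   # Eq. (39), valid away from EPs

# Inverse powers of T, as needed in Eqs. (23)-(24): T^{-N} = S^{-1} D^{-N} S
N = 20
T_mN = np.linalg.inv(S) @ np.diag(lams**(-N)) @ S
print(np.allclose(T_mN, np.linalg.matrix_power(np.linalg.inv(T), N)))
```

At an exceptional point \(\mathbf{S}\) becomes singular and this route fails, which is precisely why the Jordan-form construction of the next subsection is needed there.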
Here, \(\mathbf{S}^{-1}\) contains all the components of the right eigenvectors as, \[\mathbf{S}^{-1}=\begin{pmatrix}\psi_{1}(\lambda_{1})&\ldots\psi_{1}(\lambda_{n})&\psi_{1}(\lambda_{1}^{-1})&\ldots\psi_{1}(\lambda_{n}^{-1})\\ \psi_{2}(\lambda_{1})&\ldots\psi_{2}(\lambda_{n})&\psi_{2}(\lambda_{1}^{-1})&\ldots\psi_{2}(\lambda_{n}^{-1})\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ \psi_{2n}(\lambda_{1})&\ldots\psi_{2n}(\lambda_{n})&\psi_{2n}(\lambda_{1}^{-1})&\ldots\psi_{2n}(\lambda_{n}^{-1})\end{pmatrix}. \tag{40}\]

### Matrix elements of the exponents of transfer matrix \(\mathbf{T}(\mu)\)

In this section, we compute the exponents of the transfer matrix that are required to obtain the conductance. If the transfer matrix \(\mathbf{T}(\mu)\) is diagonalizable, then using Eq. 39 we can write the \(m\)-th power of \(\mathbf{T}(\mu)\) as, \[\mathbf{T}^{m}(\mu)=\mathbf{S}^{-1}\,\mathbf{D}^{m}\,\mathbf{S}. \tag{41}\] Thus, the matrix elements of the \(m\)-th power of \(\mathbf{T}(\mu)\) are given by, \[\langle s|\mathbf{T}^{m}(\mu)|j\rangle=\sum_{k=1}^{2n}\psi_{s}(\lambda_{k})\,\phi_{j}(\lambda_{k})\,\lambda_{k}^{m}=\sum_{k=1}^{n}\Big{[}\psi_{s}(\lambda_{k})\,\phi_{j}(\lambda_{k})\,\lambda_{k}^{m}+\psi_{s}(\lambda_{k}^{-1})\,\phi_{j}(\lambda_{k}^{-1})\,\lambda_{k}^{-m}\Big{]}. \tag{42}\] In cases when the transfer matrix \(\mathbf{T}(\mu)\) is no longer diagonalizable, one can bring it to a Jordan normal form. One such situation arises when at least two eigenvalues are the same. Generally, this does not necessarily imply the coalescing of two eigenvectors. However, in the case of the transfer matrix \(\mathbf{T}(\mu)\), its analytical structure (Eq. 19) allows one to recast the eigenvectors in the form of Eq. 36. It is interesting to note that the coalescing of eigenvalues in Eq. 36 then necessarily implies the coalescing of eigenvectors. If \(\mathbf{R}\) is any similarity transformation that converts \(\mathbf{T}(\mu)\) to a \(2n\times 2n\) Jordan normal form \(\mathbf{J}\), then \[\mathbf{R}\,\mathbf{T}(\mu)\,\mathbf{R}^{-1}=\mathbf{J}. \tag{43}\] As a result, \(\mathbf{T}(\mu)=\mathbf{R}^{-1}\,\mathbf{J}\,\mathbf{R}\). Thus, in this case, we can calculate the powers as, \[\mathbf{T}^{m}(\mu)=\mathbf{R}^{-1}\,\mathbf{J}^{m}\,\mathbf{R}. \tag{44}\] Up to now, the description has been completely general for the finite-range hopping model. In the results section, we will describe specific examples in detail and look at the connection between the non-hermitian properties of the transfer matrix \(\mathbf{T}(\mu)\) and the open-system conductance.

## V Results

In this section, we present our results for the finite-range lattice model with range of hopping \(n=2\) and unravel the important novel role played by the eigenspectra of the transfer matrix, in comparison to the nearest neighbour case, i.e., \(n=1\). Before proceeding further, we would like to list certain important and pertinent questions:

1. Do the band edges for the finite-range hopping model correspond to the exceptional points of the transfer matrix? If yes, what is the consequence in terms of NESS transport?

2. As discussed in Sec. II, for the finite-range hopping model with \(n>1\), the upper band edge may or may not correspond to \(k=\pi\). What is the corresponding signature, if any, in NESS transport?

3. With increasing hopping range, the dimension of the transfer matrix also increases. Do these transfer matrices support exceptional hyper-surfaces with higher-order exceptional points, and if yes, what are the consequences in NESS transport?
To answer these questions, we now discuss a concrete example of the finite-range hopping model with \(n=2\) (next-nearest-neighbour hopping), without loss of generality, and comment on the case of general \(n\).

### An example of finite-range hopping model with \(n=2\)

_Dispersion and band edges:_ The dispersion relation for the finite-range hopping model is given in Eq. 2. For \(n=2\), with nearest neighbour hopping \(t_{1}\) and next-nearest neighbour hopping \(t_{2}\), we get, \[\omega(k)=-2t_{1}\cos k-2t_{2}\cos 2k. \tag{45}\] Recall that the lower band edge is always at \(k=0\), and the corresponding energy is given by \[\omega(k\!=\!0)=-2t_{1}-2t_{2},\quad\text{lower band edge.} \tag{46}\] In contrast, whether \(k=\pi\) is the upper band edge or not depends on a condition relating the two hoppings, as discussed for the general case in Sec. II. The energy of the upper band edge for the three different scenarios mentioned in Eq. 5, Eq. 7 and Eq. 8 is given by, \[\omega(k)=\begin{cases}2t_{1}-2t_{2},&\text{if }\ t_{2}<t_{1}/4\ \ \text{with }k=\pi\\ 2t_{1}-2t_{2},&\text{if }t_{2}=t_{1}/4\ \ \text{with }k=\pi\\ \frac{t_{1}^{2}}{4t_{2}}+2t_{2},&\text{if }t_{2}>t_{1}/4\ \ \text{with }k=\cos^{-1}[-\frac{t_{1}}{4t_{2}}].\end{cases} \tag{47}\] Next, we discuss in detail the nature of the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\).

_Detailed description of the nature of the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\):_ The transfer matrix \(\mathbf{T}(\mu)\) for \(n=2\) is a \(4\times 4\) matrix and is given by (using Eq. 19), \[\mathbf{T}(\mu)=\begin{pmatrix}-\frac{t_{1}}{t_{2}}&-\frac{\mu}{t_{2}}&-\frac{t_{1}}{t_{2}}&-1\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\end{pmatrix}. \tag{48}\] We now unravel the properties of this transfer matrix. In Fig. 1, we first construct a phase diagram in the \(\mu-t_{2}\) plane, setting \(t_{1}=1\) without loss of generality.

Figure 1: (Color online). The figure represents a schematic phase diagram in the \(\mu-t_{2}\) plane. We fix \(t_{1}=1\) without loss of generality. The phase diagram is constructed using the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\) for \(n=2\). We identify five different regimes, denoted by (I) to (V) (orange, yellow, gray, pink, and cyan), where each regime is characterized by the nature of its complex eigenvalues. This is further elaborated in Table 1. The lines between the regimes, denoted by A-E (blue, red, green, purple, and brown), represent the exceptional lines. In other words, at any given point on any of these lines at least two of the four eigenvalues and eigenvectors coalesce. Remarkably, a special point emerges in the phase plane as a result of the intersection of two exceptional lines and occurs at \(t_{2}=t_{1}/4\). This point is denoted by \(\Gamma_{e}\) (black filled circle), and all four eigenvalues and eigenvectors evaluated at this point coalesce, indicating a fourth-order exceptional point. The blue line (A) corresponds to the lower band edge as given in Eq. 3. Similarly, the red line (B), the fourth-order exceptional point \(\Gamma_{e}\) and the green line (C) form the upper band edge given in Eq. 47. The other two lines, i.e., the purple line (D) and the brown line (E), albeit being exceptional, interestingly do not correspond to the band edges. The NESS conductance inside the five regimes [(I)-(V)], on the five lines (A to E) and at the higher-order exceptional point (\(\Gamma_{e}\)) is discussed in Tables 1 and 2.
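The statements encoded in Fig. 1 can be verified directly from the characteristic polynomial of Eq. 48. A minimal sketch at the special point \(t_{2}=t_{1}/4\) and at the lower band edge follows (np.poly returns the characteristic-polynomial coefficients):

```python
import numpy as np

def T_n2(mu, t1, t2):
    # The 4x4 transfer matrix of Eq. (48)
    return np.array([[-t1/t2, -mu/t2, -t1/t2, -1.0],
                     [1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 1, 0]])

t1, t2 = 1.0, 0.25                     # the point Gamma_e sits at t2 = t1/4
T = T_n2(2*t1 - 2*t2, t1, t2)          # upper band edge, Eq. (47)
print(np.poly(T))                      # [1, 4, 6, 4, 1], i.e. (lambda + 1)^4: a 4th-order EP
print(np.linalg.matrix_rank(T + np.eye(4)))   # rank 3 => geometric multiplicity 4 - 3 = 1

T = T_n2(-2*t1 - 2*t2, t1, t2)         # lower band edge, Eq. (46)
print(np.sort(np.abs(np.roots(np.poly(T)) - 1.0))[:2])   # doubly degenerate root at lambda = 1
```

The first check shows all four eigenvalues coalescing at \(\lambda=-1\) with a single surviving eigenvector (the transfer matrix is companion-like and hence non-derogatory), while the lower band edge hosts a second-order exceptional point at \(\lambda=1\), up to numerical splitting of the degenerate root.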
The phase diagram is constructed by using the different nature of the eigenvalues of \(\mathbf{T}(\mu)\). The eigenvalues of \(\mathbf{T}(\mu)\) are obtained analytically following Eq. 30 (see Appendix B for the details). It turns out that the lower band edge, as given in Eq. 46, is an exceptional line in the \(\mu-t_{2}\) plane. This implies that at any given point on this line at least two of the four eigenvalues and eigenvectors of \(\mathbf{T}(\mu)\) coalesce. This line is denoted by the symbol A (blue solid line). Similarly, following Eq. 47, the upper band edge also turns out to yield exceptional lines, denoted by B (red solid line) and C (green solid line), and a higher-order exceptional point, denoted by \(\Gamma_{e}\) (black filled circle). The other two lines in the phase plane, i.e., D (purple solid line) and E (brown solid line), do not correspond to band edges, although remarkably they still remain exceptional lines. These exceptional lines A-E delimit five different regimes, denoted by (I) to (V) [orange, yellow, gray, pink, and cyan]. Fig. 1 sets the stage for a more quantitative summary of our main findings, which is gathered in Table 1 and Table 2. The different regimes and associated properties of \(\mathbf{T}(\mu)\) are described in Table 1. In Table 2 we present the findings for the different exceptional lines and the higher-order exceptional point. In addition to the properties of \(\mathbf{T}(\mu)\), we further summarize in the last columns of Table 1 and Table 2 the system size scaling of the NESS conductance, which will be discussed in depth later. The third column of Table 1 and Table 2, namely the transfer matrix eigenspectra, can be nicely visualized via appropriate vertical cuts in Fig. 1; this is presented in detail in Fig. 2.

_NESS conductance and its scaling with system size:_ In Fig. 3, we show the results for the conductance \(\mathcal{G}(\mu)\) as a function of \(\mu\) for different system sizes \(N\). Different relative values of \(t_{1}\) and \(t_{2}\) are chosen, corresponding to appropriate vertical cuts (\(t_{2}<t_{1}/4\), \(t_{2}=t_{1}/4\), \(t_{2}>t_{1}/4\)) in the \(\mu-t_{2}\) phase plane of Fig. 1. In all three cases, \(\mathcal{G}(\mu)\) displays non-analytic changes at both the upper and lower band edges. It is evident that within the band edges, i.e., regime (II) in the left and middle panels and regimes (II) and (III) in the right panel, \(\mathcal{G}(\mu)\) is independent of system size and therefore implies ballistic transport. It is worth noting that the black solid vertical line that separates regimes (II) and (III) in the right panel of Fig. 3 represents an exceptional point on the line (D) of Fig. 1. Despite this point being exceptional, ballistic transport behavior is observed, as shown in Fig. 4(d). Outside both the lower and upper band edges, i.e., regimes (I), (V), and (IV) in the left panel and regimes (I) and (IV) in the middle and right panels of Fig. 3, \(\mathcal{G}(\mu)\) decays exponentially with system size. Once again, the black solid vertical line that separates regimes (V) and (IV) in the left panel of Fig. 3, despite being exceptional, displays exponentially decaying transport, as shown in Fig. 4(e). In striking contrast, at both the band edges, which correspond to the exceptional lines A, B, and C or the point \(\Gamma_{e}\) as elucidated in Fig. 1, \(\mathcal{G}(\mu)\) shows interesting anomalous transport with \(1/N^{2}\) scaling. This is clearly demonstrated in Figs. 4(a), (b), (c), and (f).
4 (a), (b), (c), and (f). Moreover, in Fig. 4(c), in addition to the overall \(1/N^{2}\) envelope, there are interesting oscillations whose cause is rooted in the fact that the exceptional point, albeit occurring at the band edge, does not correspond to \(k=\pi\) (see Eq. 47). Recall that although Fig. 4(f) is associated with a fourth-order exceptional point \(\Gamma_{e}\), the robustness of the \(1/N^{2}\) scaling of NESS conductance is nonetheless observed, indicating remarkable universality in anomalous transport. Our findings provide strong evidence that the cause of anomalous transport is rooted not only in the existence of exceptional points but also in the fact that they have to be associated with the band edges. In what follows, we bolster our findings by using suitable analytical arguments based on transfer matrices. _Analytical approach to NESS conductance scaling in terms of the non-hermitian transfer matrix:_ As mentioned earlier, the system-size scaling of NESS conductance \(\mathcal{G}(\mu)\) is entirely governed by the bare Green's function of the system, i.e., \(|\mathbf{g}_{1N}(\mu)|^{2}\) with \(\mathbf{g}(\mu)\) being defined in Eq. 14. Figure 2: (Color online): Plots for the real (solid line) and imaginary parts (dashed line) of the eigenvalues of the \(4\times 4\) transfer matrix \(\mathbf{T}(\mu)\) for \(n=2\) as a function of \(\mu\). We set the value \(t_{1}=1\). [Left panel] Here we consider the case \(t_{2}<t_{1}/4\), with \(t_{2}=0.15\). This is an example of making a vertical cut in Fig. 1 in the zone where \(t_{2}<0.25\). Note that such a cut in Fig. 1 passes through four regimes (I), (II), (V), and (IV), which are also marked here. Additionally, this cut encompasses three exceptional points, out of which two correspond to the lower and the upper band edges. Recall that these lower and upper band edges are denoted by \(A\) and \(B\), respectively, in Fig. 1. The other exceptional point, which does not correspond to any band edge, is denoted by \(E\). In this figure, we represent the exceptional points corresponding to band edges by black dotted vertical lines and the exceptional points that do not correspond to the band edges by black solid vertical lines. The nature of the eigenvalues is consistent with that summarized in Table 1. In regime (I), the eigenvalues (real part) represented by red and green solid lines are inverses of each other. Likewise, there is an inverse corresponding to the purple line which, for the sake of clarity, is not represented here as it falls far outside the presented \(y\)-axis range. In the same manner, even in other regimes/cases when data values fall outside the presented \(y\)-axis range, we do not present them here. [Middle panel] Here we consider the case \(t_{2}=t_{1}/4=0.25\), which spans three regimes (I), (II), and (IV) and two exceptional points corresponding to the two band edges according to Fig. 1. Note that the exceptional point corresponding to the upper band edge is denoted by \(\Gamma_{e}\), which is a fourth-order exceptional point. As can be seen in the figure, at the point \(\Gamma_{e}\), all four eigenvalues become real and coalesce at the value \(-1\). [Right panel] Here we consider the case \(t_{2}>t_{1}/4\) with \(t_{2}=0.4\). Similar to the left panel, a corresponding appropriate vertical cut passes through regimes (I), (II), (III), and (IV). It is worth noting that regime (III) is characterized by a scenario where all four eigenvalues are complex with absolute value \(1\).
As a consequence, the eigenvalues, when plotted as a function of \(\mu\) in this regime, appear dense. Using Eq. 24, the \(\mathbf{g}_{1N}(\mu)\) component can be obtained via the transfer matrix. Recall that \(\mathbf{g}(\mu)\) is related to \(\mathbf{M}(\mu)^{-1}\) via Eq. 20. According to Eq. 24, \(\mathbf{M}(\mu)^{-1}\) obeys the following relations for \(n=2\), \[\left\langle 3|\mathbf{T}(\mu)^{-N}|3\right\rangle\left\langle 1|\mathbf{M}(\mu)^{-1}|j\right\rangle+\left\langle 3|\mathbf{T}(\mu)^{-N}|4\right\rangle\left\langle 2|\mathbf{M}(\mu)^{-1}|j\right\rangle-\left\langle 3|\mathbf{T}(\mu)^{-(N-j+1)}|1\right\rangle =0. \tag{49}\] \[\left\langle 4|\mathbf{T}(\mu)^{-N}|3\right\rangle\left\langle 1|\mathbf{M}(\mu)^{-1}|j\right\rangle+\left\langle 4|\mathbf{T}(\mu)^{-N}|4\right\rangle\left\langle 2|\mathbf{M}(\mu)^{-1}|j\right\rangle-\left\langle 4|\mathbf{T}(\mu)^{-(N-j+1)}|1\right\rangle =0.\] Eq. 49 can be recast to a matrix form, \[\begin{pmatrix}\left\langle 3|\mathbf{T}(\mu)^{-N}|3\right\rangle&\left\langle 3|\mathbf{T}(\mu)^{-N}|4\right\rangle\\ \left\langle 4|\mathbf{T}(\mu)^{-N}|3\right\rangle&\left\langle 4|\mathbf{T}(\mu)^{-N}|4\right\rangle\end{pmatrix}\begin{pmatrix}\left\langle 1|\mathbf{M}(\mu)^{-1}|j\right\rangle\\ \left\langle 2|\mathbf{M}(\mu)^{-1}|j\right\rangle\end{pmatrix}=\begin{pmatrix}\left\langle 3|\mathbf{T}(\mu)^{-(N-j+1)}|1\right\rangle\\ \left\langle 4|\mathbf{T}(\mu)^{-(N-j+1)}|1\right\rangle\end{pmatrix}. \tag{50}\] As for the NESS conductance calculation, we require the \(\mathbf{g}_{1N}(\mu)\) component, which in turn requires us to evaluate \(\langle 1|\mathbf{M}(\mu)^{-1}|N\rangle\). We therefore set \(j=N\) and obtain from Eq. 50 \[\begin{pmatrix}\left\langle 1|\mathbf{M}(\mu)^{-1}|N\right\rangle\\ \left\langle 2|\mathbf{M}(\mu)^{-1}|N\right\rangle\end{pmatrix}=\begin{pmatrix}\left\langle 3|\mathbf{T}(\mu)^{-N}|3\right\rangle&\left\langle 3|\mathbf{T}(\mu)^{-N}|4\right\rangle\\ \left\langle 4|\mathbf{T}(\mu)^{-N}|3\right\rangle&\left\langle 4|\mathbf{T}(\mu)^{-N}|4\right\rangle\end{pmatrix}^{-1}\begin{pmatrix}\left\langle 3|\mathbf{T}(\mu)^{-1}|1\right\rangle\\ \left\langle 4|\mathbf{T}(\mu)^{-1}|1\right\rangle\end{pmatrix}. \tag{51}\] Using Eq. 51, one can easily evaluate \(\mathbf{g}_{1N}(\mu)=\left\langle 1|\mathbf{M}(\mu)^{-1}|N\right\rangle/t_{2}\) with \(\mathbf{T}(\mu)\) as given in Eq. 48. By performing the inverse of \(\mathbf{T}(\mu)\) in Eq. 48, it is easy to check that \(\left\langle 3|\mathbf{T}^{-1}(\mu)|1\right\rangle=0\) and \(\left\langle 4|\mathbf{T}^{-1}(\mu)|1\right\rangle=-1\). To this end, we obtain a simplified expression for \(\mathbf{g}_{1N}(\mu)\) as \[\mathbf{g}_{1N}=\frac{1}{t_{2}}\frac{\left\langle 3|\mathbf{T}^{-N}|4\right\rangle}{\left\langle 4|\mathbf{T}^{-N}|4\right\rangle\left\langle 3|\mathbf{T}^{-N}|3\right\rangle-\left\langle 3|\mathbf{T}^{-N}|4\right\rangle\left\langle 4|\mathbf{T}^{-N}|3\right\rangle}. \tag{52}\] For the sake of brevity, we omit the argument \(\mu\) from both \(\mathbf{T}\) and \(\mathbf{g}_{1N}\) in Eq. 52. Figure 3: (Color online): Plot for conductance \(\mathcal{G}(\mu)\) as a function of \(\mu\) for three different and appropriate vertical cuts (\(t_{2}<t_{1}/4\), \(t_{2}=t_{1}/4\), \(t_{2}>t_{1}/4\)) in the \(\mu-t_{2}\) phase plane in Fig. 1 for different system sizes \(N\). We set the values of \(t_{1}\) and \(t_{2}\) exactly as in Fig. 2.
In all the figures, we represent the exceptional points corresponding to band edges by black dotted vertical lines and the exceptional points that do not correspond to the band edges by black solid vertical lines. In all cases, we see non-analytic changes in \(\mathcal{G}(\mu)\) at the two band edges, and this is discussed in more detail in Fig. 4. [Left panel] Here we consider the case \(t_{2}<t_{1}/4\) with \(t_{1}=1\) and \(t_{2}=0.15\). The behavior of the conductance in four different regimes (I), (II), (V), and (IV) is shown. In regimes (I), (V), and (IV), which correspond to outside the band edges, the conductance decays exponentially with system size \(N\). In regime (II), which corresponds to within the band edges, ballistic behavior (system-size independence) is observed. [Middle panel] Here we consider the case \(t_{2}=t_{1}/4\) with \(t_{1}=1\) and \(t_{2}=0.25\). Regime (II) shows ballistic transport and regimes (I) and (IV) show exponentially suppressed transport. [Right panel] Here we consider the case \(t_{2}>t_{1}/4\) with \(t_{1}=1\) and \(t_{2}=0.4\). Regimes (II) and (III) show ballistic transport and regimes (I) and (IV) show exponentially suppressed transport. Eq. 52 is one of the central equations of this work. Thus, to calculate \(|\mathbf{g}_{1N}|^{2}\), the main task is to calculate \(\mathbf{T}^{-N}\) using its eigenspectra. Recall that in Table 1 and Table 2 we summarize the nature of the eigenvalues of the transfer matrix according to the different regimes of Fig. 1. From the nature of these eigenvalues, one can extract the system-size dependence of NESS conductance \(\mathcal{G}(\mu)\) using Eq. 52, as we discuss below. Let us now consider a situation when the transfer matrix \(\mathbf{T}\) does not have any exceptional points [regimes (I), (II), (III), (IV), and (V) in Fig. 1] and is therefore a diagonalizable matrix. This scenario is summarized in Table 1. Therefore one can use Eq. 42 to explicitly write down the elements of \(\mathbf{T}^{-N}\) as, \[\left\langle 3|\mathbf{T}^{-N}|3\right\rangle =\psi_{3}(\lambda_{1})\phi_{3}(\lambda_{1})\lambda_{1}^{-N}+\psi_{3}(\lambda_{1}^{-1})\phi_{3}(\lambda_{1}^{-1})\lambda_{1}^{N}+\psi_{3}(\lambda_{2})\phi_{3}(\lambda_{2})\lambda_{2}^{-N}+\psi_{3}(\lambda_{2}^{-1})\phi_{3}(\lambda_{2}^{-1})\lambda_{2}^{N}, \tag{53}\] \[\left\langle 4|\mathbf{T}^{-N}|4\right\rangle =\psi_{4}(\lambda_{1})\phi_{4}(\lambda_{1})\lambda_{1}^{-N}+\psi_{4}(\lambda_{1}^{-1})\phi_{4}(\lambda_{1}^{-1})\lambda_{1}^{N}+\psi_{4}(\lambda_{2})\phi_{4}(\lambda_{2})\lambda_{2}^{-N}+\psi_{4}(\lambda_{2}^{-1})\phi_{4}(\lambda_{2}^{-1})\lambda_{2}^{N},\] \[\left\langle 3|\mathbf{T}^{-N}|4\right\rangle =\psi_{3}(\lambda_{1})\phi_{4}(\lambda_{1})\lambda_{1}^{-N}+\psi_{3}(\lambda_{1}^{-1})\phi_{4}(\lambda_{1}^{-1})\lambda_{1}^{N}+\psi_{3}(\lambda_{2})\phi_{4}(\lambda_{2})\lambda_{2}^{-N}+\psi_{3}(\lambda_{2}^{-1})\phi_{4}(\lambda_{2}^{-1})\lambda_{2}^{N},\] \[\left\langle 4|\mathbf{T}^{-N}|3\right\rangle =\psi_{4}(\lambda_{1})\phi_{3}(\lambda_{1})\lambda_{1}^{-N}+\psi_{4}(\lambda_{1}^{-1})\phi_{3}(\lambda_{1}^{-1})\lambda_{1}^{N}+\psi_{4}(\lambda_{2})\phi_{3}(\lambda_{2})\lambda_{2}^{-N}+\psi_{4}(\lambda_{2}^{-1})\phi_{3}(\lambda_{2}^{-1})\lambda_{2}^{N}.\] Now, collecting all the terms in Eq. 53 together, the denominator of Eq.
52, takes the form, \[A_{1}+B_{1}\lambda_{1}^{N}\lambda_{2}^{N}+C_{1}\lambda_{1}^{-N}\lambda_{2}^{-N}+D_{1}\lambda_{1}^{N}\lambda_{2}^{-N}+E_{1}\lambda_{1}^{-N}\lambda_{2}^{N}, \tag{54}\] where all the prefactors \((A_{1},B_{1},C_{1},D_{1},E_{1})\) in front of the eigenvalues \(\lambda_{i},\lambda_{i}^{-1},i=1,2\) are \(N\)-independent. The subscript \(1\) here represents the coefficients associated with the denominator. Figure 4: (Color online): Plot for system-size scaling of NESS conductance \(\mathcal{G}(\mu)\) at the exceptional lines A-E and the point \(\Gamma_{e}\) as elucidated in Fig. 1. (a) We choose a point that lies on line \(A\) (lower band edge) in Fig. 1 and observe subdiffusive scaling \(\mathcal{G}(\mu)\sim 1/N^{2}\). (b) We choose a point that lies on line \(B\) (upper band edge with \(t_{2}<t_{1}/4\)) and once again observe subdiffusive scaling with the same exponent. (c) We choose a point that lies on line \(C\) (upper band edge with \(t_{2}>t_{1}/4\)), which exhibits interesting oscillating behaviour with an overall \(1/N^{2}\) envelope. This therefore can also be regarded as an anomalous behaviour that is subdiffusive in nature. (d) We choose a point that lies on line \(D\) (within the band edges with \(t_{2}>t_{1}/4\)) and observe system-size-independent scaling (ballistic transport) despite it being an exceptional point. (e) We choose a point that lies on line \(E\) (above the upper band edge with \(t_{2}<t_{1}/4\)) and observe exponentially decaying scaling of the conductance with system size despite it being an exceptional point. (f) We choose the \(\Gamma_{e}\) point, which occurs at \(t_{2}=t_{1}/4\) and corresponds to the upper band edge. We observe subdiffusive transport with the same exponent despite it being a higher-order exceptional point. These six plots reveal that the cause of the anomalous transport is rooted in the existence of exceptional points at the band edges. Analogously, the numerator can be expressed as, \[A_{2}\lambda_{1}^{N}+B_{2}\lambda_{1}^{-N}+C_{2}\lambda_{2}^{N}+D_{2}\lambda_{2}^{-N}, \tag{55}\] where once again all the prefactors \((A_{2},B_{2},C_{2},D_{2})\) are \(N\)-independent and the subscript \(2\) represents the coefficients associated with the numerator. It is important to note that in the expression for the denominator in Eq. 54, terms such as \(\lambda_{i}^{2N}\) and \(\lambda_{i}^{-2N},i=1,2\) do not appear; they exactly cancel out. In what follows, we discuss the scaling of \(\mathbf{g}_{1N}\) with \(N\) for different cases corresponding to different values of \(\mu\) with no exceptional points. _Below the lower band edge_ [Regime (I), \(\mu<-2t_{1}-2t_{2}\)]: For \(n=2\), the regime below the lower band edge corresponds to \(\mu<-2t_{1}-2t_{2}\). In Fig. 1 this regime is indicated by the symbol (I). In this regime, the transfer matrix eigenvalues are always real and therefore are of the form \(\lambda_{1}=-e^{\kappa_{1}}=e^{i\pi}e^{\kappa_{1}}\) and \(\lambda_{2}=e^{\kappa_{2}}\), where \(\kappa_{1},\kappa_{2}>0\), with the two other eigenvalues being \(\lambda_{1}^{-1},\lambda_{2}^{-1}\) (see Table 1). Thus, from Eq. 52 we get, \[\mathbf{g}_{1N}\sim\frac{A_{2}e^{\kappa_{1}N}+B_{2}e^{-\kappa_{1}N}+C_{2}e^{\kappa_{2}N}+D_{2}e^{-\kappa_{2}N}}{A_{1}+B_{1}e^{(\kappa_{1}+\kappa_{2})N}+C_{1}e^{-(\kappa_{1}+\kappa_{2})N}+D_{1}e^{(\kappa_{1}-\kappa_{2})N}+E_{1}e^{(-\kappa_{1}+\kappa_{2})N}}.
\tag{56}\] Assuming \(\kappa_{1}>\kappa_{2}\) (without loss of generality) and neglecting the exponentially decaying terms in the large \(N\) limit, we obtain, \[\mathbf{g}_{1N}\sim\frac{A_{2}}{B_{1}e^{\kappa_{2}N}}\sim e^{-\kappa_{2}N}, \tag{57}\] which implies \[|\mathbf{g}_{1N}|^{2}\sim e^{-2\kappa_{2}N}\sim\lambda_{2}^{-2N}. \tag{58}\] As a result, below the lower band edge, the NESS conductance always decays exponentially with the system size \(N\). The corresponding localization length is set by \(\kappa_{2}\), where \(\kappa_{2}\) is related to the smaller of the two growing eigenvalues of the transfer matrix \(\mathbf{T}\), i.e., \(\lambda_{2}=e^{\kappa_{2}}\). _Within the band edges [Regime (II) and Regime (III)]:_ Let us now discuss the scaling of the conductance when the chemical potential \(\mu\) is within the band edges. Recall that the lower band edge always occurs at energy \(-2t_{1}-2t_{2}\) (see Fig. 1). However, the energy corresponding to the upper band edge depends on the relative values of \(t_{1},t_{2}\), as given in Eq. 47. Thus two distinct regimes [Regime (II) and Regime (III)] emerge within the band edges, as clearly shown in Fig. 1. For regime (II) of Fig. 1 with \(-2t_{1}-2t_{2}<\mu<2t_{1}-2t_{2}\), the transfer matrix \(\mathbf{T}\) eigenvalues are \(\lambda_{1}=-e^{\kappa_{1}}\), \(\lambda_{2}=e^{i\kappa_{2}}\), \(\lambda_{1}^{-1}\) and \(\lambda_{2}^{-1}\), where \(\kappa_{1},\kappa_{2}>0\) (see Table 1). We therefore obtain from Eq. 52, \[\mathbf{g}_{1N}\sim\frac{A_{2}e^{\kappa_{1}N}+B_{2}e^{-\kappa_{1}N}+C_{2}e^{i\kappa_{2}N}+D_{2}e^{-i\kappa_{2}N}}{A_{1}+B_{1}e^{(\kappa_{1}+i\kappa_{2})N}+C_{1}e^{-(\kappa_{1}+i\kappa_{2})N}+D_{1}e^{(\kappa_{1}-i\kappa_{2})N}+E_{1}e^{(-\kappa_{1}+i\kappa_{2})N}}. \tag{59}\] In the large \(N\) limit, Eq. 59 simplifies to, \[\mathbf{g}_{1N}\sim\frac{A_{2}}{B_{1}e^{i\kappa_{2}N}+D_{1}e^{-i\kappa_{2}N}}, \tag{60}\] thus implying \(|\mathbf{g}_{1N}|^{2}\sim N^{0}\), or ballistic transport. For regime (III) in Fig. 1, \(2t_{1}-2t_{2}<\mu<2t_{2}+\frac{t_{1}^{2}}{4t_{2}}\) with \(t_{2}>t_{1}/4\), the eigenvalues of \(\mathbf{T}\) are all complex and given as \(\lambda_{1}=e^{i\kappa_{1}}\) and \(\lambda_{2}=e^{i\kappa_{2}}\) (see Table 1). Thus, all the terms in Eq. 52 will have an oscillatory dependence on \(N\), indicating once again ballistic transport. An interesting situation arises for \(\mu=2t_{1}-2t_{2}\), corresponding to line D in Fig. 1 with \(t_{2}>t_{1}/4\). Any point along this line corresponds to a second-order exceptional point of \(\mathbf{T}\). Nonetheless, despite being an exceptional point, the corresponding NESS conductance is ballistic, and this will be elaborated on later. _Above the upper band edge [Regime (IV) and Regime (V)]:_ Once again, depending on the relative values of the hoppings \(t_{1}\) and \(t_{2}\), two distinct regimes [Regime (IV) and Regime (V)] appear above the upper band edge (see Fig. 1). When \(\mu>2t_{2}+\frac{t_{1}^{2}}{4t_{2}}\), it corresponds to regime (IV) of Fig. 1. In this case, the eigenvalues of \(\mathbf{T}\) are \(\lambda_{1}=-e^{\kappa_{1}+i\kappa_{2}}\) and \(\lambda_{2}=-e^{\kappa_{1}-i\kappa_{2}}\), where \(\kappa_{1},\kappa_{2}>0\) (see Table 1). Now following Eq. 52, we can write, \[\mathbf{g}_{1N}\sim\frac{A_{2}e^{(\kappa_{1}+i\kappa_{2})N}+B_{2}e^{-(\kappa_{1}+i\kappa_{2})N}+C_{2}e^{(\kappa_{1}-i\kappa_{2})N}+D_{2}e^{-(\kappa_{1}-i\kappa_{2})N}}{A_{1}+B_{1}e^{2\kappa_{1}N}+C_{1}e^{-2\kappa_{1}N}+D_{1}e^{2i\kappa_{2}N}+E_{1}e^{-2i\kappa_{2}N}}.
\tag{61}\] In the large \(N\) limit, Eq. 61 reduces to, \[\mathbf{g}_{1N}\sim\left(A_{2}e^{i\kappa_{2}N}+C_{2}e^{-i\kappa_{2}N}\right)e^{-\kappa_{1}N}, \tag{62}\] implying exponentially decaying transport. With \(t_{2}<t_{1}/4\), the window \(2t_{1}-2t_{2}<\mu<2t_{2}+t_{1}^{2}/4t_{2}\) corresponds to being above the upper band edge, i.e., regime (V) of Fig. 1. The eigenvalues of the transfer matrix \(\mathbf{T}\) are \(\lambda_{1}=-e^{\kappa_{1}}\) and \(\lambda_{2}=-e^{\kappa_{2}}\), where \(\kappa_{1},\kappa_{2}>0\) (see Table 1). This is exactly like the situation below the lower band edge, i.e., regime (I) of Fig. 1, and therefore shows exponentially decaying conductance \(\mathcal{G}(\mu)\) with system size. For \(t_{2}<t_{1}/4\), the value \(\mu=2t_{2}+\frac{t_{1}^{2}}{4t_{2}}\) corresponds to another interesting situation and lies on the exceptional line E in Fig. 1. Nonetheless, despite being an exceptional point, the corresponding NESS conductance is exponentially decaying, and this will be elaborated on later. To summarize, we have provided a detailed analytical understanding of NESS conductance scaling within and outside the band edges following the transfer matrix eigenspectra, which perfectly matches the direct numerics shown in Fig. 3 and Fig. 4. In other words, we analytically show the ballistic transport within the band edges and the exponentially decaying transport outside the band edges of the lattice system. Next, we discuss the situation when the transfer matrix \(\mathbf{T}\) has exceptional points, i.e., along the lines A, B, C, D, and E and at the point \(\Gamma_{e}\), and is therefore non-diagonalizable in nature. Recall that this scenario is summarized in Table 2. _At the lower band edge (exceptional line A of Fig. 1):_ Let us now discuss the NESS conductance scaling with system size at the lower band edge, which always occurs at \(k=0\) with energy \(\mu=-2t_{1}-2t_{2}\), as given in Eq. 46. This corresponds to the exceptional line A of Fig. 1. Interestingly, for any point on this line, the eigenvalues of \(\mathbf{T}\) are given as \(\lambda_{1}=1\), \(\lambda_{2}=-e^{\kappa_{1}}=e^{i\pi}e^{\kappa_{1}}\), \(\lambda_{1}^{-1}\), \(\lambda_{2}^{-1}\), where \(\kappa_{1}>0\) (see Table 2). Note that \(\kappa_{1}\) in general is a function of \(\mu\). Therefore, the lower band edge corresponds to a second-order exceptional line, and hence the transfer matrix \(\mathbf{T}\) can be brought to a Jordan normal form \(\mathbf{J}\) (see Eq. 43). This is given by \[\mathbf{J}=\begin{pmatrix}e^{i\pi}e^{\kappa_{1}}&0&0&0\\ 0&e^{-i\pi}e^{-\kappa_{1}}&0&0\\ 0&0&1&1\\ 0&0&0&1\end{pmatrix} \tag{63}\] and \[\mathbf{J}^{-N}=\begin{pmatrix}e^{-\kappa_{1}N}e^{-i\pi N}&0&0&0\\ 0&e^{\kappa_{1}N}e^{i\pi N}&0&0\\ 0&0&1&-N\\ 0&0&0&1\end{pmatrix}. \tag{64}\] In Eq. 64, one of the matrix elements is \(-N\), which plays a pivotal role in dictating the scaling of the NESS conductance, as we will see now. Using Eq. 52, we can obtain an expression for \(\mathbf{g}_{1N}\) as, \[\mathbf{g}_{1N}\sim\frac{A_{2}+B_{2}e^{\kappa_{1}N}+C_{2}e^{-\kappa_{1}N}+D_{2}N}{A_{1}+B_{1}e^{\kappa_{1}N}+C_{1}e^{-\kappa_{1}N}+D_{1}Ne^{\kappa_{1}N}+E_{1}Ne^{-\kappa_{1}N}+F_{1}N}. \tag{65}\] Now, taking the large \(N\) limit, we obtain, \[\mathbf{g}_{1N}\sim\frac{1}{N}. \tag{66}\] As a result, \(|\mathbf{g}_{1N}|^{2}\propto 1/N^{2}\), implying subdiffusive scaling of NESS conductance at the lower band edge. Our analytical findings are rigorously verified by direct numerics as shown in Fig. 4(a).
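The \(1/N^{2}\) prediction of Eq. 66 can be checked directly from Eq. 52. A caveat: the denominator of Eq. 52 involves near-exact cancellation between terms of order \(e^{2\kappa_{1}N}\) (cf. Eq. 54), so naive floating-point evaluation fails at moderate \(N\). The minimal sketch below (our own, not from the paper) therefore uses exact rational arithmetic via sympy; \(t_{1}=1\), \(t_{2}=3/20\) are illustrative choices. At the band edge, \(|\mathbf{g}_{1N}|^{2}N^{2}\) should approach a constant.

```python
import sympy as sp

t1, t2 = sp.Integer(1), sp.Rational(3, 20)   # t1 = 1, t2 = 0.15 < t1/4
mu = -2*t1 - 2*t2                            # lower band edge (exceptional line A)

T = sp.Matrix([[-t1/t2, -mu/t2, -t1/t2, -1],
               [1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0]])
Tinv = T.inv()                               # exact inverse of Eq. 48

TN = sp.eye(4)                               # running product: T^{-N}
for N in range(1, 65):
    TN = TN * Tinv
    if N in (8, 16, 32, 64):
        num = TN[2, 3]                       # <3|T^{-N}|4>  (bras/kets are 1-indexed)
        den = TN[3, 3]*TN[2, 2] - TN[2, 3]*TN[3, 2]
        g2 = (num / den / t2)**2             # |g_{1N}|^2 from Eq. 52 (real here)
        print(N, float(g2 * N**2))           # approaches a constant => G ~ 1/N^2
```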
_At the upper band edge (exceptional lines B and C and exceptional point \(\Gamma_{e}\) of Fig. 1):_ We now discuss the conductance scaling when the chemical potential \(\mu\) is located at the upper band edge. This band edge comprises three parts: exceptional line B, exceptional line C, and the exceptional point \(\Gamma_{e}\). Let us discuss the NESS scaling for each of these cases separately. For the case \(t_{2}<t_{1}/4\), the upper band edge is located at \(k=\pi\) with corresponding energy \(\mu=2t_{1}-2t_{2}\). This corresponds to line B of the phase diagram Fig. 1. In this case, the eigenvalues of \(\mathbf{T}\) are \(\lambda_{1}=-1\), \(\lambda_{2}=e^{i\pi}e^{\kappa_{1}}\), \(\lambda_{1}^{-1}\), and \(\lambda_{2}^{-1}\), and hence once again it is a second-order exceptional point (see Table 2). Here \(\kappa_{1}>0\). Therefore, exactly like the lower band edge case, one can show that the NESS conductance scales subdiffusively as \(1/N^{2}\). This is also clearly shown in Fig. 4(b). When \(t_{2}>t_{1}/4\), the upper band edge does not occur at \(k=\pi\). The corresponding energy is \(\mu=2t_{2}+t_{1}^{2}/4t_{2}\), which is represented by line C in Fig. 1. Interestingly, in this scenario, the eigenvalues of \(\mathbf{T}\) are \(\lambda_{1}=e^{i\kappa_{1}}\) and \(\lambda_{2}=e^{-i\kappa_{1}}\), \(\lambda_{1}^{-1}\), and \(\lambda_{2}^{-1}\) (see Table 2). Here \(\kappa_{1}>0\). As a result, the four eigenvalues form two complex-conjugate pairs of two each. This implies that there are two second-order exceptional points that are complex, unlike the case when the upper band edge is located at \(k=\pi\), i.e., line B. Thus, once again the transfer matrix \(\mathbf{T}\) can be brought to a Jordan normal form given as, \[\mathbf{J}=\begin{pmatrix}e^{i\kappa_{1}}&1&0&0\\ 0&e^{i\kappa_{1}}&0&0\\ 0&0&e^{-i\kappa_{1}}&1\\ 0&0&0&e^{-i\kappa_{1}}\end{pmatrix} \tag{67}\] and \[\mathbf{J}^{-N}=\begin{pmatrix}e^{-i\kappa_{1}N}&-Ne^{-i\kappa_{1}(N+1)}&0&0\\ 0&e^{-i\kappa_{1}N}&0&0\\ 0&0&e^{i\kappa_{1}N}&-Ne^{i\kappa_{1}(N+1)}\\ 0&0&0&e^{i\kappa_{1}N}\end{pmatrix}. \tag{68}\] With this result, Eq. 52 can be written as, \[\mathbf{g}_{1N}\sim\frac{B_{2}e^{-i\kappa_{1}N}+C_{2}e^{i\kappa_{1}N}+D_{2}Ne^{-i\kappa_{1}(N+1)}+E_{2}Ne^{i\kappa_{1}(N+1)}}{A_{1}+B_{1}N^{2}+C_{1}Ne^{-i\kappa_{1}}+D_{1}Ne^{i\kappa_{1}}+E_{1}e^{i\kappa_{1}(2N+1)}+F_{1}e^{-i\kappa_{1}(2N+1)}}. \tag{69}\] In the large-\(N\) limit, Eq. 69 simplifies to, \[\mathbf{g}_{1N}\sim\frac{D_{2}e^{-i\kappa_{1}N}+E_{2}e^{i\kappa_{1}N}}{B_{1}N}, \tag{70}\] and as a result, the NESS conductance shows interesting oscillations set by \(\kappa_{1}\) along with an overall \(1/N^{2}\) subdiffusive scaling. This is another central finding of this paper. Our analytical results have been corroborated with the direct numerical simulations shown in Fig. 4(c). Let us now discuss the conductance scaling at the exceptional point \(\Gamma_{e}\) in Fig. 1. This special point occurs for \(t_{2}=t_{1}/4\) and corresponds to the upper band edge energy \(\mu=2t_{1}-2t_{2}\). At this special point, all four eigenvalues and eigenvectors of \(\mathbf{T}\) coalesce (see Table 2), thereby yielding a fourth-order exceptional point.
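The fourth-order nature of \(\Gamma_{e}\) can be verified independently from Eq. 48 alone. In the hypothetical check below (our own sketch), at \(t_{2}=t_{1}/4\) and \(\mu=2t_{1}-2t_{2}\) the characteristic polynomial of \(\mathbf{T}\) becomes \((\lambda+1)^{4}\), and the ranks of successive powers of \(\mathbf{T}+\mathbb{I}\) drop one step at a time, the signature of a single \(4\times 4\) Jordan block:

```python
import numpy as np

t1 = 1.0
t2 = t1 / 4                       # condition for the Gamma_e point
mu = 2*t1 - 2*t2                  # upper band edge energy at k = pi

T = np.array([[-t1/t2, -mu/t2, -t1/t2, -1.0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# The characteristic polynomial is (lambda + 1)^4; numerically computed eigenvalues
# scatter around -1 by roughly eps**(1/4), itself a hallmark of a defective matrix.
print(np.linalg.eigvals(T))

# Ranks of (T + I)^p descend 3, 2, 1, 0 -> one 4x4 Jordan block, i.e., a
# fourth-order exceptional point.
A = T + np.eye(4)
for p in range(1, 5):
    print(p, np.linalg.matrix_rank(np.linalg.matrix_power(A, p)))
```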
The corresponding Jordan normal form at this fourth-order exceptional point \(\Gamma_{e}\) is given by \[\mathbf{J}=\begin{pmatrix}-1&1&0&0\\ 0&-1&1&0\\ 0&0&-1&1\\ 0&0&0&-1\end{pmatrix} \tag{71}\] and \[\mathbf{J}^{-N}=\begin{pmatrix}-1&-N&\frac{1}{2}N(N+1)&-\frac{1}{6}N(N+1)(N+2)\\ 0&-1&-N&\frac{1}{2}N(N+1)\\ 0&0&-1&-N\\ 0&0&0&-1\end{pmatrix}. \tag{72}\] It is interesting to note that the matrix elements of \(\mathbf{J}^{-N}\) in Eq. 72 contain terms up to \(O(N^{3})\), which is in stark contrast with all the other cases, where the exceptional points were of second order. However, the final system-size scaling of the NESS conductance still shows \(1/N^{2}\) subdiffusive scaling and is therefore extremely robust against the order of the exceptional points of the transfer matrices, indicating a strong presence of universality. Below we provide the details. In this case, to determine the scaling of the conductance, we need to know the explicit form of the transformation matrix \(\mathbf{R}\), as defined in Eq. 43. We obtain, \[\mathbf{R}=\begin{pmatrix}0&0&0&1\\ 0&0&1&1\\ 0&1&2&1\\ 1&3&3&1\end{pmatrix}. \tag{73}\] Thus, using this form of \(\mathbf{R}\), we obtain the different matrix elements for \(\mathbf{T}\) as, \[\left\langle 3|T^{-N}|3\right\rangle =\frac{1}{2}(N^{3}+4N^{2}+N-2),\] \[\left\langle 3|T^{-N}|4\right\rangle =\frac{1}{6}N(N+1)(N+2),\] \[\left\langle 4|T^{-N}|3\right\rangle =-\frac{1}{2}N(N^{2}+N+2),\] \[\left\langle 4|T^{-N}|4\right\rangle =-1-\frac{1}{6}N(N^{2}+5). \tag{74}\] Substituting the expressions obtained in Eq. 74 in Eq. 52, we obtain, \[\mathbf{g}_{1N}\sim\frac{A_{1}N^{3}+B_{1}N^{2}+C_{1}N+D_{1}}{A_{2}N^{4}+B_{2}N^{3}+C_{2}N^{2}+D_{2}N+E_{2}}, \tag{75}\] which in the large \(N\) limit gives \(\mathbf{g}_{1N}\sim 1/N\). Thus, the conductance scales as \(1/N^{2}\), as at the other band edges, i.e., exceptional lines A, B, and C, even though the transfer matrix has a higher-order exceptional point. This analysis also matches our numerical findings as shown in Fig. 4(f). _Within the band edges (along the exceptional line D of Fig. 1):_ The exceptional line D emerges for \(t_{2}>t_{1}/4\) and separates regime (II) and regime (III) of Fig. 1. This line always occurs within the two band edges at \(\mu=2t_{1}-2t_{2}\). The transfer matrix eigenvalues in this case are given as \(\lambda_{1}=-1\), \(\lambda_{2}=e^{i\kappa_{1}}\), \(\lambda_{1}^{-1}\) and \(\lambda_{2}^{-1}\) (see Table 2). Here \(\kappa_{1}>0\). As a result, once again the transfer matrix \(\mathbf{T}\) is not diagonalizable and can be brought to a Jordan normal form given by, \[\mathbf{J}=\begin{pmatrix}e^{i\kappa_{1}}&0&0&0\\ 0&e^{-i\kappa_{1}}&0&0\\ 0&0&-1&1\\ 0&0&0&-1\end{pmatrix} \tag{76}\] and \(\mathbf{J}^{-N}\) is given by, \[\mathbf{J}^{-N}=\begin{pmatrix}e^{-i\kappa_{1}N}&0&0&0\\ 0&e^{i\kappa_{1}N}&0&0\\ 0&0&-1&-N\\ 0&0&0&-1\end{pmatrix}. \tag{77}\] As a result, following Eq. 52 we obtain, \[\mathbf{g}_{1N}\sim\frac{A_{2}+B_{2}e^{i\kappa_{1}N}+C_{2}e^{-i\kappa_{1}N}+D_{2}N}{A_{1}+B_{1}e^{i\kappa_{1}N}+C_{1}e^{-i\kappa_{1}N}+D_{1}Ne^{i\kappa_{1}N}+E_{1}Ne^{-i\kappa_{1}N}+F_{1}N}. \tag{78}\] In the large \(N\) limit, Eq. 78 simplifies to \[\mathbf{g}_{1N}\sim\frac{D_{2}}{D_{1}e^{i\kappa_{1}N}+E_{1}e^{-i\kappa_{1}N}+F_{1}}, \tag{79}\] which produces ballistic transport, further supported by direct numerics as shown in Fig. 4(d). It is worth noting that this ballistic behavior occurs even in the presence of exceptional points.
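The ballistic behavior of Eq. 79 on line D can also be seen directly from Eq. 52 in ordinary floating point, since on this line the entries of \(\mathbf{T}^{-N}\) grow only linearly in \(N\) and the cancellations are benign. A small sketch (our own; the parameter values \(t_{1}=1\), \(t_{2}=0.4\) are illustrative assumptions):

```python
import numpy as np

t1, t2 = 1.0, 0.4                 # t2 > t1/4
mu = 2*t1 - 2*t2                  # exceptional line D (lies inside the band)

T = np.array([[-t1/t2, -mu/t2, -t1/t2, -1.0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
Tinv = np.linalg.inv(T)

def g1N_squared(N):
    TN = np.linalg.matrix_power(Tinv, N)
    num = TN[2, 3]                                  # <3|T^{-N}|4>
    den = TN[3, 3]*TN[2, 2] - TN[2, 3]*TN[3, 2]     # denominator of Eq. 52
    return (num / den / t2)**2

for N in (100, 200, 400, 800):
    print(N, g1N_squared(N))      # oscillates but stays O(1): ballistic transport
```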
This further implies that albeit the points are exceptional in nature, the fact that they appear within the band edges causes ballistic transport. _Above the band edge (along the exceptional line E of Fig. 1):_ The exceptional line E emerges for \(t_{2}<t_{1}/4\), separating regimes (IV) and (V) of Fig. 1. This line always occurs above the upper band edge and corresponds to energy \(\mu=2t_{2}+t_{1}^{2}/4t_{2}\). The transfer matrix eigenvalues are \(\lambda_{1}=e^{i\pi}e^{\kappa_{1}}\) and \(\lambda_{2}=e^{i\pi}e^{-\kappa_{1}}\), where \(\kappa_{1}>0\) (see Table 2). Interestingly, here the transfer matrix has a pair of second-order exceptional points. The Jordan normal form is given by, \[\mathbf{J}=\begin{pmatrix}e^{i\pi}e^{\kappa_{1}}&1&0&0\\ 0&e^{i\pi}e^{\kappa_{1}}&0&0\\ 0&0&e^{-i\pi}e^{-\kappa_{1}}&1\\ 0&0&0&e^{-i\pi}e^{-\kappa_{1}}\end{pmatrix} \tag{80}\] and \[\mathbf{J}^{-N}=\begin{pmatrix}e^{-i\pi N}e^{-\kappa_{1}N}&-Ne^{-(i\pi+\kappa_{1})(N+1)}&0&0\\ 0&e^{-i\pi N}e^{-\kappa_{1}N}&0&0\\ 0&0&e^{i\pi N}e^{\kappa_{1}N}&-Ne^{(i\pi+\kappa_{1})(N+1)}\\ 0&0&0&e^{i\pi N}e^{\kappa_{1}N}\end{pmatrix}. \tag{81}\] With that, Eq. 52 can be written as, \[\mathbf{g}_{1N}\sim\frac{B_{2}e^{-\kappa_{1}N}+C_{2}e^{\kappa_{1}N}+D_{2}Ne^{-\kappa_{1}(N+1)}+E_{2}Ne^{\kappa_{1}(N+1)}}{A_{1}+B_{1}N^{2}+C_{1}Ne^{-\kappa_{1}}+D_{1}Ne^{\kappa_{1}}+E_{1}e^{\kappa_{1}(2N+1)}+F_{1}e^{-\kappa_{1}(2N+1)}}, \tag{82}\] which in the large \(N\) limit gives \(\mathbf{g}_{1N}\sim e^{-\kappa_{1}N}\). Thus the conductance shows exponentially decaying scaling with system size, which also matches the direct numerics as shown in Fig. 4(e). Note that, albeit the points along line E are exceptional in nature, the fact that they appear outside the band edge causes exponentially suppressed transport. Next, in Sec. V.2, we comment on the NESS conductance scaling for general finite-range hopping systems. ### Comment on general finite-range hopping system It is possible to generalize the study performed in Sec. V.1 to any finite-range hopping system, i.e., \(n=3,4,5,\cdots\). Accordingly, following Eq. 16, the \(2n\times 2n\) transfer matrix \(\mathbf{T}(\mu)\) can be constructed and its eigenvalues and eigenvectors can be subsequently analyzed. The NESS conductance and its system-size scaling behavior can then be addressed using Eqs. 20 and 24. We note that there is a general framework obeyed by all finite-range models irrespective of the range of the hopping parameter \(n\), which we elaborate on below. Without loss of generality, we again set \(t_{1}=1\). 1. At \(k=0\), the transfer matrix \(\mathbf{T}(\mu)\) needs to be evaluated at the lower band edge energy \(\mu=-2\sum_{m=1}^{n}t_{m}\) (see Eq. 3). This naturally defines an \((n-1)\)-dimensional hyper-surface, and when \(\mathbf{T}(\mu)\) is evaluated at any point on this hyper-surface it will have an exceptional point. This can be understood as follows: at the lower band edge, \(\theta=0\) (recall that \(\theta\) is related to the eigenvalue of \(\mathbf{T}(\mu)\) by \(\lambda=e^{i\theta}\), see Eq. 27) is always a solution of Eq. 30. Thus at least two eigenvalues with value 1 and the corresponding eigenvectors of \(\mathbf{T}(\mu)\) coalesce, and hence there is an exceptional point. It then turns out that the corresponding NESS conductance is subdiffusive with universal \(1/N^{2}\) scaling, irrespective of the value of \(n\). We illustrate this for the case \(n=3\) in Fig. 5 (a). 2. At \(k=\pi\), \(\mathbf{T}(\mu)\) needs to be evaluated at \(\mu=\omega(k=\pi)\) following Eq. 6.
This once again forms an \((n-1)\)-dimensional hyper-surface, and if \(\mathbf{T}(\mu)\) is evaluated at any point on this hyper-surface, it will have an exceptional point. However, interestingly, this point may not always correspond to the upper band edge. This, therefore, yields two different scenarios: When \(k=\pi\) corresponds to the usual upper band edge, we obtain subdiffusive scaling \(1/N^{2}\) for the NESS conductance, for reasons similar to those for the \(n=2\) case. We illustrate this for the case \(n=3\) in Fig. 5 (b). In contrast, when \(k=\pi\) does not correspond to the upper band edge, it naturally implies that the \(k=\pi\) point is located inside the band edges. Therefore, albeit being an exceptional point, the NESS conductance will show ballistic behavior \(N^{0}\), and this is illustrated explicitly for the \(n=3\) case in Fig. 5 (c). In the scenario when the upper band edge is located at some other value of \(k\neq\pi\), the eigenvalues of \(\mathbf{T}(\mu)\) come in complex-conjugate pairs and are exceptional in nature. This gives rise to an oscillatory behavior with an overall envelope of \(1/N^{2}\) scaling for the NESS conductance. This is illustrated in Fig. 5 (d). 3. Similar to \(n=2\), an exceptional hyper-surface may likely emerge above the upper band edge. However, the transfer matrix \(\mathbf{T}(\mu)\) evaluated at points on this hyper-surface has real eigenvalues (not equal to 1) that are exceptional in nature. The resulting conductance will be exponentially suppressed with system size. 4. It is worth mentioning that fourth-order exceptional points will always emerge at \(k=\pi\) when the hopping strengths satisfy the condition in Eq. 8. For \(n\geq 2\), the fourth-order exceptional point \(\Gamma_{e}\) will become an \((n-2)\)-dimensional hyper-surface of fourth-order exceptional points. With odd \(n\), in the presence of such fourth-order exceptional points in the transfer matrix, the NESS conductance will always show ballistic behaviour, as \(k=\pi\) does not correspond to the upper band edge, as discussed in Eq. 11. Whereas with even \(n\), at \(k=\pi\) the NESS conductance will show subdiffusive transport with \(1/N^{2}\) scaling, as it corresponds to the upper band edge. There is a possibility that a \(2n\)-th order exceptional point may appear for a finite-range lattice with hopping range \(n\). Needless to mention, we find that searching for such higher-order exceptional points for \(n>2\) is both analytically and numerically highly challenging, given the high dimensionality of the parameter space. Having established a strong sense of universality in NESS transport properties with respect to the range of the hopping parameter, an important question is that of robustness to imperfections in realistic systems. In Sec. V.3 we address this point. ### Robustness In this section, we discuss the fate of the anomalous transport that occurs at the band edges of the clean lattice system at zero temperature with respect to (i) weak disorder within the system, (ii) a chemical potential fixed near the band edge energies, and (iii) finite but low temperature. Let us first discuss the situation when the clean lattice Hamiltonian \(\hat{H}\) in Eq. 1 is subjected to weak on-site disorder of strength \(\delta\).
The Hamiltonian for such a disordered system takes the form, \[\hat{H}_{\mathrm{tot}}=\hat{H}+\hat{H}_{\mathrm{d}}, \tag{83}\] where \(\hat{H}_{\mathrm{d}}\) describes the on-site disorder, given as \[\hat{H}_{\mathrm{d}}=\sum_{i=1}^{N}\epsilon_{i}\,\hat{c}_{i}^{\dagger}\hat{c}_{i}, \tag{84}\] with \(\epsilon_{i}\) chosen randomly from a uniform distribution \([0,\delta]\). In the presence of such on-site disorder, we investigate the fate of the subdiffusive behaviour. Additionally, we also shift the chemical potential across the band edge by an amount \(b_{r}\), where \(b_{r}\) is a random number chosen from a uniform distribution \([-\delta,\delta]\). In other words, we set \[\mu=\omega_{b}+b_{r}. \tag{85}\] Before discussing the fate of anomalous transport when subject to disorder, we will analyse the gap \(\Delta_{N}\) between the upper or lower band-edge energy of a finite-size system (i.e., finite \(N\)) and that of its corresponding thermodynamic limit (i.e., \(N\to\infty\)) for a clean system, i.e., \(\epsilon_{i}=0\) in Eq. 84. In Fig. 6 (a), we plot this gap parameter \(\Delta_{N}\) with system size \(N\). We see that \(\Delta_{N}\) decays as \(N^{-2}\). For a chosen value of \(N^{*}\), there is a \(\Delta_{N^{*}}\) shown by the black circles in Fig. 6 (a). This \(\Delta_{N^{*}}\) provides an estimate for the disorder strength \(\delta\) for which anomalous scaling is observed approximately up to system size \(N^{*}\). In Fig. 6 (b) and (c), we display the robustness of the subdiffusive \(1/N^{2}\) scaling with respect to weak disorder \(\delta=10^{-5}\) [Fig. 6 (b)] and \(\delta=10^{-3}\) [Fig. 6 (c)] by plotting 100 different disorder realizations as well as their disorder-averaged values. Note that each disorder realization stands for a particular chemical potential \(\mu\) (Eq. 85) and onsite energy \(\epsilon_{i}\) (Eq. 84). Figure 5: (color online) Plot for the system-size scaling of NESS conductance for \(n=3\) at various exceptional points of the exceptional hyper-surfaces. In all these figures we set \(t_{1}=1\), \(t_{3}=1/9\), and the values of \(t_{2}\) are displayed in the corresponding plots. (a) Subdiffusive \(1/N^{2}\) scaling is reported at the lower band edge that corresponds to \(k=0\), (b) subdiffusive \(1/N^{2}\) scaling at \(k=\pi\) that corresponds to the upper band edge, (c) ballistic \(N^{0}\) scaling with system size at \(k=\pi\), which does not correspond to the upper band edge, and (d) oscillatory behaviour with an overall envelope that is subdiffusive in nature with \(1/N^{2}\) scaling. This occurs at the upper band edge with \(k\approx 1.92\) (i.e., \(k\neq\pi\)). Figure 6: (color online). Figures supporting the robustness of our finding of anomalous transport to various kinds of disorder. [Left panel] The figure shows the gap \(\Delta_{N}\) (for a clean system, i.e., \(\epsilon_{i}=0\) in Eq. 84) between the upper or lower (blue or orange, respectively) band energy of a finite-size system (i.e., finite \(N\)) and that of its corresponding thermodynamic limit (i.e., \(N\to\infty\)). The gap \(\Delta_{N}\) for a clean system decreases with system size \(N\) as \(1/N^{2}\). This figure plays a pivotal role in getting an estimate of the allowed disorder that does not destroy anomalous transport. This estimate is made as follows: the vertical black and brown dotted lines represent four pairs \((\Delta_{N}^{*},N^{*})\) shown by four black circles. The right element of each pair \((\Delta_{N}^{*},N^{*})\) gives an estimate of \(N^{*}\) up to which anomalous transport is robust. The left element of each pair \((\Delta_{N}^{*},N^{*})\) gives the corresponding estimate for the allowed disorder strength, i.e., \(\delta\sim\Delta_{N^{*}}\). [Middle panel] The figure shows the robustness of the anomalous transport in the presence of weak disorder \(\delta=10^{-5}\), both in the on-site potential in Eq. 84 and in the chemical potential \(\mu=\omega_{b}+b_{r}\), where \(b_{r}\) is a random number chosen from a uniform distribution \([-\delta,\delta]\). We have plotted the conductance for 100 different realizations (represented by light color lines) as well as the mean value (represented by dots). The deviation from anomalous behavior occurs approximately near \(N^{*}\) (represented by dotted vertical lines), which is estimated using the plot in the left panel of the same figure. Likewise, the left panel of the same figure gives an estimate of \(\delta\), which in this case is \(\delta=10^{-5}\). [Right panel] Figure similar to the middle panel for the pairs \((\Delta_{N}^{*},N^{*})\) which yield \(\delta=10^{-3}\). Note that a similar analysis will hold even when similar disorder is present in the hopping parameters \(t_{1}\) and \(t_{2}\). The deviation from the subdiffusive behaviour occurs near a critical finite system size \(N^{*}\), which can be clearly seen from Fig. 6(b) and (c). We end this section by making a comment regarding finite (but low) temperature. The robustness of the anomalous transport (up to a critical finite system size), despite having a window around the band edge as per Eq. 85, also suggests a possible window of temperature (albeit small) within which the subdiffusive nature of the scaling is not destroyed. We expect subdiffusive behaviour of the conductance up to an inverse temperature satisfying \(\beta\delta>1\) or \(\beta\Delta_{N^{*}}>1\). From the above detailed analysis, one can conclude that the NESS scaling of conductance with system size at both the band edges, with or without oscillations, remains robust in the presence of (i) weak on-site disorder, (ii) fine-tuned energies across the band edges, and (iii) low temperatures. ## VI Summary and Outlook In summary, we have performed a detailed analysis of the non-hermitian properties of transfer matrices and exceptional hyper-surfaces and their impact on the scaling of NESS conductance for an arbitrary finite-range hopping model (Table 1 and Table 2). We have established the connection between the non-equilibrium steady-state (NESS) conductance and the underlying non-hermitian transfer matrix for these lattice models. We unravel the non-trivial role played by exceptional points in determining the universal system-size scaling of NESS conductance at the band edges (Table 2, Fig. 4, Fig. 5). We further provide evidence that the value of the scaling exponent is remarkably robust to the order of the exceptional point. The signature of the upper band edge not being located at \(k=\pi\) shows up in the conductance as an interesting oscillation with an overall \(N^{-2}\) envelope. It is interesting to note that though the exceptional points appear at very specific energies (and are therefore sensitive), the NESS conductance is nonetheless robust (Fig. 6) against weak onsite disorder, small shifts in chemical potential, and low temperature.
Having done a detailed investigation of the consequences of non-hermitian transfer matrices and exceptional points for NESS conductance, a natural and interesting question concerns the role of external perturbations such as Büttiker voltage probes [56; 57; 58; 59; 60], which model incoherent processes within the system without directly taking part in the transport process. Such studies are especially fascinating because exceptional points are usually sensitive to external perturbations, and the sensitivity crucially depends on the order of the exceptional points. Hence, it is an interesting and challenging task to see the impact of the higher-order exceptional points (that were reported in this work) on the conductance due to incoherent processes induced by such probes. Another fascinating and challenging problem is investigating anomalous transport at such exceptional points starting from a many-body interacting Hamiltonian with finite-range hopping. ## Acknowledgement The authors would like to acknowledge Archak Purkayastha for numerous useful discussions. M.S. acknowledges funding from the National Postdoctoral Fellowship Scheme (NPDF), SERB file No. PDF/2020/000992. B.K.A. acknowledges the MATRICS grant MTR/2020/000472 from SERB, Government of India and the Shastri Indo-Canadian Institute for providing financial support for this research work in the form of a Shastri Institutional Collaborative Research Grant (SIRG). B.K.A. would also like to acknowledge funding from the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS) of the Department of Science and Technology, Govt. of India through the I-HUB Quantum Technology Foundation, Pune, India. M.K. would like to acknowledge support from the project 6004-1 of the Indo-French Centre for the Promotion of Advanced Research (IFCPAR), Ramanujan Fellowship (SB/S2/RJN-114/2016), SERB Early Career Research Award (ECR/2018/002085) and SERB Matrics Grant (MTR/2019/001101) from the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India. M.K. acknowledges support of the Department of Atomic Energy, Government of India, under Project No. 19P1112R&D. ## Appendix A Details about the bare Green's function for the finite-range hopping model In this section, we provide the details of calculating the bare Green's function \(\mathbf{g}(\mu)\), given in Eq. 20, for a finite-range lattice model with system size \(N\) and range of hopping \(n\). The details of this calculation can be found in Ref. [53]. Here we summarize the main points of the derivation to obtain \(\mathbf{g}(\mu)\). The calculation of \(\mathbf{g}(\mu)\) involves calculating the inverse of \(\mathbf{M}(\mu)\), as defined in Eq. 21. To obtain the inverse, we use the identity \(\mathbf{M}(\mu)\mathbf{M}(\mu)^{-1}=\mathbb{I}\), which gives, \[\sum_{j=1}^{N}\left\langle i|\mathbf{M}(\mu)|j\right\rangle\left\langle j|\mathbf{M}(\mu)^{-1}|k\right\rangle=\delta_{i,k}. \tag{A1}\] Now using Eq. 22, we can write Eq. A1 as, \[\sum_{m=\alpha(i)}^{\eta(i)}a(|m|)\left\langle i+m|\mathbf{M}(\mu)^{-1}|k\right\rangle=\delta_{i,k}, \tag{A2}\] where the sum runs from \(\alpha(i)=\text{Max}\{1-i,-n\}\) to \(\eta(i)=\text{Min}\{N-i,n\}\).
We now define a vector \(\mathbf{V}_{i}(j)\) with dimension \(2n\times 1\) given as, \[\mathbf{V}_{i}(j)=\begin{pmatrix}\left\langle i-n+1|\mathbf{M}(\mu)^{-1}|j\right\rangle\\ \left\langle i-n+2|\mathbf{M}(\mu)^{-1}|j\right\rangle\\ \left\langle i-n+3|\mathbf{M}(\mu)^{-1}|j\right\rangle\\ \vdots\\ \left\langle i+n|\mathbf{M}(\mu)^{-1}|j\right\rangle\end{pmatrix}. \tag{A3}\] Here, \(1\leq i,j\leq N\); otherwise \(\left\langle i|\mathbf{M}(\mu)^{-1}|j\right\rangle\) is zero. Using the transfer matrix \(\mathbf{T}(\mu)\) of dimension \(2n\times 2n\) in Eq. 19, we can rewrite Eq. A2 as, \[\mathbf{T}(\mu)\mathbf{V}_{i}(j)=\mathbf{V}_{i-1}(j)-\delta_{i,j}\mathbb{I}|\mathbf{1}\rangle, \tag{A4}\] where \(|\mathbf{1}\rangle\) is a column matrix of dimension \(2n\times 1\) with the first element \(1\) and all other elements \(0\). By iterating Eq. A4, we obtain, \[\mathbf{V}_{i}(j)=\begin{cases}\mathbf{T}(\mu)^{-i}\mathbf{V}_{0}(j),&\text{if }\ j>i\\ \mathbf{T}(\mu)^{-i}\mathbf{V}_{0}(j)-\mathbf{T}(\mu)^{-(i-j+1)}|\mathbf{1}\rangle&\text{if }j\leq i.\end{cases} \tag{A5}\] Note that the vector \(\mathbf{V}_{0}(j)\) in Eq. A5 contains zeros in its first \(n\) elements. Therefore, using Eq. A3 and Eq. A5 we finally obtain, \[\left\langle i|\mathbf{M}(\mu)^{-1}|j\right\rangle=\begin{cases}\sum\limits_{m=1}^{n}\left\langle n|\mathbf{T}(\mu)^{-i}|n+m\right\rangle\left\langle m|\mathbf{M}(\mu)^{-1}|j\right\rangle,&\text{if }\ j>i\\ \sum\limits_{m=1}^{n}\left\langle n|\mathbf{T}(\mu)^{-i}|n+m\right\rangle\left\langle m|\mathbf{M}(\mu)^{-1}|j\right\rangle-\left\langle n|\mathbf{T}(\mu)^{-(i-j+1)}|1\right\rangle&\text{if }j\leq i.\end{cases} \tag{A6}\] Eq. A6 shows that any matrix element of \(\mathbf{M}(\mu)^{-1}\) involves the information of \(\left\langle m|\mathbf{M}(\mu)^{-1}|j\right\rangle\) with \(m=1,2\ldots n\). These matrix elements can be determined by noting that the vector \(\mathbf{V}_{N}(j)\) has zeros in its last \(n\) elements. Using this fact in Eq. A6, we obtain, \[\sum\limits_{m=1}^{n}\left\langle s+n|\mathbf{T}(\mu)^{-N}|n+m\right\rangle\left\langle m|\mathbf{M}(\mu)^{-1}|j\right\rangle-\left\langle s+n|\mathbf{T}(\mu)^{-(N-j+1)}|1\right\rangle=0, \tag{A7}\] where \(s=1,2,3\ldots n\). Eq. A7 provides \(n\) linear equations for the \(n\) unknown matrix elements \(\left\langle m|\mathbf{M}(\mu)^{-1}|j\right\rangle\) with \(m=1,2\ldots n\), which can therefore be uniquely determined; this in turn helps to determine the rest of the matrix elements \(\left\langle i|\mathbf{M}(\mu)^{-1}|j\right\rangle\) following Eq. A6. Appendix B Transfer matrix eigenvalues in different regimes, exceptional lines, and points for \(n=2\) In this section, we discuss the nature of the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\) for \(n=2\), given in Eq. 48, in the different regimes and on the exceptional lines and points marked in Fig. 1. We find the eigenvalues analytically by solving \(F(\theta)=0\) with \(F(\theta)\) defined in Eq. 29. The analytical results obtained in this section are summarized in Table 1 and Table 2 (third column). _Regime (I) in Fig. 1:_ Regime (I) of Fig. 1, i.e., below the lower band edge, corresponds to \(\mu<-2t_{1}-2t_{2}\). To check the corresponding eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\), we set \(\mu=-2t_{1}-2t_{2}-\varepsilon\) with \(\varepsilon>0\). Note that \(\varepsilon>0\) is introduced to indicate that we are accessing regime (I). The condition \(F(\theta)=0\) then provides, \[-2t_{1}-2t_{2}-\varepsilon=-2t_{1}\cos\theta-2t_{2}\cos 2\theta.
\tag{B1}\] The solution for \(\theta\) can be written using Eq. B1 as, \[\theta=\cos^{-1}\Big{[}-\frac{t_{1}}{4t_{2}}\pm\sqrt{\left(1+\frac{t_{1}}{4t_{2}}\right)^{2}+\frac{\varepsilon}{4t_{2}}}\Big{]}. \tag{B2}\] For the case when we have the negative sign in the argument of Eq. B2, the argument inside the \(\cos^{-1}\) is always less than \(-1\). For the case when we have the positive sign in the argument of Eq. B2, since \(\sqrt{(1+\frac{t_{1}}{4t_{2}})^{2}+\frac{\varepsilon}{4t_{2}}}>1+\frac{t_{1}}{4t_{2}}\), the argument inside \(\cos^{-1}\) is always greater than \(1\). As a result, in this below-lower-band-edge regime, the argument inside \(\cos^{-1}\) is either greater than \(1\) or less than \(-1\). Therefore, the allowed solutions for \(\theta\) are of the form \(\theta=c+id\), where \(c\) can either be \(0\) (mod \(2\pi\)) or \(\pi\) and \(d\) is real. Thus, the solutions \(\theta\) do not match any wave-vector \(k\) value of the lattice. As a consequence, all the transfer matrix eigenvalues, given by \(e^{i\theta}\), are real with an absolute value not equal to \(1\). _Regime (II) in Fig. 1:_ In a similar way, let us consider a small number \(\varepsilon>0\) to check the transfer matrix eigenvalues in regime (II) of Fig. 1, where \(-2t_{1}-2t_{2}<\mu<2t_{1}-2t_{2}\). For this case, we set \(\mu=-2t_{1}-2t_{2}+\varepsilon\) with \(0<\varepsilon<4t_{1}\). The condition \(F(\theta)=0\) provides, \[-2t_{1}-2t_{2}+\varepsilon=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B3}\] Using Eq. B3, we can write the solution for \(\theta\) as, \[\theta=\cos^{-1}\Big{[}-\frac{t_{1}}{4t_{2}}\pm\sqrt{\left(1+\frac{t_{1}}{4t_{2}}\right)^{2}-\frac{\varepsilon}{4t_{2}}}\Big{]}. \tag{B4}\] The second term in the argument of Eq. B4 is always positive in the regime \(0<\varepsilon<4t_{1}\) and is bounded by, \[\frac{t_{1}}{4t_{2}}-1 <\sqrt{\left(1+\frac{t_{1}}{4t_{2}}\right)^{2}-\frac{\varepsilon}{4t_{2}}}<1+\frac{t_{1}}{4t_{2}},\text{ if }\ t_{2}<t_{1}/4\] \[0 <\sqrt{\left(1+\frac{t_{1}}{4t_{2}}\right)^{2}-\frac{\varepsilon}{4t_{2}}}<2,\text{ if }t_{2}=t_{1}/4\] \[1-\frac{t_{1}}{4t_{2}} <\sqrt{\left(1+\frac{t_{1}}{4t_{2}}\right)^{2}-\frac{\varepsilon}{4t_{2}}}<1+\frac{t_{1}}{4t_{2}},\text{ if }t_{2}>t_{1}/4 \tag{B5}\] With Eq. B5, for the case when we have the positive sign in the argument of Eq. B4, the quantity inside \(\cos^{-1}\) is bounded between \(-1\) and \(1\). This leads to one real solution \(\theta\), which matches a wave-vector \(k\) of the lattice. In a similar way, for the case when we have the negative sign in the argument of Eq. B4, the quantity inside \(\cos^{-1}\) is always less than \(-1\). This leads to complex solutions of \(\theta\) of the form \(\theta=c+id\), where \(c=\pi\) and \(d\) is real. As a result, the transfer matrix has two real eigenvalues and one complex-conjugate pair of unimodular eigenvalues. _Regime (III) in Fig. 1:_ Now, to explain the nature of the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\) in regime (III) of Fig. 1, which lies between \(2t_{1}-2t_{2}<\mu<\frac{t_{1}^{2}}{4t_{2}}+2t_{2}\) with \(t_{2}>t_{1}/4\), we set \(\mu=2t_{1}-2t_{2}+\varepsilon\) with \(0<\varepsilon<\varepsilon_{c}\). At the transition point given by, \[\varepsilon_{c}=4t_{2}\bigg{(}1-\frac{t_{1}}{4t_{2}}\bigg{)}^{2}, \tag{B6}\] the chemical potential \(\mu\) corresponds to the upper band edge, i.e., line C of Fig. 1. With that, \(F(\theta)=0\) provides, \[2t_{1}-2t_{2}+\varepsilon=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B7}\] From Eq.
B7, we can easily write the solution for \(\theta\) as, \[\theta=\cos^{-1}\Big{[}-\frac{t_{1}}{4t_{2}}\pm\sqrt{\left(1-\frac{t_{1}}{4t_{2}}\right)^{2}-\frac{\varepsilon}{4t_{2}}}\Big{]}. \tag{B8}\] Thus, the second term in Eq. B8 is bounded as, \[0<\sqrt{\left(1-\frac{t_{1}}{4t_{2}}\right)^{2}-\frac{\varepsilon}{4t_{2}}}<1-\frac{t_{1}}{4t_{2}}. \tag{B9}\] With Eq. B9, and since \(t_{1}/4t_{2}<1\), it follows from Eq. B8 that for both the positive and negative signs in the argument, the entire quantity inside \(\cos^{-1}\) is bounded between \(-1\) and \(1\). Thus, \(\theta\) will have two real solutions, which match wave-vectors \(k\) of the lattice. Thus the transfer matrix eigenvalues form two complex-conjugate pairs. _Regime (IV) in Fig. 1:_ Now, to analyse regime (IV) of Fig. 1, i.e., \(\mu>\frac{t_{1}^{2}}{4t_{2}}+2t_{2}\), we set \(\mu=2t_{1}-2t_{2}+\varepsilon_{c}+\varepsilon\) with \(\varepsilon>0\); recall that \(\varepsilon_{c}\) is defined in Eq. B6. The condition \(F(\theta)=0\) gives, \[2t_{1}-2t_{2}+\varepsilon_{c}+\varepsilon=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B10}\] Using Eq. B10, we can write the solution for \(\theta\) as, \[\theta=\cos^{-1}\bigg{[}-\frac{t_{1}}{4t_{2}}\pm\sqrt{\left(1-\frac{t_{1}}{4t_{2}}\right)^{2}-\left(\frac{\varepsilon_{c}+\varepsilon}{4t_{2}}\right)}\bigg{]}. \tag{B11}\] Using the value of \(\varepsilon_{c}\) [Eq. B6], Eq. B11 can be simplified to, \[\theta=\cos^{-1}\bigg{[}-\frac{t_{1}}{4t_{2}}\pm i\sqrt{\frac{\varepsilon}{4t_{2}}}\bigg{]}. \tag{B12}\] Thus, for any \(\varepsilon>0\), the solutions of \(\theta\) are complex numbers of the form \(\theta=c+id\) with \(c,d\) real. This leads to complex transfer matrix eigenvalues with absolute value never equal to \(1\). _Regime (V) in Fig. 1:_ To understand the transfer matrix eigenvalues in regime (V), i.e., \(2t_{1}-2t_{2}<\mu<2t_{2}+\frac{t_{1}^{2}}{4t_{2}}\) of Fig. 1 with \(t_{2}<t_{1}/4\), we set \(\mu=2t_{1}-2t_{2}+\varepsilon\) with \(0<\varepsilon<\varepsilon_{c}\). At the value \(\varepsilon=\varepsilon_{c}\) (Eq. B6), \(\mu\) hits the exceptional line E of Fig. 1. The condition \(F(\theta)=0\) then gives, \[2t_{1}-2t_{2}+\varepsilon=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B13}\] Using Eq. B13, the solution for \(\theta\) can be written as, \[\theta=\cos^{-1}\Big{[}-\frac{t_{1}}{4t_{2}}\pm\sqrt{\left(\frac{t_{1}}{4t_{2}}-1\right)^{2}-\frac{\varepsilon}{4t_{2}}}\Big{]}. \tag{B14}\] Since \(t_{1}/4t_{2}>1\), the second term of Eq. B14 is bounded as, \[0<\sqrt{\left(\frac{t_{1}}{4t_{2}}-1\right)^{2}-\frac{\varepsilon}{4t_{2}}}<\frac{t_{1}}{4t_{2}}-1. \tag{B15}\] From Eq. B15, for both the positive and negative signs in the argument of Eq. B14, the entire argument of \(\cos^{-1}\) is less than \(-1\). Thus, the solutions of \(\theta\) have the form \(\theta=c+id\) with \(c=\pi\) and \(d\) real, and therefore these solutions do not match any wave-vector \(k\) of the lattice. Thus all the eigenvalues of the transfer matrix \(\mathbf{T}(\mu)\) are real with absolute value not equal to \(1\). _Exceptional line A in Fig. 1:_ When the chemical potential \(\mu\) is at the lower band edge, \(\mu=-2t_{1}-2t_{2}\), along line A of Fig. 1, \(F(\theta)=0\) gives, \[-2t_{1}-2t_{2}=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B16}\] Eq. B16 can be simplified to, \[4\sin^{2}\frac{\theta}{2}\bigg{(}t_{1}+4t_{2}\cos^{2}\frac{\theta}{2}\bigg{)}=0.
\tag{B17}\] Thus, the solutions for \(\theta\) are \[\theta=0,\ \cos^{-1}\bigg{[}-\bigg{(}1+\frac{t_{1}}{2t_{2}}\bigg{)}\bigg{]}. \tag{B18}\] As the transfer matrix eigenvalues are \(e^{i\theta}\), \(\theta=0\) gives two eigenvalues equal to \(1\). Thus, we immediately see that any point corresponding to the lower band edge (line A of Fig. 1) is always an exceptional point of the underlying transfer matrix. Now, as the ratio \(t_{1}/t_{2}\) is always positive, the argument in \(\cos^{-1}\) is always less than \(-1\). Thus, the other solution for \(\theta\) has the form \(\theta=c+id\) with \(c=\pi\) and \(d\) real. Thus, at the lower band edge, the transfer matrix has an exceptional point with two eigenvalues equal to \(1\), while the two other eigenvalues are real with absolute value not equal to \(1\). _Exceptional line B in Fig. 1:_ When the chemical potential \(\mu\) is at the upper band edge, i.e., \(\mu=2t_{1}-2t_{2}\) with \(t_{2}<t_{1}/4\), along line B in Fig. 1, \(F(\theta)=0\) gives, \[2t_{1}-2t_{2}=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B19}\] Eq. B19 can be simplified to, \[4\cos^{2}\frac{\theta}{2}\bigg{(}-t_{1}+4t_{2}\sin^{2}\frac{\theta}{2}\bigg{)}=0. \tag{B20}\] Thus, the solutions for \(\theta\) are \[\theta=\pi,\ \cos^{-1}\bigg{[}1-\frac{t_{1}}{2t_{2}}\bigg{]}. \tag{B21}\] As the transfer matrix eigenvalues are \(e^{i\theta}\), \(\theta=\pi\) gives two eigenvalues equal to \(-1\). Thus, once again we immediately see that the upper band edge also corresponds to a transfer matrix exceptional point. Now, since \(t_{1}/t_{2}>4\), the other solution of \(\theta\) has the form \(\theta=c+id\) with \(c=\pi\), \(d\) real. Thus, along line B of Fig. 1, the transfer matrix has an exceptional point with two eigenvalues equal to \(-1\), while the two other eigenvalues are real numbers with absolute value not equal to \(1\). _Exceptional line C in Fig. 1:_ When the chemical potential \(\mu\) is along line C of Fig. 1, i.e., \(\mu=\frac{t_{1}^{2}}{4t_{2}}+2t_{2}\) with \(t_{2}>t_{1}/4\), it corresponds to the upper band edge with wave-vector \(k\neq\pi\). Along this line, \(F(\theta)=0\) gives, \[\frac{t_{1}^{2}}{4t_{2}}+2t_{2}=-2t_{1}\cos\theta-2t_{2}\cos 2\theta. \tag{B22}\] Using Eq. B22, the solutions for \(\theta\) are, \[\theta=\cos^{-1}\bigg{[}-\frac{t_{1}}{4t_{2}}\bigg{]}. \tag{B23}\] Since the transfer matrix eigenvalues are \(e^{\pm i\theta}\), using Eq. B23 we can write the eigenvalues as, \[-\frac{t_{1}}{4t_{2}}+i\sqrt{1-\bigg{(}\frac{t_{1}}{4t_{2}}\bigg{)}^{2}},\ \ \ -\frac{t_{1}}{4t_{2}}-i\sqrt{1-\bigg{(}\frac{t_{1}}{4t_{2}}\bigg{)}^{2}}. \tag{B24}\] Since \(t_{1}/4t_{2}<1\), these eigenvalues are complex with absolute value \(1\). Thus, the upper band edge along line C of Fig. 1 has two pairs of complex exceptional points, as given in Eq. B24. _Exceptional line D in Fig. 1:_ When the chemical potential \(\mu=2t_{1}-2t_{2}\) with \(t_{2}>t_{1}/4\), along line D of Fig. 1, from Eq. B19 and Eq. B20 the solutions of \(\theta\) are, in a similar way, \[\theta=\pi,\ \cos^{-1}\bigg{[}1-\frac{t_{1}}{2t_{2}}\bigg{]}. \tag{B25}\] Thus two transfer matrix eigenvalues are \(-1\) (exceptional points) along this line D. Since \(0<t_{1}/t_{2}<4\), the argument of \(\cos^{-1}\) in the other solution is bounded between \(-1\) and \(1\). Thus, the other two eigenvalues of the transfer matrix form a complex-conjugate pair with absolute value \(1\). _Exceptional line E in Fig. 1:_ To understand the transfer matrix eigenvalues along line E, i.e., \(\mu=\frac{t_{1}^{2}}{4t_{2}}+2t_{2}\) of Fig.
with \(t_{2}<t_{1}/4\), we follow the same analysis as for exceptional line C. The eigenvalues of the transfer matrix are,
\[-\frac{t_{1}}{4t_{2}}+\sqrt{\bigg{(}\frac{t_{1}}{4t_{2}}\bigg{)}^{2}-1},\ \ \ -\frac{t_{1}}{4t_{2}}-\sqrt{\bigg{(}\frac{t_{1}}{4t_{2}}\bigg{)}^{2}-1}. \tag{126}\]
Since \(t_{1}/4t_{2}>1\), all the eigenvalues are real with absolute value not equal to \(1\).

_Exceptional point \(\Gamma_{e}\) in Fig. 1:_ When the chemical potential \(\mu\) is at the upper band edge, i.e. \(\mu=2t_{1}-2t_{2}\) with \(t_{1}/4t_{2}=1\) (at the \(\Gamma_{e}\) point of Fig. 1), \(F(\theta)=0\) gives,
\[4\cos^{2}\frac{\theta}{2}\bigg{(}-t_{1}+4t_{2}\sin^{2}\frac{\theta}{2}\bigg{)}=0. \tag{127}\]
This is exactly the same as Eq. 120 with \(t_{2}=t_{1}/4\). Thus, exactly as in Eq. 121, the solutions for \(\theta\) are
\[\theta=\pi,\ \ \cos^{-1}\bigg{[}1-\frac{t_{1}}{2t_{2}}\bigg{]}. \tag{128}\]
Since \(t_{2}=t_{1}/4\), the argument of \(\cos^{-1}\) equals \(1-2=-1\), so the second solution is also \(\theta=\pi\) and all the transfer matrix eigenvalues are \(-1\). Thus, at this point, the transfer matrix has a fourth-order exceptional point. We have given the analytical results for the transfer matrix eigenvalues in all the cases in Table 1, and we have plotted the transfer matrix eigenvalues in Fig. 2.
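Every case above reduces to tracking the two roots of a quadratic in \(\cos\theta\): writing \(\cos 2\theta=2\cos^{2}\theta-1\), the condition \(F(\theta)=0\), i.e. \(\mu=-2t_{1}\cos\theta-2t_{2}\cos 2\theta\), becomes \(4t_{2}\cos^{2}\theta+2t_{1}\cos\theta+(\mu-2t_{2})=0\). The minimal numpy sketch below reproduces the classification numerically; the function names and the sample parameter values are our own illustrative choices, not taken from the text.

```python
import numpy as np

def thetas(t1, t2, mu):
    """Roots of F(theta) = 0, i.e. mu = -2 t1 cos(theta) - 2 t2 cos(2 theta),
    solved as a quadratic in c = cos(theta):
    4 t2 c^2 + 2 t1 c + (mu - 2 t2) = 0."""
    disc = t1**2 - 4 * t2 * (mu - 2 * t2) + 0j   # complex sqrt handles disc < 0
    cs = [(-t1 + s * np.sqrt(disc)) / (4 * t2) for s in (1, -1)]
    return [np.arccos(c) for c in cs]            # complex theta = c + i d when |cos theta| > 1

def eigenvalue_moduli(t1, t2, mu):
    """Moduli of the four transfer matrix eigenvalues e^{+/- i theta}."""
    return [abs(np.exp(s * 1j * th)) for th in thetas(t1, t2, mu) for s in (1, -1)]

t1 = 1.0
# regime (III)-type point (t2 > t1/4, mu inside the band): all |lambda| = 1
print(np.round(eigenvalue_moduli(t1, 0.5, 2 * t1 - 2 * 0.5 + 0.1), 4))
# regime (V)-type point (t2 < t1/4, 2t1 - 2t2 < mu < t1^2/(4 t2) + 2 t2): no |lambda| = 1
print(np.round(eigenvalue_moduli(t1, 0.2, 2 * t1 - 2 * 0.2 + 0.01), 4))
```

The first line prints four unit moduli (two unit-circle complex-conjugate pairs), while the second prints moduli bounded away from \(1\), matching the regime (III) and regime (V) conclusions above.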
2305.03179
Qumode transfer between continuous and discrete variable devices
Transferring quantum information between different types of quantum hardware is crucial for integrated quantum technology. In particular, converting information between continuous-variable (CV) and discrete-variable (DV) devices enables many applications in quantum networking, quantum sensing, quantum machine learning, and quantum computing. This paper addresses the transfer of CV-encoded information between CV and DV devices. We present a resource-efficient method for encoding CV states and implementing CV gates on DV devices, as well as two measurement-based protocols for transferring CV states between CV and DV devices. The success probability of the transfer protocols depends on the measurement outcome and can be increased to near-deterministic values by adding ancillary qubits to the DV devices.
Alexandru Macridin, Andy C. Y. Li, Panagiotis Spentzouris
2023-05-04T21:52:27Z
http://arxiv.org/abs/2305.03179v4
Qumode transfer between continuous and discrete variable devices by near-deterministic teleportation protocols ###### Abstract Transferring quantum information between different types of quantum hardware is crucial for integrated quantum technology. In particular, converting information between continuous-variable (CV) and discrete-variable (DV) devices enables many applications in quantum networking, quantum sensing, quantum machine learning, and quantum computing. This paper addresses the transfer of CV-encoded information between CV and DV devices. We present an efficient method for encoding CV states and implementing CV gates on DV devices, as well as two teleportation protocols for transferring CV states between CV and DV devices. The success probability of the teleportation protocols depends on the measurement outcome and can be increased to near-deterministic values by adding ancillary qubits to the DV devices. ## I Introduction In the past decade, there has been a major focus on developing quantum technology that holds immense potential for revolutionizing communication, sensing, and computing domains. A wide variety of platforms, including superconducting circuits, microwave cavities, optical systems, trapped ions, atoms, spins, and others, have been used to process quantum information [1; 2]. In these systems, information is encoded in a set of quantum states that can be discrete (qubits and qudits) or continuous (qumodes). Depending on the specific area of application, both types of encoding have their advantages and disadvantages. For the development of integrated quantum technology, it is essential to have the capability to transfer information between all types of quantum devices. While effort has previously been devoted to processing logical qubits encoded on continuous-variable (CV) devices [3; 4; 5; 6], we consider an alternative perspective here: encoding and processing continuous-variable information on discrete-variable (DV) devices. We present a method to efficiently encode CV information into DV devices, along with two measurement-based transfer protocols to convert information between CV and DV devices. The ability to develop hybrid DV-CV technology and convert encoded information between platforms is crucial for building complex systems, such as the quantum internet [7] and quantum sensor networks [8]. For instance, while superconducting chips are better for data processing, optical devices are currently best for long-distance communication and are easily scalable. Various hybrid DV-CV methods have recently been proposed for quantum teleportation [9; 10; 11], entanglement distillation [12], and quantum computing [13; 14]. In addition, methods to encode qubits in CV devices, such as cat states [3; 4; 5; 6], and GKP states [15], have also been proposed to increase qubit resilience to errors. Furthermore, significant effort has been put into developing methods to entangle DV and CV qubits [16; 17; 18] and to convert DV and CV qubits from one to the other via teleportation protocols [19; 20; 21]. Aside from the possibility of encoding qubits, the CV devices have the ability to process information encoded in the continuous bases formed by the eigenvectors of the field quadrature operators, known also as _qumode_ encoding. CV quantum computing [22; 23] is universal [24], meaning that any unitary transformation generated by a polynomial function of the quadrature operators can be decomposed into a finite number of gates drawn from a finite set of gates. 
Recent advancements in photonic chips [25; 26] and the availability of CV quantum software, such as Strawberry Fields [27], indicate that this is an active and rapidly evolving research area. A significant amount of effort has been devoted to the development of quantum CV algorithms. Currently, the CV algorithms address a wide range of problems, such as scalar field simulations [28; 29], spin simulations [30], attractive Bose-Hubbard simulations [31], partial differential equations [32], quantum approximate optimization algorithm [33], Grover's search [34] and the Deutsch-Jozsa problem [35]. There is also growing interest in employing CV systems in quantum machine learning (QML) methods [36; 37]. In this paper, we address the encoding of qumodes in DV devices and the conversion of qumodes between CV and DV devices. A qumode is a quantum state expressed in an infinite basis set. Therefore, transferring qumodes to a finite qubit device is generally an ill-posed problem. However, for most practical purposes, we can impose a boson occupation cutoff, \(N_{b}\), such that the contribution of states with more than \(N_{b}\) bosons is negligible. One simple way to encode such a truncated qumode to a DV device is by mapping the boson number states with \(n<N_{b}\) to the DV computational basis states. However, this direct encoding may have limited usefulness because information encoded in this way cannot be easily processed on the DV device. This is because information encoded in qumodes is generally processed by employing gates that are functions of the quadrature operators. To achieve effective encoding of qumodes onto DV devices, we not only need to map the qumode's state onto a DV device but also to efficiently implement CV gates on DV devices. To encode qumodes on DV devices, we take advantage of the properties of their wavefunction at large argument and use the Nyquist-Shannon expansion of functions with support on finite intervals [38] to represent them in a discrete quadrature basis. We will call the qumodes mapped onto DV systems in this way _discrete qumodes_. This encoding has high accuracy, which increases exponentially with the size of the finite Hilbert space [39], and allows for a straightforward and efficient implementation of CV gates on DV devices. We present two teleportation protocols: one for transferring CV qumodes to their corresponding discrete representation on DV devices, and another for transferring discrete DV qumodes to CV devices. Both protocols are modifications of the _one-qubit_ CV teleportation protocol described in [23; 40]. They involve entangling the two systems, measuring the first system, and manipulating the second system using operations that depend on the measurement outcome, as shown diagrammatically in Figs. 1 and 4. The teleportation protocols are non-deterministic since the teleportation probability of success, defined in Section IV, depends on the measurement outcome and is smaller than one. However, the probability of success can be increased by using an ancillary DV register. We call our protocols _near-deterministic_ because the probability of success can be brought exponentially close to one by increasing the number of ancilla qubits. For example, we find that a CV state with a boson number cutoff \(N_{b}=100\) can be teleported with an accuracy of \(\mathcal{O}(10^{-7})\) on a DV register of 8 qubits with a success probability of 0.99 (0.999) using an ancillary register of 13 (20) qubits. 
After teleportation, the ancillary register can be discarded. Furthermore, the teleportation protocols presented here might find immediate or near-future applications when used in the non-deterministic regime for qumodes with cutoff \(N_{b}<20\), since in this case, the total number of required DV qubits is \(\sim 4-6\).

We believe that our method for converting qumodes between CV and DV devices has a wide range of potential applications. For instance, quantum sensor networks could benefit from processing data collected by sensors with CV encoding on superconducting QPUs. Qubit-based quantum machine learning algorithms can increase their expressivity by including continuous-variable data encoding. The quantum tomography of CV states [41] can be reduced to an equivalent qubit system tomography problem. Transferring DV states to CV registers opens up new possibilities for non-Gaussian state preparation and the implementation of non-Gaussian operations on CV platforms. New measurement-based quantum algorithms [42; 43; 44; 45] that use hybrid CV-DV cluster states might be developed.

This paper is organized as follows: In Section II, we define the qumode and briefly introduce the gates required for CV quantum computing. In Section III, we introduce the discrete representation of qumodes on qubit devices. In Section IV, we present the teleportation protocols that transfer qumodes between CV and DV devices. In Section V, we show how to use an ancillary qubit register to increase the success probability of the teleportation protocols. Finally, in Section VI, we present a summary of our results and the conclusions.

## II CV states and CV quantum computing

Qumodes are vectors belonging to the Hilbert space of square integrable functions, \(L^{2}(\mathbb{R})\). The observables associated with qumodes are generated by the quadrature operators. We denote the quadrature operators by \(X\) and \(P\) because they are equivalent to the position and the momentum operators of a harmonic oscillator, respectively, obeying the canonical commutation relation \([X,P]=i\). The eigenvectors \(\{\ket{x}\}\) of \(X\) (\(X\ket{x}=x\ket{x}\)) and the eigenvectors \(\{\ket{p}\}\) of \(P\) (\(P\ket{p}=p\ket{p}\)) constitute continuous basis sets and are connected by the Fourier transform
\[\ket{p}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dxe^{ipx}\ket{x}. \tag{1}\]
Aside from continuous basis sets, \(L^{2}(\mathbb{R})\) also admits denumerable bases, like the ones formed by boson number states, also known as Fock states. The Fock states are eigenvectors of the harmonic oscillator Hamiltonian and of the boson number operator \(a^{\dagger}a\), where
\[a=\left(\sqrt{\mu}X+iP/\sqrt{\mu}\right)/\sqrt{2}, \tag{2}\]
and \(\mu\) is the boson mass. For example, in optical devices the bosons are the photons, while in other platforms, like trapped ion devices, the bosons can be the vibrational modes (phonons) [46].

CV computation employs operators with continuous spectra to process the data encoded in qumode states. It has been shown [24] that the evolution of any Hamiltonian that is a polynomial function of \(X\) and \(P\) can be simulated using only a small number of gate types.
For example, a sufficient set of gates for universal computation consists of [23]: _i)_ local Gaussian gates, such as the displacement gate \(e^{-i\eta X}\), the phase gate \(e^{-i\eta X^{2}}\), and the Fourier transform \(e^{i\frac{\pi}{4}}e^{-i\frac{\pi}{4}\left(P^{2}+X^{2}\right)}\), _ii)_ a non-local Gaussian gate that couples two different modes, like the CPHASE gate \(e^{-i\eta X_{i}\otimes X_{j}}\), and _iii)_ one local non-Gaussian gate, such as the cubic phase gate \(e^{-i\eta X^{3}}\). This example of the universal set of gates is not unique; equivalent alternatives can be considered. In optical systems, Gaussian gates can be relatively easily implemented using displacement, squeezing, phase shift, and beam splitter operations. However, the implementation of non-Gaussian gates is much more difficult [47].

## III Discrete representation of qumodes

The representation of bosonic states on qubit hardware has been discussed in previous works [48; 49; 50; 39], with a focus on fermion-boson and scalar field quantum simulations. In this work, we briefly present the main ideas, emphasizing the points that are most relevant for CV computations and qumode teleportation protocols. In this paper, we do not consider the direct encoding of Fock states to computational DV states. While this encoding efficiently represents states, we are not aware of any efficient way to implement typical CV gates on DV devices using this encoding [39].

### Nyquist-Shannon expansion of qumodes

We assume that a cutoff \(N_{b}\) can be determined such that the contribution of boson number states above \(N_{b}\) to the qumode state is negligible. As can be seen from Eq. (2), the definition of the boson operators is not unique; the bosons are defined up to a mass factor, \(\mu\). Bosons with different masses are related by a squeezing operation. For a given qumode the cutoff \(N_{b}\) depends on the boson mass \(\mu\). As discussed in [39], the smaller \(N_{b}\) is, the better the accuracy of the discrete representation of qumodes, which will be introduced in Section III.2. Keeping the boson mass as a tunable parameter can be useful for optimizing quantum algorithms and computational resources.

A wavefunction \(\phi(x)\) with no bosons above the cutoff decreases exponentially fast to zero as the magnitude of its argument \(|x|\) increases. The same is true for its Fourier transform, \(\hat{\phi}(p)\). A parameter \(L>0\) can be chosen such that \(\phi(x)\approx\mathcal{O}(\epsilon)\) when \(x\notin\left[-\frac{L}{\sqrt{\mu}},\frac{L}{\sqrt{\mu}}\right]\) and \(\hat{\phi}(p)\approx\mathcal{O}(\epsilon)\) when \(p\notin\left[-L\sqrt{\mu},L\sqrt{\mu}\right]\). Here \(\mathcal{O}(\epsilon)\) denotes a small quantity with magnitude of the order \(\epsilon\). The error \(\epsilon\) decreases exponentially as the support window parameter \(L\) increases. The Nyquist-Shannon theorem [38] states that a function with limited support in the Fourier space can be written as an infinite sum, with the sum terms proportional to the function sampled on a grid. In our case, the wavefunction is _almost_ limited (up to an error \(\mathcal{O}(\epsilon)\)) in _both_ \(x\) and \(p\) variables.
Therefore, the wavefunction can be represented as a _finite_ sum with \(N_{x}\) terms,
\[\phi(x)\equiv\langle x|\phi\rangle=\sum_{j=0}^{N_{x}-1}\phi(x_{j}+\delta_{x}\Delta_{x})u(x-x_{j}-\delta_{x}\Delta_{x})+\mathcal{O}(\epsilon), \tag{3}\]
where
\[\Delta_{x}=\frac{\pi}{L\sqrt{\mu}}, \tag{4}\]
\[L=\sqrt{\frac{\pi N_{x}}{2}}, \tag{5}\]
\[u(x)=\operatorname{sinc}\left(\frac{x}{\Delta_{x}}\right)\equiv\frac{\sin\left(\pi\frac{x}{\Delta_{x}}\right)}{\pi\frac{x}{\Delta_{x}}}, \tag{6}\]
\[x_{j}=\left(j-\frac{N_{x}-1}{2}\right)\Delta_{x}. \tag{7}\]
In Eq. (3), \(-0.5<\delta_{x}\leq 0.5\) is arbitrary, a consequence of the fact that the origin of the sampling grid in the Nyquist-Shannon expansion can be shifted by an arbitrary amount, as explained in Appendix B. Similarly, \(\hat{\phi}(p)\) can be expressed as
\[\hat{\phi}(p)\equiv\langle p|\phi\rangle=\sum_{m=0}^{N_{x}-1}\hat{\phi}(p_{m}+\delta_{p}\Delta_{p})v(p-p_{m}-\delta_{p}\Delta_{p})+\mathcal{O}(\epsilon), \tag{8}\]
where
\[\Delta_{p}=\frac{\pi\sqrt{\mu}}{L}=\mu\Delta_{x}, \tag{9}\]
\[v(p)=\text{sinc}\left(\frac{p}{\Delta_{p}}\right), \tag{10}\]
\[p_{m}=\left(m-\frac{N_{x}-1}{2}\right)\Delta_{p}, \tag{11}\]
and \(-0.5<\delta_{p}\leq 0.5\) is an arbitrary shift. The sampling sets \(\{\phi(x_{j}+\delta_{x}\Delta_{x})\}_{j\in\{0,\dots,N_{x}-1\}}\) and \(\{\hat{\phi}(p_{m}+\delta_{p}\Delta_{p})\}_{m\in\{0,\dots,N_{x}-1\}}\) are connected by shifted finite Fourier transforms, as follows:
\[\sqrt{\Delta_{p}}\hat{\phi}(p_{m}+\delta_{p}\Delta_{p})=\frac{1}{\sqrt{N_{x}}}\sum_{j=0}^{N_{x}-1}\sqrt{\Delta_{x}}\phi(x_{j}+\delta_{x}\Delta_{x})e^{-i\frac{2\pi}{N_{x}}\left(m-\frac{N_{x}-1}{2}+\delta_{p}\right)\left(j-\frac{N_{x}-1}{2}+\delta_{x}\right)}+\mathcal{O}(\epsilon), \tag{12}\]
\[\sqrt{\Delta_{x}}\phi(x_{j}+\delta_{x}\Delta_{x})=\frac{1}{\sqrt{N_{x}}}\sum_{m=0}^{N_{x}-1}\sqrt{\Delta_{p}}\hat{\phi}(p_{m}+\delta_{p}\Delta_{p})e^{i\frac{2\pi}{N_{x}}\left(m-\frac{N_{x}-1}{2}+\delta_{p}\right)\left(j-\frac{N_{x}-1}{2}+\delta_{x}\right)}+\mathcal{O}(\epsilon). \tag{13}\]
Equations (12) and (13) can be derived by directly calculating the Fourier transform of Eq. (3) and the inverse Fourier transform of Eq. (8), respectively. Note that the Fourier transform of the _sinc_ function is the rectangular function, see Eq. (13) in Appendix A.
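As a concrete illustration of Eq. (3), the short numpy sketch below reconstructs the \(n=0\) Fock wavefunction \(\phi(x)=\pi^{-1/4}e^{-x^{2}/2}\) (for \(\mu=1\)) from its \(N_{x}\) grid samples; the grid follows Eqs. (4)-(7) with \(\delta_{x}=0\), and the register size \(n_{q}=6\) is our illustrative choice.

```python
import numpy as np

mu, nq = 1.0, 6
Nx = 2**nq
L = np.sqrt(np.pi * Nx / 2)                  # Eq. (5)
dx = np.pi / (L * np.sqrt(mu))               # Eq. (4)
xj = (np.arange(Nx) - (Nx - 1) / 2) * dx     # Eq. (7)

phi = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2)   # n = 0 Fock state, mu = 1

x = np.linspace(-0.8 * L, 0.8 * L, 2001)
# Eq. (3): phi(x) ~ sum_j phi(x_j) u(x - x_j);
# np.sinc(t) = sin(pi t)/(pi t), so np.sinc((x - x_j)/dx) = u(x - x_j) of Eq. (6)
recon = phi(xj) @ np.sinc((x[None, :] - xj[:, None]) / dx)
print(np.max(np.abs(recon - phi(x))))        # O(eps), shrinking exponentially with Nx
```

Increasing \(n_{q}\) drives the maximum reconstruction error down roughly exponentially, in line with the \(\mathcal{O}(\epsilon)\) scaling discussed above.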
### Finite Hilbert space representation

We are constructing a finite Hilbert space of dimension \(N_{x}>N_{b}\). This is achieved by considering the basis \(\{\ket{j}\}\) with \(j\in\{0,1,\dots,N_{x}-1\}\) and defining the discrete position operator \(\bar{X}\) as,
\[\bar{X}\ket{j}=x_{j}\ket{j}, \tag{14}\]
where \(x_{j}\) is given by Eq. (7), and the discrete momentum operator \(\bar{P}\) as
\[\bar{P}=\mu\bar{\mathcal{F}}\bar{X}\bar{\mathcal{F}}^{-1}, \tag{15}\]
where \(\bar{\mathcal{F}}\) represents the centered discrete Fourier transform, defined by Eq. (14) in Appendix C (see also Eq. (16)). The vectors \(\{\ket{m}_{p}\}\), with \(m\in\{0,1,\dots,N_{x}-1\}\),
\[\ket{m}_{p}\equiv\bar{\mathcal{F}}\ket{m}=\frac{1}{\sqrt{N_{x}}}\sum_{j=0}^{N_{x}-1}e^{i\frac{2\pi}{N_{x}}\left(m-\frac{N_{x}-1}{2}\right)\left(j-\frac{N_{x}-1}{2}\right)}\ket{j} \tag{16}\]
are eigenvectors of \(\bar{P}\),
\[\bar{P}\ket{m}_{p}=p_{m}\ket{m}_{p}=\left(m-\frac{N_{x}-1}{2}\right)\Delta_{p}\ket{m}_{p}. \tag{17}\]
As shown in [39], when \(N_{x}\) is large enough, the discrete position and momentum operators obey (up to an error term \(\mathcal{O}(\epsilon)\)) the canonical commutation relation on the \(N_{b}\)-dimensional subspace defined by the projector \(\bar{Q}_{b}\),
\[\left[\bar{X},\bar{P}\right]\bar{Q}_{b}=i\bar{Q}_{b}+\mathcal{O}(\epsilon), \tag{18}\]
where
\[\bar{Q}_{b}=\sum_{n=0}^{N_{b}-1}\ket{\bar{n}}\bra{\bar{n}}. \tag{19}\]
In Eq. (19), \(\{\ket{\bar{n}}\}_{n}\) are the ordered eigenvectors of the discrete harmonic oscillator
\[H_{h}=\frac{1}{2}\bar{P}^{2}+\frac{\mu^{2}}{2}\bar{X}^{2}. \tag{20}\]
Let \(Q_{b}=\sum_{n=0}^{N_{b}-1}\left|n\right\rangle\left\langle n\right|\), with \(\left|n\right\rangle\) being the \(n\)-th Fock state of the CV Hilbert space, denote the projector on the subspace with the number of bosons below \(N_{b}\). There is an isomorphism between the CV subspace defined by the projector \(Q_{b}\) and the subspace of the finite Hilbert space defined by \(\bar{Q}_{b}\) (see also Fig. 2 in [39]). A CV wavefunction described by Eq. (3) can be encoded with \(\mathcal{O}(\epsilon)\) error on the discrete system as follows:
\[\left|\phi_{C}\right\rangle=\int\phi(x)\left|x\right\rangle_{C}dx\longleftrightarrow\left|\phi_{D}\right\rangle=\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\left|j\right\rangle_{D}. \tag{21}\]
Furthermore, any CV operator \(O(X,P)\) that acts on and yields states in the subspace defined by the projector \(Q_{b}\) can be mapped to the operator \(\bar{O}(\bar{X},\bar{P})\) which acts on the discrete space, by replacing \(X\) and \(P\) with \(\bar{X}\) and \(\bar{P}\), respectively:
\[O(X,P)Q_{b}\longleftrightarrow\bar{O}(\bar{X},\bar{P})\bar{Q}_{b}\ \ \text{when}\ \ O(X,P)Q_{b}=Q_{b}O(X,P)Q_{b}+\mathcal{O}(\epsilon). \tag{22}\]
By inspecting Eq. (3), it is clear that the information encoded in the DV state \(\left|\phi_{D}\right\rangle\), as described by Eq. (21), is sufficient to reproduce the CV wavefunction \(\phi(x)\) for all values of \(x\). Furthermore, the values of \(\phi(x)\) can be directly measured in the DV basis \(\{\left|j\right\rangle\}\) by applying the grid shift operator \(T_{\delta,0}\) before the measurement,
\[T_{\delta,0}\left[\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\left|j\right\rangle\right]=\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j}+\delta\Delta_{x})\left|j\right\rangle+\mathcal{O}(\epsilon), \tag{23}\]
where \(\delta=\frac{(x-x_{l})}{\Delta_{x}}\) and \(x_{l}\) is the grid point closest to \(x\). The grid shift operator \(T_{\delta,0}\) is a product of a shifted Fourier transform with an inverse shifted Fourier transform and is defined in Eq. (102) in Appendix C. The error \(\mathcal{O}(\epsilon)\) in the previous equations is determined by the weight of the wavefunction of the Fock state with \(N_{b}\) bosons and the weight of its Fourier transform outside the intervals \(\left[-\frac{L}{\sqrt{\mu}},\frac{L}{\sqrt{\mu}}\right]\) and \(\left[-L\sqrt{\mu},L\sqrt{\mu}\right]\), respectively. Both analytical and numerical investigations in [39; 49] have shown that \(\epsilon\) decreases exponentially as the number of discretization points increases, since \(L\propto\sqrt{N_{x}}\) (see Eq. (5)). For example, we have found that the error in the commutation relation described by Eq. (18) is smaller than \(10e^{-(0.51N_{x}-0.765N_{b})}\).
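The construction above is easy to check numerically. The sketch below builds \(\bar{X}\), \(\bar{\mathcal{F}}\) and \(\bar{P}=\mu\bar{\mathcal{F}}\bar{X}\bar{\mathcal{F}}^{-1}\) as dense matrices and verifies the projected commutation relation of Eq. (18); the values \(n_{q}=6\), \(N_{b}=8\), \(\mu=1\) are illustrative choices of ours.

```python
import numpy as np

def centered_dft(N):
    """Centered DFT of Eq. (16): F[j, m] = exp(i 2pi/N (m-(N-1)/2)(j-(N-1)/2)) / sqrt(N)."""
    k = np.arange(N) - (N - 1) / 2
    return np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

nq, Nb, mu = 6, 8, 1.0
N = 2**nq
dx = np.sqrt(2 * np.pi / (N * mu))             # Delta_x for L = sqrt(pi N / 2)
X = np.diag((np.arange(N) - (N - 1) / 2) * dx) # Eq. (14)
F = centered_dft(N)
P = mu * F @ X @ F.conj().T                    # Eq. (15); F is unitary

H = P @ P / 2 + mu**2 * X @ X / 2              # discrete oscillator, Eq. (20)
_, V = np.linalg.eigh(H)                       # eigenvectors ordered by energy
Qb = V[:, :Nb] @ V[:, :Nb].conj().T            # projector of Eq. (19)
print(np.linalg.norm((X @ P - P @ X) @ Qb - 1j * Qb, 2))   # tiny, cf. Eq. (18)
```

The printed norm sits comfortably below the \(10e^{-(0.51N_{x}-0.765N_{b})}\) bound quoted above.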
### Finite Hilbert space encoding on qubits

The \(N_{x}\) basis states \(\{\left|j\right\rangle\}\), with integer \(j\in\{0,...,N_{x}-1\}\), are represented on \(n_{q}=\log_{2}(N_{x})\) qubits in a binary encoding
\[\left|j\right\rangle=\left|j_{0}\right\rangle\left|j_{1}\right\rangle...\left|j_{n_{q}-1}\right\rangle, \tag{24}\]
where \(j_{q}\in\{0,1\}\), such that
\[j=\sum_{q=0}^{n_{q}-1}j_{q}2^{n_{q}-1-q}. \tag{25}\]
The discrete position operator is expressed as
\[\bar{X}=-\Delta_{x}\sum_{q=0}^{n_{q}-1}2^{n_{q}-1-q}\frac{\sigma_{q}^{z}}{2}, \tag{26}\]
where \(\sigma_{q}^{z}=\left|0\right\rangle\left\langle 0\right|_{q}-\left|1\right\rangle\left\langle 1\right|_{q}\) is the Pauli \(\sigma^{z}\) acting on qubit \(q\). The operator \(\bar{X}\) satisfies Eq. (14), as can be directly checked. The implementation of the discrete momentum operator \(\bar{P}\) is achieved by using Eq. (15), along with the implementation of the centered discrete quantum Fourier transform described in Appendix C.1. The gates required for universal CV quantum computation can be implemented on qubits by replacing \(X\) and \(P\) with \(\bar{X}\) and \(\bar{P}\), respectively, as mentioned in Section III.2. In Appendix D, we present the explicit implementation on qubits of the universal set of gates introduced in Section II. This is one of the main advantages of our encoding scheme: the CV gates can be efficiently implemented on qubit hardware. Additionally, in Appendix E we provide an efficient implementation of the discrete squeezing operator,
\[\bar{S}(r)=e^{i\frac{r}{2}\left(\bar{X}\bar{P}+\bar{P}\bar{X}\right)}. \tag{27}\]
The discrete squeezing operator will be used in Sections V.1 and V.2 to discard or add ancillary qubits to the DV device in order to increase the teleportation success probability. For that we will use the following property of \(\bar{S}(r)\),
\[\bar{S}(r)\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\ket{j}=\sqrt{\Delta_{x}e^{r}}\sum_{j=0}^{N_{x}-1}\phi(x_{j}e^{r})\ket{j}. \tag{28}\]
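To make the encoding concrete, the following sketch assembles \(\bar{X}\) directly from the Pauli decomposition of Eq. (26) and checks it against the diagonal form of Eq. (14); it then builds two representative CV gates which, being functions of \(\bar{X}\) alone, are diagonal in the computational basis. This only exhibits the matrices for illustration; the efficient circuit-level constructions are the subject of Appendices C.1, D and E, which are not reproduced here.

```python
import numpy as np
from functools import reduce

def xbar_pauli(nq, mu=1.0):
    """Discrete position operator from Eq. (26):
    Xbar = -Delta_x sum_q 2^(nq-1-q) sigma^z_q / 2."""
    N = 2**nq
    dx = np.sqrt(2 * np.pi / (N * mu))
    sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
    X = np.zeros((N, N))
    for q in range(nq):
        X -= dx * 2**(nq - 1 - q) / 2 * reduce(np.kron, [sz if r == q else I2 for r in range(nq)])
    return X

nq = 4
N = 2**nq
dx = np.sqrt(2 * np.pi / N)                     # mu = 1
X = xbar_pauli(nq)
print(np.allclose(np.diag(X), (np.arange(N) - (N - 1) / 2) * dx))   # True: Eq. (14)

# Gates generated by Xbar are diagonal, e.g. the cubic phase gate and
# the two-mode CPHASE gate of Section II:
eta = 0.3
cubic = np.diag(np.exp(-1j * eta * np.diag(X)**3))
cphase = np.diag(np.exp(-1j * eta * np.kron(np.diag(X), np.diag(X))))
```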
## IV Teleportation protocols

In this section we introduce two teleportation protocols. Both are modifications of the _one-qubit_ CV teleportation protocol described in [40, 23]. The goal of the first teleportation protocol is to transfer a CV qumode
\[\ket{\phi_{C}}=\int\phi(x)\ket{x}_{C}dx=\int\sum_{j=0}^{N_{x}-1}\phi(x_{j})u(x-x_{j})\ket{x}_{C}dx+\mathcal{O}(\epsilon) \tag{29}\]
to its discrete representation
\[\ket{\phi_{D}}=\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\ket{j}_{D}. \tag{30}\]
We measure the teleportation fidelity by
\[F_{D}=\left|\braket{\chi_{D}|\phi_{D}}\right|,\;\;\text{where}\;\;\ket{\chi_{D}}=\mathcal{T}^{CD}\left(\ket{\phi_{C}}\right), \tag{31}\]
and \(\mathcal{T}^{CD}\) represents the teleportation channel taking a CV state to a DV device. The goal of the second teleportation protocol is to take the DV state described by Eq. (30) to the corresponding CV state described by Eq. (29). The teleportation fidelity for this protocol is
\[F_{C}=\left|\braket{\chi_{C}|\phi_{C}}\right|,\;\;\text{where}\;\;\ket{\chi_{C}}=\mathcal{T}^{DC}\left(\ket{\phi_{D}}\right), \tag{32}\]
and \(\mathcal{T}^{DC}\) represents the teleportation channel taking a DV state to a CV device. The teleportation protocols involve measurement operations, and the resulting fidelity is dependent on the measurement outcome. We consider the teleportation successful if the teleportation fidelity is larger than a desired threshold value. As described in Sections IV.2 and IV.3, the success of the protocols is conditioned on the measurement outcome, and for certain outcomes, the teleportation fails. To quantify the success of the protocols, we define the _teleportation probability of success_ as the probability that the measurement outcome belongs to the set of measurements that yield a successful teleportation. As can be inferred from the above definition, the teleportation probability of success is dependent on the chosen fidelity threshold.

To facilitate further discussions, let us define a parameter \(K>0\) here, which depends on \(\epsilon\), as the _minimum_ value such that the weight of the normalized \(\phi(x)\) outside the interval \(\left[-\frac{K}{\sqrt{\mu}},\frac{K}{\sqrt{\mu}}\right]\) and the weight of the normalized \(\hat{\phi}(p)\) outside the interval \(\left[-K\sqrt{\mu},K\sqrt{\mu}\right]\) are smaller than or equal to \(\epsilon\), _i.e._,
\[\left(\int_{-\infty}^{-\frac{K}{\sqrt{\mu}}}\left|\phi(x)\right|^{2}dx+\int_{\frac{K}{\sqrt{\mu}}}^{\infty}\left|\phi(x)\right|^{2}dx\right)^{\frac{1}{2}}\leq\epsilon, \tag{33}\]
\[\left(\int_{-\infty}^{-K\sqrt{\mu}}\left|\hat{\phi}(p)\right|^{2}dp+\int_{K\sqrt{\mu}}^{\infty}\left|\hat{\phi}(p)\right|^{2}dp\right)^{\frac{1}{2}}\leq\epsilon.\]
We call the intervals \(\left[-\frac{K}{\sqrt{\mu}},\frac{K}{\sqrt{\mu}}\right]\) and \(\left[-K\sqrt{\mu},K\sqrt{\mu}\right]\) the \(\epsilon\)-support intervals of the functions \(\phi(x)\) and \(\hat{\phi}(p)\), respectively, since the functions are \(\mathcal{O}(\epsilon)\) negligible for arguments outside those intervals. For both teleportation protocols, the number of qubits of the DV device needs to be large enough such that \(L=2^{\frac{n_{q}}{2}}\sqrt{\pi/2}>K\).

### Coupling between continuous-variable and discrete-variable devices

To implement the teleportation protocols, we assume that the unitary
\[e^{-i\eta X\otimes\bar{X}}, \tag{34}\]
coupling the CV and DV devices can be implemented. Since \(\bar{X}\) is a linear combination of \(\sigma_{q}^{z}\) operators (see Eq. (26)), this can be achieved if the unitary \(e^{-i\eta X\otimes\sigma_{q}^{z}}\) coupling the qumode and the qubit \(q\) can be realized for all \(q\in\{0,1,...,n_{q}-1\}\). For example, this type of mode-qubit coupling can be achieved by considering the evolution under the interaction Hamiltonian \(H_{int}\propto\left(a^{\dagger}+a\right)\sigma_{q}^{x}\) sandwiched between two qubit Hadamard gates
\[e^{-i\eta X\otimes\sigma_{q}^{z}}=H_{q}e^{-i\eta X\otimes\sigma_{q}^{x}}H_{q}. \tag{35}\]
This kind of interaction is realized, for instance, in systems with transmons coupled to a microwave cavity [51], or in systems with an electromagnetic mode coupled to qubits [52; 53; 54].

### Teleportation from a CV device to a DV device

The CV-DV teleportation protocol, diagrammatically presented in Fig. 1, consists of the following steps:

1. By applying a Hadamard gate to every qubit, the DV system is prepared into the state
\[\frac{1}{\sqrt{N_{x}}}\sum_{j=0}^{N_{x}-1}\left|j\right\rangle_{D}. \tag{36}\]
The initial joint CV-DV system's state is
\[\left|\chi_{CD}\right\rangle=\frac{1}{\sqrt{N_{x}}}\int\sum_{j=0}^{N_{x}-1}\phi(x)\left|x\right\rangle_{C}\left|j\right\rangle_{D}dx. \tag{37}\]

2. The entangling operator \(e^{-i\mu X\otimes\bar{X}}\) is applied. The state becomes
\[e^{-i\mu X\otimes\bar{X}}\left|\chi_{CD}\right\rangle=\frac{1}{\sqrt{N_{x}}}\int\sum_{j=0}^{N_{x}-1}e^{-i\mu xx_{j}}\phi(x)\left|x\right\rangle_{C}\left|j\right\rangle_{D}dx. \tag{38}\]
Figure 1: CV-DV teleportation protocol, described in Section IV.2. The CV state \(\left|\phi_{C}\right\rangle\) is teleported into the DV state \(\left|\chi_{D}\right\rangle\).

3. The CV system is measured in the momentum basis by employing a homodyne measurement. Let us denote the measurement result by \(p_{meas}\). After the measurement, the DV state becomes
\[\left|\chi_{D0}\right\rangle=\frac{1}{\sqrt{\Pr(p_{meas})}}\frac{1}{\sqrt{2\pi N_{x}}}\int\sum_{j=0}^{N_{x}-1}e^{-ix\left(\mu x_{j}+p_{meas}\right)}\phi(x)\left|j\right\rangle_{D}dx=\frac{1}{\sqrt{\Pr(p_{meas})}}\frac{1}{\sqrt{N_{x}}}\sum_{j=0}^{N_{x}-1}\hat{\phi}(\mu x_{j}+p_{meas})\left|j\right\rangle_{D}, \tag{39}\]
while the probability to measure the value \(p_{meas}\) is
\[\Pr(p_{meas})=\frac{1}{N_{x}}\sum_{j=0}^{N_{x}-1}\left|\hat{\phi}(\mu x_{j}+p_{meas})\right|^{2}. \tag{40}\]

4. The gate \(e^{-i\frac{n\Delta_{p}}{\mu}\bar{P}}\) is applied to the DV device,
\[\left|\chi_{D1}\right\rangle=e^{-i\frac{n\Delta_{p}}{\mu}\bar{P}}\left|\chi_{D0}\right\rangle=\frac{1}{\sqrt{\Pr(p_{meas})}}\frac{1}{\sqrt{N_{x}}}\sum_{j=0}^{N_{x}-1}\hat{\phi}\left[\mu x_{j}+\left(n+\delta_{p}\right)\Delta_{p}\right]\left|\left(j+n\right)_{mod N_{x}}\right\rangle_{D}, \tag{41}\]
where the integer \(n\) and the shift parameter \(-0.5<\delta_{p}\leq 0.5\) are defined such that \(p_{meas}=\left(n+\delta_{p}\right)\Delta_{p}\). Here \(k_{mod N_{x}}:=k-N_{x}\left\lfloor\frac{k}{N_{x}}\right\rfloor\), with \(\left\lfloor\ \right\rfloor\) being the integer _floor_ function, denotes \(k\) _modulo_ \(N_{x}\) and takes integer values between \(0\) and \(N_{x}-1\).

5. For the final step the shifted Fourier transform (see Eq. (15) in Appendix C),
\[\bar{\mathcal{F}}_{0,\delta_{p}}=\frac{1}{\sqrt{N_{x}}}\sum_{l,j=0}^{N_{x}-1}e^{i\frac{2\pi}{N_{x}}\left(l-\frac{N_{x}-1}{2}\right)\left(j-\frac{N_{x}-1}{2}+\delta_{p}\right)}\left|l\right\rangle_{D}\left\langle j\right|_{D}, \tag{42}\]
is applied to the DV state. The teleported state is
\[\mathcal{T}^{CD}(\left|\phi_{C}\right\rangle)\equiv\left|\chi_{D}\right\rangle=\bar{\mathcal{F}}_{0,\delta_{p}}\left|\chi_{D1}\right\rangle=\sum_{j=0}^{N_{x}-1}\xi_{j}\left|j\right\rangle_{D}, \tag{43}\]
where
\[\xi_{j}=\frac{1}{N_{x}\sqrt{\Pr(p_{meas})}}\sum_{l=0}^{N_{x}-1}\hat{\phi}\left(\mu x_{l+n}+\delta_{p}\Delta_{p}\right)e^{i\frac{2\pi}{N_{x}}\left(j-\frac{N_{x}-1}{2}\right)\left[\left(l+n\right)_{mod N_{x}}-\frac{N_{x}-1}{2}+\delta_{p}\right]}. \tag{44}\]

As can be seen from Eq. (44), the teleportation fidelity depends on the value of \(p_{meas}=\left(n+\delta_{p}\right)\Delta_{p}\). We will show that for the values of \(p_{meas}\) for which
\[\hat{\phi}\left(\mu x_{j+n}+\delta_{p}\Delta_{p}\right)=\hat{\phi}\left(\mu x_{\left(j+n\right)_{mod N_{x}}}+\delta_{p}\Delta_{p}\right)+\mathcal{O}(\epsilon)\ \ \text{for all}\ \ j\in\{0,...,N_{x}-1\}, \tag{45}\]
the teleportation has a small error \(\mathcal{O}(\epsilon)\).
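Before deriving this condition, note that Eq. (44) is directly computable. The sketch below evaluates it for the \(n=0\) Fock state, whose Fourier transform is \(\hat{\phi}(p)=\pi^{-1/4}e^{-p^{2}/2}\) at \(\mu=1\), and compares the result with the target amplitudes of Eq. (30); the register size and the sampled outcomes \(n\) are our illustrative choices, with \(\delta_{p}=0.25\).

```python
import numpy as np

mu, nq = 1.0, 6
N = 2**nq
dx = np.sqrt(2 * np.pi / (N * mu))
dp = mu * dx
xg = (np.arange(N) - (N - 1) / 2) * dx
phi_hat = lambda p: np.pi**-0.25 * np.exp(-p**2 / 2)   # FT of the n = 0 state

def xi(n, delta_p):
    """Teleported amplitudes xi_j of Eq. (44) for p_meas = (n + delta_p) dp."""
    pm = (n + delta_p) * dp
    pr = np.mean(np.abs(phi_hat(mu * xg + pm))**2)     # Pr(p_meas), Eq. (40)
    l = np.arange(N)
    samples = phi_hat(mu * (l + n - (N - 1) / 2) * dx + delta_p * dp)
    j = np.arange(N)[:, None]
    phase = np.exp(2j * np.pi / N * (j - (N - 1) / 2)
                   * ((l + n) % N - (N - 1) / 2 + delta_p)[None, :])
    return phase @ samples / (N * np.sqrt(pr))

target = np.sqrt(dx) * np.pi**-0.25 * np.exp(-xg**2 / 2)   # Eq. (30)
for n in (0, 5, 25):                                       # small to large |p_meas|
    print(n, np.linalg.norm(xi(n, 0.25) - target))
```

For the first two outcomes the amplitudes agree with Eq. (48) to within the representation error, while the last outcome falls outside the interval of Eq. (46) and the teleported state is visibly wrong.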
First, we determine \(p_{meas}\) for which Eq. (45) is true. There are three cases to be discussed:

_i)_ When \(j+n=\left(j+n\right)_{mod N_{x}}\), Eq. (45) is satisfied.

_ii)_ For positive \(n\), when \(j+n\geq N_{x}\), the modulo sum \(\left(j+n\right)_{mod N_{x}}=j+n-N_{x}\). On the left-hand side of Eq. (45) we have \(\hat{\phi}\left(\mu x_{j+n}+\delta_{p}\Delta_{p}\right)=\mathcal{O}(\epsilon)\), since \(\mu x_{j+n}+\delta_{p}\Delta_{p}>K\sqrt{\mu}\) is outside the \(\epsilon\)-support interval of the \(\hat{\phi}\) function. The requirement that the right-hand side of Eq. (45) satisfies \(\hat{\phi}\left(\mu x_{\left(j+n\right)_{mod N_{x}}}+\delta_{p}\Delta_{p}\right)=\mathcal{O}(\epsilon)\) implies \(\mu x_{j+n-N_{x}}+\delta_{p}\Delta_{p}<-K\sqrt{\mu}\). This is equivalent to \(\left(j+n-N_{x}-\frac{N_{x}-1}{2}+\delta_{p}\right)\Delta_{p}<-K\sqrt{\mu}\) for all \(j\in\{0,...,N_{x}-1\}\) and, by employing Eq. (5), implies \(\left(n+\delta_{p}\right)\Delta_{p}<-K\sqrt{\mu}+L\sqrt{\mu}+\frac{\Delta_{p}}{2}\).

_iii)_ For negative \(n\), when \(j+n<0\), the modulo sum \(\left(j+n\right)_{mod N_{x}}=j+n+N_{x}\). The left-hand side of Eq. (45) satisfies \(\hat{\phi}\left(\mu x_{j+n}+\delta_{p}\Delta_{p}\right)=\mathcal{O}(\epsilon)\), since the argument \(\mu x_{j+n}+\delta_{p}\Delta_{p}<-K\sqrt{\mu}\) is outside the \(\epsilon\)-support interval of the \(\hat{\phi}\) function. The requirement that the right-hand side of Eq. (45) satisfies \(\hat{\phi}\left(\mu x_{\left(j+n\right)_{mod N_{x}}}+\delta_{p}\Delta_{p}\right)=\mathcal{O}(\epsilon)\) implies \(\mu x_{j+n+N_{x}}+\delta_{p}\Delta_{p}>K\sqrt{\mu}\). This is equivalent to \(\left(j+n+N_{x}-\frac{N_{x}-1}{2}+\delta_{p}\right)\Delta_{p}>K\sqrt{\mu}\) for all \(j\in\{0,...,N_{x}-1\}\), which implies \((n+\delta_{p})\Delta_{p}>K\sqrt{\mu}-L\sqrt{\mu}-\frac{\Delta_{p}}{2}\).

Considering _i)_, _ii)_ and _iii)_, we can conclude that Eq. (45) is satisfied when
\[\frac{|p_{meas}|}{\sqrt{\mu}}<L-K+\frac{\Delta_{p}}{2\sqrt{\mu}}. \tag{46}\]

Second, we calculate the probability to measure \(p_{meas}\) when \(p_{meas}\) satisfies Eq. (46). In Eq. (40) the argument \(\mu x_{j}+p_{meas}\) of the wavefunction \(\hat{\phi}(p)\) takes values in the interval \([-L\sqrt{\mu}-\frac{1}{2}\Delta_{p}+p_{meas},L\sqrt{\mu}+\frac{1}{2}\Delta_{p}+p_{meas}]\) when the summation index \(j\) runs from \(0\) to \(N_{x}-1\). The summation terms for which \(\mu x_{j}+p_{meas}\) takes values outside the \(\epsilon\)-support interval \(\left[-K\sqrt{\mu},K\sqrt{\mu}\right]\) are negligibly small (of order \(\mathcal{O}(\epsilon)\)) and, therefore, their contribution to the sum is negligible. In other words, as long as \(\left[-K\sqrt{\mu},K\sqrt{\mu}\right]\subset[-L\sqrt{\mu}-\frac{1}{2}\Delta_{p}+p_{meas},L\sqrt{\mu}+\frac{1}{2}\Delta_{p}+p_{meas}]\) (which is equivalent to Eq. (46)), all the non-negligible terms are included in the summation. In this case the sum is independent of the value of \(p_{meas}\), _i.e._,
\[\Pr(p_{meas})=\frac{1}{N_{x}}\sum_{j=0}^{N_{x}-1}\left|\hat{\phi}(\mu x_{j})\right|^{2}+\mathcal{O}(\epsilon)=\frac{1}{N_{x}\Delta_{p}}+\mathcal{O}(\epsilon)\ \ \text{for}\ \ \frac{|p_{meas}|}{\sqrt{\mu}}<L-K+\frac{\Delta_{p}}{2\sqrt{\mu}}. \tag{47}\]
The last equality in Eq. (47) is a consequence of Eq. (8).

Finally, for \(p_{meas}\) satisfying Eq. (46), Eqs. (12), (44), (45) and (47) imply
\[\xi_{j}=\sqrt{\Delta_{x}}\phi(x_{j})+\mathcal{O}(\epsilon). \tag{48}\]
Equations (43) and (48) imply that
\[\mathcal{T}^{CD}(\ket{\phi_{C}})=\ket{\phi_{D}}+\mathcal{O}(\epsilon)\ \ \text{for}\ \ \frac{|p_{meas}|}{\sqrt{\mu}}<L-K+\frac{\Delta_{p}}{2\sqrt{\mu}}. \tag{49}\]
Hence, for \(p_{meas}\) in the interval given by Eq. (46), the teleportation has a small error \(\mathcal{O}(\epsilon)\). The teleportation probability of success, defined as the probability of having a measurement outcome such that the fidelity is larger than \(1-\epsilon\), is given by
\[P_{tele}^{CD}(\epsilon)=\int dp_{meas}\Pr(p_{meas})\bigg{|}_{F_{D}>1-\epsilon}, \tag{50}\]
where the fidelity \(F_{D}\) is defined by Eq. (31). According to Eqs.
(47) and (49), we have
\[P_{tele}^{CD}(\epsilon)\approx P_{tele}^{CD}\left[\mathcal{O}(\epsilon)\right]=\int_{(-L+K)\sqrt{\mu}-\frac{\Delta_{p}}{2}}^{(L-K)\sqrt{\mu}+\frac{\Delta_{p}}{2}}\Pr(p_{meas})dp_{meas}=\frac{L-K(\epsilon)}{L}+\frac{1}{N_{x}}=\frac{\sqrt{N_{x}}-K(\epsilon)\sqrt{2/\pi}}{\sqrt{N_{x}}}+\frac{1}{N_{x}}. \tag{51}\]
Equation (51) shows that the probability of successful teleportation increases with the number of discretization points,
\[P_{tele}^{CD}(\epsilon)\xrightarrow{N_{x}\rightarrow\infty}1. \tag{52}\]
Considering that the number of discretization points increases exponentially with the number of qubits (\(N_{x}=2^{n_{q}}\)), Eq. (51) implies that the probability of failure decreases exponentially with increasing number of qubits,
\[1-P_{tele}^{CD}(\epsilon)=K(\epsilon)\sqrt{\frac{2}{\pi}}2^{-\frac{n_{q}}{2}}-2^{-n_{q}}. \tag{53}\]
Since the support window parameter \(K=K(\epsilon)\) increases as \(\epsilon\) decreases (see Eq. (33)), the probability of a successful teleportation, \(P_{tele}^{CD}(\epsilon)\), decreases with increasing accuracy. Assuming \(\epsilon\propto e^{-cK}\) (which is a good approximation for Fock states, as the numerical calculations presented in Fig. 3(b) and in Ref. [39] show), Eq. (53) implies that the number of necessary qubits when \(P_{tele}^{CD}\) is fixed scales with the error as \(n_{q}\propto\log_{2}\left[\ln\left(\epsilon^{-1}\right)\right]\).

Figure 3: a) The support interval parameter \(K\) defined by Eq. (33) versus the Fock state order \(n\) for different values of the error \(\epsilon\). Numerical fitting yields \(K\propto\sqrt{2n+c_{1}}+c_{2}\), where \(c_{1}\) and \(c_{2}\) are constants of the order of unity and depend on \(\epsilon\). b) \(K\) versus the error \(\epsilon\) (logarithmic scale) for Fock states with \(n=0,31,62,100\) and \(151\). Numerical fitting finds that the error decreases exponentially with increasing \(K\), _i.e._, \(\epsilon\propto e^{-cK}\) where \(c\approx 10\). c) Probability \(P_{tele}^{CD}\) (see Eq. (50)) for high-fidelity teleportation (\(\epsilon=10^{-4}\)) versus \(n\) for DV devices with different numbers of qubits. \(P_{tele}^{CD}\) decreases with increasing \(n\) since \(K\) increases with increasing \(n\). \(P_{tele}^{CD}\) increases with increasing \(n_{q}\) since \(L\) increases with increasing \(n_{q}\). d) Probability \(P_{tele}^{CD}\) versus \(\epsilon\) (logarithmic scale) for Fock states with \(n=0,31,62,100\) and \(151\) for a DV device with \(n_{q}=10\) qubits.

It is useful to investigate the teleportation of the Fock states \(\{\phi_{n}(x)\}_{n}\), since in our method the qumode is truncated in the Fock state basis. The Fock states \(\{\phi_{n}(x)\}_{n}\) and their Fourier transforms \(\{\hat{\phi}_{n}(p)\}_{n}\) are Hermite-Gaussian functions. In Fig. 2 we illustrate the teleportation of the Fock states of order \(n=0\), \(n=31\) and \(n=62\) to a DV device with \(n_{q}=7\) qubits. As can be seen from Fig. 2(a), for a fixed accuracy, the support interval parameter \(K\) increases with increasing \(n\). Consequently, the range of \(p_{meas}\) for high-fidelity teleportation decreases with increasing \(n\), see Fig. 2(b) and (c). In Fig. 3 we investigate the probability of achieving high-fidelity teleportation. The dependence of \(K\) on the Fock state order \(n\) is illustrated in Fig. 3(a), while its dependence on the error \(\epsilon\) is illustrated in Fig. 3(b). The behavior of \(P_{tele}^{CD}(\epsilon,n)\) as a function of \(n\) for fixed \(\epsilon\) and as a function of \(\epsilon\) for fixed \(n\) is shown in Fig. 3(c) and Fig. 3(d), respectively.
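A quick back-of-the-envelope reading of Eq. (53): solving for the smallest \(n_{q}\) at a given failure budget reproduces the register sizes quoted below. Here \(K\approx 14\) is our rough estimate for an \(n=100\) Fock state at \(\epsilon=\mathcal{O}(10^{-7})\), read off the fit \(K\propto\sqrt{2n+c_{1}}+c_{2}\) quoted in the caption of Fig. 3; the helper function is ours.

```python
import numpy as np

def qubits_needed(K, p_success):
    """Smallest n_q whose failure probability, Eq. (53),
    1 - P = K sqrt(2/pi) 2^(-n_q/2) - 2^(-n_q), is within budget."""
    for nq in range(1, 64):
        if K * np.sqrt(2 / np.pi) * 2**(-nq / 2) - 2**(-nq) <= 1 - p_success:
            return nq

print(qubits_needed(14.0, 0.99))    # ~21
print(qubits_needed(14.0, 0.999))   # ~27, close to the ~28 quoted in the text
```

The residual discrepancy at \(P_{tele}^{CD}=0.999\) simply reflects our crude choice of \(K\).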
Even when the number of qubits of the DV device is small (_i.e._, \(n_{q}=4,5,6\)), the teleportation can be implemented with significant probability (_e.g._, \(P_{tele}^{CD}(\epsilon=10^{-4})>0.1\)) for CV states with boson cutoff \(N_{b}<20\). Presumably, this will make the experimental implementation of the teleportation protocol feasible with present or near-future technology. On the other hand, for a near-deterministic protocol, a high success probability is desired. In this case, the number of required qubits is of the order of 20. For example, for \(P_{tele}^{CD}=0.99\) (\(P_{tele}^{CD}=0.999\)) and states with a cutoff \(N_{b}=100\), the necessary number of qubits is \(n_{q}\gtrsim 21\) (\(n_{q}\gtrsim 28\)) when the required precision is \(\mathcal{O}(10^{-7})\).

The number of qubits needed for near-deterministic teleportation is larger than the number of qubits required for accurately representing the qumode on qubits. For instance, in the previous paragraph we found that the teleportation of a Fock state with \(n=100\) requires \(\approx 21\) qubits for a probability of success \(P_{tele}^{CD}\approx 0.99\). However, this state can be represented with an accuracy of \(\mathcal{O}(10^{-25})\) [39] on just 8 qubits, meaning that an ancillary register of \(\approx 13\) qubits was used to increase the teleportation success probability. In Section V.1, we will show how to discard the ancillary register after the teleportation is complete.

### Teleportation from a DV device to a CV device

The DV-CV teleportation protocol, diagrammatically presented in Fig. 4, consists of the following steps:

1. The CV state is prepared into
\[\int g(x)\left|x\right\rangle_{C}dx, \tag{54}\]
with \(\int\left|g(x)\right|^{2}dx=1\). The joint DV-CV initial wavefunction reads
\[\left|\chi_{DC}\right\rangle=\sqrt{\Delta_{x}}\int\sum_{j=0}^{N_{x}-1}g(x)\phi(x_{j})\left|x\right\rangle_{C}\left|j\right\rangle_{D}dx. \tag{55}\]
The teleportation success depends on the initial state of the CV device defined by the wavefunction \(g(x)\). At the end of this section, we will discuss the choice of \(g(x)\) and provide two examples.

Figure 4: DV-CV teleportation protocol, described in Section IV.3. The DV state \(\left|\phi_{D}\right\rangle\) is teleported into the CV state \(\left|\chi_{C}\right\rangle\).

2. The entangling operator \(e^{-i\mu X\otimes\bar{X}}\) is applied,
\[e^{-i\mu X\otimes\bar{X}}\left|\chi_{DC}\right\rangle=\sqrt{\Delta_{x}}\int g(x)\sum_{j=0}^{N_{x}-1}\phi(x_{j})e^{-i\mu xx_{j}}\left|x\right\rangle_{C}\left|j\right\rangle_{D}dx. \tag{56}\]

3. The DV system is measured in the discrete momentum basis. Let us denote the measured value \(p_{m}\). According to Eq. (11), \(p_{m}=(m-\frac{N_{x}-1}{2})\Delta_{p}\), with \(m\in\{0,...,N_{x}-1\}\).
The CV state after the measurement is
\[\left|\chi_{C0}\right\rangle=\sqrt{\frac{\Delta_{p}}{Pr(p_{m})}}\int g(x)\hat{\phi}_{aper}(\mu x+p_{m})\left|x\right\rangle_{C}dx, \tag{57}\]
where
\[\sqrt{\Delta_{p}}\hat{\phi}_{aper}(\mu x+p_{m})=\frac{1}{\sqrt{N_{x}}}\sum_{j=0}^{N_{x}-1}\sqrt{\Delta_{x}}\phi(x_{j})e^{-ix_{j}(\mu x+p_{m})}, \tag{58}\]
and \(Pr(p_{m})\) is the probability to measure \(p_{m}\),
\[Pr(p_{m})=\Delta_{p}\int\left|g(x)\hat{\phi}_{aper}(\mu x+p_{m})\right|^{2}dx. \tag{59}\]
The function \(\hat{\phi}_{aper}(p)\) is anti-periodic, since \(2L\sqrt{\mu}x_{j}=2\pi\left(j-\frac{N_{x}}{2}+\frac{1}{2}\right)\) and \(N_{x}=2^{n_{q}}\) is an even number. Employing Eq. (12), we have
\[\hat{\phi}_{aper}(p)=-\hat{\phi}_{aper}(p+2L\sqrt{\mu}), \tag{60}\]
\[\hat{\phi}_{aper}(p)=\hat{\phi}(p)+\mathcal{O}(\epsilon)\ \ \text{when}\ \ p\in\left[-L\sqrt{\mu},L\sqrt{\mu}\right]. \tag{61}\]

4. The operator \(e^{-i\frac{p_{m}}{\mu}P}\) is applied to the CV system,
\[\left|\chi_{C1}\right\rangle=e^{-i\frac{p_{m}}{\mu}P}\left|\chi_{C0}\right\rangle=\sqrt{\frac{\Delta_{p}}{Pr(p_{m})}}\int g(x)\hat{\phi}_{aper}(\mu x+p_{m})\left|x+\frac{p_{m}}{\mu}\right\rangle_{C}dx. \tag{62}\]

5. The continuous Fourier transform \(\mathcal{F}_{\mu}\), defined as
\[\mathcal{F}_{\mu}=\sqrt{\frac{\mu}{2\pi}}\int dx\int dye^{i\mu xy}\left|x\right\rangle\left\langle y\right|, \tag{63}\]
which can be implemented using phase shift and squeezing operations, is applied to the CV system. The CV state becomes
\[\mathcal{T}^{DC}(\left|\phi_{D}\right\rangle)\equiv\left|\chi_{C}\right\rangle=\mathcal{F}_{\mu}\left|\chi_{C1}\right\rangle=\int\xi(x)\left|x\right\rangle_{C}dx, \tag{64}\]
where
\[\xi(x)=\sqrt{\frac{\Delta_{p}}{Pr(p_{m})}}\frac{1}{\sqrt{2\pi\mu}}\int g\Big{(}\frac{k-p_{m}}{\mu}\Big{)}\hat{\phi}_{aper}(k)e^{ikx}dk. \tag{65}\]
By employing the anti-periodicity property of \(\hat{\phi}_{aper}(p)\), it can be shown that (see Appendix F)
\[\xi(x)e^{-ixp_{m}}=\frac{1}{\sqrt{Pr(p_{m})}}\sqrt{\frac{\Delta_{p}}{N_{x}}}\sum_{j=-\infty}^{\infty}\phi(x_{j})e^{-ix_{j}p_{m}}\hat{g}\left[\mu\left(x_{j}-x\right)\right]. \tag{66}\]
Here
\[\hat{g}(t)=\frac{1}{\sqrt{2\pi}}\int g(k)e^{-ikt}dk \tag{67}\]
is the Fourier transform of \(g(x)\). The probability to measure \(p_{m}\) can be written as (see Appendix F)
\[Pr(p_{m})=\frac{\Delta_{p}}{N_{x}\mu}\sum_{i,j=-\infty}^{\infty}\phi^{*}(x_{i})\phi(x_{j})e^{-i(x_{i}-x_{j})p_{m}}\int\hat{g}^{*}(z+\mu x_{i})\hat{g}(z+\mu x_{j})dz. \tag{68}\]
The teleportation probability of success, defined as the probability of having a measurement outcome such that the fidelity is larger than \(1-\epsilon\), is given by
\[P_{tele}^{DC}(\epsilon)=\sum_{m=0}^{N_{x}-1}Pr(p_{m})\bigg{|}_{F_{C}>1-\epsilon}, \tag{69}\]
where \(F_{C}\) is defined by Eq. (32). By inspecting Eq. (66), it can be seen that \(\xi(x)e^{-ixp_{m}}\) is, up to a normalization factor, the convolution of the set \(\{\phi(x_{j})e^{-ix_{j}p_{m}}\}_{j}\) with the function \(\hat{g}(\mu x)\). The next goal is to find appropriate choices of \(\hat{g}(\mu x)\) such that \(\xi(x)\approx\phi(x)\). We present two examples below.

#### iv.2.1 Rectangular initial CV state.

If the Fourier transform of \(\phi(x)e^{-ixp_{m}}\) had support on the finite interval \(p\in\left[-L\sqrt{\mu},L\sqrt{\mu}\right]\) and \(\hat{g}(\mu x)\) were proportional to the _sinc_ function \(u(x)\) defined by Eq. (6), the Nyquist-Shannon theorem and Eq. (66) would imply \(\xi(x)=\phi(x)\).
Therefore, our first choice of \(g(x)\) is the rectangular function
\[g(x)=\left\{\begin{array}{ll}\frac{\mu^{\frac{1}{4}}}{\sqrt{2L}}&\mbox{ for }\quad x\in\left[-\frac{L}{\sqrt{\mu}},\frac{L}{\sqrt{\mu}}\right]\\ 0&\mbox{ for }\quad|x|>\frac{L}{\sqrt{\mu}}\end{array}\right., \tag{70}\]
because for this choice we have (see Eq. (107) in Appendix A)
\[\hat{g}(\mu x)=\frac{1}{\sqrt{\Delta_{p}}}u(x). \tag{71}\]
The orthogonality property of the _sinc_ functions described by Eq. (108) (Appendix A), together with Eq. (68), yields a probability to measure \(p_{m}\) that is independent of \(p_{m}\),
\[Pr(p_{m})=\frac{\Delta_{p}}{N_{x}}\sum_{i=-\infty}^{\infty}\left|\phi(x_{i})\right|^{2}\frac{\Delta_{x}}{\Delta_{p}}=\frac{1}{N_{x}}+\mathcal{O}(\epsilon). \tag{72}\]
The Fourier transform of \(\phi(x)e^{-ixp_{m}}\) is \(\hat{\phi}(p+p_{m})\). Since the \(\epsilon\)-support interval of \(\hat{\phi}(p)\) is \(\left[-K\sqrt{\mu},K\sqrt{\mu}\right]\), \(\hat{\phi}(p+p_{m})\) has negligible (_i.e._ \(\mathcal{O}(\epsilon)\)) support outside the interval \(\left[-L\sqrt{\mu},L\sqrt{\mu}\right]\) as long as
\[\frac{|p_{m}|}{\sqrt{\mu}}\leq L-K. \tag{73}\]
In this case the DV state can be teleported with \(\mathcal{O}(\epsilon)\) precision to the CV register, _i.e._
\[\xi(x)=\phi(x)+\mathcal{O}(\epsilon),\;\;\mbox{when}\;\;\frac{|p_{m}|}{\sqrt{\mu}}\leq L-K. \tag{74}\]
Equations (69) and (74) imply
\[P_{tele}^{DC}(\epsilon)\approx P_{tele}^{DC}\left[\mathcal{O}(\epsilon)\right]=\frac{L-K(\epsilon)}{L}=\frac{\sqrt{N_{x}}-K(\epsilon)\sqrt{2/\pi}}{\sqrt{N_{x}}}. \tag{75}\]

Figure 5: Teleportation of \(n=0\) (full symbols), \(n=10\) (shaded symbols) and \(n=31\) (open symbols) Fock states from an \(n_{q}=8\) qubit device to a CV device initially prepared with rectangular (rectangle symbols) and Gaussian (circle symbols) wavefunctions. The Gaussian wavefunction has \(\sigma=0.5\frac{L}{\sqrt{\mu}}\) (see Eq. (76)). The dotted line is for visual guidance. a) Probability to measure \(p_{m}\). While for a rectangular CV initial state \(Pr(p_{m})\) is constant (see Eq. (72)), it has a Gaussian shape for a Gaussian CV initial state, with a width that increases with increasing \(n\). b) Teleportation fidelity (Eq. (32)) versus \(p_{m}\). For a rectangular CV initial state the fidelity \(F_{C}\geq 1-\mathcal{O}(\epsilon)\) when \(\frac{|p_{m}|}{\sqrt{\mu}}<L-K\). Compared to the rectangular case, for a Gaussian CV initial state the fidelity is smaller and decreases faster with increasing \(p_{m}\) and \(n\).

For illustration, in Fig. 5 we show (rectangle symbols) the probability \(Pr(p_{m})\) and the fidelity \(F_{C}\) versus \(p_{m}\) for the teleportation of the Fock states with \(n=0\), \(n=10\) and \(n=31\) from an \(n_{q}=8\) qubit device to a CV device. Note that, apart from the small term \(\frac{\Delta_{p}}{2}\) and the fact that \(p_{m}\) is discrete, Eq. (73) is similar to Eq. (46), which gives the condition for high-fidelity CV-DV teleportation. Up to the small \(\frac{1}{N_{x}}\) term, we also have \(P_{tele}^{DC}(\epsilon)\approx P_{tele}^{CD}(\epsilon)\), as can be seen by comparing Eqs. (51) and (75). Practically, the dependence of DV-CV teleportation on the number of qubits and accuracy is the same as the corresponding dependence of CV-DV teleportation discussed in Section IV.2 and illustrated in Fig. 3 for the Fock states.
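The rectangular-window claim is easy to probe numerically. The sketch below evaluates the convolution of Eq. (66) with \(\hat{g}(\mu x)=u(x)/\sqrt{\Delta_{p}}\) (Eq. (71)) for the \(n=0\) Fock state (\(\mu=1\)) on an \(n_{q}=6\) register, a smaller register than the \(n_{q}=8\) of Fig. 5 and our own illustrative choice, and prints the fidelity of Eq. (32) for a small, a moderate, and an extreme \(|p_{m}|\).

```python
import numpy as np

mu, nq = 1.0, 6
N = 2**nq
L = np.sqrt(np.pi * N / 2)
dx = np.pi / (L * np.sqrt(mu))                 # Eq. (4)
dp = mu * dx                                   # Eq. (9)
xj = (np.arange(N) - (N - 1) / 2) * dx
phi_j = np.pi**-0.25 * np.exp(-xj**2 / 2)      # n = 0 grid samples

x = np.linspace(-L, L, 4001)
w = x[1] - x[0]                                # integration weight
u = np.sinc((x[None, :] - xj[:, None]) / dx)   # u(x - x_j); u is symmetric

def xi(m):
    """xi(x) of Eq. (66) with the rectangular g(x) of Eq. (70), normalized."""
    pm = (m - (N - 1) / 2) * dp
    out = ((phi_j * np.exp(-1j * xj * pm)) @ u) * np.exp(1j * x * pm)
    return out / np.sqrt(np.sum(np.abs(out)**2) * w)

phi_x = np.pi**-0.25 * np.exp(-x**2 / 2)
for m in (N // 2, N // 2 + 10, 0):             # p_m ~ 0, moderate, ~ -L sqrt(mu)
    print(m, round(abs(np.sum(np.conj(xi(m)) * phi_x) * w), 6))   # F_C, Eq. (32)
```

The first two fidelities sit at \(1-\mathcal{O}(\epsilon)\), while the last collapses, mirroring the condition \(|p_{m}|/\sqrt{\mu}\leq L-K\) of Eq. (73).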
#### iv.2.2 Gaussian initial CV state.

A rectangular initial state of the CV device ensures high teleportation fidelity and a probability of success that approaches one exponentially fast as the number of qubits increases, similar to the CV-DV teleportation protocol. However, preparing rectangular CV states might be challenging in practice, since a rectangular state is non-Gaussian. Here we show that the DV-CV teleportation protocol works and can be brought to the near-deterministic regime for alternative initial CV wavefunctions, which can be easily prepared in practice, but at the cost of increasing the number of required qubits in the DV device. Namely, we address the DV-CV teleportation when the initial CV state is a Gaussian function,
\[g(x)=\pi^{-\frac{1}{4}}\frac{1}{\sqrt{\sigma}}e^{-\frac{x^{2}}{2\sigma^{2}}} \tag{76}\]
with variance \(\sigma^{2}\). For this choice of \(g(x)\), Eq. (66) yields
\[\xi(x)e^{-ixp_{m}}=\pi^{-\frac{1}{4}}\sqrt{\Delta_{p}\sigma}\sum_{j=-\infty}^{\infty}\phi(x_{j})e^{-ix_{j}p_{m}}e^{-\frac{\mu^{2}\sigma^{2}}{2}(x-x_{j})^{2}}. \tag{77}\]
By inspecting Eq. (77), we expect that
\[\sigma\lessapprox\frac{1}{\mu\Delta_{x}}=\frac{L}{\sqrt{\mu}} \tag{78}\]
is required for a smooth convolution. On the other hand, a value of \(\sigma\) that is too small will average out the variation of \(\phi(x)e^{-ixp_{m}}\) along the grid points. It is expected that as the variation of \(\phi(x)\) and the value of \(p_{m}\) increase, the teleportation fidelity will decrease. This has been confirmed by numerical calculations. We also have found numerically that \(\sigma\in\left[0.5\frac{L}{\sqrt{\mu}},0.6\frac{L}{\sqrt{\mu}}\right]\) yields the best teleportation fidelity (not shown).

In Fig. 5(a) and (b), we show (circles) the probability to measure \(p_{m}\) and, respectively, the fidelity for the teleportation of Fock states with \(n=0\), \(n=10\), and \(n=31\) from an 8-qubit device to a CV device initially prepared in a Gaussian state with \(\sigma=0.5\frac{L}{\sqrt{\mu}}\). The probability to measure \(p_{m}\) has a Gaussian shape with a width that increases as \(n\) increases. Compared to the rectangular initial CV state, the fidelity is smaller and decreases faster with increasing \(|p_{m}|\) and \(n\). The teleportation probability of success is shown in Fig. 6 for Fock states. \(P_{tele}^{DC}(\epsilon)\) for fixed \(\epsilon\) decreases with increasing \(n\) and increases with the number of qubits in the DV register. Similar to the rectangular case, the accuracy and success probability can be increased by increasing \(n_{q}\). However, for the same level of accuracy, the number of qubits required is greater for the Gaussian case than for the rectangular case. We have not thoroughly investigated the dependence of the teleportation fidelity and success probability on the number of qubits for Gaussian initial CV states, because a Gaussian initial CV state is not the only practical choice for DV-CV teleportation, and probably not the best one either. In a future study, we plan to investigate DV-CV teleportation for various initial states, such as variationally available states or states consisting of a sum of displaced Gaussians.

Figure 6: DV-CV teleportation of Fock states when the initial CV state is a Gaussian with \(\sigma=0.5\frac{L}{\sqrt{\mu}}\). a) Probability \(P_{tele}^{DC}(\epsilon)\) (see Eq. (69)) versus \(n\) for \(\epsilon=0.01\) (full symbols) and \(\epsilon=0.001\) (shaded symbols) when \(n_{q}=7,8\) and 9.
b) \(P_{tele}^{DC}(\epsilon)\) versus the number of qubits \(n_{q}\), for the teleportation of Fock states with \(n=0,10\), and \(31\) when \(\epsilon=0.01\) and \(\epsilon=0.001\). \(P_{tele}^{DC}\) decreases as \(n\) increases and increases as \(n_{q}\) increases.

## V Ancillary qubits for near-deterministic teleportation

As discussed in Section IV.2, a CV-DV teleportation protocol with high success probability and high fidelity requires a number of qubits significantly larger than the one necessary for an accurate discrete representation of the qumode. After teleportation, many coefficients of the discrete qumode state in the basis \(\{\ket{j}_{D}\}_{j}\) with \(j\in\{0,...,N_{x}-1\}\) are negligible. In Section V.1, we show how to down-size the DV register to the minimum number of qubits required for the discrete representation of the qumode with the desired accuracy. Similarly, for a high success probability and high fidelity DV-CV teleportation, the DV register should have a number of qubits significantly larger than the one necessary for the representation of the qumode to be teleported. In Section V.2, we show how to add ancillary qubits to the DV register in order to increase the success probability of the teleportation.

### Qubit discard after CV-DV teleportation

In order to achieve high-fidelity, near-deterministic CV-DV teleportation, a DV register with a large number of qubits needs to be used. However, not all qubits are necessary to represent the qumode after teleportation. Here, we present a method for discarding unnecessary qubits. We begin with the procedure for discarding one qubit. As described in Section IV.2, after a successful high-fidelity teleportation (we neglect the teleportation error below) the DV state is
\[\ket{\phi_{D}}=\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\ket{j}_{D}, \tag{79}\]
with \(x_{j}=\left(j-\frac{N_{x}-1}{2}\right)\Delta_{x}\) and \(\Delta_{x}=\sqrt{\frac{2\pi}{N_{x}\mu}}\). The goal of this procedure is to obtain the state
\[\ket{\phi^{\prime}_{D}}=\sqrt{\Delta_{x}^{\prime}}\sum_{j=0}^{N_{x}^{\prime}-1}\phi(x^{\prime}_{j})\ket{j}_{D} \tag{80}\]
on a DV device with \(n^{\prime}_{q}=n_{q}-1\) qubits, where \(N^{\prime}_{x}=N_{x}/2\) and \(x^{\prime}_{j}=\left(j-\frac{N^{\prime}_{x}-1}{2}\right)\Delta^{\prime}_{x}\) with \(\Delta^{\prime}_{x}=\sqrt{2}\Delta_{x}\).

Figure 7: One qubit discard. The coefficients of the basis vectors \(\{\ket{j}_{D}\}\) shown in the shaded region are negligible (\(\approx\mathcal{O}(\epsilon)\)). First, a \(CX\) gate is applied to the first two qubits (qubit 0 and the control qubit 1). Second, an \(X\) gate is applied to qubit 1. As a result, all basis vectors with nonzero coefficients will have qubit 0 in the state \(\ket{1}\). Qubit 0 is unentangled and can be discarded. The remaining state is described by Eq. (82).

A number of qubits larger than the one required for the qumode discrete representation implies that the number of discretization points \(N_{x}\) is large enough such that \(\frac{L}{\sqrt{2}}\geq K\), with \(K\) defined by Eq. (33). The coefficients \(\phi(x_{j})=\mathcal{O}(\epsilon)\) for \(j\in\{0,...,\frac{1}{4}N_{x}-1\}\) and \(j\in\{\frac{3}{4}N_{x},...,N_{x}-1\}\), because, for these values of \(j\), \(x_{j}\) is outside the \(\epsilon\)-support window of the function \(\phi(x)\), \(\left[-\frac{K}{\sqrt{\mu}},\frac{K}{\sqrt{\mu}}\right]\). In our encoding, as defined by Eq. (24), the qubits defining the basis states are counted from left to right, _i.e._ \(\left|j\right\rangle_{D}=\left|j_{0},j_{1},...,j_{n_{q}-1}\right\rangle\).
The first part of the procedure, as illustrated in Fig. 7, consists in applying a \(CX\) gate to qubits 1 and 0 (with 1 being the control qubit), followed by an \(X\) gate on qubit 1,
\[\left|\phi_{D}\right\rangle\equiv\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\left|j_{0},j_{1},...,j_{n_{q}-1}\right\rangle\xrightarrow[]{CX_{10}}\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\left|j_{0}\oplus j_{1},j_{1},...,j_{n_{q}-1}\right\rangle \tag{81}\]
\[\xrightarrow[]{X_{1}}\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\left|j_{0}\oplus j_{1},j_{1}\oplus 1,...,j_{n_{q}-1}\right\rangle=\left|1\right\rangle\otimes\sqrt{\Delta_{x}}\sum_{j=\frac{N_{x}}{4}}^{\frac{3N_{x}}{4}-1}\phi(x_{j})\left|j_{1}\oplus 1,...,j_{n_{q}-1}\right\rangle+\mathcal{O}(\epsilon)\]
\[=\left|1\right\rangle\otimes\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}^{\prime}-1}\phi(\tilde{x}_{j})\left|j_{0},j_{1},...,j_{n_{q}-2}\right\rangle+\mathcal{O}(\epsilon),\]
where \(\tilde{x}_{j}=\left(j-\frac{N_{x}^{\prime}-1}{2}\right)\Delta_{x}=x_{j}^{\prime}/\sqrt{2}\), and \(\oplus\) denotes _modulo_ \(2\) summation. After these two transformations, qubit 0 becomes unentangled and is discarded. After discarding the qubit, the DV state on \(n_{q}-1\) qubits is
\[\ket{\phi_{1}}=\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}^{\prime}-1}\phi(\tilde{x}_{j})\ket{j}_{D}+\mathcal{O}(\epsilon). \tag{82}\]
However, this is not exactly the state we target, since the sampling points \(\{\tilde{x}_{j}\}\) are on a grid with the discretization interval \(\Delta_{x}\), as illustrated with black-circle symbols in Fig. 8 for the \(n=0\) Fock state. We want the sampling points for the target state to be on a grid with the discretization interval \(\Delta_{x}^{\prime}=\sqrt{2}\Delta_{x}\), illustrated with red-square symbols in Fig. 8. The second part of the procedure consists in applying a squeezing gate with the squeeze factor \(r=\ln 2/2\). According to Eq. (28) the state becomes
\[\bar{S}(\frac{1}{2}\ln 2)\ket{\phi_{1}}=\sqrt{\Delta_{x}^{\prime}}\sum_{j=0}^{N_{x}^{\prime}-1}\phi(x_{j}^{\prime})\ket{j}_{D}+\mathcal{O}(\epsilon)=\ket{\phi_{D}^{\prime}}+\mathcal{O}(\epsilon), \tag{83}\]
which, up to \(\mathcal{O}(\epsilon)\) error, is just the qumode representation on \(n_{q}-1\) qubits, as described by Eq. (80).

Note that, even before applying the squeezing operation, the qumode representation on the reduced qubit register described by Eq. (82) is valid. However, it corresponds to a discretization for mass-\(\mu^{\prime}\) bosons, where \(\mu^{\prime}=2\mu\). In this representation, the discrete position and momentum operators should be defined as in Eqs. (14) and (15), but with \(\mu^{\prime}\) replacing \(\mu\). It is important to note that the \(\mu\)-boson and \(\mu^{\prime}\)-boson number distributions of the qumode are different. The representation with the lowest number of bosons is more accurate. A more detailed discussion of the relation between the boson mass and the representation accuracy is presented in [39]. The one-qubit discarding procedure described above can be repeated to discard more qubits. The number of qubits that can be discarded is equal to the maximum integer \(r\) that satisfies \(L/\sqrt{2^{r}}\geq K\).
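The two Clifford steps of the discard procedure are simple index manipulations on the amplitude vector, which the sketch below performs for an \(n=0\) state on \(n_{q}=6\) qubits (\(\mu=1\), our illustrative choice). It verifies that qubit 0 factorizes into \(\ket{1}\) and that the surviving amplitudes match Eq. (82); the final squeeze \(\bar{S}(\frac{1}{2}\ln 2)\) of Eq. (83), whose circuit is given in Appendix E of the paper, is not reproduced here.

```python
import numpy as np

nq = 6
N = 2**nq
dx = np.sqrt(2 * np.pi / N)                              # mu = 1
xj = (np.arange(N) - (N - 1) / 2) * dx
psi = np.sqrt(dx) * np.pi**-0.25 * np.exp(-xj**2 / 2)    # Eq. (79), n = 0 state

a = psi.reshape(2, 2, -1)                                # axes: j0, j1, remaining qubits
cx = np.empty_like(a)
for j0 in (0, 1):
    for j1 in (0, 1):
        cx[j0 ^ j1, j1] = a[j0, j1]                      # CX_{10}: j0 -> j0 xor j1
out = cx[:, ::-1, :]                                     # X_1: flip j1

print(np.linalg.norm(out[0]))                            # ~0: qubit 0 is |1>
reduced = out[1].reshape(-1)                             # state of Eq. (82)
xt = (np.arange(N // 2) - (N // 2 - 1) / 2) * dx         # tilde-x grid, spacing dx
print(np.linalg.norm(reduced - np.sqrt(dx) * np.pi**-0.25 * np.exp(-xt**2 / 2)))
```

Both printed norms are at the \(\mathcal{O}(\epsilon)\) level set by the weight of \(\phi(x)\) outside \(\left[-\frac{L}{\sqrt{2\mu}},\frac{L}{\sqrt{2\mu}}\right]\).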
The procedure to add a qubit to the DV register consists of the same steps as the qubit discarding procedure presented in Section V.1, but in reverse order. The \(n_{q}\)-qubit initial DV state is \[\ket{\phi_{D}}=\sqrt{\Delta_{x}}\sum_{j=0}^{N_{x}-1}\phi(x_{j})\ket{j}_{D}. \tag{84}\] The \(n_{q}+1\)-qubit target DV state is \[\ket{\phi_{D}^{\prime}}=\sqrt{\Delta_{x}^{\prime}}\sum_{j=0}^{N_{x}^{\prime}-1 }\phi(x_{j}^{\prime})\ket{j}_{D}, \tag{85}\] with \(N_{x}^{\prime}=2N_{x}\), \(\Delta_{x}^{\prime}=\frac{\Delta_{x}}{\sqrt{2}}\), and \(x_{j}^{\prime}=\left(j-\frac{N_{x}^{\prime}-1}{2}\right)\Delta_{x}^{\prime}\). The first step of the padding procedure is squeezing with the squeeze factor \(r=-\ln 2/2\). According to Eq. (28), the state becomes \[\ket{\phi_{1}}\equiv\bar{S}(-\frac{1}{2}\ln 2)\ket{\phi_{D}}=\sqrt{\frac{ \Delta_{x}}{\sqrt{2}}}\sum_{j=0}^{N_{x}-1}\phi(\frac{x_{j}}{\sqrt{2}})\ket{j}_ {D}=\sqrt{\Delta_{x}^{\prime}}\sum_{j=\frac{N_{x}^{\prime}}{4}}^{\frac{3N_{x} ^{\prime}}{4}-1}\phi(x_{j}^{\prime})\ket{j}_{D}. \tag{86}\] Next, a qubit prepared in the state \(\ket{1}\) is added to the left of the register, _i.e._, \(\ket{\phi_{1}}\longrightarrow\ket{1}\otimes\ket{\phi_{1}}\). According to the encoding convention defined by Eq. (24), this new qubit will be in position \(0\). Next, the steps shown in Fig. 7 are followed in reverse order, _i.e._, an \(X_{1}\) gate is applied to the qubit in position \(1\), followed by a \(CX_{10}\) gate applied to the qubits in positions \(1\) and \(0\). This procedure yields the target state \(\ket{\phi_{D}^{\prime}}\) described by Eq. (85), up to an error given by the weight of \(\phi(x)\) outside the interval \(\left[-\frac{N_{x}\Delta_{x}}{2},\frac{N_{x}\Delta_{x}}{2}\right]\). In order to increase the teleportation success probability to the desired value, the procedure described above can be repeated to add more qubits. ## VI Conclusions Qumodes are bosonic quantum states that encode information in the continuous basis formed by the eigenvectors of the quadrature operators. We introduce a discrete representation of the qumodes on the finite Hilbert space of DV devices, along with implementations of the quadrature operators and of a universal set of CV gates on DV devices. We construct the discrete qumode representation by employing the Nyquist-Shannon expansion theorem, which is applicable to qumode wavefunctions that have negligible weight at large arguments. The errors associated with this representation decrease exponentially with the size of the finite Hilbert space. We present two teleportation protocols for transferring qumodes between CV and DV devices. The first protocol teleports a CV qumode to its discrete representation on a DV device. The teleportation has high fidelity when the measurement outcome is confined to a specific interval. The probability of achieving high-fidelity teleportation approaches one exponentially as the number of qubits in the DV register increases. The second teleportation protocol transfers a discrete DV qumode to a CV device. The fidelity of the teleportation depends on the measurement outcome. If the initial CV device is prepared with a rectangular wavefunction, the dependence of the teleportation fidelity and success probability on the number of DV qubits is practically the same as that of the CV-DV teleportation. For instance, in this case, the success probability approaches one exponentially as the number of qubits in the DV device increases.
However, we find that even with alternative initial CV states, which may be easier to prepare experimentally, the DV-CV teleportation protocol can be implemented with high fidelity and high success probability, albeit requiring more DV qubits. The teleportation protocols can be driven to the near-deterministic regime by increasing the number of DV qubits. This can be achieved by using ancillary registers that can be discarded after the teleportation is completed. We introduce procedures for discarding qubits after CV-DV teleportation and for adding qubits before DV-CV teleportation. These procedures consist of single-qubit gates, CNOT gates, and squeezing operations. The work presented in this paper demonstrates the potential of hybrid CV-DV quantum hardware for processing CV-encoded information, opening up new research directions for hybrid CV-DV systems and creating opportunities for developing integrated quantum technology. We envision a wide range of applications for this study. For example, CV-encoded data from optical or cavity sensors can be transferred to qubit QPUs and analyzed with QML methods that could be challenging to implement on CV devices. Non-Gaussian states can be transferred from a DV device to a CV device, and non-Gaussian gates can be realized by teleporting qubit gates implemented on DV devices to CV devices by developing protocols similar to the ones described in [23; 55]. This would provide an efficient alternative to preparing CV states and gates directly, which typically requires nontrivial optimal pulse control [56]. Hybrid CV-DV cluster states can be employed for quantum computation. The quantum tomography of CV states can be reduced to an equivalent qubit system tomography problem by transferring the CV states to DV devices. These are a few examples illustrating how CV-DV hybrid quantum hardware, with the proposed transfer protocols, could make quantum information processing more efficient. We believe that this work will enable the development of a new class of quantum algorithms using CV-DV hybrid hardware in various fields such as quantum computing, quantum networking, quantum sensing, quantum tomography, and quantum machine learning. ## VII Acknowledgements This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. A.C.Y.L. is partially supported by the DOE/HEP QuantISED program grant "HEP Machine Learning and Optimization Go Quantum", identification number 0000240323. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
2307.15162
Majorana bound states in d-wave superconductor planar Josephson junction
We study phase-controlled planar Josephson junctions comprising a two-dimensional electron gas with strong spin-orbit coupling and d-wave superconductors, which have the advantage of high critical temperatures. We show that a region between the two superconductors can be tuned into a topological state by the in-plane Zeeman field, and can host Majorana bound states. The phase diagram as a function of the Zeeman field, chemical potential, and the phase difference between superconductors exhibits the appearance of Majorana bound states for a wide range of parameters. We further investigate the behavior of the topological gap and its dependence on the type of d-wave pairing, i.e., d, d+is, or d+id', and note the difficulties that can arise due to the presence of gapless excitations in pure d-wave superconductors. On the other hand, the planar Josephson junctions based on superconductors with d+is and d+id' pairings can potentially lead to realizations of Majorana bound states. Our proposal can be realized in cuprate superconductors, e.g., in a twisted bilayer, combined with the layered semiconductor Bi2O2Se.
Hamed Vakili, Moaz Ali, Mohamed Elekhtiar, Alexey A. Kovalev
2023-07-27T19:36:42Z
http://arxiv.org/abs/2307.15162v2
# Majorana bound states in \(d\)-wave superconductor planar Josephson junction ###### Abstract We study phase-controlled planar Josephson junctions comprising a two-dimensional electron gas with strong spin-orbit coupling and \(d\)-wave superconductors, which have the advantage of high critical temperatures. We show that a region between the two superconductors can be tuned into a topological state by the in-plane Zeeman field, and can host Majorana bound states. The phase diagram as a function of the Zeeman field, chemical potential, and the phase difference between superconductors exhibits the appearance of robust Majorana bound states for a wide range of parameters. We further investigate the behavior of the topological gap and its dependence on the type of \(d\)-wave pairing, i.e., \(d\), \(d+is\), or \(d+id^{\prime}\), and note the difficulties that can arise due to the presence of gapless excitations in pure \(d\)-wave superconductors. On the other hand, the planar Josephson junctions based on superconductors with \(d+is\) and \(d+id^{\prime}\) pairings can potentially lead to realizations of Majorana bound states. Our proposal can be realized in twisted bilayer \(d\)-wave superconductors realizable in mechanically exfoliated van der Waals copper oxide heterostructures. Majorana bound states (MBS) are non-abelian anyons that can be used in realizations of quantum computers relying on topological protection [1; 2]. Anyons are quasiparticles that are described by the statistics of neither fermions nor bosons and have exotic properties such as fractional charge [1; 2]. Topological protection associated with realizations of non-abelian anyons in condensed matter systems can be used to encode quantum information in a way that is robust against decoherence [3; 4; 5]. The current common platforms to realize MBS rely on the proximity effect [6; 7; 8; 9] with \(s\)-wave superconductors, which have low critical temperatures [3; 4; 5]. The topological gap in such realizations is relatively small, making the system more sensitive to disorder [10; 11] and various imperfections [12; 13], and requiring operation at very low temperatures [14; 15; 16]. \(d\)-wave superconductivity is a very common type of superconductivity in strongly correlated systems such as cuprates [17]. Such superconductors are associated with gapless excitations [18] and higher critical temperatures compared to \(s\)-wave superconductors. Unfortunately, the presence of gapless excitations is not compatible with topological superconductivity. Furthermore, in \(d\)-wave superconductors, gapless excitations lead to the appearance of Andreev bound states (ABS) [19; 20; 21; 22]. To resolve issues associated with gapless excitations, one can use \(d\)-wave superconductors with inversion asymmetry and an applied magnetic field [23; 24; 25; 26; 27], or use a twisted bilayer of \(d\)-wave superconductors that behaves as an effective \(d+id^{\prime}\) superconductor [28; 29; 30; 31; 25]. As has been demonstrated in Ref. [31], a twisted bilayer of \(d\)-wave superconductors offers a high degree of tunability via variations in the twist angle and offers the realization of \(d+is\) or \(d+id^{\prime}\) pairings within the same system. The planar Josephson junction (JJ) platform has been studied for \(s\)-wave superconductors, and the appearance of robust tunable MBS has been predicted [32; 33; 34].
Recently, there has been substantial interest in various realizations of planar JJ based on \(s\)-wave superconductors in the context of the superconducting diode effect [35; 36; 37; 38; 39]. It has been demonstrated that the superconducting diode effect also appears in a JJ based on \(d\)-wave superconductors and a topological insulator [40]. In this work, we demonstrate that a planar JJ comprising a two-dimensional electron gas (2DEG) with strong spin-orbit coupling and \(d\)-wave superconductors can host MBS. Such systems can be realized using exfoliated van der Waals copper oxide heterostructures combined with layered semiconductors, such as Bi\({}_{2}\)O\({}_{2}\)Se [30; 41]. In the case of pure \(d\)-wave pairing, we observe that MBS coexist with gapless bulk states, which can be detrimental for storing quantum information. However, for the \(d+is\) and \(d+id^{\prime}\) pairings there exists a bulk gap and MBS arise in the planar JJ geometry. The \(d+is\) pairing leads to realizations of robust MBS in analogy to a JJ based on an \(s\)-wave superconductor [33]. For \(d+id^{\prime}\) pairing, in addition to MBS, chiral edge modes exist at zero energy in the proximized 2DEG due to nontrivial bulk topology; however, they can be gapped out inside the JJ by properly tuning the Zeeman field, the relative phase of superconductors, and the chemical potential. As a result, for \(d+id^{\prime}\) pairing one can physically separate the MBS and the chiral edge modes, as we show in our study. _Model and symmetry analysis._ - We consider the planar JJ geometry with an in-plane Zeeman field (due to a magnetic field or induced by proximity to a ferromagnet), and mostly assume the limit of large superconductors. When necessary, we use periodic boundaries along the \(y\) axis in Fig. 1a). The phase difference of the left and right \(d\)-wave superconductors can be controlled by applying a current or magnetic flux. The Bogoliubov-de Gennes (BdG) Hamiltonian of the system, written in the Nambu basis \((\psi_{\uparrow},\psi_{\downarrow},\psi_{\downarrow}^{\dagger},-\psi_{\uparrow}^ {\dagger})\) reads \[H =\left(\frac{\mathbf{p}^{2}}{2m^{*}}-\mu+\frac{\alpha_{SO}}{\hbar} \left(\mathbf{\sigma}\times\mathbf{p}\right)_{z}+\frac{m^{*}}{2}\alpha_{SO}^{2} \right)\tau_{z}\] \[+h(x)\sigma_{y}+\tau_{x}\text{Re}\Delta(\mathbf{k},x)-\tau_{y}\text{ Im}\Delta(\mathbf{k},x). \tag{1}\] Here, \(\alpha_{SO}\) is the Rashba spin-orbit coupling strength, \(h(x)\) describes the Zeeman energy, e.g., for an external magnetic field \(h(x)=g(x)\mu_{B}B/2\) where \(g(x)\) is the position-dependent \(g\)-factor. We denote the Zeeman energy in the superconducting regions by \(h_{SC}\) and in the junction by \(h_{J}\), \(m^{*}\) is the effective electron mass, and \(\Delta(\mathbf{k},x)\) is the proximity-induced pairing in the 2DEG, which reads \[\Delta(\mathbf{k},x) =2e^{i\phi(x)}[\tilde{\Delta}(x)\left(\cos k_{y}-e^{i\beta}\cos k _{x}\right)\] \[+\tilde{\Delta}^{\prime}(x)2i\sin k_{x}\sin k_{y}]. \tag{2}\] Here \(\beta\) determines the pairing type, e.g., \(\beta=0\) corresponds to \(d_{x^{2}-y^{2}}\) pairing and \(\beta=\pi/4\) corresponds to \(d+is\) pairing [31]. The term \(\tilde{\Delta}^{\prime}\) is necessary to realize \(d+id^{\prime}\) pairing.
The terms \(\tilde{\Delta}(x)\) and \(\tilde{\Delta}^{\prime}(x)\) are only nonzero in the regions covered by superconductors where \(\tilde{\Delta}(x)=\Delta_{0}\), \(\tilde{\Delta}^{\prime}(x)=\Delta^{\prime}_{0}\), \(\phi(x)=\phi_{L}\) for the region covered by the left superconductor, and \(\phi(x)=\phi_{R}\) for the region covered by the right superconductor. We apply the Zeeman field in the \(y\)-direction. The symmetries, such as the time reversal symmetry, the particle-hole symmetry, and the chiral symmetry, determine the type of topological superconductivity realizable in our system. The Hamiltonian (1) breaks time-reversal symmetry when the Zeeman field is present. For the case of \(d\) pairing (or \(d+id^{\prime}\) pairing when \(\Delta^{\prime}\neq 0\)) in the presence of the Zeeman field, we can still define an effective time reversal symmetry as \(\tilde{T}=M_{x}T\) where \(T=i\sigma_{y}K\) is the usual time reversal symmetry and \(M_{x}\) is the mirror with respect to the \(y\)-\(z\) plane. For \(\tilde{T}\), we have a relation \(\tilde{T}^{2}=1\), which in combination with the particle-hole symmetry, \(P=\sigma_{y}\tau_{y}K\), and the chiral symmetry (the \(P\tilde{T}\) symmetry operator), \(\tilde{C}=M_{x}\tau_{y}\), places such a system in the BDI symmetry class with a \(\mathbb{Z}\) topological invariant. In our analysis, we also calculate the \(\mathbb{Z}_{2}\) Pfaffian invariant, which is determined by the parity of \(\mathbb{Z}\) and characterizes systems in the D symmetry class. The D symmetry class is realized in Fig. 1 when the mirror symmetry with respect to the \(y\)-\(z\) plane is broken or for \(d+is\) pairing. _Topological superconductivity and MBS._ - Our symmetry analysis suggests a possibility for topological superconductivity in the quasi-one-dimensional region between superconductors in Fig. 1a). A sufficiently large but finite system along the \(y\)-axis can be used for realizing MBS. We use the tight-binding version of the BdG Hamiltonian (1) on a two-dimensional square lattice given by: \[H_{TB}= \sum_{<ij,mn>}\left[-t\ c_{i,j}^{\dagger}c_{m,n}+it_{SO}\ c_{i,j }^{\dagger}\left(\sigma\times\mathbf{r}_{ij,mn}\right)c_{m,n}\right]\] \[+\sum_{i,j}\left[\left(4t-\mu+t_{SO}^{2}\right)\ c_{i,j}^{ \dagger}c_{i,j}+h\ c_{i,j}^{\dagger}\sigma_{y}\ c_{i,j}\right]\] \[+\sum_{i,j}e^{i\phi/2}\tilde{\Delta}\left[\ c_{i,j\pm 1,\uparrow}^{ \dagger}c_{i,j,\downarrow}^{\dagger}-\ e^{i\beta}c_{i\pm 1,j,\uparrow}^{ \dagger}c_{i,j,\downarrow}^{\dagger}\right]+\] \[\sum_{i,j}e^{i\phi/2}\tilde{\Delta}^{\prime}\left[\ c_{i\pm 1,j \pm 1,\uparrow}^{\dagger}c_{i,j,\downarrow}^{\dagger}-\ c_{i\pm 1,j\mp 1, \uparrow}^{\dagger}c_{i,j,\downarrow}^{\dagger}\right]\] \[+H.c., \tag{3}\] where in our calculations we use the lattice spacing \(a_{c}=10\) nm, \(t_{SO}=0.3t\), \(t=\frac{\hbar^{2}}{2m^{*}a_{c}^{2}}\), and \(m^{*}=0.05m_{e}\), with \(m_{e}\) being the electron rest mass. We study the phase diagram as a function of the Zeeman field, chemical potential, and the phases of superconductors. To describe a system with periodic boundaries along the \(y\) axis, we use the momentum representation along the \(y\)-axis of the tight-binding Hamiltonian, \(H_{\rm TB}(k_{y})\). To calculate the BDI class \(\mathbb{Z}\) topological invariant, we employ the eigenbasis of the chiral symmetry in which \(\tilde{C}\) is diagonal [33; 42] with \(\mathds{1}\) in the upper left block and \(-\mathds{1}\) in the lower right block.
In this basis, \[\tilde{H}_{\rm TB}(k_{y})=\begin{pmatrix}0&A(k_{y})\\ A^{\dagger}(k_{y})&0\end{pmatrix}, \tag{4}\] where, for a gapped Hamiltonian, \(A(k_{y})\) defines a complex function \(z(k_{y})=Det(A(k_{y}))/|Det(A(k_{y}))|\) and a winding number \(W=(-i/2\pi)\int_{k_{y}=0}^{k_{y}=2\pi}dz(k_{y})/z(k_{y})\). To calculate the D class \(\mathbb{Z}_{2}\) topological invariant, we use the expression \(Q=\text{sign}(\text{Pf}(H_{k_{y}=\pi}\sigma_{y}\tau_{y})/\text{Pf}(H_{k_{y}=0 }\sigma_{y}\tau_{y}))\). The two topological numbers are related by the equation \((-1)^{W}=Q\) [42]. Figure 1: a) Top view of the Josephson junction heterostructure. The left and right sides of 2DEG are covered by \(d\)-wave superconductors, except for a junction of width \(W_{J}\) between the two superconductors. In b) the probability function (\(|\psi|^{2}\)) of MBS at \(\mu=1.32t\) and \(h_{J}=0.6t\) is plotted. c) The energy gap as a function of the Zeeman field \(h_{J}\) in the junction and \(\mu\). d) The topological invariants \(Q\) (\(\mathbb{Z}_{2}\)) and \(W\) (\(\mathbb{Z}\)) as a function of the Zeeman field \(h_{J}\) in the junction and \(\mu\). The colors show the value of the \(W\) invariant. The black lines separate the regions with \(Q\) = -1 and 1. The \(W\) invariant only changes between odd (even) values for \(Q\) = -1 (1) regions. We used the parameters: \(L_{y}=400a_{c}\), \(W_{J}=5a_{c}\), \(W_{L}=W_{R}=20a_{c}\), \(\Delta_{0}=0.3t\), \(\Delta^{\prime}_{0}=0.0\), \(h_{SC}=0\), and the phase difference \(\Delta\phi\) = \(\pi\). _Planar JJ with \(d\)-wave pairing._ - We first study the planar JJ in Fig. 1a) with pure \(d\)-wave pairing by varying the chemical potential, the phase difference (\(\Delta\phi=\phi_{R}-\phi_{L}\)), and the Zeeman field. We use the parameters: \(W_{J}=5a_{c}\), \(W_{L}=W_{R}=20a_{c}\), \(\Delta_{0}=0.3t\), \(h_{SC}=0\), \(\beta=0\), and \(\Delta_{0}^{\prime}=0.0\). The nontrivial topological regions in parameter space, as determined by \(Q\), are reminiscent of the planar JJ based on the \(s\)-wave superconductor, which suggests the same mechanism for the formation of MBS and the relevance of the Andreev reflection [33]. In Fig. 1c), we show the energy gap of the system as a function of \(\mu\) and \(h_{J}\). The gap shows rapid changes, exhibiting numerous lines with zero gap, which suggests that this is a finite size effect associated with the gapless excitations in the bulk of 2DEG. According to Figs. 1c) and d), the gap closes along the lines that seem to be associated with changes in the two topological invariants, \(W\) or \(Q\), which are calculated from \(H_{\rm TB}(k_{y})\). From the decrease of the gap for larger system sizes, we conclude that, in the region with \(Q=-1\), MBS will coexist with gapless excitations in the planar JJ based on the \(d\)-wave pairing. Even though the topological invariant \(Q\) shows a large continuous region with \(Q=-1\) and well-localized MBS can be realized as shown in Fig. 1b), MBS may still hybridize with the gapless excitations. This behavior can be detrimental for quantum coherence associated with MBS. Nevertheless, we expect that the signature of the \(Q=-1\) region can be seen in studies of the Josephson current and the superconducting diode effect.
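As a concrete illustration of how the invariants above can be evaluated numerically, the sketch below computes the winding number \(W\) by accumulating the phase of \(\det A(k_{y})\) across the Brillouin zone and obtains the D-class invariant from the parity relation \((-1)^{W}=Q\). This is our own sketch: `A_of_k` stands for the chiral-basis off-diagonal block of Eq. (4) built from \(H_{\rm TB}(k_{y})\) (its construction is not shown), and a gapped spectrum is assumed so that \(\det A(k_{y})\) never vanishes.

```python
import numpy as np

def winding_number(A_of_k, n_k=4001):
    """BDI-class Z invariant: winding of z(k) = det A(k) / |det A(k)|.
    A_of_k(k) must return the off-diagonal block A(k_y) of Eq. (4) as a
    square array; the Hamiltonian is assumed gapped, so det A(k) != 0."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k)
    phases = np.array([np.angle(np.linalg.det(A_of_k(k))) for k in ks])
    # unwrap removes the 2*pi jumps so the accumulated phase is continuous
    total = np.unwrap(phases)[-1] - phases[0]
    return int(np.rint(total / (2.0 * np.pi)))

def z2_invariant(W):
    """D-class Z2 invariant from the parity relation (-1)^W = Q."""
    return -1 if W % 2 else 1
```

For a full calculation one would first rotate \(H_{\rm TB}(k_{y})\) into the eigenbasis of \(\tilde{C}\), as described above, to extract \(A(k_{y})\).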
_Planar JJ with \(d+is\) pairing._ - Since \(d\)-wave superconductors have directional pairing resulting in \(d_{x^{2}-y^{2}}\) and \(d_{xy}\) components in a general reference frame, the orientation of the superconductor lattices can be used to control the pairing potential in twisted bilayer \(d\)-wave superconductors realizable in mechanically exfoliated van der Waals copper oxide heterostructures. It has been shown [31] that by having a twisted bilayer of \(d\)-wave superconductors, \(d+is\) and \(d+id^{\prime}\) pairings can be realized. In Fig. 2, we consider a system with \(d+is\) pairing and use the parameters: \(h_{SC}=0\), \(\Delta_{0}=0.3t\), \(\Delta_{0}^{\prime}=0.0\), \(W_{J}=5a_{c}\), \(W_{L}=W_{R}=40a_{c}\). We take \(\beta=\pi/4\), as predicted by the ab-initio calculations [31]. In Fig. 2a), we calculate the phase diagram for \(Q\) as a function of \(h_{J}\) and \(\phi\). The large diamond-shaped region, also typical for the planar JJ based on the \(s\)-wave superconductor [33], defines parameters for which MBS can be realized. In Fig. 2b), we plot the gap. The gap closes at the line of the quantum phase transition between the \(Q=-1\) and \(Q=1\) regions. In Figs. 2c) and d), we perform similar calculations as a function of \(\mu\) and \(h_{J}\) with fixed \(\phi=\pi\). Results in Fig. 2 predict robust MBS in the planar JJ with \(d+is\) pairing for a wide range of parameters. We also observe in Fig. 3 that the region with \(Q=-1\) persists even when the Zeeman energy is present in the superconducting regions. Figure 3a) shows the phase diagram for \(Q\) as a function of \(\mu\) and \(h_{J}=h_{SC}=h\). A relatively large region corresponding to \(Q=-1\) in Fig. 3a) can lead to realizations of robust MBS due to a sizable topological gap shown in Fig. 3b). In Figs. 3c) and d), we show that increasing the ratio \(h_{SC}/h_{J}\) expands the range of \(\phi\) where MBS can be realized for certain parameters. Figure 3: The Zeeman field is applied to the whole system. a) The \(Q\) topological invariant as a function of \(h\) and \(\mu\) where \(h_{J}=h_{SC}=h\) for the phase difference \(\Delta\phi=\pi\). The blue color shows the topological region (\(Q=-1\)). b) The energy gap \(E_{gap}\) as a function of \(h\) and \(\mu\) where \(h_{J}=h_{SC}=h\) for the phase difference \(\Delta\phi=\pi\). c), d) The \(Q\) topological number and the energy gap as a function of the ratio \(h_{SC}/h_{J}\) and \(\phi\) for \(\mu=1t\). We used the parameters: \(\Delta_{0}=0.3t\), \(\Delta_{0}^{\prime}=0.0\), \(\beta=\pi/4\), \(h_{J}=0.2t\), \(W_{J}=5a_{c}\), and \(W_{L}=W_{R}=40a_{c}\). Figure 2: Phase diagrams for \(d+is\) pairing (\(\beta=\pi/4\)). a) The \(Q\) topological number (\(\mathbb{Z}_{2}\)) as a function of \(h_{J}\) and \(\Delta\phi\) for \(\mu=1t\). b) The energy gap \(E_{gap}\) as a function of \(h_{J}\) and \(\Delta\phi\) for \(\mu=1t\). c) Phase diagram of the \(Q\) topological number as a function of \(h_{J}\) and \(\mu\) for the phase difference \(\Delta\phi=\pi\). d) The energy gap \(E_{gap}\) as a function of \(h_{J}\) and \(\mu\) for the phase difference \(\Delta\phi=\pi\). We used the parameters: \(h_{SC}=0\), \(\Delta_{0}=0.3t\), \(\Delta_{0}^{\prime}=0.0\), \(W_{J}=5a_{c}\), \(W_{L}=W_{R}=40a_{c}\). _Planar JJ with \(d+id^{\prime}\) pairing. -_ We consider \(d+id^{\prime}\) pairing in Fig. 4.
The phase diagram for \(Q\) and \(W\) as a function of \(\mu\) and \(h_{J}\) in Fig. 4a) looks somewhat similar to the case of \(d\)-wave pairing in Fig. 1d). The \(id^{\prime}\) component of the pairing potential breaks the time reversal symmetry and removes the gapless states in the bulk of 2DEG. At the same time, the topology of the bulk of the proximized 2DEG corresponds to the even Chern number and results in the chiral edge modes [43] in the system in Fig. 1a). A finite size effect associated with the gapless excitations of the chiral edge modes of 2DEG results in gap closings and changes in \(W\) shown in Fig. 4a), where a periodic boundary along the \(y\) axis in Fig. 1a) has been used. In Fig. 4c), we show the phase diagram for \(W\) as a function of \(\phi\) and \(h_{J}\), where one can identify a diamond-shaped region with \(Q=-1\). Results in Figs. 4a) and c) suggest that MBS will coexist with the chiral edge modes in the system in Fig. 1a). If not separated spatially, the chiral edge modes can hybridize with MBS. In Figs. 4b) and d), we study the gap inside the planar JJ in Fig. 1a) with the periodic boundary along the \(y\) axis where the outer edge modes have been removed. Figures 4b) and d) show that the planar JJ with the Zeeman term can gap out the edge modes running along the JJ for a large range of parameters. Using Figs. 4b) and d), we suggest a setup in Fig. 4e) with an angled JJ that allows us to separate the edge modes from MBS by using four superconducting regions. Using the parameters \(h_{J}=0.4t\), \(h_{SC}=0\), \(\mu=2t\), \(\Delta_{0}=0.3t\), \(\Delta^{\prime}_{0}=0.06t\), and the phases of the four superconducting regions \(\phi_{1}=\phi_{2}=\pi/2\), \(\phi_{3}=\phi_{4}=-\pi/2\) for the setup in Fig. 4e), we are able to realize robust MBS, as shown in Fig. 4f) by plotting the \(|\psi|^{2}\) of the lowest eigenvalue of the system. _Conclusions. -_ We have studied the planar JJ comprising a 2DEG with strong spin-orbit coupling and \(d\)-wave superconductors. The proximity effect can induce the superconducting pairing potential in 2DEG with the same symmetry as the host superconductor. Here, we have considered different types of \(d\)-wave superconductors with high critical temperatures and large intrinsic gaps. Apart from superconductors with pure \(d\)-wave pairing, we have also considered twisted bilayer \(d+is\) and \(d+id^{\prime}\) superconductors realizable in mechanically exfoliated van der Waals copper oxide heterostructures [31]. In the case of \(d\)-wave pairing, we have demonstrated that the planar JJ can lead to MBS for a wide range of parameters, in analogy to realizations based on \(s\)-wave superconductors; however, the presence of gapless excitations in the bulk of 2DEG may hinder quantum coherence. Nevertheless, we expect interesting manifestations in the superconducting diode effect. In the case of \(d+is\) pairing, we have demonstrated realizations of robust MBS for a wide range of parameters. In the case of \(d+id^{\prime}\) pairing there are no gapless states in the bulk of 2DEG; however, the even Chern number associated with the bulk leads to the appearance of gapless chiral edge modes, which can hybridize with MBS. To realize MBS with \(d+id^{\prime}\) pairing, we have proposed a modified JJ in which the chiral edge modes are gapped and do not hybridize with MBS. It would be interesting to consider generalizations of our ideas to 2DEG with cubic Rashba interactions [44; 45]. The cuprate-based superconductors have shown critical temperatures of up to 133 K [46].
One direct advantage of using cuprate-based superconductors is that MBS can exist at higher temperatures compared to realizations based on pure \(s\)-wave superconductors. Another advantage stems from the much larger intrinsic gap, which should result in a larger proximity-induced topological gap and better protection against disorder and thermal excitations [47; 48; 49]. Our results should help in identifying new platforms for realizations of robust and easily tunable MBS with better protection against decoherence. _Acknowledgments. -_ This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0021019. Figure 4: a) The topological invariant \(W\) as a function of \(h_{J}\) and \(\mu\) for \(\Delta\phi=\pi\). The black line shows the outlines of the \(Q=-1\) and \(Q=1\) regions. b) The gap \(E_{gap}\) of a JJ with periodic boundaries as a function of \(h_{J}\) and \(\mu\) for \(\Delta\phi=\pi\). c) The topological number \(W\) as a function of \(h_{J}\) and \(\Delta\phi\) for \(\mu=1t\). d) The gap \(E_{gap}\) of a JJ with periodic boundaries as a function of \(h_{J}\) and \(\Delta\phi\) for \(\mu=1t\). We used the parameters: \(W_{J}=5a_{c}\), \(W_{L}=W_{R}=20a_{c}\). e) and f) Modified JJ structure and \(|\psi|^{2}\) of the lowest eigenvalue corresponding to MBS. The phases of the four superconducting regions are set to: \(\phi_{1}=\phi_{2}=\pi/2\), \(\phi_{3}=\phi_{4}=-\pi/2\). Dimensions used: \(W_{U}=W_{D}=W_{C}=50a_{c}\), \(W_{S}=600a_{c}\), \(W_{J}=5a_{c}\). Other parameters used: \(h_{J}=0.4t\), \(h_{SC}=0\), \(\mu=2t\), \(\Delta_{0}=0.3t\), \(\Delta^{\prime}_{0}=0.06t\), \(\theta=\arctan(1/9)\approx\pi/9\).
2306.02737
Comparative analysis of the existence and uniqueness conditions of parameter estimation in paired comparison models
In this paper paired comparison models with a stochastic background are investigated. We focus on the models which allow three options for choice, and the parameters are estimated by the maximum likelihood method. The existence and uniqueness of the estimator is a key issue of the evaluation. In the case of two options, a necessary and sufficient condition is given by Ford in the Bradley-Terry model. We generalize this statement for the set of strictly log-concave distributions. Although in the case of three options a necessary and sufficient condition is not known, there are two different sufficient conditions which are formulated in the literature. In this paper we generalize them; moreover, we compare these conditions. Their capacities to indicate the existence of the maximum are analyzed by a large number of computer simulations. These simulations support that the new condition indicates the existence of the maximum much more frequently than the previously known ones.
László Gyarmati, Éva Orbán-Mihálykó, Csaba Mihálykó
2023-06-05T09:33:00Z
http://arxiv.org/abs/2306.02737v1
# Comparative analysis of the existence and uniqueness conditions of parameter estimation in paired comparison models ###### Abstract In this paper paired comparison models with a stochastic background are investigated. We focus on the models which allow three options for choice, and the parameters are estimated by the maximum likelihood method. The existence and uniqueness of the estimator is a key issue of the evaluation. In the case of two options, a necessary and sufficient condition is given by Ford in the Bradley-Terry model. We generalize this statement for the set of strictly log-concave distributions. Although in the case of three options a necessary and sufficient condition is not known, there are two different sufficient conditions which are formulated in the literature. In this paper we generalize them; moreover, we compare these conditions. Their capacities to indicate the existence of the maximum are analyzed by a large number of computer simulations. These simulations support that the new condition indicates the existence of the maximum much more frequently than the previously known ones. \({}^{1}\) Department of Mathematics, University of Pannonia, 8200 Veszprem, Hungary \({}^{\ast}\) Corresponding author, Department of Mathematics, University of Pannonia, 8200 Veszprem, Egyetem u. 10., Hungary; Email: [email protected], +3688624000/6109 Email: [email protected], [email protected], [email protected] **Keywords**: Bradley-Terry model; maximum likelihood estimation; paired comparison; sufficient conditions; Thurstone model ## 1 Introduction Comparisons in pairs are frequently used in ranking and rating problems. They are mainly applied when scaling is very uncertain, but comparing the objects to the others can guarantee more definite results. The area of possible applications is extremely large; some examples are the following: education (Sahroni and Ariff, 2016; Kosztyan et al., 2020), sports (Cattelan et al., 2013; Gyarmati et al., 2023; Orban-Mihalyko et al., 2022), information retrieval (Jeon and Kim, 2013; Gyarmati et al., 2022), energy supply (Trojanowski and Kazibudzki, 2021), financial sector (Montequin et al., 2020), management (Canco et al., 2021). The most popular method is AHP (Analytic Hierarchy Process), elaborated by Saaty (Saaty, 1977, 2004) and developed further by others; see, for example, the detailed literature in (Liu et al., 2020). The method has many advantages: more than two options, several methods for evaluation, the opportunity of incomplete comparisons, a simple condition for the uniqueness of the evaluation (Bozoki et al., 2010), the possibility of multi-level decisions (Rahman et al., 2021), and the concept of consistency (Brunelli, 2014). Nevertheless, due to the lack of a stochastic background, the usual statistical tools, like confidence intervals and hypothesis testing, are not available. Fundamentally different models of paired comparisons are the Thurstone-motivated stochastic models. The basic concept is the idea of latent random variables, presented in (Thurstone, 1927). Thurstone assumed Gauss-distributed latent random variables and allowed two options in decisions, "worse" and "better". The method was modified: the Gauss distribution was replaced by the logistic distribution in (Bradley and Terry, 1952), and the model is called the Bradley-Terry model (BTM). One of its main advantages is the simplicity of its mathematical formulae.
Thurstone applied the least squares method for parameter estimation; BTM applies maximum likelihood estimation, and its simple formulae allow quick numerical methods for solving the optimization problems. The existence and uniqueness of the optimizer is a key issue in the case of ML estimations; a necessary and sufficient condition for it is proved in (Ford Jr, 1957). The model was generalized for three options ("worse", "equal" and "better") in (Glenn and David, 1960) for the Gauss distribution and in (Rao and Kupper, 1967) for the logistic distribution. The latter paper applied maximum likelihood parameter estimation. Davidson made further modifications to the model concerning ties in (Davidson, 1970). For more than 3 options we can find generalizations in (Agresti, 1992) in the case of the Bradley-Terry model, and in (Orban-Mihalyko et al., 2019) in the case of the Gauss distribution. In (Orban-Mihalyko et al., 2019) it was proved that the models require the same conditions in order to be able to evaluate the data uniquely in the case of a broad set of cumulative distribution functions for the latent random variables: the strictly log-concave property of the probability density function is the crucial point of the uniqueness, while the assurance of the existence is hidden in the data structure. We mention that the Gauss distribution and the logistic distribution are included in the set of distributions having a strictly log-concave probability density function. Note that, due to the probabilistic background, the Thurstone-motivated models allow building in the home-field or first-mover advantage (Hankin, 2020), testing hypotheses (Szabo et al., 2016), and making forecasts (McHale and Morton, 2011); therefore, they are worth investigating. In Yan (2016), the author analyzes the structure of the comparisons allowing both two and three options in choice. The author emphasizes that not only the structure of the graph made from the compared pairs but also the results of the comparisons affect the existence of MLE. He makes some data perturbations in the cases where there are comparisons, but some results do not occur. By these perturbations, the zero data values become positive, and these positive values guarantee the strongly connected property of the directed graph constructed by the wins. But these perturbations modify the data structure; therefore, it would be better to avoid them. In (Bong and Rinaldo, 2022), the authors investigate BTM with two options and provide estimations for the probability of the existence of MLE. The authors turn to the condition of Ford to check whether MLE exists uniquely or not. As Ford's condition is a necessary and sufficient condition, it indicates explicitly whether MLE works or not. But in the case of other distributions and/or more than two options these investigations could not be performed due to the lack of a necessary and sufficient condition for the existence and uniqueness of MLE. To continue their research, it would be useful to have a (necessary and) sufficient condition for existence and uniqueness. To the best knowledge of the authors, there is no such theorem in the research literature; only two sufficient conditions are known. In this paper we compare the known conditions, we formulate their generalization, and we prove it. Then, we compare the applicability of the different conditions from the following point of view: how often and for what kind of parameters are they able to indicate the existence and uniqueness of MLE.
We perform a large number of computer simulations and use them to answer these questions. The paper is organised as follows: In Section 2 the investigated model is described. In Section 3 we present new conditions under which existence and uniqueness hold. The proof can be found in Appendix A. In Section 4 the simulation results concerning the applicability are presented. Finally, a short summary is given. ## 2 The investigated model Let the number of the different objects to evaluate be denoted by \(n\), and let the objects be referred to as \(1,2,...,n.\) We want to evaluate them on the basis of the opinions of some persons called observers. Let us denote the latent random variable belonging to the \(i^{th}\) object by \(\xi_{i}\), \(i=1,2,...,n\). Let the number of the options in a choice be \(s=3\), namely "worse", "equal" and "better", denoted by \(C_{1}\), \(C_{2}\) and \(C_{3}\). We split the set of the real numbers \(\mathbb{R}\) into 3 intervals, which have no common elements. Each option in judgment corresponds to an interval on the real line; the correspondence is denoted by the same index. If the judgment between the \(i^{th}\) and \(j^{th}\) objects is the option \(C_{k}\), then we assume that the difference \(\xi_{i}-\xi_{j}\) of the latent random variables \(\xi_{i}\) and \(\xi_{j}\) is in the interval \(I_{k},k=1,2,3\). The intervals are determined by their initial points and endpoints, which are \(-\infty\), \(-d\), \(d\) and \(\infty\): \(I_{1}=(-\infty,-d)\), \(I_{2}=[-d,d]\) and \(I_{3}=(d,\infty)\). The above intervals together with the corresponding options are presented in Figure 1. We can write the differences of the latent random variables in the following form: \[\xi_{i}-\xi_{j}=m_{i}-m_{j}+\eta_{i,j},i=1,...,n,j=1,...,n,i\neq j. \tag{1}\] Now \[E(\xi_{i})=m_{i} \tag{2}\] and \(\eta_{i,j}\) are identically distributed random variables with expectation 0. The ranking of the expectations determines the ranking of the objects, and the differences in their values give information concerning the differences of the strengths. We want to estimate the expectations and the value of the border of "equal" (\(d\)) on the basis of the data. For that we use maximum likelihood estimation. The probabilities of the events can be computed on the basis of the assumptions concerning the distributions of \(\eta_{i,j}\) as follows: Figure 1: The options and the intervals belonging to them \[P(\xi_{i}-\xi_{j}\in I_{1})=P(\xi_{i}-\xi_{j}<-d)=F(-d-(m_{i}-m_{j})) \tag{3}\] \[P(\xi_{i}-\xi_{j}\in I_{2})=P(-d\leq\xi_{i}-\xi_{j}\leq d)=F(d-(m_{i}-m_{j}))-F(-d-(m_{i}-m_{j})) \tag{4}\] \[P(\xi_{i}-\xi_{j}\in I_{3})=P(d<\xi_{i}-\xi_{j})=1-F(d-(m_{i}-m_{j})) \tag{5}\] where \(F\) is the (common) cumulative distribution function (c.d.f.) of \(\eta_{i,j}\). Let the number of observers be \(r\). The judgment produced by the \(u^{th}\) observer (\(u=1,2,...,r\)) concerning the comparison of the \(i^{th}\) and the \(j^{th}\) objects is encoded by the elements of a 4-dimensional matrix which has only 0 and 1 coordinates depending on the choice of the respondent. The third indices correspond to the options in choices, \(k=1,2,3\) are for judgments "worse", "equal", and "better", respectively.
Let the matrix of all judgments be \(X\), having 4 dimensions, \(i=1,2,...,n,j=1,2,...,n,k=1,2,3\), \(u=1,2,...,r\) and \[X_{i,j,k,u}=\left\{\begin{array}{l}1,\mbox{if the opinion of the }\ u^{th}\mbox{ observer in pursuance}\\ \mbox{ of the comparison of the }i^{th}\mbox{ and the }j^{th}\mbox{ objects is }C_{k}\\ 0,\mbox{ otherwise}\end{array}\right.\] Let \(X_{i,i,k,u}=0\). Of course, due to the symmetry, \(X_{i,j,k,u}=X_{j,i,4-k,u}\). It expresses that if the \(i^{th}\) object is "better" than the \(j^{th}\) object, then the \(j^{th}\) object is "worse" than the \(i^{th}\) object, according to the judgment of the \(u^{th}\) respondent. Let \(A_{i,j,k}=\sum_{u=1}^{r}X_{i,j,k,u}\) be the number of observations \(C_{k}\) in pursuance of the comparison of the \(i^{th}\) and the \(j^{th}\) objects, and let \(A\) denote the three-dimensional matrix containing the elements \(A_{i,j,k}.\) Of course, \(A_{i,j,k}=A_{j,i,4-k}\). The likelihood function expresses the probability of the sample as a function of the parameters. Assuming independent judgments, the likelihood function is \[L(X|m_{1},m_{2},...,m_{n},d)=\prod_{k=1}^{3}\prod_{i=1}^{n-1}\prod_{j=i+1}^{n} \left(P(\xi_{i}-\xi_{j}\in I_{k})\right)^{A_{i,j,k}} \tag{6}\] which has to be maximized in \(\underline{m}=(m_{1},...,m_{n})\) and \(0<d\). One can realize that the likelihood function depends on the differences of the parameters \(m_{i}\); therefore, one of them can be fixed. ## 3 Conditions for the existence and uniqueness In (Ford Jr, 1957), the author presents a necessary and sufficient condition for the existence and uniqueness of MLE, if there are only two options for choice and \(F\), the c.d.f. of \(\eta_{i,j}\), is the logistic c.d.f. The condition is the following: for an arbitrary non-empty partition of the objects, \(S\) and \(\overline{S}\), there exists at least one element of \(S\), which is "better" than an element of \(\overline{S}\), and vice versa. In (Davidson, 1970), the author states that this condition supplemented with the condition "there is at least one tie ("equal")" is enough for having a unique maximizer in a modified Bradley-Terry model. The theorem assumes the logistic distribution; its proof uses this special form, therefore it is valid only for the investigated special model. Now we prove it for a broad set of c.d.f.'s. We require the following properties: \(F\) is a c.d.f. with \(0<F(x)<1\), \(F\) is three times continuously differentiable, its probability density function \(f\) is symmetric, and the logarithm of \(f\) is a strictly concave function in \(\mathbb{R}\). The Gauss and logistic distributions belong to this set, together with many others. Let us denote the set of these c.d.f.s by \(\mathbb{F}\). First we state the following generalization of Ford's theorem: **Theorem 1**: _Let \(F\in\mathbb{F}\) and suppose that there are only two options in choice. Fix the value of the parameter \(m_{1}=0\). The necessary and sufficient condition for the existence and uniqueness of MLE is the following: for an arbitrary non-empty partition of the objects, \(S\) and \(\overline{S}\), there exists at least one element of \(S\), which is "better" than an element of \(\overline{S}\), and vice versa._
The necessity is obvious: if there would be a partition without "better" from one subset to another, then each element of this subset would be "worse" than the elements of the complement, but the measure of "worse" could not be estimated. The likelihood function would be monotone increasing, consequently, the maximum would not be reached. Returning to the case of three options, we formulate conditions of Davidson in the followings: **DC 1**: _There exists an index pair \((i_{1},j_{1})\) for which \(0<\)\(A_{i_{1},j_{1},2}\)._ **DC 2**: _For any non-empty partition of the objects \(S\) and \(\overline{S}\), there exists at least two index pairs (\(i_{2}\),\(j_{2}\)) and (\(i_{3}\),\(j_{3}\)) \(i_{2},i_{3}\in S\), \(j_{2},j_{3}\in\overline{S}\) for which \(0<A_{i_{2},j_{2},3}\) and \(0<A_{i_{3},j_{3},1}\)._ Condition DC 1 expresses that there is a judgment "equal". Condition DC 2 coincides with the condition of Ford in (Ford Jr, 1957) in the case of two options. It expresses that there is at least one object in both subsets which is "better" than an object in the complement. **Theorem 2**: _Let \(F\in\mathbb{F}\). If conditions DC 1 and DC 2 hold, then, fixing \(m_{1}=0\), the likelihood function (6) attains its maximal value and its argument is unique._ Theorem 2 is the consequence of a more general statement, Theorem 4, which will be proved in Appendix A. Now we turn to another set of conditions which guarantees the existence and uniqueness of MLE. These conditions will be abbreviated by the initial letters MC. **MC 1**: _There is at least one index pair_ \((i_{1},j_{1})\) _for which_ \(0<A_{i_{1},j_{1},2}\) _holds._ **MC 2**: _There is at least one index pair_ \((i_{2},j_{2})\) _for which_ \(0<A_{i_{2},j_{2},1}\) _and_ \(0<A_{i_{2},j_{2},3}\)_._ Let us define the graph \(G^{(M)}\) as follows: the nodes are the objects to be compared. There is an edge between two nodes \(i\) and \(j\), if \(0<A_{i,j,2}\) or (\(0<A_{i,j,1}\) and \(0<A_{i,j,3}\)) hold. **MC 3**: _Graph \(G^{(M)}\) is connected._ **Theorem 3**: _(Orban-Mihalyko et al., 2019b) Let \(F\in\mathbb{F}\). If conditions MC 1, MC 2 and MC 3 hold, then, after fixing \(m_{1}\)=0, the likelihood function (6) attains its maximal value and the argument of the maximum is unique._ To clear the relationship between conditions DC 1, DC 2 and MC 1, MC 2, MC 3 we present two examples. In Example 1, DC 1, DC 2 are satisfied but MC 2 and MC 3 are not. In Example 2, DC 2 is not satisfied but MC 1, MC 2, MC 3 are. These examples expose that the sets of conditions DC and MC do not cover each other. Moreover, they support that MLE may exist uniquely even if DC 1 and DC 2 or MC 1, MC 2 and MC 3 do not hold. Therefore, we can see that neither conditions DC nor conditions MC are necessary conditions. **Example 1**: _Let n=3 and \(A_{1,2,2}\)=1, \(A_{1,2,3}\)=1, \(A_{2,3,3}\)=1, \(A_{1,3,1}\)=1 (see Figure 2). Now both DC 1 and DC 2 hold, but MC 3 does not._ **Example 2**: _Let n=3 and \(A_{1,2,1}\)=1, \(A_{1,2,3}\)=1, \(A_{2,3,2}\)=1 (see Figure 3). Now one can easily check that MC 1, MC 2 and MC 3 hold but DC 2 does not._ The above theorems can be generalized. 
Let us introduce the following set of conditions denoted by SC: **SC 1**: _There is at least one index pair_ \((i_{1},j_{1})\) _for which_ \(0<A_{i_{1},j_{1},2}\) _holds._ Let us introduce a graph belonging to the results of the comparisons as follows: let \(DG^{(SC)}\) be a directed graph, the nodes are the objects, and there is a directed edge from \(i\) to \(j\) if there is an opinion according to which \(i\) is "better" than \(j\), that is, \(0<A_{i,j,3}\). Now we can formulate the following conditions: **SC 2**: _There is a cycle in the directed graph \(DG^{(SC)}\)._ **SC 3**: _For any non-empty partition of the objects \(S\) and \(\overline{S}\), there exist at least two (not necessarily different) index pairs (\(i_{2}\),\(j_{2}\)) and (\(i_{3}\),\(j_{3}\)), \(i_{2},i_{3}\in S\), \(j_{2},j_{3}\in\overline{S}\), for which \(0<A_{i_{2},j_{2},3}\) and \(0<A_{i_{3},j_{3},1}\), or there exists an index pair (\(i_{4}\),\(j_{4}\)), \(i_{4}\in S\) and \(j_{4}\in\overline{S}\), for which \(0<A_{i_{4},j_{4},2}\)._ It is easy to see that condition SC 2 is more general than condition MC 2 and condition SC 3 is more general than condition DC 2. Condition SC 3 expresses that any subset and its complement are interconnected by an opinion "better" or an opinion "equal". Here condition DC 2 is replaced by a more general one: besides "better", the opinion "equal" can also serve as an appropriate judgment for the connection. Analysing the relationships between the sets of conditions DC, MC and SC, we can recognize that (A) DC 1, MC 1 and SC 1 coincide. (B) If DC 2 holds, then so do SC 2 and SC 3. (C) If MC 2 holds, so does SC 2. (D) If MC 3 holds, so does SC 3. These together show that conditions SC 1, SC 2, and SC 3 are a generalization of the conditions DC and MC. To show that SC is really a more general set of conditions, we present Example 3. **Example 3**: _Let n=4, \(A_{1,2,3}\)=1, \(A_{2,3,3}\)=1, \(A_{1,3,1}\)=1 and \(A_{1,4,2}\)=1 (see Figure 4). In this case neither condition DC 2 nor condition MC 2 holds, but SC 1, SC 2 and SC 3 do._ Now we state the following theorem. **Theorem 4**: _Let \(F\in\mathbb{F}\). If conditions SC 1, SC 2 and SC 3 hold, then, after fixing \(m_{1}=0\), the likelihood function (6) attains its maximum value and its argument is unique._ The proof of Theorem 4 can be found in Appendix A. We note that Theorem 2 is a straightforward consequence of Theorem 4. Unfortunately, conditions SC 1, SC 2 and SC 3 are not necessary conditions. One can prove that in the case of Example 4 there exists a unique maximizer of function (6), but SC 2 does not hold. **Example 4**: _Let \(n\)=3, \(A_{1,2,3}=1\), \(A_{2,3,3}=1\) and \(A_{1,3,2}=1\) (see Figure 5)._ ## 4 Comparisons of the efficiency of the conditions In this section, we investigate in some special situations which sets of conditions (conditions DC 1, DC 2; conditions MC 1, MC 2, MC 3; conditions SC 1, SC 2, SC 3) are fulfilled, i.e., are able to detect the existence and the uniqueness of the maximizer. From the applications' perspective, there are cases when the strengths of the objects to be ranked are close to each other and cases when they differ very much. On the other hand, there are cases when the judgment "equal" is frequent and cases when it is rare. Referring to sports: in football and in chess the result "draw" comes up often, but in handball rarely. The most general set of conditions is the set SC. These conditions are fulfilled most frequently among the three sets of conditions.
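In practice, all three sets of conditions reduce to elementary graph checks on the data matrix \(A\). The sketch below — our own illustration, not part of the paper — tests SC 1, SC 2 and SC 3; for SC 3 it uses the observation that the partition condition is equivalent to strong connectivity of the digraph in which every "better" judgment contributes a directed edge and every "equal" judgment contributes edges in both directions (this follows from the standard cut characterization of strong connectivity). The function names and the input encoding are assumptions.

```python
from collections import defaultdict

def check_sc(n, better, equal):
    """better: pairs (i, j) with 0 < A_{i,j,3} (i judged better than j);
    equal: pairs (i, j) with 0 < A_{i,j,2}. Objects are labeled 0..n-1.
    Returns the truth values of (SC 1, SC 2, SC 3)."""
    sc1 = len(equal) > 0                    # at least one "equal" judgment

    adj_b = defaultdict(set)                # "better" digraph DG^(SC)
    for i, j in better:
        adj_b[i].add(j)

    def has_cycle():                        # SC 2 via iterative DFS colors
        color = [0] * n                     # 0 = new, 1 = on stack, 2 = done
        for s in range(n):
            if color[s]:
                continue
            stack = [(s, iter(adj_b[s]))]
            color[s] = 1
            while stack:
                u, it = stack[-1]
                v = next(it, None)
                if v is None:
                    color[u] = 2
                    stack.pop()
                elif color[v] == 1:         # back edge closes a cycle
                    return True
                elif color[v] == 0:
                    color[v] = 1
                    stack.append((v, iter(adj_b[v])))
        return False

    adj, radj = defaultdict(set), defaultdict(set)   # mixed digraph for SC 3
    for i, j in better:
        adj[i].add(j); radj[j].add(i)
    for i, j in equal:                      # ties connect in both directions
        adj[i].add(j); adj[j].add(i)
        radj[i].add(j); radj[j].add(i)

    def reaches_all(g):                     # reachability from node 0
        seen, stack = {0}, [0]
        while stack:
            for v in g[stack.pop()]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        return len(seen) == n

    return sc1, has_cycle(), reaches_all(adj) and reaches_all(radj)
```

For Example 3 above (relabeled to start from 0), `check_sc(4, {(0, 1), (1, 2), (2, 0)}, {(0, 3)})` returns `(True, True, True)`, in line with the discussion; the DC and MC checks can be coded analogously.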
Nevertheless, it is interesting to see to what extent SC is more applicable than the other two sets of conditions. For that, we ran a large number of computer simulations with different parameter settings, and we investigated how frequently the conditions are satisfied and how frequently the maximum exists. We used Monte-Carlo simulation for the investigations. We fixed the differences between consecutive expectations and the value of the parameter \(d\). This means that in our cases \(\underline{m}=(0,h,2h,...,(n-1)h).\) We investigated 8 objects, and we randomly generated the pairs between which the comparisons exist. The number of comparisons was 8, 16, 32, 64. The results of the comparisons were also generated randomly, according to the probabilities (3), (4) and (5). In these random cases we checked whether conditions DC, MC, and SC are satisfied or not. Moreover, we performed the numerical optimizations and investigated whether the maximal value exists. We used 4 parameter ensembles, called situations, which are shown in Table 1. In the presented situations, if the value of \(h\) is small, then the strengths of the objects are close to each other. This implies that many "better-worse" pairs could be formed during the simulations. On the other hand, if the value of \(h\) is large, the strengths of the objects are far from each other, and we can expect only a few "better-worse" pairs, but a great number of "better" judgments. In terms of the number of "equal" judgments, if \(d\) is large, then many "equal" judgments could be formed during the simulations, while only a few when \(d\) is small. The set of conditions DC exploits the judgments "better" well, and it requires only a single "equal" judgment. However, the set of conditions MC can use the judgments "equal" for connections, as well as the pairs of "better-worse" judgments. Conditions SC do not require pairs, only "better" judgments forming a cycle. We recall that a single "better-worse" pair counts as a cycle. The judgments "equal" are well-applicable for this set of conditions, too. Table 1 summarizes the situations with the presumable ratios of the "equal" judgments and "better-worse" pairs. In addition, Tables 2, 3, 4 and 5 contain the numerical results of the simulations. The situations are ordered by the decreasing number of cases in which the maximum exists. Column MAX contains the number of the cases when the maximum exists. Columns DC/MAX, MC/MAX and SC/MAX present the ratios of the cases when the sets of conditions DC, MC, SC hold, respectively. We can see that, as the number of comparisons increases, the number of cases in which the maximal value exists and the ratios both increase. We draw attention to the fact that the values of the column SC/MAX are less than 1 on several occasions. This shows again that SC is not a necessary condition. We performed \(10^{8}\) simulations per situation. Table 2 presents the results in Situation I. In this case we can see that the DC/MAX rate is lower than the MC/MAX rate. We could predict this because there are many "equal" judgments. The SC/MAX rate is high even for 16 comparisons. In the case of 16 comparisons, SC is 3.5 times better than MC and over 100 times better than DC. Table 3 presents the results of Situation II. In this case, the rate of "equal" is low, which does not favour the set of conditions MC. This is also reflected in the ratio MC/MAX, which is much worse than the ratio DC/MAX. The set of conditions SC still stands out among the other conditions.
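For reference, one replication of the Monte-Carlo experiment described above amounts to the following sketch (our own illustration: the standard Gauss c.d.f. plays the role of \(F\), pairs are drawn uniformly at random, and all names are assumptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_counts(n=8, n_comp=16, h=0.05, d=0.5):
    """One replication: random pairs and judgments drawn with the
    probabilities (3)-(5); F is taken to be the standard Gauss c.d.f."""
    m = h * np.arange(n)                   # m = (0, h, 2h, ..., (n-1)h)
    A = np.zeros((n, n, 3), dtype=int)     # A[i, j, k-1] stores A_{i,j,k}
    for _ in range(n_comp):
        i, j = rng.choice(n, size=2, replace=False)
        p_worse = norm.cdf(-d - (m[i] - m[j]))
        p_equal = norm.cdf(d - (m[i] - m[j])) - p_worse
        k = rng.choice(3, p=[p_worse, p_equal, 1 - p_worse - p_equal])
        A[i, j, k] += 1
        A[j, i, 2 - k] += 1                # symmetry A_{i,j,k} = A_{j,i,4-k}
    return A
```

The counts returned this way can then be fed both to the graph checks sketched earlier and to the numerical maximization of the likelihood (6).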
Table 4 shows the results of Situation III. Here the maximum values exist more rarely than in the previous two cases. In this case the number of "equal" decisions is high, while the number of "better-worse" pairs is low, which is favorable for the set of conditions MC and disadvantageous for the set of conditions DC, as we can see in Table 4. It can also be seen that none of the methods is as good as in the previous tables in terms of detecting the existence of the maximum. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Situation & \(h\) & \(d\) & Rate of judgments "equal" & Rate of "better-worse" pairs \\ \hline I. & 0.05 & 0.5 & large & large \\ \hline II. & 0.05 & 0.05 & small & large \\ \hline III. & 0.5 & 0.5 & large & small \\ \hline IV. & 0.5 & 0.05 & small & small \\ \hline \end{tabular} \end{table} Table 1: Situations investigated \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Number of comparisons & MAX & DC/MAX & MC/MAX & SC/MAX \\ \hline 8 & 57216 & 0 & 0.0921421 & 0.1941765 \\ \hline 16 & 38664325 & 0.0058568 & 0.2019802 & 0.7097257 \\ \hline 32 & 95920581 & 0.239853 & 0.8280385 & 0.9895364 \\ \hline 64 & 99987066 & 0.883599 & 0.9988596 & 0.9999986 \\ \hline \end{tabular} \end{table} Table 2: Situation I. (\(h=0.05\), \(d=0.5\)) \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Number of comparisons & MAX & DC/MAX & MC/MAX & SC/MAX \\ \hline 8 & 371 & 0 & 0 & 0.4070081 \\ \hline 16 & 5448890 & 0.3228876 & 0.0009119 & 0.9937707 \\ \hline 32 & 58963802 & 0.8708119 & 0.1898881 & 0.9999976 \\ \hline 64 & 92019027 & 0.9963352 & 0.9506307 & 1 \\ \hline \end{tabular} \end{table} Table 3: Situation II. (\(h=0.05,d=0.05\)) SC stands out again from the other two sets of conditions. Nevertheless, SC is able to show the existence of the maximum in only 73% of the cases for 32 comparisons, compared to 99% in the previous situations. The set of conditions DC is almost useless: it detects the maximum in only 3.3% of the cases even when the number of comparisons equals 64. The set of conditions MC is slowly catching up and getting better, but for small numbers of comparisons (8, 16, 32) it is far from the much better SC conditions. Table 5 presents the results in Situation IV. In the latter case, the numbers of "equal" choices and "better-worse" pairs are small, which is unfavorable mainly for MC. In this situation, SC detects the existence of the maximal value exceptionally well. DC detects it less well, but it still works better than MC. Nevertheless, for small numbers of comparisons, both are orders of magnitude weaker than SC. In all situations we have found that when we make few comparisons, SC is superior to the other conditions. As we make more and more comparisons, both other methods get better and better, but they are always worse than SC. The clear conclusion from the four tables is that the set of conditions SC is much more effective than the others, especially for small numbers of comparisons. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Number of comparisons & MAX & DC/MAX & MC/MAX & SC/MAX \\ \hline 8 & 248 & 0 & 0.0282258 & 0.0604839 \\ \hline 16 & 1025064 & 0.0005717 & 0.0532279 & 0.4203006 \\ \hline 32 & 23544050 & 0.004597 & 0.2771048 & 0.7256062 \\ \hline 64 & 76946023 & 0.0333163 & 0.8141669 & 0.95373 \\ \hline \end{tabular} \end{table} Table 4: Situation III. (\(h=0.5,d=0.5\))
(\(h=0.5,d=0.5\)) \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Number of comparisons & MAX & DC/MAX & MC/MAX & SC/MAX \\ \hline 8 & 2 & 0 & 0 & 1 \\ \hline 16 & 44246 & 0.1146209 & 0.00020355 & 0.9370956 \\ \hline 32 & 2621654 & 0.35173555 & 0.0184299 & 0.9965827 \\ \hline 64 & 25579173 & 0.6329823 & 0.37594615 & 0.99996685 \\ \hline \end{tabular} \end{table} Table 5: Situation IV. (\(h=0.5,d=0.05\)) ## Summary In this paper, conditions guaranteeing the existence and uniqueness of the maximum likelihood parameter estimation are investigated. The case of a general log-concave probability density function is studied. If two options are allowed, the usually applied Ford's condition is generalized from the logistic distribution to a wide set of distributions. This condition is both necessary and sufficient. In the case of three options in a decision, a necessary and sufficient condition has not been proved, but there are two different sufficient conditions. We generalized them. A new set of conditions is proved which guarantees the existence and uniqueness of the maximizer. Moreover, we compare the conditions with the help of computer simulations, and we have found that the new set of conditions indicates the existence and uniqueness much more frequently than the previously known conditions. Consequently, it provides more effective methods for research such as that performed by Yan (Yan, 2016) and Bong and Rinaldo (Bong and Rinaldo, 2022). The research opens possibilities for further developments. It would be desirable to establish a necessary and sufficient condition for the existence and uniqueness of the maximizer in the case of three options in choices, and simulations may help with these findings. Further research is necessary to investigate the case of more than 3 options. These would be the subject of a future paper.
2301.07830
Fixed-point iterative algorithm for SVI model
The stochastic volatility inspired (SVI) model is widely used to fit the implied variance smile. Presently, most optimizer algorithms for the SVI model have a strong dependence on the input starting point. In this study, we develop an efficient iterative algorithm for the SVI model based on a fixed-point and least-square optimizer. Furthermore, we present the convergence results in certain situations for this novel iterative algorithm. Compared with the quasi-explicit SVI method, we demonstrate the advantages of the fixed-point iterative algorithm using simulation and market data.
Shuzhen Yang, Wenqing Zhang
2023-01-19T00:19:20Z
http://arxiv.org/abs/2301.07830v1
# Fixed-point iterative algorithm for SVI model ###### Abstract The stochastic volatility inspired (SVI) model is widely used to fit the implied variance smile. Presently, most optimizer algorithms for the SVI model have a strong dependence on the input starting point. In this study, we develop an efficient iterative algorithm for the SVI model based on a fixed-point and least-square optimizer. Furthermore, we present the convergence results in certain situations for this novel iterative algorithm. Compared with the quasi-explicit SVI method, we demonstrate the advantages of the fixed-point iterative algorithm using simulation and market data. KEYWORDS: Iterative algorithm; FPI-SVI; SVI; Quasi-explicit SVI ## 1 Introduction In 1999, Merrill Lynch developed the stochastic volatility inspired (SVI) parameterization model to describe the implied volatility smile appearing in the Black-Scholes option pricing formula. Owing to its profound relationship with implied variance and its excellent fit to observations, the SVI model is popular in financial markets [6, 7, 14]. However, there are some limitations to the SVI model when fitting financial market data. In particular, the SVI variance smiles are convex, and do not fit several market variance smiles. In this paper, we aim to develop an efficient iterative algorithm for the SVI model. Based on an appropriate change of variables, [8] showed that the SVI model in [6] is an appropriate solution for the implied variance in the Heston model considered in [4], where [4] established an approximate formula for the implied volatility functions in the Heston model. The SVI model provides a simpler expression for asymptotic implied volatility in the Heston model. [2] developed novel implied stochastic volatility models to reproduce the characteristics of the observed implied volatility smiles. Using implied volatility market data, [2] verified which stochastic volatility models are capable of reproducing the observed characteristics of implied volatility smiles (refer to [1]). [2] shares ideas with [6], in that the SVI model can reproduce the characteristics of implied volatility smiles. [9] studied arbitrage-free SVI volatility smiles and established a large class of closed-form SVI volatility smiles (refer to [10, 11, 12, 13]). Covering in particular the SVI model as an important example, [3] derived a closed-form formula for the Black-Scholes implied volatility. In most SVI optimization algorithms, an initial guess is used as a local minimizer, which leads to a strong dependence on the input starting point. From a practical standpoint, it is important to find a stable and efficient algorithm for the SVI model. Based on an initial estimate of two parameters in the SVI model, [15] introduced a quasi-explicit SVI method using an explicit least-squares optimizer, thus reducing the number of SVI model parameters from five to two. Nelder-Mead was used to optimize the remaining two parameters. However, the quasi-explicit SVI method performs well only with a smart initial guess (refer to [5]). This study focuses on establishing an efficient iterative algorithm for the SVI model. The SVI model has five parameters \((a,b,\rho,m,\sigma)\) that satisfy \[v(x)=a+b\left(\rho(x-m)+\sqrt{(x-m)^{2}+\sigma^{2}}\right), \tag{1.1}\] where \(x=\log K-\log F_{T}\), \(K\) denotes the strike price in the Black-Scholes formula, \(F_{T}\) denotes the price of the forward with maturity \(T\), and \(v(x)\) denotes the related implied variance. 
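For concreteness, the raw SVI slice (1.1) is straightforward to evaluate numerically. The short Python sketch below is purely illustrative (it is not code from the paper) and uses the grid and parameter values that reappear in the simulation section.

```python
import numpy as np

def svi_variance(x, a, b, rho, m, sigma):
    """Raw SVI total implied variance v(x), equation (1.1)."""
    return a + b * (rho * (x - m) + np.sqrt((x - m) ** 2 + sigma ** 2))

# Log-moneyness grid x_i = -1.9 + 0.1 (i - 1), i = 1, ..., 39.
x = np.linspace(-1.9, 1.9, 39)
v = svi_variance(x, a=0.5, b=0.5, rho=-0.5, m=-0.3, sigma=0.5)
```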
Each parameter of \((a,b,\rho,m,\sigma)\) captures certain properties of the implied variance \(v(x)\). We denote by \((x_{min},v_{min})\) the minimum point of the SVI model when \(\rho^{2}<1\), which admits an explicit formula in terms of the parameters \((a,b,\rho,m,\sigma)\), \[m=x_{min}+\frac{\rho(v_{min}-a)}{b(1-\rho^{2})},\quad\sigma=\frac{v_{min}-a}{b\sqrt{1-\rho^{2}}}. \tag{1.2}\] This formula motivated the development of a multistep iterative algorithm. Using an approach similar to that in [15], for a given initial guess \((m_{0},\sigma_{0})\), we can use the least-squares optimizer to obtain \((a_{0},b_{0},\rho_{0})\), and then use formula (1.2) to calibrate the initial guess as \((m_{1},\sigma_{1})\). The above method is an iterative algorithm that combines the fixed point \((x_{min},v_{min})\) and the least-squares optimizer. We call this method the fixed-point iterative SVI (FPI-SVI) algorithm. For the case \(\rho^{2}=1\), based on a simple coordinate rotation transformation, we translate it to the case \(\rho^{2}<1\). Furthermore, we provide a uniform FPI-SVI algorithm to deal with the cases \(\rho^{2}<1\) and \(\rho^{2}=1\). The main contributions of this study are twofold: (i). We develop a novel explicit iterative method based on the minimum point of the SVI model and the least-squares optimizer. As each step of the FPI-SVI algorithm has an explicit formula, the new algorithm is efficient. In the simulation and empirical analysis, the quasi-explicit SVI method requires almost 50 times the calculation time of the FPI-SVI algorithm with the same number of iterative steps. (ii). We establish some convergence results of the FPI-SVI algorithm in certain situations and show that the estimations of the parameters \((a,b,\rho,m,\sigma)\) satisfy the least-squares optimizer and the fixed-point constrained condition (1.2). In the simulation and empirical analysis, the FPI-SVI algorithm converges quickly, within fewer than 50 iterative steps. The constrained condition (1.2) can improve the accuracy of the estimations of the parameters \((a,b,\rho,m,\sigma)\). The remainder of this paper is organized as follows. Section 2 introduces a novel iterative algorithm for the parameters of the SVI model and establishes some convergence results for this algorithm under certain situations. Based on simulation and empirical analysis, we show some advantages of our FPI-SVI algorithm compared with the quasi-explicit SVI method in Sections 3 and 4. Finally, we conclude the study in Section 5. ## 2 FPI-SVI Algorithm The SVI model fits many real market data sets very well. However, most optimizer algorithms of the SVI model strongly depend on the input starting point. We now show the details of the FPI-SVI algorithm used in this study. We consider the case \(\rho^{2}<1\) and the minimum value of \(v(x)\), \[v_{min}=v(x_{min})=a+b\sigma\sqrt{1-\rho^{2}},\quad x_{min}=m-\frac{\rho\sigma}{\sqrt{1-\rho^{2}}},\] which implies that \[m=x_{min}+\frac{\rho(v_{min}-a)}{b(1-\rho^{2})}, \tag{2.1}\] and \[\sigma=\frac{v_{min}-a}{b\sqrt{1-\rho^{2}}}. \tag{2.2}\] Now, we consider the sequences \(\{x_{i},v_{i}\}_{i=1}^{N}\), which are the observed points of model (1.1). Determining the minimum point \((x_{min},v_{min})\) of model (1.1) is important in our new FPI-SVI algorithm. In the following, we present three methods for determining the minimum point \((x_{min},v_{min})\) when \(\rho^{2}<1\). 
**Remark 2.1**.: _We propose three methods for finding a better minimum point based on the observations \(\{x_{i},v_{i}\}_{i=1}^{N}\) for the FPI-SVI algorithm developed in this study:_ * _Method I: A natural method is to use the minimum point of \(\{x_{i},v_{i}\}_{i=1}^{N}\) to estimate \((x_{min},v_{min})\). Let \(p=\arg\min_{1\leq i\leq N}v_{i}\), and thus \((x_{min},v_{min})=(x_{p},v_{p})\);_ * _Method II: Based on the minimum point of \(\{x_{i},v_{i}\}_{i=1}^{N}\), we use a smooth function to approximate the local property of the SVI curve and calibrate the minimum point \((x_{min},v_{min})\) that satisfies_ \[x_{min}=x_{p},\ v_{min}=v_{p},\ p=\arg\min_{1\leq i\leq N}v_{i}.\] _We take three points from the observations_ \(\{x_{i},v_{i}\}_{i=1}^{N}\) _that are close to_ \((x_{p},v_{p})\)_,_ \[(x_{p-1},v_{p-1}),\ (x_{p},v_{p}),\ (x_{p+1},v_{p+1}).\] _We then use a quadratic function to fit the above three points,_ \[v(x)=\hat{c}_{1}x^{2}+\hat{c}_{2}x+\hat{c}_{3}\] _and obtain the parameters of the quadratic function_ \((\hat{c}_{1},\hat{c}_{2},\hat{c}_{3})\)_. Thus, we obtain the calibrated minimum point, denoted by_ \((x_{min},v_{min})\)_, and_ \[(x_{min},v_{min})=(-\frac{\hat{c}_{2}}{2\hat{c}_{1}},\frac{4\hat{c}_{1}\hat{c}_{3}-\hat{c}_{2}^{2}}{4\hat{c}_{1}}).\] _(A code sketch of this method is given below.)_ * _Method III: Based on the minimum point_ \((x_{p},v_{p})\) _of_ \(\{x_{i},v_{i}\}_{i=1}^{N}\)_, we guess that the minimum point of the SVI model (_1.1_) is close to the three points_ \[(x_{p-1},v_{p-1}),\ (x_{p},v_{p}),\ (x_{p+1},v_{p+1}).\] _We consider the set_ \(A:=\{(x,y):\ \sqrt{(x-x_{p})^{2}+(y-v_{p})^{2}}<r\}\)_, where_ \(0<r\) _and_ \(r\) _is the maximum value such that_ \((x_{p-1},v_{p-1})\notin A\) _or_ \((x_{p+1},v_{p+1})\notin A\)_. We can randomly choose several points (for example, 10) from the set_ \(A\)_, applying our FPI-SVI method with each chosen point regarded as the minimum point_ \((x_{min},v_{min})\)_. Finally, according to the performance of the FPI-SVI algorithm, we can find a better minimum point._ In practical analysis, we find that **Method II** is better for estimating the minimum point of model (1.1). Thus, we consider **Method II** in Section 4. Furthermore, we can use **Method III** to find a better estimation of the minimum point of model (1.1). However, **Method III** requires significantly more time than **Method II**. We now consider the following two cases: \(\rho^{2}<1\) and \(\rho^{2}=1\). For \(\rho^{2}<1\), we first establish the convergence results for the new FPI-SVI algorithm under certain situations. Subsequently, based on a simple coordinate rotation transformation, we show that one can translate the case \(\rho^{2}=1\) to \(\rho^{2}<1\). ### When \(\rho^{2}<1\) We denote by \[V=(v_{1},v_{2},\cdots,v_{N})^{\top}\in\mathbb{R}^{N\times 1},\quad Y(m,\sigma)=(X_{1},X_{2}(m,\sigma),X_{3}(m,\sigma))\in\mathbb{R}^{N\times 3},\] where \[X_{1}=(1,1,\cdots,1)^{\top}\,;\] \[X_{2}(m,\sigma)=(x_{1}-m,x_{2}-m,\cdots,x_{N}-m)^{\top}\,;\] \[X_{3}(m,\sigma)=\left(\sqrt{(x_{1}-m)^{2}+\sigma^{2}},\,\sqrt{(x_{2}-m)^{2}+\sigma^{2}},\cdots,\,\sqrt{(x_{N}-m)^{2}+\sigma^{2}}\right)^{\top}.\] We also construct an iterative algorithm based on the fixed point \((x_{min},v_{min})\) and the quasi-explicit SVI model. 
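As a concrete rendering of Method II from Remark 2.1 above (a sketch rather than the authors' code; clamping the index \(p\) away from the boundary is our own safeguard), the calibrated minimum point can be computed as follows.

```python
import numpy as np

def minimum_point_method2(x, v):
    """Method II of Remark 2.1: fit a parabola through the discrete
    minimiser of (x_i, v_i) and its two neighbours, then return the
    vertex as the calibrated minimum point (x_min, v_min)."""
    p = int(np.argmin(v))
    p = min(max(p, 1), len(v) - 2)          # keep both neighbours in range
    c1, c2, c3 = np.polyfit(x[p - 1:p + 2], v[p - 1:p + 2], 2)
    return -c2 / (2 * c1), (4 * c1 * c3 - c2 ** 2) / (4 * c1)
```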
We first consider the starting input point and take \[(m_{0},\sigma_{0})=(x_{min},v_{min}).\] From the observations \((Y(m_{0},\sigma_{0}),V)\), and denoting \(Y_{0}=Y(m_{0},\sigma_{0})\), we introduce the following least-squares optimization problem: \[\min_{\beta}\left(V-Y_{0}\beta\right)^{\top}\left(V-Y_{0}\beta\right), \tag{2.3}\] where \(\beta=(a,b\rho,b)^{\top}\). The optimizer for problem (2.3) is given as follows: \[\beta_{0}=\left[Y_{0}^{\top}Y_{0}\right]^{-1}Y_{0}^{\top}V. \tag{2.4}\] Thus, the estimations of \((a,b,\rho)\) are: \[(a_{0},b_{0},\rho_{0})=\left(\beta_{0}(1),\beta_{0}(3),\frac{\beta_{0}(2)}{\beta_{0}(3)}\right). \tag{2.5}\] By combining equations (2.1) and (2.2), we can obtain the value of \((m,\sigma)\) at step 1: \[m_{1}=x_{min}+\frac{\rho_{0}(v_{min}-a_{0})}{b_{0}(1-\rho_{0}^{2})},\quad\sigma_{1}=\frac{v_{min}-a_{0}}{b_{0}\sqrt{1-\rho_{0}^{2}}}. \tag{2.6}\] Now, we can repeat equations (2.4), (2.5), and (2.6) from step \(n\) to step \(n+1\). \[Y_{n}=Y(m_{n},\sigma_{n}); \tag{2.7}\] \[\beta_{n}=\left[Y_{n}^{\top}Y_{n}\right]^{-1}Y_{n}^{\top}V;\] (2.8) \[(a_{n},b_{n},\rho_{n})=\left(\beta_{n}(1),\beta_{n}(3),\frac{\beta_{n}(2)}{\beta_{n}(3)}\right);\] (2.9) \[m_{n+1}=x_{min}+\frac{\rho_{n}(v_{min}-a_{n})}{b_{n}(1-\rho_{n}^{2})},\quad\sigma_{n+1}=\frac{v_{min}-a_{n}}{b_{n}\sqrt{1-\rho_{n}^{2}}}. \tag{2.10}\] We summarize the FPI-SVI algorithm as follows:
```
Input: \((m_{0},\sigma_{0})=(x_{min},v_{min})\), \(\{x_{i},v_{i}\}_{i=1}^{N}\)
Output: Estimations of parameters \((a,b,\rho,m,\sigma)\)
Initialization: \(n=0,\ M=50\); Error \(\delta=1.0e-3\);
\(Y_{0}=Y(m_{0},\sigma_{0})\);
\(\beta_{0}=\left[Y_{0}^{\top}Y_{0}\right]^{-1}Y_{0}^{\top}V\);
\(L(0)=\sqrt{\left(V-Y_{0}\beta_{0}\right)^{\top}\left(V-Y_{0}\beta_{0}\right)}\);
\((a_{0},b_{0},\rho_{0})=\left(\beta_{0}(1),\beta_{0}(3),\frac{\beta_{0}(2)}{\beta_{0}(3)}\right)\).
while \(L(n)>\delta\ or\ n\leq M\) do
  \(n=n+1\);
  \(m_{n}=x_{min}+\frac{\rho_{n-1}(v_{min}-a_{n-1})}{b_{n-1}(1-\rho_{n-1}^{2})}\), \(\sigma_{n}=\frac{v_{min}-a_{n-1}}{b_{n-1}\sqrt{1-\rho_{n-1}^{2}}}\);
  \(Y_{n}=Y(m_{n},\sigma_{n})\);
  \(\beta_{n}=\left[Y_{n}^{\top}Y_{n}\right]^{-1}Y_{n}^{\top}V\);
  \(L(n)=\sqrt{\left(V-Y_{n}\beta_{n}\right)^{\top}\left(V-Y_{n}\beta_{n}\right)}\);
  \((a_{n},b_{n},\rho_{n})=\left(\beta_{n}(1),\beta_{n}(3),\frac{\beta_{n}(2)}{\beta_{n}(3)}\right)\).
end while
```
**Algorithm 1** FPI-SVI Algorithm **Remark 2.2**.: _Algorithm 1 provides the detailed pseudocode for the FPI-SVI algorithm. Each step of the FPI-SVI algorithm has an explicit formula. Therefore, FPI-SVI is an efficient iterative algorithm. We use simulation and financial market data to verify the performance of the FPI-SVI algorithm against the quasi-explicit SVI method in Sections 3 and 4._ It is theoretically challenging to directly show the convergence of the sequences \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n=1}^{\infty}\). Therefore, we present situations that guarantee the convergence of \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n=1}^{\infty}\). These situations are useful for practical analysis. **Lemma 2.1**.: _Let the sequences \(\{a_{n},b_{n},\rho_{n}\}_{n=1}^{\infty}\) satisfy the following conditions:_ _(i). \(0<\underline{L}_{b}\leq b_{n}\), \(\left|\rho_{n}\right|<L_{\rho}<1\), \(0\leq a_{n}\), \(n\geq 1\);_ _(ii). 
For a sufficiently small \(\delta>0\), there is a positive integer \(N_{0}\), such that_ \[\left|a_{N_{0}}-a_{N_{0}-1}\right|<\delta;\] \[\left|b_{N_{0}}-b_{N_{0}-1}\right|<\delta;\] \[\left|\rho_{N_{0}}-\rho_{N_{0}-1}\right|<\delta.\] _Then, we have that,_ \[\left|m_{N_{0}+1}-m_{N_{0}}\right|\leq L_{m}\delta,\ \left|\sigma_{N_{0}+1}- \sigma_{N_{0}}\right|\leq L_{\sigma}\delta,\] _where \(L_{m}\) and \(L_{\sigma}\) depend on \(\underline{L}_{b}\) and \(L_{\rho}\)._ Proof.: First, we prove inequality \(\left|m_{N_{0}+1}-m_{N_{0}}\right|\leq L_{m}\delta\). Note that \[m_{N_{0}+1}=x_{min}+\frac{\rho_{N_{0}}(v_{min}-a_{N_{0}})}{b_{N_{0}}(1-\rho_{N_ {0}}^{2})},\quad m_{N_{0}}=x_{min}+\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}-1})}{b _{N_{0}-1}(1-\rho_{N_{0}-1}^{2})},\] which deduces that \[\left|m_{N_{0}+1}-m_{N_{0}}\right|\leq \left|\frac{\rho_{N_{0}}(v_{min}-a_{N_{0}})}{b_{N_{0}}(1-\rho_{N_ {0}}^{2})}-\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}-1})}{b_{N_{0}-1}(1-\rho_{N_ {0}-1}^{2})}\right|\] \[\leq \left|\frac{\rho_{N_{0}}(v_{min}-a_{N_{0}})}{b_{N_{0}}(1-\rho_{N _{0}}^{2})}-\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}})}{b_{N_{0}}(1-\rho_{N_{0}- 1}^{2})}\right|\] \[+\left|\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}})}{b_{N_{0}}(1-\rho_ {N_{0}-1}^{2})}-\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}-1})}{b_{N_{0}}(1-\rho_{ N_{0}-1}^{2})}\right|\] \[+\left|\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}-1})}{b_{N_{0}}(1-\rho _{N_{0}-1}^{2})}-\frac{\rho_{N_{0}-1}(v_{min}-a_{N_{0}-1})}{b_{N_{0}-1}(1-\rho_ {N_{0}-1}^{2})}\right|\] \[\leq \frac{(1+L_{\rho}^{2})v_{min}}{\underline{L}_{b}(1-L_{\rho}^{2}) ^{2}}\delta+\frac{L_{\rho}}{\underline{L}_{b}(1-L_{\rho}^{2})}\delta+\frac{L_{ \rho}v_{min}}{\underline{L}_{b}^{2}(1-L_{\rho}^{2})}\delta\] \[= L_{m}\delta,\] where \[L_{m}=\frac{\underline{L}_{b}(1+L_{\rho}^{2})v_{min}+\underline{L}_{b}L_{\rho} (1-L_{\rho}^{2})+L_{\rho}(1-L_{\rho}^{2})v_{min}}{\underline{L}_{b}^{2}(1-L_{ \rho}^{2})^{2}}.\] Similarly, we have \(\left|\sigma_{N_{0}+1}-\sigma_{N_{0}}\right|\leq L_{\sigma}\delta\), where \[L_{\sigma}=\frac{L_{\rho}\underline{L}_{b}v_{min}+\underline{L}_{b}(1-L_{\rho} ^{2})+(1-L_{\rho}^{2})v_{min}}{\underline{L}_{b}^{2}(1-L_{\rho}^{2})^{\frac{3 }{2}}}.\] For a given positive integer \(N_{0}>0\), we introduce the following notations used in Lemma 2.2: \[L_{0,N_{0}}(1)= N\left|\beta_{N_{0}}(2)\right|L_{m}+N\left|\beta_{N_{0}}(3) \right|(L_{m}+L_{\sigma});\] \[L_{0,N_{0}}(2)= \left|X_{1}^{\top}[V-Y(m_{N_{0}},\sigma_{N_{0}})\beta_{N_{0}}^{1} ]\right|L_{m}+\left|X_{1}^{\top}X_{2}(m_{N_{0}},\sigma_{N_{0}})\beta_{N_{0}}(3 )\right|(L_{m}+L_{\sigma})+\left|\beta_{N_{0}}(2)\right|+\left|\beta_{N_{0}}(3 )\right|;\] \[L_{0,N_{0}}(3)= \left|X_{1}^{\top}[V-Y(m_{N_{0}},\sigma_{N_{0}})\beta_{N_{0}}^{2} ]\right|(L_{m}+L_{\sigma})+\left|X_{1}^{\top}X_{3}(m_{N_{0}},\sigma_{N_{0}}) \beta_{N_{0}}(2)\right|L_{m}+\left|\beta_{N_{0}}(2)\right|+\left|\beta_{N_{0} }(3)\right|,\] where \(\beta_{N_{0}}^{1}=(\beta_{N_{0}}(1),2\beta_{N_{0}}(2),\beta_{N_{0}}(3))^{\top}\) and \(\beta_{N_{0}}^{2}=(\beta_{N_{0}}(1),\beta_{N_{0}}(2),2\beta_{N_{0}}(3))^{\top}\). **Lemma 2.2**.: _Let the sequences \(\{a_{n},b_{n},\rho_{n}\}_{n=1}^{\infty}\) satisfy the following conditions:_ _(i). \(0<\underline{L}_{b}\leq b_{n}\), \(|\rho_{n}|<L_{\rho}<1\), \(0\leq a_{n}\), \(n\geq 1\);_ _(ii). For a sufficiently small \(\delta>0\), there is a positive integer \(N_{0}\), such that_ \[\left|a_{N_{0}}-a_{N_{0}-1}\right|<\delta;\] \[\left|b_{N_{0}}-b_{N_{0}-1}\right|<\delta;\] \[\left|\rho_{N_{0}}-\rho_{N_{0}-1}\right|<\delta;\] _(iii). 
The absolute value of each element of the vector_ \[[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}L_{0,N_{0}}\] _is smaller than \((1-\alpha)L\), where \(0<L<1\), \(0<\alpha<1\), \(L_{0,N_{0}}=(L_{0,N_{0}}(1),L_{0,N_{0}}(2),L_{0,N_{0}}(3))^{\top}\)._ _Then, we have that_ \[\left|a_{N_{0}+1}-a_{N_{0}}\right|<L\delta;\] \[\left|b_{N_{0}+1}-b_{N_{0}}\right|<L\delta;\] \[\left|\rho_{N_{0}+1}-\rho_{N_{0}}\right|<\frac{2L}{\underline{L}_{b}}\delta.\] Proof.: Note that \[V=(v_{1},v_{2},\cdots,v_{N})^{\top}\in\mathbb{R}^{N\times 1},\quad Y(m_{N_{0}+1},\sigma_{N_{0}+1})=(X_{1},X_{2}(m_{N_{0}+1},\sigma_{N_{0}+1}),X_{3}(m_{N_{0}+1},\sigma_{N_{0}+1}))\in\mathbb{R}^{N\times 3},\] where \[X_{1}=(1,1,\cdots,1)^{\top}\;;\] \[X_{2}(m_{N_{0}+1},\sigma_{N_{0}+1})=(x_{1}-m_{N_{0}+1},\cdots,x_{N}-m_{N_{0}+1})^{\top}\;;\] \[X_{3}(m_{N_{0}+1},\sigma_{N_{0}+1})=\left(\sqrt{(x_{1}-m_{N_{0}+1})^{2}+\sigma_{N_{0}+1}^{2}},\cdots,\sqrt{(x_{N}-m_{N_{0}+1})^{2}+\sigma_{N_{0}+1}^{2}}\right)^{\top}.\] The following two formulas are considered: \[\beta_{N_{0}}=[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}Y_{N_{0}}^{\top}V,\quad\beta_{N_{0}+1}=[Y_{N_{0}+1}^{\top}Y_{N_{0}+1}]^{-1}Y_{N_{0}+1}^{\top}V,\] where \(\beta_{N_{0}}=(a_{N_{0}},b_{N_{0}}\rho_{N_{0}},b_{N_{0}})^{\top}\) and \(\beta_{N_{0}+1}=(a_{N_{0}+1},b_{N_{0}+1}\rho_{N_{0}+1},b_{N_{0}+1})^{\top}\). It follows that \[\beta_{N_{0}+1}-\beta_{N_{0}}=[Y_{N_{0}+1}^{\top}Y_{N_{0}+1}]^{-1}Y_{N_{0}+1}^{\top}V-[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}Y_{N_{0}}^{\top}V. \tag{2.11}\] Let \(\Delta=(\Delta_{0},\Delta_{1},\Delta_{2})\), where \[\Delta_{0} =(0,0,\cdots,0)^{\top};\] \[\Delta_{1} =(\underline{\delta}_{1},\underline{\delta}_{2},\cdots,\underline{\delta}_{N})^{\top};\] \[\Delta_{2} =(\overline{\delta}_{1},\overline{\delta}_{2},\cdots,\overline{\delta}_{N})^{\top},\] and \(\underline{\delta_{i}}=(x_{i}-m_{N_{0}+1})-(x_{i}-m_{N_{0}})\), \(\overline{\delta_{i}}=\sqrt{(x_{i}-m_{N_{0}+1})^{2}+\sigma_{N_{0}+1}^{2}}-\sqrt{(x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}}^{2}},\ 1\leq i\leq N\). 
For any given \(i\), by Lemma 2.1, we have that \[\left|\underline{\delta_{i}}\right|=\left|m_{N_{0}+1}-m_{N_{0}}\right|<L_{m}\delta,\] and \[\left|\overline{\delta_{i}}\right|= \left|\sqrt{(x_{i}-m_{N_{0}+1})^{2}+\sigma_{N_{0}+1}^{2}}-\sqrt{ (x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}}^{2}}\right|\] \[\leq \left|\sqrt{(x_{i}-m_{N_{0}+1})^{2}+\sigma_{N_{0}+1}^{2}}-\sqrt{ (x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}+1}^{2}}\right|\] \[+\left|\sqrt{(x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}+1}^{2}}-\sqrt{ (x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}}^{2}}\right|\] \[\leq \frac{\left|(x_{i}-m_{N_{0}+1})^{2}-(x_{i}-m_{N_{0}})^{2}\right| }{\left|x_{i}-m_{N_{0}+1}\right|+\left|x_{i}-m_{N_{0}}\right|}+\frac{\left| \sigma_{N_{0}+1}^{2}-\sigma_{N_{0}}^{2}\right|}{\sigma_{N_{0}+1}+\sigma_{N_{0}}}\] \[\leq (L_{m}+L_{\sigma})\delta,\] and thus \[\left|\underline{\delta_{i}}\right|\leq L_{m}\delta;\] \[\left|\overline{\delta_{i}}\right|\leq(L_{m}+L_{\sigma})\delta;\] \[Y_{N_{0}+1}=\Delta+Y_{N_{0}}.\] We first consider the right part of (2.11): \[[Y_{N_{0}+1}^{\top}Y_{N_{0}+1}]^{-1}Y_{N_{0}+1}^{\top}V-[Y_{N_{0 }}^{\top}Y_{N_{0}}]^{-1}Y_{N_{0}}^{\top}V\] \[= [Y_{N_{0}}^{\top}Y_{N_{0}}+Y_{N_{0}}^{\top}\Delta+\Delta^{\top}Y_ {N_{0}}+\Delta^{\top}\Delta]^{-1}[Y_{N_{0}}^{\top}+\Delta^{\top}]V-[Y_{N_{0}}^ {\top}Y_{N_{0}}]^{-1}Y_{N_{0}}^{\top}V\] \[= [Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}\left[Y_{N_{0}}^{\top}Y_{N_{0}}[Y _{N_{0}}^{\top}Y_{N_{0}}+Y_{N_{0}}^{\top}\Delta+\Delta^{\top}Y_{N_{0}}+\Delta^ {\top}\Delta]^{-1}[Y_{N_{0}}^{\top}+\Delta^{\top}]V-Y_{N_{0}}^{\top}V\right]\] \[= [Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}\left[\left[I+[Y_{N_{0}}^{\top} \Delta+\Delta^{\top}Y_{N_{0}}+\Delta^{\top}\Delta][Y_{N_{0}}^{\top}Y_{N_{0}}]^ {-1}\right]^{-1}-I\right]Y_{N_{0}}^{\top}V\] \[+[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}\left[\left[I+[Y_{N_{0}}^{\top} \Delta+\Delta^{\top}Y_{N_{0}}+\Delta^{\top}\Delta][Y_{N_{0}}^{\top}Y_{N_{0}}]^ {-1}\right]^{-1}\Delta^{\top}V\right].\] From \[A= \left[I+B[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}\right]^{-1};\] \[B= Y_{N_{0}}^{\top}\Delta+\Delta^{\top}Y_{N_{0}}+\Delta^{\top}\Delta,\] we obtain \[I= A\left[I+B[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}\right]\] \[= A+AB[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1},\] \[A=I-AB[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}.\] Furthermore, from (2.11), it follows that \[\beta_{N_{0}+1}-\beta_{N_{0}}=[Y_{N_{0}}^{\top}Y_{N_{0}}]^{-1}A\left[\Delta^{ \top}V-B\beta_{N_{0}}\right],\] and \(\Delta^{\top}V-B\beta_{N_{0}}=(E_{1},E_{2},E_{3})^{\top}\), where: \[E_{1}= -\sum_{i=1}^{N}\left(\underline{\delta}_{i}\beta_{N_{0}}(2)+ \overline{\delta}_{i}\beta_{N_{0}}(3)\right);\] \[E_{2}= \sum_{i=1}^{N}\left(\underline{\delta}_{i}v_{i}-\underline{ \delta}_{i}\beta_{N_{0}}(1)-\left[2\underline{\delta}_{i}(x_{i}-m_{N_{0}})+ \underline{\delta}_{i}^{2}\right]\beta_{N_{0}}(2)\right.\] \[\left.-\left[\overline{\delta}_{i}(x_{i}-m_{N_{0}})+\underline{ \delta}_{i}\sqrt{(x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}}^{2}}+\underline{\delta} _{i}\overline{\delta}_{i}\right]\beta_{N_{0}}(3)\right);\] \[E_{3}= \sum_{i=1}^{N}\left(\overline{\delta}_{i}v_{i}-\overline{\delta} _{i}\beta_{N_{0}}(1)-\left[\overline{\delta}_{i}(x_{i}-m_{N_{0}})+\underline {\delta}_{i}\sqrt{(x_{i}-m_{N_{0}})^{2}+\sigma_{N_{0}}^{2}}+\underline{\delta} \overline{\delta}_{i}\right]\beta_{N_{0}}(2)\right.\] \[\left.-\left[2\overline{\delta}_{i}\sqrt{(x_{i}-m_{N_{0}})^{2}+ \sigma_{N_{0}}^{2}}+\overline{\delta}_{i}^{2}\right]\beta_{N_{0}}(3)\right).\] From the inequalities of \(\underline{\delta}_{i},\overline{\delta}_{i},\ 1\leq i\leq N\), and sufficiently small \(\delta\), we have \[|E_{i}|\leq 
L_{0,N_{0}}(i)\delta,\quad i=1,2,3,\] where \[L_{0,N_{0}}(1)= N\left|\beta_{N_{0}}(2)\right|L_{m}+N\left|\beta_{N_{0}}(3) \right|(L_{m}+L_{\sigma});\] \[L_{0,N_{0}}(2)= \left|X_{1}^{\top}[V-Y(m_{N_{0}},\sigma_{N_{0}})\beta_{N_{0}}^{1 }]\right|L_{m}+\left|X_{1}^{\top}X_{2}(m_{N_{0}},\sigma_{N_{0}})\beta_{N_{0}} (3)\right|(L_{m}+L_{\sigma})+\left|\beta_{N_{0}}(2)\right|+\left|\beta_{N_{0}} (3)\right|;\] \[L_{0,N_{0}}(3)= \left|X_{1}^{\top}[V-Y(m_{N_{0}},\sigma_{N_{0}})\beta_{N_{0}}^{2 }]\right|(L_{m}+L_{\sigma})+\left|X_{1}^{\top}X_{3}(m_{N_{0}},\sigma_{N_{0}}) \beta_{N_{0}}(2)\right|L_{m}+\left|\beta_{N_{0}}(2)\right|+\left|\beta_{N_{0}} (3)\right|,\] and \(\beta_{N_{0}}^{1}=(\beta_{N_{0}}(1),2\beta_{N_{0}}(2),\beta_{N_{0}}(3))^{\top}\), \(\beta_{N_{0}}^{2}=(\beta_{N_{0}}(1),\beta_{N_{0}}(2),2\beta_{N_{0}}(3))^{\top}\). For a sufficiently small \(\delta>0\), it is convenient to demonstrate that the absolute value of each element of the vector \[A\left[\Delta^{\top}V-B\beta_{N_{0}}\right],\] is smaller than that of vector \[\frac{\delta}{1-\alpha}L_{0,N_{0}},\quad L_{0,N_{0}}=(L_{0,N_{0}}(1),L_{0,N_{ 0}}(2),L_{0,N_{0}}(3))^{\top},\] where \(0<\alpha<1\) is a given constant. Then, from condition (iii), we have \[\left|\beta_{N_{0}+1}(i)-\beta_{N_{0}}(i)\right|\leq L\delta,\ L<1,\ i=1,2,3,\] which deduces that \[\left|a_{N_{0}+1}-a_{N_{0}}\right|<L\delta;\] \[\left|b_{N_{0}+1}\rho_{N_{0}+1}-b_{N_{0}}\rho_{N_{0}}\right|<L\delta;\] \[\left|b_{N_{0}+1}-b_{N_{0}}\right|<L\delta.\] From the inequality \(\left|b_{N_{0}+1}\rho_{N_{0}+1}-b_{N_{0}}\rho_{N_{0}}\right|<L\delta\), following that \[\left|b_{N_{0}+1}\rho_{N_{0}+1}-b_{N_{0}}\rho_{N_{0}}\right|\] \[= \left|b_{N_{0}+1}\rho_{N_{0}+1}-b_{N_{0}+1}\rho_{N_{0}}+b_{N_{0}+1 }\rho_{N_{0}}-b_{N_{0}}\rho_{N_{0}}\right|\] \[\geq \left|b_{N_{0}+1}\rho_{N_{0}+1}-b_{N_{0}+1}\rho_{N_{0}}\right|- \left|b_{N_{0}+1}\rho_{N_{0}}-b_{N_{0}}\rho_{N_{0}}\right|\] and thus \[\left|\rho_{N_{0}+1}-\rho_{N_{0}}\right|\leq\frac{2L}{\underline{L}_{b}}\delta.\] This completes the proof. Based on Lemmas 2.1 and 2.2, we present the main results. 
**Theorem 2.1**.: _Let conditions (i) and (ii) of Lemma 2.2 hold; condition (iii) in Lemma 2.2 is independent of \(N_{0}\), i.e,_ _(iii')._ \[\sup_{n\geq N_{0}}\left|[Y_{n}^{\top}Y_{n}]^{-1}L_{0,n}(i)\right|<(1-\alpha)L, \ i=1,2,3,\] _where \(0<\alpha<1\), \([Y_{n}^{\top}Y_{n}]^{-1}L_{0,n}(i)\) is the \(i\)-th element of the vector \([Y_{n}^{\top}Y_{n}]^{-1}L_{0,n}\), \(L_{0,n}\) is given in Lemma 2.2, \(0<L<1\), and \(2L<\underline{L}_{b}\)._ _Then, we have that_ \[\left|a_{n}-a_{n-1}\right|<\left(L\vee\frac{2L}{\underline{L}_{b} }\right)^{n-N_{0}}\delta;\] \[\left|b_{n}-b_{n-1}\right|<\left(L\vee\frac{2L}{\underline{L}_{b} }\right)^{n-N_{0}}\delta;\] \[\left|\rho_{n}-\rho_{n-1}\right|<\left(L\vee\frac{2L}{\underline{ L}_{b}}\right)^{n-N_{0}}\delta,\] _and the sequences \(\{a_{n},b_{n},\rho_{n}\}_{n\geq N_{0}}\) converge as \(n\to\infty\)._ Proof.: By applying Lemma 2.2, for \(N_{0}\), we have \[\left|a_{N_{0}+1}-a_{N_{0}}\right| <L\delta;\] \[\left|b_{N_{0}+1}-b_{N_{0}}\right| <L\delta;\] \[\left|\rho_{N_{0}+1}-\rho_{N_{0}}\right| <\frac{2L}{\underline{L}_{b}}\delta.\] Note that \(L<1\) and \(2L<\underline{L}_{b}\); then, we can apply Lemma 2.2 to the case \(N_{0}+1\), and obtain \[\left|a_{N_{0}+2}-a_{N_{0}+1}\right| <\left(L\vee\frac{2L}{\underline{L}_{b}}\right)^{2}\delta;\] \[\left|b_{N_{0}+2}-b_{N_{0}+1}\right| <\left(L\vee\frac{2L}{\underline{L}_{b}}\right)^{2}\delta;\] \[\left|\rho_{N_{0}+2}-\rho_{N_{0}+1}\right| <\left(L\vee\frac{2L}{\underline{L}_{b}}\right)^{2}\delta.\] We complete the proof using the induction method. **Remark 2.3**.: _Theorem 2.1 shows that when the difference in sequences \(\{a_{n},b_{n},\rho_{n}\}_{n\geq 1}\) at some index \(N_{0}\) is sufficiently small and the observations of model (1.1) satisfy bounded condition (iii'), the sequences \(\{a_{n},b_{n},\rho_{n}\}_{n\geq 1}\) converge as \(n\to\infty\). Although the conditions in Theorem 2.1 are rather complicated, we can easily verify these conditions in a practical analysis._ Based on Theorem 2.1, we introduce the following convergence results: **Theorem 2.2**.: _Let the sequences \(\{a_{n},b_{n},\rho_{n}\}_{n=1}^{\infty}\) satisfy the following conditions:_ _(i). \(\{a_{n},b_{n},\rho_{n}\}_{n=1}^{\infty}\) are bounded and \(\sup_{n\geq 1}|\rho_{n}|<1\)._ _(ii). There exists a positive integer \(N_{0}\) such that when \(n>N_{0}\), \(\{a_{n}\}_{n>N_{0}}\) and \(\{b_{n}\}_{n>N_{0}}\) increase with \(n\), and \(\{\rho_{n}\}_{n>N_{0}}\) decreases with \(n\)._ _or_ _(iii). A positive integer \(N_{0}\) exists such that when \(n>N_{0}\), \(\{a_{n}\}_{n>N_{0}}\) and \(\{b_{n}\}_{n>N_{0}}\) decrease with \(n\) and \(\{\rho_{n}\}_{n>N_{0}}\) increases with \(n\)._ _or_ _(iv). Sequences \(\{a_{n},b_{n},\rho_{n}\}_{n\geq 1}\) converge as \(n\to\infty\)._ _Then, we have \(\{m_{n},\sigma_{n}\}_{n>N_{0}}\) as decreasing sequences with \(n\) under conditions (i) and (ii), and \(\{m_{n},\sigma_{n}\}_{n>N_{0}}\) as increasing sequences with \(n\) under conditions (i) and (iii). We have limits \((a^{*},b^{*},\rho^{*},m^{*},\sigma^{*})\) for sequences \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n\geq 1}\) under conditions (i) and (ii), or (iii), or (iv)._ Proof.: It is easy to prove that \(\{m_{n},\sigma_{n}\}_{n>N_{0}}\) are decreasing sequences with \(n\) under conditions (i) and (ii). Similarly, we can show that \(\{m_{n},\sigma_{n}\}_{n>N_{0}}\) are increasing sequences with \(n\) under conditions (i) and (iii). 
For any given \(n>N_{0}\), from Formula (2.10), it follows that \[m_{n+1}=x_{min}+\frac{\rho_{n}(v_{min}-a_{n})}{b_{n}(1-\rho_{n}^{2})},\quad\sigma_{n+1}=\frac{v_{min}-a_{n}}{b_{n}\sqrt{1-\rho_{n}^{2}}}.\] According to conditions (i) and (ii), \(\{m_{n+1},\sigma_{n+1}\}_{n>N_{0}}\) decreases with \(n\). Therefore, we set \[(a^{*},b^{*},\rho^{*},m^{*},\sigma^{*})=\lim_{n\to\infty}(a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}),\] which satisfy \[m^{*}=x_{min}+\frac{\rho^{*}(v_{min}-a^{*})}{b^{*}(1-\rho^{*2})},\quad\sigma^{*}=\frac{v_{min}-a^{*}}{b^{*}\sqrt{1-\rho^{*2}}}. \tag{2.12}\] Similarly, we can establish formulas (2.12) for \((a^{*},b^{*},\rho^{*},m^{*},\sigma^{*})\) under conditions (i) and (iii), or (iv). Furthermore, we have that \((a^{*},b^{*},\rho^{*},m^{*},\sigma^{*})\) are the optimal parameters of the least-squares optimizer \[\min_{\beta=(a,b,\rho)^{\top}}\left(V-Y(m,\sigma)\beta\right)^{\top}\left(V-Y(m,\sigma)\beta\right) \tag{2.13}\] with the minimum-point constraint condition (2.12). **Remark 2.4**.: _Based on the results of Theorem 2.2, when some monotonicity conditions on \(\{a_{n},b_{n},\rho_{n}\}\) are satisfied, our FPI-SVI algorithm can obtain an optimal estimation of the parameters \((a,b,\rho,m,\sigma)\) for the least-squares optimizer (2.13) under the constraint (2.12) at the minimum point \((x_{min},v_{min})\). The advantage of the FPI-SVI algorithm is that we can use the constraint condition (2.12) to improve the accuracy of the least-squares estimations of (2.13)._ ### When \(\rho^{2}=1\) Now, we consider the case \(\rho^{2}=1\), which is an important situation in real financial markets. Note that when the curve is rotated about the origin \((0,0)\), the properties of the curve do not change. Let \(\rho=-1\). It is easy to demonstrate that the SVI model (1.1) has the limiting minimum point \((+\infty,a)\), and \(v(x)\) decreases with \(x\in\mathbb{R}\). Next, we show how to rotate the curve about the origin \((0,0)\), and translate the case \(\rho=-1\) to \(\rho^{2}<1\). Let \((x,v)\) be a point on \[v(x)=a+b(-(x-m)+\sqrt{(x-m)^{2}+\sigma^{2}}).\] Let \(0<\theta<\frac{\pi}{2}\). After rotating the curve, the new coordinates of the point \((x,v)\) are \[x^{\prime} =x\cos\theta-v\sin\theta,\] \[v^{\prime} =x\sin\theta+v\cos\theta.\] By a direct, if lengthy, calculation, we can show that \((x^{\prime},v^{\prime})\) satisfies the following SVI model \[v^{\prime}(x^{\prime})=a^{\prime}+b^{\prime}(\rho^{\prime}(x^{\prime}-m^{\prime})+\sqrt{(x^{\prime}-m^{\prime})^{2}+\sigma^{\prime 2}}),\] where \[\left\{\begin{array}{l}a^{\prime}=\frac{a}{\cos\theta}-a_{0},\\ a_{0}=m_{0}\cos^{4}\theta\tan\theta(-\frac{\tan\theta}{b}+2+\frac{\rho^{\prime}}{\cos^{2}\theta b}),\\ b^{\prime}=\frac{b}{(1+2b\tan\theta)\cos^{2}\theta},\\ \rho^{\prime}=(\frac{\tan\theta}{b}+\tan^{2}\theta-1)\cos^{2}\theta,\\ m^{\prime}=\frac{\cos^{2}\theta m_{0}-\frac{a_{0}}{b^{\prime}}}{(\frac{\tan\theta}{b}+\tan^{2}\theta-1)\cos^{2}\theta},\\ m_{0}=\frac{a\tan\theta}{\cos\theta}-\frac{m}{\cos\theta},\\ \sigma^{\prime 2}=\frac{\sigma^{2}b}{b^{\prime}}-\frac{2a_{0}^{2}}{b^{\prime 2}}+\frac{2a_{0}\cos^{2}\theta m_{0}}{b^{\prime}}.\end{array}\right.\] Requiring \(\rho^{\prime 2}<1\) implies that \[\left[(\frac{\tan\theta}{b}+\tan^{2}\theta-1)\cos^{2}\theta\right]^{2}<1\] and thus \[(\frac{\tan\theta}{b}+2\tan^{2}\theta)(\frac{\tan\theta}{b}-2)<0.\] Note that \(b>0\); thus \(\theta\) should satisfy \[0<\theta<\arctan(2b). 
\tag{2.14}\] In the following, we consider a simple example of the SVI model with parameters \((a,b,\rho,m,\sigma)=(0.5,0.5,-1,-0.3,0.5)\). In Figure 1, we rotate the blue curve, an SVI curve with parameters \((a,b,\rho,m,\sigma)=(0.5,0.5,-1,-0.3,0.5)\), to the red curve with \(\theta=\frac{\pi}{12}\), which satisfies condition (2.14). We can now use the method developed for the case \(\rho^{2}<1\) to estimate the parameters of the red curve. We then rotate the fitted curve back, which is used to fit the blue curve. We now introduce two error indices to verify our FPI-SVI algorithm. Let \(\hat{V}=(\hat{v}_{1},\hat{v}_{2},\cdots,\hat{v}_{N})^{\top}\) be the points on the fitted curve. The root average squared error (RASE) and root maximum squared error (RMSE) are defined as: \[\text{RASE}=\sqrt{\frac{(V-\hat{V})^{\top}(V-\hat{V})}{N}},\quad\text{RMSE}=\sqrt{\max_{1\leq i\leq N}(v_{i}-\hat{v}_{i})^{2}}.\] In Figure 2, we show that the RASE of the FPI-SVI algorithm is 1.2642e-04, and the RMSE is 1.9806e-04 after 50 steps, which demonstrates that our FPI-SVI algorithm is useful for the case \(\rho^{2}=1\). Figure 1: The SVI curve under \((a,b,\rho,m,\sigma)=(0.5,0.5,-1,-0.3,0.5)\) and the curve after rotation. **Remark 2.5**.: _In this part, we show a simple method to translate the case \(\rho=-1\) to \(\rho^{2}<1\) by rotating the SVI curve about the origin \((0,0)\). However, it is not possible to distinguish in advance whether the SVI model will be approximated by \(\rho^{2}=1\) or \(\rho^{2}<1\) in the real market. These results motivate us to develop a uniform FPI-SVI algorithm to deal with the cases \(\rho^{2}=1\) and \(\rho^{2}<1\), such that the algorithm does not depend on the minimum value \((x_{min},v_{min})\) of the SVI model. We investigate the uniform FPI-SVI algorithm in the following subsection._ ### A uniform FPI-SVI algorithm for \(\rho^{2}\leq 1\) Note that the FPI-SVI algorithm of Section 2.1 depends on the minimum point \((x_{min},v_{min})\) of the SVI model. When \(\rho^{2}=1\), we need to transform it into the case \(\rho^{2}<1\) by rotating the SVI curve. In this part, we investigate a method to deal with the cases \(\rho^{2}<1\) and \(\rho^{2}=1\) uniformly. In the following, we show the details of the algorithm. Based on a fixed observation \(\{x,v,v_{x}\}\), where \(v_{x}\) is the first derivative of \(v(x)\) with respect to \(x\), it follows that \[\left\{\begin{aligned} & v=a+b(\rho(x-m)+\sqrt{(x-m)^{2}+\sigma^{2}}),\\ & v_{x}=b\rho+\frac{b(x-m)}{\sqrt{(x-m)^{2}+\sigma^{2}}},\end{aligned}\right.\] and thus \[\left\{\begin{aligned} &\frac{v-a}{b}=\sigma\left(\rho\frac{x-m}{\sigma}+\sqrt{\left(\frac{x-m}{\sigma}\right)^{2}+1}\right),\\ &\frac{x-m}{\sigma}=\frac{v_{x}-b\rho}{b}\sqrt{\left(\frac{x-m}{\sigma}\right)^{2}+1}.\end{aligned}\right. \tag{2.15}\] From equation (2.15), we have the following representations for \(m\) and \(\sigma\). Figure 2: FPI-SVI algorithm: the left picture shows the estimation results of parameters in model (1.1), and the right picture shows the RASE of estimations of \(v\) in model (1.1). **Lemma 2.3**.: _Let \(\{x,v,v_{x}\}\) be the observations from the SVI curve. We have_ \[m =x-\frac{(v-a)(v_{x}-b\rho)}{b\rho v_{x}+b^{2}(1-\rho^{2})}, \tag{2.16}\] \[\sigma =\frac{(v-a)\sqrt{b^{2}-(v_{x}-b\rho)^{2}}}{b\rho v_{x}+b^{2}(1-\rho^{2})}.\] Proof.: We derive the explicit formulas for \(\sigma\) and \(m\), respectively. From the second equation of (2.15), it follows that \[\frac{x-m}{\sigma}=\frac{v_{x}-b\rho}{\sqrt{b^{2}-(v_{x}-b\rho)^{2}}}. 
\tag{2.17}\] Combining equation (2.17) and the first equation of (2.15), we have \[\frac{v-a}{b}=\frac{\rho v_{x}+b(1-\rho^{2})}{\sqrt{b^{2}-(v_{x}-b\rho)^{2}}}\sigma\] and thus \[\sigma=\frac{(v-a)\sqrt{b^{2}-(v_{x}-b\rho)^{2}}}{b\rho v_{x}+b^{2}(1-\rho^{2})}. \tag{2.18}\] Then, from equations (2.17) and (2.18), it follows that \[m=x-\frac{(v-a)(v_{x}-b\rho)}{b\rho v_{x}+b^{2}(1-\rho^{2})}. \tag{2.19}\] This completes the proof. **Remark 2.6**.: _When the observation point is the minimum point of \(v(x)\), we have \(v_{x}=0\). Then, the explicit formulas (2.16) reduce to formulas (2.1) and (2.2)._ **Remark 2.7**.: _Applying Lemma 2.3, we can run the FPI-SVI algorithm following the steps given in Section 2.1. Similarly to Remark 2.1, we show how to estimate \(v_{x}\) based on the observations \(\{x_{i},v_{i}\}_{i=1}^{N}\)._ * _Method I': A natural method is to use the central difference formula to estimate_ \(v_{x}\)_:_ \(v_{x,i}=\frac{v_{i+1}-v_{i-1}}{x_{i+1}-x_{i-1}},2\leq i\leq N-1\)_. As for_ \(v_{x,1}\) _and_ \(v_{x,N}\)_, we let_ \(v_{x,1}=v_{x,2}\) _and_ \(v_{x,N}=v_{x,N-1}\)_._ * _Method II': Based on a fixed observation_ \((x_{j},v_{j})\)_, we use a smooth function to approximate the local property of the SVI curve. We take three points from the observations_ \(\{x_{i},v_{i}\}_{i=1}^{N}\)_. Then, we consider the following three points:_ \((x_{j-1},v_{j-1}),\ (x_{j},v_{j}),\ (x_{j+1},v_{j+1})\)_. We use a quadratic function to fit the above three points,_ \[v(x)=\hat{h}_{1}x^{2}+\hat{h}_{2}x+\hat{h}_{3}\] _and obtain the parameters of the quadratic function_ \((\hat{h}_{1},\hat{h}_{2},\hat{h}_{3})\)_. Thus, we obtain the derivative of the fitted quadratic,_ \(v_{x}=2\hat{h}_{1}x+\hat{h}_{2}\)_, and an initial guess point_ \((x_{j},v_{j},2\hat{h}_{1}x_{j}+\hat{h}_{2})\)_._ ## 3 Simulation analysis In this section, we first use some true parameters of the SVI model (1.1) to verify the advantages of the FPI-SVI algorithm. We perform simulations and compare our FPI-SVI algorithm with the quasi-explicit SVI method. Consider the following true parameters of the SVI model (1.1): \[(a,b,\rho,m,\sigma)=(0.5,0.5,-0.5,-0.3,0.5). \tag{3.1}\] We take \(x_{i}=-1.9+0.1(i-1)\), and \(v_{i}=a+b\left(\rho(x_{i}-m)+\sqrt{(x_{i}-m)^{2}+\sigma^{2}}\right),\ 1\leq i\leq 39.\) Applying Algorithm 1, we run the multistep FPI-SVI iteration with \(M=50\) steps and ignore the error threshold \(\delta\). First, we present the convergence property of the sequences \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n\geq 1}\) in Figure 3. In Figure 3, the values of the sequences \(\{a_{n},b_{n},\rho_{n}\}_{n\geq 1}\) satisfy \[|a_{11}-a_{10}| \leq 0.01;\] \[|b_{11}-b_{10}| \leq 0.01;\] \[|\rho_{11}-\rho_{10}| \leq 0.01,\] and the values of the sequences \(\{m_{n},\sigma_{n}\}_{n\geq 1}\) satisfy \[|m_{11}-m_{10}| \leq 0.01;\] \[|\sigma_{11}-\sigma_{10}| \leq 0.01.\] Furthermore, the sequences \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n\geq 1}\) converge to their limits after the 20th iterative step, and the limits are \((0.5000,0.5000,-0.5000,-0.3000,0.5000)\), which are almost identical to the true values of the parameters \((a,b,\rho,m,\sigma)\). The calculation results verify the main results of Theorem 2.1. See Table 1. Figure 3: FPI-SVI algorithm: values of sequences \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n=1}^{50}\) along with the true values of parameters \((a,b,\rho,m,\sigma)=(0.5,0.5,-0.5,-0.3,0.5)\). 
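The convergence experiment above is easy to reproduce. The following numpy sketch is our paraphrase of Algorithm 1, not the authors' code: it replaces the normal-equations inverse \(\left[Y^{\top}Y\right]^{-1}Y^{\top}V\) with a least-squares solver for numerical stability, and it stops early once \(L(n)\leq\delta\) instead of always completing \(M\) steps.

```python
import numpy as np

def fpi_svi(x, v, x_min, v_min, max_iter=50, tol=1e-3):
    """FPI-SVI iteration: explicit least squares for beta = (a, b*rho, b),
    then the fixed-point update (2.10) for (m, sigma)."""
    m, sigma = x_min, v_min                  # (m0, sigma0) as in Algorithm 1
    for _ in range(max_iter):
        Y = np.column_stack([np.ones_like(x), x - m,
                             np.sqrt((x - m) ** 2 + sigma ** 2)])
        beta, *_ = np.linalg.lstsq(Y, v, rcond=None)
        a, b, rho = beta[0], beta[2], beta[1] / beta[2]
        if np.linalg.norm(v - Y @ beta) <= tol:          # L(n)
            break
        m = x_min + rho * (v_min - a) / (b * (1 - rho ** 2))
        sigma = (v_min - a) / (b * np.sqrt(1 - rho ** 2))
    return a, b, rho, m, sigma

# Reproduce the simulation: true parameters (3.1) on the grid of Section 3.
x = np.linspace(-1.9, 1.9, 39)
v = 0.5 + 0.5 * (-0.5 * (x + 0.3) + np.sqrt((x + 0.3) ** 2 + 0.25))
p = int(np.argmin(v))                        # Method II for (x_min, v_min)
c1, c2, c3 = np.polyfit(x[p - 1:p + 2], v[p - 1:p + 2], 2)
print(fpi_svi(x, v, -c2 / (2 * c1), (4 * c1 * c3 - c2 ** 2) / (4 * c1)))
```

With the Method II estimate of \((x_{min},v_{min})\), the iteration approximately recovers the true parameters \((0.5,0.5,-0.5,-0.3,0.5)\), consistent with Figure 3 and Table 1.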
Based on the FPI-SVI algorithm, the estimations and root average squared error (RASE) of model (1.1) are presented in Figure 4. We summarize the quasi-explicit SVI method of [15] as follows: \begin{tabular}{l} \hline **Algorithm 2:** Quasi-explicit SVI method \\ \hline **Input:**\((m_{0},\sigma_{0})=(x_{min},v_{min})\), \(\left\{x_{i},v_{i}\right\}_{i=1}^{N}\) \\ **Output:** Estimations of parameters \((a,b,\rho,m,\sigma)\) \\ \hline **Initialization**: \(n=0,\ M=50\); \\ \(Y_{0}=Y(m_{0},\sigma_{0})\); \\ \(\hat{\beta}_{0}=\left[Y_{0}^{\top}Y_{0}\right]^{-1}Y_{0}^{\top}V\). \\ \hline **while**\(n\leq M\)**do** \\ \(n=n+1\); \\ \((m_{n},\sigma_{n})=\min_{m,\sigma}\left(V-Y(m,\sigma)\hat{\beta}_{n-1}\right)^{\top}\left(V-Y(m,\sigma)\hat{\beta}_{n-1}\right)\); \\ \(Y_{n}=Y(m_{n},\sigma_{n})\); \\ \(\hat{\beta}_{n}=\left[Y_{n}^{\top}Y_{n}\right]^{-1}Y_{n}^{\top}V\). \\ \hline \end{tabular} We also perform 50 iteration steps of the quasi-explicit SVI method and present the convergence property of the sequences \(\left\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\right\}_{n\geq 1}\) in Figure 5. Based on the quasi-explicit SVI method, the estimations and RASE of model (1.1) are recorded in Figure 6. Figure 5: Quasi-explicit SVI method: values of sequences \(\{a_{n},b_{n},\rho_{n},m_{n},\sigma_{n}\}_{n=1}^{50}\) along with the true values of parameters \((a,b,\rho,m,\sigma)=(0.5,0.5,-0.5,-0.3,0.5)\). From Figure 6, with 50 iteration steps, the RASE of the estimations of \(v\) by the quasi-explicit SVI method is 0.0088 (see Table 1). To improve the accuracy of the quasi-explicit SVI method, we increase the number of iterative steps to 500; the parameter estimates at the 500th step are \((a_{n},b_{n},\rho_{n},m_{n},\sigma_{n})_{n=500}=(0.4992,0.5004,-0.4998,-0.3002,0.5015)\), which are almost identical to the true parameters \((0.5,0.5,-0.5,-0.3,0.5)\). For 500 iteration steps, the related RASE is 6.0895e-05, and the calculation time is 0.8482 s, measured on a personal computer (PC). See Figure 7. Figure 6: Quasi-explicit SVI method: the left picture shows the estimation results of parameters in model (1.1), and the right picture shows the RASE of estimations of \(v\) in model (1.1). Figure 7: Quasi-explicit SVI method: the left picture shows the estimation results of parameters in model (1.1), and the right picture shows the RASE of estimations of \(v\) in model (1.1). ## 4 Empirical analysis In the following, we use financial market data to verify our FPI-SVI algorithm. We consider the implied variance smiles of the soybean meal option on Apr. 07, 2022, in China; there are eight contracts: 1, m2205.DCE; 2, m2207.DCE; 3, m2209.DCE; 4, m2208.DCE; 5, m2211.DCE; 6, m2212.DCE; 7, m2301.DCE; and 8, m2303.DCE. We take m2205.DCE as an example to explain the details of the contracts: m2205.DCE denotes that the maturity of the underlying futures contract is in May 2022, and the option's maturity is the 5th trading day of Apr. 2022. The other seven contracts are interpreted similarly. We also use other Chinese market data: implied variance smiles of copper options on Apr. 07, 2022, with three contracts: 1, cu2205.SHFE; 2, cu2206.SHFE; and 3, cu2207.SHFE. We also consider four option contracts of the American market with the underlying asset SPX (S&P500). 
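For reference in the comparisons below, the baseline Algorithm 2 can be sketched in the same style. This is our reading of the pseudocode rather than the original implementation; scipy's Nelder-Mead stands in for the two-parameter optimizer over \((m,\sigma)\) mentioned in the introduction.

```python
import numpy as np
from scipy.optimize import minimize

def design(x, m, sigma):
    """Design matrix Y(m, sigma) of Section 2.1."""
    return np.column_stack([np.ones_like(x), x - m,
                            np.sqrt((x - m) ** 2 + sigma ** 2)])

def qe_svi(x, v, m0, sigma0, max_iter=100):
    """Quasi-explicit SVI (Algorithm 2): Nelder-Mead over (m, sigma)
    with beta frozen, then an explicit least-squares refit of beta."""
    m, sigma = m0, sigma0
    beta, *_ = np.linalg.lstsq(design(x, m, sigma), v, rcond=None)
    for _ in range(max_iter):
        obj = lambda p: np.sum((v - design(x, p[0], p[1]) @ beta) ** 2)
        m, sigma = minimize(obj, [m, sigma], method="Nelder-Mead").x
        beta, *_ = np.linalg.lstsq(design(x, m, sigma), v, rcond=None)
    a, b, rho = beta[0], beta[2], beta[1] / beta[2]
    return a, b, rho, m, sigma
```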
Applying the FPI-SVI algorithm, we first summarize the estimation results and error RASE of contracts 1-4 of the soybean meal option in Figure 8, and the estimation results and error RASE of contracts 5-8 in Figure 9. \begin{table} \begin{tabular}{c l l l l} \hline \hline & Method & \((a,b,\rho,m,\sigma)\) & RASE (RMSE) & Time \\ \hline \multirow{4}{*}{Case1} & True value & \((0.5000,0.5000,-0.5000,-0.3000,0.5000)\) & \(---\) & \(---\) \\ & QE-SVI & \((0.3545,0.5605,-0.4571,-0.3089,0.7368)\) & \(0.0088\) (\(0.0157\)) & \(0.0890\) \\ & FPI-SVI & \((0.5000,0.5000,-0.5000,-0.3000,0.5000)\) & \(7.1450\)e-11 (\(2.0170\)e-10) & \(0.0017\) \\ \hline \multirow{4}{*}{Case2} & True value & \((0.0500,0.6300,-0.5500,0.0360,0.2600)\) & \(---\) & \(---\) \\ & QE-SVI & \((0.0403,0.6344,-0.5433,0.0396,0.2756)\) & \(0.0014\) (\(0.0028\)) & \(0.0856\) \\ & FPI-SVI & \((0.0500,0.6300,-0.5500,0.0360,0.2600)\) & \(4.8897\)e-16 (\(1.1102\)e-15) & \(0.0016\) \\ \hline \multirow{4}{*}{Case3} & True value & \((0.0500,0.6300,0.5500,0.0360,0.2600)\) & \(---\) & \(---\) \\ & QE-SVI & \((0.0430,0.6332,0.5454,0.0336,0.2716)\) & \(0.0011\) (\(0.0021\)) & \(0.0941\) \\ & FPI-SVI & \((0.0500,0.6300,0.5500,0.0360,0.2600)\) & \(5.2685\)e-16 (\(1.3323\)e-15) & \(0.0016\) \\ \hline \multirow{4}{*}{Case4} & True value & \((0.1000,0.0600,-0.7000,0.2400,0.0600)\) & \(---\) & \(---\) \\ & QE-SVI & \((0.1000,0.0600,-0.5999,0.2401,0.0601)\) & \(2.7540\)e-06 (\(5.3243\)e-06) & \(0.0821\) \\ & FPI-SVI & \((0.1000,0.0600,-0.7000,0.2400,0.0600)\) & \(5.0124\)e-16 (\(8.3267\)e-16) & \(0.0015\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the FPI-SVI algorithm and the quasi-explicit SVI (QE-SVI) method with 50 steps. Figure 8: Estimations and RASE of contracts 1-4 of the soybean meal option. Figure 9: Estimations and RASE of contracts 5-8 of the soybean meal option. Figures 8 and 9 show that our FPI-SVI algorithm performs very well for the implied variance smiles of the soybean meal option with 100 iteration steps. The RASE of the implied variance smile of each contract reduces to the error level of 1.0e-4, except for contract 1. In practical analysis, an error level of 1.0e-4 meets the accuracy requirements. If there are several points far away from the SVI model, we cannot obtain a sufficiently small RASE, as observed for contract 1. This phenomenon helps to locate a point that violates the no-arbitrage principle. However, a possible reason is that we have not found a better minimum point based on the observations of contract 1. We then use the quasi-explicit SVI method to verify these assertions. We consider 10000 steps of the quasi-explicit SVI method for contract 1, and the RASE is 0.0021, which is almost identical to the RASE (0.0025) of the FPI-SVI algorithm with 100 iteration steps. Thus, it is reasonable to accept the estimations of the FPI-SVI algorithm. Furthermore, from the values of RASE in Figures 8 and 9, the FPI-SVI algorithm has almost converged to its limit after the 50th step. The quasi-explicit SVI method converges exceedingly slowly to the limit. We summarize the estimations of \((a,b,\rho,m,\sigma)\), the RASE, and the calculation time of the FPI-SVI algorithm and the quasi-explicit SVI method for the eight contracts in Table 2 and Figure 10. In Table 2, we consider the FPI-SVI algorithm and the quasi-explicit SVI method with 100 steps. We first analyze the calculation time in Table 2. 
For the FPI-SVI algorithm, the calculation time of each contract is stable at approximately 0.003 s; for the quasi-explicit SVI method, the calculation time of each contract is stable at approximately 0.15 s. Thus, with the same number of calculation steps, the quasi-explicit SVI method requires a calculation time of about 50 times that of the FPI-SVI algorithm. We now analyze the RASE of the FPI-SVI algorithm and the quasi-explicit SVI method. Figure 10 shows the RASE of each contract under the FPI-SVI algorithm and the quasi-explicit SVI method. For all contracts 1-8, the RASE of the FPI-SVI algorithm is uniformly smaller than that of the quasi-explicit SVI method. We now provide comments on the estimations of \((a,b,\rho,m,\sigma)\). For contracts 5-8, the estimations of \((a,b,\rho,m,\sigma)\) of the FPI-SVI algorithm and the quasi-explicit SVI method are almost identical. These results confirm that a RASE error level of 1.0e-4 meets the accuracy requirements. For contracts 1-4, the RASE of the quasi-explicit SVI method is almost 1.0e-3. Thus, we should reject the estimations of \((a,b,\rho,m,\sigma)\) of the quasi-explicit SVI method and accept those of the FPI-SVI algorithm. \begin{table} \begin{tabular}{c c c c c} \hline \hline Contract & Method & \((a,b,\rho,m,\sigma)\) & \multicolumn{1}{c}{RASE (RMSE)} & Time \\ \hline (1) m2205 & QE-SVI & \((-0.2327,2.3892,0.2238,0.0447,0.2243)\) & 0.0191 (0.0358) & 0.1430 \\ & FPI-SVI & \((0.2117,1.2957,0.0528,-0.0084,0.0329)\) & 0.0025 (0.0077) & 0.0036 \\ \hline (2) m2207 & QE-SVI & \((0.0994,0.6478,0.1046,-0.0262,0.2617)\) & 0.0017 (0.0042) & 0.1422 \\ & FPI-SVI & \((0.1966,0.4719,0.0828,-0.0393,0.1469)\) & 1.0674e-04 (3.7943e-04) & 0.0034 \\ \hline (3) m2209 & QE-SVI & \((0.1011,0.5969,0.1011,-0.0105,0.2322)\) & 0.0013 (0.0033) & 0.1520 \\ & FPI-SVI & \((0.1755,0.4475,0.0626,-0.0252,0.1380)\) & 1.0303e-04 (3.9441e-04) & 0.0035 \\ \hline (4) m2208 & QE-SVI & \((0.0974,0.4559,0.0941,-0.0061,0.1402)\) & 0.0011 (0.0022) & 0.1351 \\ & FPI-SVI & \((0.1212,0.3937,0.0502,-0.0168,0.0974)\) & 2.0978e-04 (7.7273e-04) & 0.0045 \\ \hline (5) m2211 & QE-SVI & \((0.1100,0.2932,0.0513,-0.0228,0.1530)\) & 1.1742e-04 (3.6059e-04) & 0.1325 \\ & FPI-SVI & \((0.1138,0.2845,0.0502,-0.0232,0.1440)\) & 4.0938e-05 (1.5652e-04) & 0.0042 \\ \hline (6) m2212 & QE-SVI & \((0.1127,0.2710,0.0413,-0.0269,0.1555)\) & 5.2660e-05 (1.5188e-04) & 0.1661 \\ & FPI-SVI & \((0.1128,0.2709,0.0498,-0.0249,0.1551)\) & 1.8714e-05 (5.4755e-05) & 0.0038 \\ \hline (7) m2301 & QE-SVI & \((0.1176,0.2390,0.0299,0.0202,0.1533)\) & 7.0524e-05 (1.8351e-04) & 0.1557 \\ & FPI-SVI & \((0.1126,0.2520,0.0502,0.0249,0.1659)\) & 1.6512e-05 (6.2067e-05) & 0.0033 \\ \hline (8) m2303 & QE-SVI & \((0.1229,0.1991,0.0618,-0.0285,0.1597)\) & 9.6102e-05 (2.0094e-04) & 0.1761 \\ & FPI-SVI & \((0.1123,0.2232,0.0486,-0.0300,0.1905)\) & 9.0554e-06 (1.6536e-05) & 0.0033 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the FPI-SVI algorithm and the quasi-explicit SVI method with 100 steps. To further compare the FPI-SVI algorithm and the quasi-explicit SVI method, we consider three contracts for the implied variance smiles of the copper option. The results in Table 3 lead to the same conclusions about the FPI-SVI algorithm and the quasi-explicit SVI method as those drawn from the implied variance smiles of the soybean meal option. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Contract & Method & \((a,b,\rho,m,\sigma)\) & \multicolumn{1}{c}{RASE (RMSE)} & Time \\ \hline (1) cu2205 & QE-SVI & \((-0.2172,2.5535,0.0854,-0.0020,0.1482)\) & 0.0062 (0.0107) & 0.1307 \\ & FPI-SVI & \((0.1043,1.3224,0.0941,-0.0061,0.0344)\) & 0.0035 (0.0064) & 0.0029 \\ \hline (2) cu2206 & QE-SVI & \((-0.1036,1.6181,-0.0601,-0.0391,0.1680)\) & 9.5164e-04 (0.0018) & 0.1573 \\ & FPI-SVI & \((-0.1312,1.6849,-0.0125,-0.0296,0.1775)\) & 0.0013 (0.0024) & 0.0032 \\ \hline (3) cu2207 & QE-SVI & \((-0.0006,0.9977,-0.0174,-0.0375,0.1766)\) & 8.0186e-04 (0.0018) & 0.1603 \\ & FPI-SVI & \((-0.0433,1.1016,0.0015,-0.0336,0.1990)\) & 8.5023e-04 (0.0020) & 0.0031 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the FPI-SVI algorithm and the quasi-explicit SVI method with 100 steps. Figure 10: RASEs of the FPI-SVI algorithm and the quasi-explicit SVI method. Now, we consider four contracts with the underlying asset SPX (S&P500) on Feb. 15, 2019. The maturities are Mar. 15, 2019, Jun. 21, 2019, Sept. 20, 2019, and Dec. 20, 2019. Earlier in this section, we verified that the FPI-SVI algorithm performs very well, at least for the soybean meal and copper options in the Chinese financial market. In Table 4 and Figure 11, we show that the FPI-SVI algorithm is still useful for the stock index SPX option in the American financial market. Figure 11: FPI-SVI: Estimations and RASE of 4 contracts of the SPX option. ## 5 Conclusion The SVI model, applied to describe implied volatility smiles, is widely used in financial markets because of its profound relationship with implied variance and its excellent fit to observations. Presently, most optimizer algorithms of the SVI model strongly depend on the input starting point. We develop an explicit iterative algorithm that combines the minimum point of the SVI model with least-squares optimization to establish a stable and efficient algorithm for the SVI model. We establish some convergence results for the FPI-SVI algorithm under certain situations and demonstrate the excellent performance of this algorithm using simulation and market data. We also compared the FPI-SVI algorithm with the quasi-explicit SVI method regarding the accuracy of the parameter estimations, convergence properties, and RASE (RMSE). The main result shows that, using the same number of iterative steps, the quasi-explicit SVI method requires almost 50 times the calculation time of the FPI-SVI algorithm. Furthermore, the performance of the parameter estimation and the RASE (RMSE) of the FPI-SVI algorithm are better than those of the quasi-explicit SVI method, which implies the stability and efficiency of the FPI-SVI algorithm. The FPI-SVI algorithm has several advantages in determining a better estimation of the parameters of the SVI model. We point out that the FPI-SVI algorithm cannot guarantee arbitrage-free parameters of the SVI model, as suggested by [12]. Developing an arbitrage-free SVI model is important for financial markets, and we intend to study this further in future work.
2305.04616
Optimal Scheduling of Agents in ADTrees: Specialised Algorithm and Declarative Models
Expressing attack-defence trees in a multi-agent setting allows for studying a new aspect of security scenarios, namely how the number of agents and their task assignment impact the performance, e.g. attack time, of strategies executed by opposing coalitions. Optimal scheduling of agents' actions, a non-trivial problem, is thus vital. We discuss associated caveats and propose an algorithm that synthesises such an assignment, targeting minimal attack time and using the minimal number of agents for a given attack-defence tree. We also investigate an alternative approach for the same problem using Rewriting Logic, starting with a simple and elegant declarative model, whose correctness (in terms of schedule's optimality) is self-evident. We then refine this specification, inspired by the design of our specialised algorithm, to obtain an efficient system that can be used as a playground to explore various aspects of attack-defence trees. We compare the two approaches on different benchmarks.
Jaime Arias, Carlos Olarte, Laure Petrucci, Łukasz Maśko, Wojciech Penczek, Teofil Sidoruk
2023-05-08T10:51:08Z
http://arxiv.org/abs/2305.04616v2
# Optimal Scheduling of Agents in ADTrees: Specialised Algorithm and Declarative Models ###### Abstract Expressing attack-defence trees in a multi-agent setting allows for studying a new aspect of security scenarios, namely how the number of agents and their task assignment impact the performance, _e.g._ attack time, of strategies executed by opposing coalitions. Optimal scheduling of agents' actions, a non-trivial problem, is thus vital. We discuss associated caveats and propose an algorithm that synthesises such an assignment, targeting minimal attack time and using the minimal number of agents for a given attack-defence tree. We also investigate an alternative approach for the same problem using Rewriting Logic, starting with a simple and elegant declarative model, whose correctness (in terms of schedule's optimality) is self-evident. We then refine this specification, inspired by the design of our specialised algorithm, to obtain an efficient system that can be used as a playground to explore various aspects of attack-defence trees. We compare the two approaches on different benchmarks. attack-defence trees, multi-agent systems, scheduling, rewriting logic ## I Introduction Security of safety-critical multi-agent systems [1] is a major challenge. Attack-defence trees (ADTrees) have been developed to evaluate the safety of systems and to study interactions between attacker and defender parties [2, 3]. They provide a simple graphical formalism of possible attacker's actions to be taken in order to attack a system and the defender's defences employed to protect the system. Recently, it has been proposed to model ADTrees in the formalism of asynchronous multi-agent systems (AMAS) extended with certain ADTree characteristics [4, 5]. In this setting, one can reason about attack/defence scenarios considering agent distributions over tree nodes and their impact on the feasibility and performance (quantified by metrics such as time and cost) of attacking and defending strategies executed by specific coalitions. ### _Minimal schedule with minimal number of agents_ The time metric, on which we focus here, is clearly affected by both the number of available agents and their distribution over ADTree nodes. Hence, there arises the problem of optimal scheduling, _i.e._ obtaining an assignment that achieves the lowest possible time, while using the minimum number of agents required for an attack to be feasible. To that end, we first preprocess the input ADTree, transforming it into a Directed Acyclic Graph (DAG), where specific types of ADTree gates are replaced with sequences of nodes with normalised time (_i.e._ duration of either zero, or the greatest common factor across all nodes of the original ADTree). Because some ADTree constructs (namely, OR gates and defences) induce multiple alternative outcomes, we execute the scheduling algorithm itself on a number of independently considered DAG variants. For each such variant, we synthesise a schedule multiple times in a divide-and-conquer strategy, adjusting the number of agents until the lowest one that produces a valid assignment is found. Since we preserve labels during the preprocessing step, all DAG nodes are traceable back to specific gates and leaves of the original ADTree. Thus, in the final step we ensure that the same agent is assigned to nodes of the same origin, reshuffling the schedule if necessary. 
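To illustrate the divide-and-conquer step, the Python sketch below is schematic only: the actual tool operates on preprocessed ADTree DAGs with zero-duration nodes, label constraints and multiple DAG variants, all omitted here. It bisects the crew size using a highest-levels-first list-scheduling pass on a unit-cost DAG, assuming, as in our setting, that the achievable schedule length is monotone in the number of agents.

```python
def schedule_length(preds, agents):
    """Highest-level-first list scheduling of a unit-cost DAG.
    preds maps each node to the set of its predecessors."""
    succs = {u: [] for u in preds}
    for u, ps in preds.items():
        for p in ps:
            succs[p].append(u)
    level = {}
    def lvl(u):  # length of the longest path from u to a sink
        if u not in level:
            level[u] = 1 + max((lvl(s) for s in succs[u]), default=0)
        return level[u]
    indeg = {u: len(ps) for u, ps in preds.items()}
    ready = [u for u in preds if indeg[u] == 0]
    steps = 0
    while ready:
        batch = sorted(ready, key=lvl, reverse=True)[:agents]
        ready = [u for u in ready if u not in batch]
        for u in batch:
            for s in succs[u]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
        steps += 1
    return steps

def min_agents(preds):
    """Bisect the crew size: the smallest number of agents whose
    schedule still matches the critical-path length (the optimum)."""
    target = schedule_length(preds, len(preds))  # unconstrained optimum
    lo, hi = 1, len(preds)
    while lo < hi:
        mid = (lo + hi) // 2
        if schedule_length(preds, mid) <= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Tiny example: a -> c, b -> c, c -> d; two agents already achieve
# the critical-path length of 3 time steps.
preds = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
print(min_agents(preds))  # 2
```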
### _An alternative approach: Rewriting Logic_

We also study the optimal scheduling problem for ADTrees through the lens of Rewriting Logic (RL) [6] (see also the surveys in [7, 8]). RL is a formal model of computation whose basic building unit is a rewrite theory \(\mathcal{R}\). Roughly, the states of the modelled system are encoded in \(\mathcal{R}\) via algebraic data types, and the (non-deterministic) transitions of the system are expressed by a set of (conditional) rewriting rules. If the theory \(\mathcal{R}\) satisfies certain executability conditions (making the mathematical and the execution semantics of \(\mathcal{R}\) coincide), \(\mathcal{R}\) can be executed in Maude [9], a high-performance language and system supporting rewriting logic. We start with a rewrite theory giving meaning to the different gates of an ADTree. The correctness of such a specification is self-evident and it allows us to solve the optimal scheduling problem by exploring, via a search procedure, all the possible paths leading to an attack. Unfortunately, this procedure does not scale well for complex ADTrees. Hence, we refine the first rewrite theory by incorporating some of the design principles devised in our specialised algorithm. We better control the non-deterministic choices in the specification, thus reducing the search space. The resulting theory can be effectively used in the case studies presented here and it opens the possibility of exploring different optimisation ideas and different aspects of ADTrees as discussed in Section VII.

### _Contributions_

In this paper, we: (\(i\)) present and prove the correctness of an algorithm for ADTrees which finds an optimal assignment of the minimal number of agents for all possible DAG variants of a given attack/defence scenario, (\(ii\)) show the scheduling algorithm's complexity to be quadratic in the number of nodes of its preprocessed input DAG, (\(iii\)) implement the algorithm in our tool ADT2AMAS, and (\(iv\)) propose a rewrite theory, implemented in Maude, for a general solution to the considered problem, evaluate results and compare them against those of our specialised algorithm.

### _Related work_

ADTrees [2, 10] are a popular formalism that has been implemented in a broad range of analysis frameworks [11, 12, 13, 14], comprehensively surveyed in [15, 16]. They remain extensively studied today [17]. Of particular relevance is the ADTree to AMAS translation [5], based on the semantics from [18]. Furthermore, the problem discussed in this paper is clearly related to parallel program scheduling [19, 20]. Due to time normalisation, it falls into the category of Unit Computational Cost (UCC) graph scheduling problems, which can be effectively solved for tree-like structures [21], but cannot be directly applied to a set of DAGs. Although a polynomial solution for interval-ordered DAGs was proposed by [22], their algorithm does not guarantee the minimal number of agents. Due to zero-cost communication in all considered graphs, the problem can also be classified as No Communication (NC) graph scheduling. A number of heuristic algorithms using list scheduling were proposed [19], including Highest Levels First with No Estimated Times (HLFNET), Smallest Co-levels First with no Estimated Times (SCFNET), and Random, where nodes in the DAG are assigned priorities randomly. Variants assuming non-uniform node computation times are also considered, but are not applicable to the problem solved in this paper.
Furthermore, this class of algorithms does not aim at finding a schedule with the minimal number of processors or agents. On the other hand, known algorithms that include such a limit, _i.e._ for the Bounded Number of Processors (BNP) class of problems, assume non-zero communication cost and rely on the clustering technique, reducing communication, and thus schedule length, by mapping nodes to processing units. Hence, these techniques are not directly applicable. The algorithm described in this paper can be classified as list scheduling with a fusion of HLFNET and SCFNET heuristics, but with an additional restriction on the number of agents used. The length of a schedule is determined as the length of the critical path of a graph. The minimal number of agents needed for the schedule is found with bisection. Branching schedules analogous to the variants discussed in Section III have been previously explored, albeit using different models that either include probability [23] or require an additional DAG to store possible executions [24]. Zero duration nodes are also unique to the ADTree setting. To the best of our knowledge, this is the first work dealing with agents in this context. Rather, scheduling in multi-agent systems typically focuses on agents' _choices_ in cooperative or competitive scenarios, _e.g._ in models such as BDI [25, 26]. Rewriting logic and Maude have been extensively used for the formal analysis and verification of systems. The reader can find in [7, 8] a survey of the different techniques and applications in this field. In the context of ADTrees, the work in [27] and the companion tool SPTool define a rewrite theory that allows for checking the equivalence between ADTrees featuring sequential AND gates. The work in [28] extends the SPTool by adding different backend theories written in Maude: one for checking equivalence of ADTrees and one implementing a linear-logic-based semantics [29] for it. In none of these works and tools is the problem of finding the optimal scheduling for agents considered.

### _Outline_

The next section briefly recalls the ADTree formalism. In Section III, several preprocessing steps are discussed, including transforming the input tree to a DAG, normalising node attributes, and handling different types of nodes. Section IV describes the main algorithm, as well as a proof of its correctness and optimality. The algorithm, implemented in our tool ADT2AMAS [30], is benchmarked in Section V. The rewriting logic specification is described and evaluated in Section VI, and we discuss the pros and cons with respect to the specialised algorithm proposed here. Section VII concludes the paper and provides perspectives for future work. This paper is an extended version of [31]. From the theoretical point of view, the rewriting semantics in Section VI is completely new. From the practical side, we provide another tool, ADT2MAUDE, that enacts the rewriting approach.

## II Attack-Defence Trees

To keep the paper self-contained, we briefly recall the basics of ADTrees and their translation to a multi-agent setting.

### _Attack-defence trees_

ADTrees are a well-known formalism that models security scenarios as an interplay between attacking and defending parties. Figure 1 depicts the basic constructs used throughout the paper. For a more comprehensive overview, we refer the reader to [5]. Attacking and defending actions are depicted in red and green, respectively. Leaves represent individual actions at the highest level of granularity.
Different types of gates allow for modelling increasingly broad intermediary goals, all the way up to the root, which corresponds to the overall objective. OR and AND gates are defined analogously to their logical counterparts. SAND is a sequential variant of the latter, _i.e._ the entire subtree \(a_{i}\) needs to be completed before handling \(a_{i+1}\). While only shown in attacking subtrees here, these gates may refine defending goals in the same way. Reactive or passive countering actions can be expressed using gates \(\mathsf{CAND}\) (counter defence; successful iff \(a\) succeeds and \(d\) fails), \(\mathsf{NODEF}\) (no defence; successful iff either \(a\) succeeds or \(d\) fails), and \(\mathsf{SCAND}\) (failed reactive defence; sequential variant of \(\mathsf{CAND}\), where \(a\) occurs first). We collectively refer to gates and leaves as _nodes_. ADTree nodes may additionally have numerical _attributes_, _e.g._ the time needed for an attack, or its financial cost. Boolean functions over these attributes, called _conditions_, may then be associated with counter-defence nodes to serve as additional constraints for the success or failure of a defending action. In the following, the _treasure hunters_ ADTree in Figure 2 will be used as a running example. While both the gatekeeper b and the door f need to be taken care of to steal the treasure (ST), just one escape route (either h or e) is needed to flee (GA), with TF enforcing sequentiality.

### _Translation to extended AMAS_

Asynchronous multi-agent systems (AMAS) [18] are essentially networks of automata, which synchronise on shared transitions and interleave private ones for asynchronous execution. An extension of this formalism with attributes and conditional constraints to model ADTrees, and the translation of the latter to extended AMAS, were proposed in [5]. Intuitively, each node of the ADTree corresponds to a single automaton in the resulting network. Specific patterns, embedding reductions to minimise state space explosion [4], are used for different types of ADTree constructs. As the specifics exceed the scope and space of this paper, we refer the reader to [18] for the AMAS semantics, and to [5] for the details on the translation. In the multi-agent setting, groups of agents working for the attacking and defending parties can be considered. Note that the _feasibility_ of an attack is not affected by the number or distribution of agents over ADTree nodes, as opposed to some _performance_ metrics, such as time (_e.g._ a lone agent can handle all the actions sequentially, albeit usually much slower).

### _Assignment of agents for ADTrees_

Consequently, the optimal distribution of agent coalitions is of vital importance for both parties, allowing them to prepare for multiple scenarios, depending on how many agents they can afford to recruit (thereby delaying or speeding up the completion of the main goal). For instance, the thieves in Figure 2, knowing the police response time, would have to plan accordingly by bringing a sufficiently large team and, more importantly, schedule their tasks to make the most of these numbers. Thus, we can formulate two relevant and non-trivial scheduling problems. _The first one_, not directly addressed here, is obtaining the assignment using a given number of agents that results in optimal execution time. _The second one_, on which we focus in this paper, is synthesising an assignment that achieves a particular execution time using the least possible number of agents.
Typically, the minimum possible time is of interest here. As we show in Section III, this time can be computed from the structure of the input ADTree itself (and, of course, the time attribute of nodes). However, our approach can also target a longer attack time if desired. In the next section, we discuss it in more detail as normalisation of the input tree is considered, along with several other preprocessing steps.

## III Preprocessing the tree

In this preprocessing step, an ADTree is transformed into DAGs (_Directed Acyclic Graphs_) of actions of the same duration. This is achieved by splitting nodes into sequences of such actions, mimicking the scheduling enforced by the ADTree's sequential gates, and considering the different possibilities of defences. Therefore, we introduce a sequential node \(\mathsf{SEQ}\), which only waits for some input, processes it and produces some output. It is depicted as a lozenge (see Figure 3(a)). In what follows, we assume that one time unit is the greatest common factor of time durations across all nodes in the input ADTree, _i.e._ \(t_{unit}=\mathit{gcf}(t_{N_{1}}\dots t_{N_{|ADTree|}})\). By _time slots_, we refer to fragments of the schedule whose length is \(t_{unit}\). That is, after normalisation, one agent can handle exactly one node of non-zero duration within a single time slot. Note that, during the preprocessing steps described in this section, node labels are preserved to ensure backwards traceability. Their new versions are either primed or indexed.

### _Nodes with no duration_

It happens that several nodes have no time parameter set, and are thus considered to have a duration of \(0\). Such nodes play an essentially structuring role. Since they do not take any time, the following proposition is straightforward.

**Proposition 1**.: _Nodes with duration \(0\) can always be scheduled immediately before their parent node or after their last occurring child, using the same agent in the same time slot._

Fig. 1: Basic ADTree constructs

Fig. 2: Running example: treasure hunters

Preprocessing introduces nodes similar to SEQ but with \(0\) duration, called NULL and depicted as trapeziums (Fig. 3(b)).

### _Normalising time_

The first preprocessing step prior to applying the scheduling algorithm normalises the time parameter of nodes.

**Proposition 2**.: _Any node \(N\) of duration \(t_{N}=n\times t_{unit},n\neq 0\) can be replaced with an equivalent sequence consisting of a node \(N^{\prime}\) (differing from \(N\) only in its \(0\) duration) and \(n\) SEQ nodes \(N_{1}\),..., \(N_{n}\) of duration \(t_{unit}\)._

### _Scheduling enforcement_

SAND nodes enforce some scheduling, and are transformed into a sequence containing their subtrees and NULL nodes.

**Proposition 3**.: _Any SAND node \(N\) with children subtrees \(T_{1}\),..., \(T_{n}\) can be replaced with an equivalent sequence \(T_{1}\), \(N_{1}\), \(T_{2}\),..., \(N_{n-1}\), \(T_{n}\), \(N_{n}\), where each \(N_{i}\) is a NULL node, its input is the output of \(T_{i}\) and its outputs are the leaves of \(T_{i+1}\) (except for \(N_{n}\), which has the same output as \(N\) if any)._

### _Handling defences_

The scheduling we are seeking to obtain will guarantee that the necessary attacks are performed. Hence, when dealing with defence nodes, we can assume that all attacks are successful. However, they may not be mandatory, in which case they should be avoided so as to obtain a better scheduling of agents.
Taking into account each possible choice of defences will lead to as many DAGs representing the attacks to be performed. This allows for answering the question: "What is the minimal schedule of attackers if these defences are operating?"

_Composite defences._ Defences resulting from an AND, SAND or OR between several defences are operating according to the success of their subtrees: for AND and SAND, all subtrees should be operating, while only one is necessary for OR. This can easily be computed by a boolean bottom-up labelling of nodes. Note that different choices of elementary defences can lead to disabling the same higher-level composite defence, thus limiting the number of DAGs that will need to be considered.

_No Defence nodes_ (NODEF). A NODEF succeeds if its attack succeeds or its defence fails. Hence, if the defence is not operating, the attack is not necessary. Thus, the NODEF node can be replaced by a NULL node without children, and the children subtrees deleted. On the contrary, if the defence is operating, the attack must take place. The defence subtree is deleted, while the attack one is kept, and the NODEF node can be replaced by a NULL node, as depicted in Figure 3.

_Counter Defence (CAND) and Failed Reactive Defence (SCAND) nodes._ A CAND succeeds if its attack is successful and its defence is not. A SCAND additionally specifies that the defence takes place after the attack. In both cases, if the defence is not operating, its subtree is deleted, while the attack one is kept, and the CAND (or SCAND) node can be replaced by a NULL node, as in Figure 2(c). Otherwise, the CAND (or SCAND) node is deleted, as well as its subtrees. Moreover, it transmits its failure recursively to its parents, until a choice of another branch is possible. Thus, all ancestors are deleted bottom-up until an OR is reached. As a result, we have a set of DAGs with attack nodes only.

### _Handling OR branches_

OR nodes give the choice between several series of actions, only one of which will be chosen in an optimal assignment of events. However, one cannot simply keep the shortest branch of an OR node and prune all others. Doing so minimises attack time, but not necessarily the number of agents. In particular, a slightly longer, but narrower branch may require fewer agents without increasing attack time, provided there is a longer sequence elsewhere in the DAG. Consequently, only branches that are guaranteed not to lead to an optimal assignment can be pruned, which is the case when a branch is the longest one in the entire graph. All other cases need to be investigated, leading to multiple variants depending on the OR branch executed, similar to the approach for defence nodes.

### _Preprocessing the treasure hunters ADTree_

Figures 4 and 3(a) detail the preprocessing of the treasure hunters example step by step. The time unit is one minute. Long sequences of SEQ are shortened with dotted lines. Note that when handling the defence, at step 3, we should obtain two DAGs corresponding to the case where the defence fails (see Figure 3(c)), or where the defence is successful. This latter case leads to an empty DAG where no attack can succeed. Therefore, we can immediately conclude that if the police is successful, there is no scheduling of agents.

## IV Best minimal agent assignment

At this stage, we have DAGs where nodes are either (i) a leaf, or of type AND, OR, or NULL, all with duration \(0\), or (ii) of type SEQ with duration \(t_{unit}\). Their branches mimic the possible runs in the system.
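For concreteness, the sketch below gives one possible (hypothetical) C++ rendering of the preprocessed DAG nodes and of the normalisation of Proposition 2; field and function names are ours and do not follow ADT2AMAS.

```
#include <memory>
#include <numeric>
#include <string>
#include <vector>

// Illustrative node representation for the preprocessed DAG.
enum class Kind { LEAF, AND, OR, SEQ, NUL };

struct Node {
  std::string label;   // traceable back to the original ADTree node
  Kind kind = Kind::LEAF;
  int duration = 0;    // in multiples of t_unit
  int depth = 0, level = 0, agent = 0, slot = 0;
  std::vector<Node*> children;
};

// One time unit is the greatest common factor of all durations.
int timeUnit(const std::vector<int>& durations) {
  int g = 0;
  for (int d : durations) g = std::gcd(g, d);
  return g;
}

// Proposition 2: a node of duration n*t_unit becomes a zero-duration
// node N' followed by a chain of n SEQ nodes of one t_unit each.
// Returns the last SEQ node of the chain (the new "output" of N).
Node* normalise(Node& n, int t_unit,
                std::vector<std::unique_ptr<Node>>& pool) {
  const int steps = n.duration / t_unit;
  n.duration = 0;                      // n becomes N'
  Node* prev = &n;
  for (int i = 1; i <= steps; ++i) {
    auto seq = std::make_unique<Node>();
    seq->label = n.label + "_" + std::to_string(i);  // N_1 ... N_n
    seq->kind = Kind::SEQ;
    seq->duration = 1;                 // exactly one t_unit
    seq->children.push_back(prev);     // N_i consumes the output of N_{i-1}
    prev = pool.emplace_back(std::move(seq)).get();
  }
  return prev;
}
```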
The algorithm's input is a set of DAGs preprocessed as described in Section III, corresponding to possible configurations of defence nodes' outcomes and choices of OR branches in the original ADTree. For each of these DAGs, \(n\) denotes the number of SEQ nodes (all other ones have \(0\) duration). Furthermore, nodes (denoted by \(N\)) have some attributes: their \(type\) and four integers \(depth\), \(level\), \(agent\) and \(slot\), initially with value \(0\). The values of \(depth\) and \(level\) denote, respectively, the height of a node's tallest subtree and the distance from the root (both without counting the zero duration nodes). The attributes \(agent\) and \(slot\) store the node's assignment in the schedule.

Fig. 3: Handling NODEF

### _Depth and level of nodes_

We first compute the nodes' depth and level, handled by procedures DepthNode and LevelNode, respectively. They explore the DAG in a DFS (_depth first search_) manner, starting from the root. Both attributes are assigned recursively, with \(depth\) computed during backtracking, _i.e._ starting from the leaves. There are slight differences in the way specific node types are handled; we refer the reader to [31] for the details.

### _Number of agents: upper and lower bounds_

The upper bound on the number of agents is obtained from the maximal width of the preprocessed DAG, _i.e._ the maximal number of SEQ nodes assigned the same value of _level_. These nodes must be executed in parallel to guarantee that the attack is achieved in the minimal time. The minimal attack time is obtained from the number of levels \(l\) in the preprocessed DAG. Note that the longest path from the root to a leaf has exactly \(l\) nodes of non-zero duration. Clearly, none of these nodes can be executed in parallel, therefore the number of time slots cannot be smaller than \(l\). Thus, if an optimal schedule of \(l\times t_{unit}\) is realisable, the \(n\) nodes must fit in a schedule containing \(l\) time slots. Hence, the lower bound on the number of agents is \(\lceil\frac{n}{l}\rceil\). There is, however, no guarantee that it can be achieved, and introducing additional agents may be necessary depending on the DAG structure, _e.g._ if there are many parallel leaves.

### _Minimal schedule_

The algorithm for obtaining a schedule with the minimal attack time and also minimising the number of agents is given in Alg. 1. Input DAGs are processed sequentially and a schedule is computed for each one. Not restricting the output to the overall minimum allows avoiding "no attack" scenarios where the time is 0 (_e.g._ following a defence failure on a root NODEF node). Furthermore, with information on the distribution of agents for a successful minimal time attack in all cases of defences, the defender is able to decide which defences to enable according to these results. The actual computation of the schedule is handled by the function Schedule (Alg. 2). Starting from the root and going top-down, all SEQ nodes at the current level are added to set \(S\). The other nodes at that level have a null duration and can be scheduled afterwards with either a parent or child. An additional check in l. 5 ensures that non-optimal variants (whose longest branch exceeds a previously encountered minimum) are discarded without needlessly computing the schedule. Nodes in \(S\) are assigned an agent and time slot, prioritising those with higher \(depth\) (_i.e._ taller subtrees), as long as an agent is available.
Assigned nodes are removed from \(S\), and any that remain (_e.g._ when the bound was exceeded) are carried over to the next level iteration. At this point, it is possible for a parent and a child node to be in \(S\) concurrently. However, since higher \(depth\) takes precedence, they will never be scheduled in the wrong order, and an extra check in the while loop avoids scheduling both nodes to be executed in parallel. Algorithm 2 calls function ReshuffleSlot after the complete assignment of a time slot at l. 12 to ensure consistent assignment of sub-actions of the same ADTree node. Note that depending on \(depth\), a sub-action may be moved to the next slot, creating an interrupted schedule where an agent stops an action for one or more time units to handle another. Alternatively, agents may collaborate, each handling a node's action for a part of its total duration. Such assignments could be deemed unsuitable for specific scenarios where extra conditions need to be satisfied. In those cases, manual reshuffling or adding extra agent(s) is left to the user's discretion.

Fig. 4: Treasure hunters ADTree: preprocessing steps (top, left, middle) and initial part of the main algorithm (bottom right)

At this point, either the upper or the lower bound on the number of agents is adjusted, depending on whether the resulting schedule is valid (that is, there are no nodes left to assign at the end). Scheduling is then repeated for these updated values until the minimal number of agents is found (_i.e._ the two bounds are equal). After the complete computation for a given DAG, l. 22 calls function ZeroAssign in order to obtain assignments for all remaining nodes, _i.e._ those of zero duration. Functions ReshuffleSlot and ZeroAssign are detailed in Sections IV-D and IV-E, respectively. Although this algorithm assumes the minimal time is of interest, it can be easily modified to increase the number of time slots, thus synthesising the minimal number of agents required for a successful attack of any given duration.

```
 1  output ← ∅
 2  while DAG_set ≠ ∅ do
 3    Pick DAG ∈ DAG_set
 4    if DAG.n = 0 then continue                  ▷ Skip empty DAGs
 5    DepthNode(root(DAG))                        ▷ Compute depth of nodes
 6    DAG ← DAG \ {N | ¬N.keep}
 7    LevelNode(root(DAG), 0)                     ▷ Compute level of nodes
 8    slots ← root(DAG).depth
 9    low_bound ← ⌈DAG.n / slots⌉ − 1
10    max_agents ← max_j |{N : N.type = SEQ ∧ N.level = j}|
                         ▷ Max. level width (concurrent SEQ nodes)
11    up_bound ← max_agents
12    curr_output ← ∅
13    while up_bound − low_bound > 1 do
14      agents ← low_bound + ⌊(up_bound − low_bound) / 2⌋
15      (candidate, n_remain) ← Schedule(DAG, slots, agents)
16      if n_remain = 0 then                      ▷ Candidate schedule OK
17        up_bound ← agents
18        curr_output ← candidate
19      else low_bound ← agents                   ▷ Candidate schedule not OK
20    if up_bound = max_agents then
21      (curr_output, _) ← Schedule(DAG, slots, max_agents)
22    ZeroAssign(DAG)
23    output ← output ∪ curr_output
24    DAG_set ← DAG_set \ DAG
25  return output
```
**Algorithm 1** MinSchedule(\(DAG\_set\))

### _Uniform assignment for SEQ nodes_

A separate subprocedure, given in Algorithm 3, swaps assigned agents between nodes at the same level so that the same agent handles all SEQ nodes in sequences obtained during the time normalisation step (_i.e._ corresponding to a single node in the original ADTree).

```
 1  for agent ∈ {1..num_agents} do
 2    current_node ← N s.t. N.agent = agent ∧ N.slot = slot
 3    par_agent ← parent(current_node).agent
 4    if par_agent ≠ agent ∧ par_agent ≠ 0 then
 5      if ∃ N' ≠ current_node s.t. N'.agent = par_agent ∧ N'.slot = slot then
 6        N'.agent ← agent                        ▷ Swap with N' if it exists
 7        N'.slot ← slot
 8      current_node.agent ← par_agent
 9      current_node.slot ← slot
```
**Algorithm 3** ReshuffleSlot(\(slot\))

**Proposition 4**.: _Reshuffling the assignment by swapping the agents assigned to a pair of nodes in the same slot does not affect the correctness of the scheduling._

Proof.: See [31, Proposition 4].

### _Assigning nodes without duration_

After all non-zero duration nodes have been assigned and possibly reshuffled at each level, Alg. 4 handles the remaining nodes. Our choice here stems from the ADTree gate the node originates from. We first assign zero-duration nodes to the same agent and time slot as their parent if the parent is a SEQ node (l. 2-6). NULL, OR and LEAF nodes get the same assignment as their only child if any, or as their parent if they have no child (l. 8-19). The latter case may happen for NULL when handling defences as in _e.g._ Fig. 2(b), and for LEAF nodes originally of null duration. AND nodes are assigned the same agent and time slot as the child that occurs last (l. 20-30). Note that in all cases the agents (and time slots) assigned to zero duration nodes are the same as those of their immediate parents or children. Hence, no further reshuffling is necessary.

**Proposition 5**.: _Adding nodes of zero duration to the assignment in Alg. 4 does not affect the correctness of the scheduling._

Proof.: See [31, Proposition 5].

### _Complexity and correctness_

We now consider the algorithm's complexity and prove that it achieves its intended goal.

**Proposition 6**.: _Algorithm 1 is in \(\mathcal{O}(kn^{2}\log n)\), where \(k\) is the number of input DAGs, and \(n\) their average number of nodes._

Proof.: See [31, Proposition 6].

Thus, while the scheduling algorithm itself is quadratic, it is executed for \(k\) DAG variants, where \(k\) is exponential in the number of OR and defence nodes in the ADTree.

**Proposition 7**.: _The assignments returned by Algorithm 1 are correct and use the minimal number of agents for each variant \(\mathit{DAG}\in\mathit{DAG\_set}\) to achieve the attack in minimal time._

Proof.: See [31, Proposition 7].

### _Scheduling for the treasure hunters ADTree_

We now apply these algorithms to the treasure hunters example. Figure 3(d) shows the output of the three initial subprocedures.
The depth of nodes assigned by DepthNode is displayed in green. The branch corresponding to attack e has been pruned as per Section III-E. Levels assigned by LevelNode are displayed in blue. Finally, the agent assignment computed by Algorithm 1 is shown in Figure 5.

## V Experiments

The algorithms presented here are implemented in our open source tool ADT2AMAS [32], written in C++17. It allows for specifying input ADTrees either via simple-syntax text files or using an intuitive GUI, and handles both their translation to extended AMAS and the computation of an optimal schedule with a minimal number of agents. Intermediary steps of the algorithm can be exported as TikZ figures, allowing one to easily visualise and understand them. For more details on the architecture of ADT2AMAS, we refer the reader to [30]. Here, we present its application to the use cases from [5], plus examples that feature some specific behaviour. All the figures and tables of the examples can be found in the supplementary material of this paper https://bit.ly/3ONeSzq and in the extended version of [31] available at https://arxiv.org/abs/2101.06838.

_forestall:_ This case study models forestalling a software instance. Depending on the active defences, 4 cases are possible. However, the DAG for no active defence and the one where the only active defence is id (intrusion detection [5]) are the same. All three remaining DAGs have an optimal schedule with only \(1\) agent, in 43 days for the no defence (or id only) case, 54 if only scr (secure coding rooms) is active, and 55 if both defences occur. Although only a single agent is needed to achieve the attack in minimal time, the schedule exhibits which specific attacks must be performed to do so.

In the second case study, an active defence makes GVC (get valid credentials) fail, which in turn makes APN (access private net) and then APNS fail, independent of the defence inc (inform of new connections). Thus the attack necessarily fails. This is also the case if defence inc is active. The only way for an attack to succeed is that all defences fail, leading to an optimal schedule in 694 minutes with 2 agents. Hence an attacker will use 2 agents to perform the fastest attack. On the other hand, the defender knows that a single one of the two defences is sufficient to block any attack.

_gain-admin:_ This third case is about an attacker trying to gain administration privileges on a computer system. There are 16 possible defence combinations, which are covered by only 3 cases: scr (secure coding rooms) is not active; scr is active but not DTH (defence against trojans); both of them are active. In all three cases, the shortest attack requires only a single agent, and can be scheduled in 2942, 4320 and 5762 minutes, respectively.

_Exhibiting particular scheduling features:_ Experiments were conducted on the example used in [5] to evaluate the impact of the number of agents on the attack time, and on two small examples designed to exhibit particular characteristics of the schedule. Our algorithm confirms an optimal schedule in 5 minutes with 6 agents for the example of [5]. Then, _interrupted_ (see Figure 6) shows that the scheduling algorithm can produce an interleaved execution of two attacks (b and e), assigned to the same agent. Finally, the _last_ example provides a succession of nodes with 0 duration (a′, e′, f′, h′ and i′), and shows they are handled as expected.
_Scaling example:_ In the _scaling_ example, the first agent processes the longest path while the second agent handles all other actions. It is extended to analyse the scaling capabilities of the scheduling algorithm. For this purpose, we wrote an automatic generator of ADTrees. The parameters of the generated ADTrees are the _depth_, the _width_ corresponding to the number of deepest leaves, the number of _children_ for each AND, and the total number of _nodes_. All nodes have time 1 except the first leaf, which has time \(width-1\). The results show that the number of agents is not proportional to the width of the tree, and the optimal scheduling varies according to the time of nodes. We refer the reader to [31] for a detailed comparison.

## VI A general approach with Rewriting Logic

This section presents an alternative approach for solving the optimal scheduling problem in ADTrees, which is more general in the sense that it does not build upon a dedicated algorithm. We start with an appropriate representation for the ADTree structure (§VI-A) and present a rewrite theory giving meaning to the gates of the tree (§VI-B). Since the resulting theory is executable, we can use the system Maude [9] as a decision procedure to enumerate all the possible configurations leading to an attack and find the optimal one (§VI-C). However, without a suitable strategy, it is not efficient enough for more complex scenarios. Hence, we refine (§VI-D) the theory by adapting some of the ideas and heuristics implemented in the specialised algorithm proposed in Section IV. The resulting procedure is easy to prove correct, and exhibits good performance for all the case studies considered in Section V.

In what follows, we explain the main concepts behind Rewriting Logic (RL) [6, 7], while gradually introducing the proposed rewrite theory for ADTrees. We adopt, in most cases, the notation of Maude [9], a high-level language supporting rewriting logic theories. This allows for producing an executable specification. For the sake of readability, we omit some details; the complete specification can be found at the website of our tool ADT2MAUDE [33]. A _rewrite theory_ is a tuple \(\mathcal{R}=(\Sigma,E\uplus B,R)\). The static behaviour (§VI-A) of the system is modelled by the order-sorted equational theory \((\Sigma,E\uplus B)\) and the dynamic behaviour (§VI-B) by the set of rewrite rules \(R\).

### _Equational theory_

The signature \(\Sigma\) defines a set of typed operators used to build the terms of the language (_i.e._ the syntax of the modelled system). \(E\) is a set of (conditional) equations over \(T_{\Sigma}\) (the set of terms built from \(\Sigma\)) of the form \(t=t^{\prime}\) if \(\phi\). The equations specify the algebraic identities that terms of the language must satisfy. For instance, if the operator \(|\cdot|\) denotes the length of a sequence of symbols, then the following equations must hold: \(|\epsilon|=0\) and \(|ax|=1+|x|\) (where \(\epsilon\) is the empty sequence). In \((\Sigma,E\uplus B)\), \(B\) is a set of structural axioms over \(T_{\Sigma}\) for which there is a finitary matching algorithm. Such axioms include associativity, commutativity, and identity, or combinations of them. For instance, \(\epsilon\) is the identity for concatenation and then, modulo this axiom, the terms \(x\epsilon\) and \(x\) are equivalent. The equational theory associated with \(\mathcal{R}\) thus defines algebraic data types and deterministic and finite computations as in a functional programming language.
RL allows for defining any syntax for the operators in \(\Sigma\), using _sorts_ along with constructors and operators for them. Here is a simple example defining Peano's natural numbers:

```
fmod NAT is                    --- equational theory
  sort Nat .                   --- sort definition
  op 0 : -> Nat [ctor] .       --- zero
  op s : Nat -> Nat [ctor] .   --- successor
  op _+_ : Nat Nat -> Nat .    --- addition
  vars x y : Nat .             --- logical variables
  eq 0 + x = x .               --- equations defining +
  eq s(y) + x = s(y + x) .
endfm
```

The attribute [ctor] in the definition of zero and successor is optional. It is used to document that these operators are constructors for terms of sort Nat. The positions of the arguments in the (mixfix) operator _+_ are indicated with underscores and the equations give meaning to it: \(\forall x:Nat,0+x=x\) and \(\forall x\,y:Nat,s(y)+x=s(y+x)\). Hence, the term \(s(0)+s(s(0))\) reduces to the normal form \(s(s(s(0)))\).

The starting point for our specification is to define an equational theory for building terms representing ADTrees. In Maude, systems are specified using a syntax resembling that of object-oriented languages. The needed sorts and operators are defined in the module CONFIGURATION, available in Maude's prelude. The idea is to represent entities as record-like structures (sort Object) of the form \(\langle O:C\mid a_{1}:v_{1},\cdots a_{n}:v_{n}\rangle\) where \(O\) is an object identifier (sort Oid), \(C\) is a class identifier (sort Cid), \(a_{i}\) is an attribute (sort Attribute) and \(v_{i}\) is a term that represents the current value of \(a_{i}\). We start by defining the class identifiers for each kind of gate:

```
mod ADTree is   --- Rewrite theory ADTree
  --- Class IDs for Nodes
  op NOT : -> Cid .
  op AND : -> Cid .
  op SAND : -> Cid .
  op OR : -> Cid .
  op ATK : -> Cid .
  op DEF : -> Cid .
```

The class NOT is used to define subtrees that are defences (as in NAND gates); SAND stands for sequential AND; and the last two classes represent attacks and defences. The attributes for the gates include the (accumulated) time, cost, and the number of agents needed to perform the attack:

```
  --- attributes for gates
  op time:_ : Nat -> Attribute .
  op cost:_ : Nat -> Attribute .
  op agents:_ : Nat -> Attribute .
  op acctime:_ : Nat -> Attribute .
  op acccost:_ : Nat -> Attribute .
```

The equational theory is order-sorted, _i.e._ there is a partial order on sorts defining a sub-typing relation: subsort Qid < Oid. The sort Qid is part of Maude's standard library and represents quoted identifiers, _e.g._ 'TS (a sequence of characters preceded by an apostrophe). Hence, 'TS is both a quoted identifier and an object identifier. An interesting RL feature is the definition of axioms for the operators (\(B\) above), _e.g._ it is straightforward to define a list as a non-commutative monoid and a set as an abelian monoid:

```
  subsort Oid < List .   --- singleton list
  subsort Oid < Set .    --- singleton set
  op nil : -> List [ctor] .    --- empty list
  op empty : -> Set [ctor] .   --- empty set
  --- building lists and sets
  op __ : List List -> List [ctor assoc id: nil] .
  op _,_ : Set Set -> Set [ctor assoc comm id: empty] .
```

In this specification, the term "'A 'B 'C" (resp. "'A , 'B , 'C") represents a list (resp. a set) of three object identifiers. The concatenation operator __ is called _empty syntax_, since a white space is used to concatenate elements. Note that, being associative, the lists "'A ('B 'C)" and "('A 'B) 'C" are equivalent (modulo assoc), as are the terms "'A , 'B , 'C" and "'C , 'B , 'A" due to commutativity. The sorts and operators needed to specify lists and sets are already available in Maude.
The sorts for these data structures are renamed here, respectively, as NodeList and NodeSet and used below to define two new attributes for gates:

```
  --- ordered and unordered children
  op lchd:_ : NodeList -> Attribute .
  op schd:_ : NodeSet -> Attribute .
```

The first one is used for sequential gates SAND, and the second one for all the others. Each node is associated with a state:

```
  --- states for nodes in the tree
  sort Status .
  ops Fail Succeed Unknown : -> Status .
  op stat:_ : Status -> Attribute .
```

Initially, all the nodes are in state Unknown, which may change to Succeed or Fail, according to the rules described in the next section. Suitable operators for building the different gates in an ADTree are introduced. For instance:

```
  --- building an attack: ID, time and cost
  op makeAtk : Qid Nat Nat -> Object .
  eq makeAtk(Q, t, c) =
    < Q : ATK | time: t, cost: c, agents: 1,
                acctime: 0, acccost: 0, stat: Unknown > .
  --- building an OR gate: ID, children, time and cost
  op makeOr : Qid NodeSet Nat Nat -> Object .
  eq makeOr(Q, S, t, c) =
    < Q : OR | time: t, cost: c, agents: 0,
               acctime: 0, acccost: 0, schd: S, stat: Unknown > .
```

Note that a leaf attack requires one agent and that the number of agents for the OR gate is initially zero. That value will be updated as explained below. An equational theory is executable only if it is terminating, confluent and sort-decreasing [9]. Under these conditions, the mathematical meaning of the equality \(t\equiv t^{\prime}\) coincides with the following strategy: reduce \(t\) and \(t^{\prime}\) to their unique (due to termination and confluence) normal forms \(t_{c}\) and \(t^{\prime}_{c}\) using the equations in the theory as _simplification rules_ from left to right. Then, \(t\equiv t^{\prime}\) iff \(t_{c}=_{B}t^{\prime}_{c}\) (note that \(=_{B}\), equality modulo \(B\), is decidable since a finitary matching algorithm for \(B\) is assumed). For instance, the term makeAtk('A, 3, 2) can be reduced to the normal form < 'A : ATK | time: 3, agents: 1, stat: Unknown, ... > using the equations above. Maude's theory CONFIGURATION defines the sort Configuration as a set of objects concatenated with the empty syntax (an associative and commutative operator with none as identity). Hence, the term \(t_{GA}\) below, with sort Configuration, encodes the subtree GA in Figure 2.

```
  --- t_GA (subtree GA)
  makeAtk('h, 3, 500)
  makeAtk('e, 10, 0)
  makeOr('GA, ('h, 'e), 0, 0)
```

Finally, two additional constructors for the sort Configuration are defined in the theory ADTree:

```
  op {_,_} : Oid Configuration -> Configuration .
  op {_} : Configuration -> Configuration .
```

Given an ADTree \(T\), we shall use \([\![T]\!]\) to denote the corresponding term of the form {Q, Cnf} where Q is the root of \(T\) and Cnf is the set of objects encoding the gates in \(T\). As shown in the next section, the first operator represents a system whose root attack is still under analysis, while the second one is used once the outcome of the root has been established (see rule [END] below).

### _Rules_

The dynamic behaviour of the system is specified by the set \(R\) of (conditional) rewrite rules of the form \(l\Rightarrow r\ \mathbf{if}\ \phi\), where \(l\) and \(r\) are patterns possibly containing variables \(\vec{x}\). Such a rule can be applied on a fragment of the current state whenever there is a matching substitution \(\theta\) making \(\theta(l(\vec{x}))\) equal (modulo axioms) to that state fragment. If the condition \(\theta(\phi(\vec{x}))\) is true, the new state fragment is \(\theta(r(\vec{x}))\), leading to a local transition. Hence, rules define state transformations modelling the dynamic behaviour of the system (which is not necessarily deterministic, nor terminating). Conditions and patterns in rules may considerably affect the performance of a rewrite theory when it is used to explore all the possible reachable states from a given term. In this section, we propose rules that are self-explanatory but that may exhibit unnecessary non-determinism during the search procedure. Later, we add extra conditions to reduce the search space and improve efficiency.
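The rules below explore, among other things, every Succeed/Fail outcome of the leaves; for \(n\) defence leaves this already yields \(2^{n}\) scenarios. A purely illustrative C++ enumeration of these outcome vectors (assuming \(n<64\); names are ours, not part of the specification):

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: for n defence leaves, the rules below induce an
// exploration of all 2^n Succeed/Fail outcome vectors.
std::vector<std::vector<bool>> defenceOutcomes(std::size_t n) {
  std::vector<std::vector<bool>> all;
  for (std::uint64_t mask = 0; mask < (std::uint64_t{1} << n); ++mask) {
    std::vector<bool> v(n);
    for (std::size_t i = 0; i < n; ++i)
      v[i] = ((mask >> i) & 1u) != 0;  // v[i]: defence i is operating
    all.push_back(std::move(v));
  }
  return all;
}
```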
**Leaves.** Let us start defining the behaviour for the gates representing leaves of an ADTree, _i.e._, attacks and defences:

```
  --- semantics for attacks
  rl [ATKOK] : < Q : ATK | stat: Unknown, ats >
            => < Q : ATK | stat: Succeed, ats > .
  rl [ATKNOK] : < Q : ATK | stat: Unknown, ats >
             => < Q : ATK | stat: Fail, ats > .
```

These are unconditional rules (\(\phi=true\)) and then, \(\phi\) is omitted. Q (resp. ats) is a logical variable of sort Oid (resp. AttributeSet, a set of attributes). These rules change the state of an attack currently in state Unknown to either Succeed or Fail. For instance, consider the term \(t_{GA}\) (of sort Configuration) above. Due to the structural axioms governing the juxtaposition operator ([assoc comm id: none]), these two rules can be applied in two different positions (local fragments) of the system represented by \(t_{GA}\). More precisely, the rules [ATKOK] and [ATKNOK] can be applied by either substituting the variable Q with the term 'h (and ats with time: 3, cost: 500, ...) or substituting Q with 'e. Hence, the term \(t_{GA}\) can be rewritten in two steps into four possible configurations where: both attacks fail, one of the attacks succeeds and the other fails, or both attacks succeed. That is, all the possible outcomes for the attacks are covered. The rules for defences are defined similarly:

```
  --- semantics for defences
  rl [DEFOK] : < Q : DEF | stat: Unknown, ats >
            => < Q : DEF | stat: Succeed, ats > .
  rl [DEFNOK] : < Q : DEF | stat: Unknown, ats >
             => < Q : DEF | stat: Fail, ats > .
```

**Gates.** Let us start with the rules for the OR gate:

```
  rl [OR] : < Q : OR | schd: (o, S), stat: Unknown, used: U, ats >
            < o : C | stat: Succeed, ats' >
         => < Q : OR | schd: empty, stat: Succeed,
                       used: (U, o), accumulate(ats, ats') >
            < o : C | stat: Succeed, ats' > .
```

The left-hand side (LHS) of the rule matches a fragment of the global system containing two objects: an OR gate and an object o of any class (o and C are variables of sort Oid and Cid respectively). The term (o, S), where S has sort NodeSet, is a set. Hence, this rule applies to any of the children (in state Succeed) of the gate. The right-hand side (RHS) dictates the new state: the OR gate moves to the state Succeed; the node o is added to the attribute used, witnessing that o is required to perform the attack Q; and the attributes for time, cost and the number of agents in o are accumulated in Q. This is the purpose of the function accumulate that computes the new values from the attributes of Q (ats) and those of o (ats'). The new values for time and cost result from adding the time and cost accumulated in the child o to the time and cost of the gate Q. Moreover, the number of agents needed to perform Q is set to the number of agents needed to perform o. This is an upper bound for the number of agents needed, where one of the agents working on the subtree o can complete Q. Now we consider two rules for handling the cases when one of the children of the OR gate fails and where there are no more children to be considered:

```
  rl [OR] : --- failing child
    < Q : OR | schd: (o, S), stat: Unknown, ats >
    < o : C | stat: Fail, ats' >
  => < Q : OR | schd: S, stat: Unknown, ats >
     < o : C | stat: Fail, ats' > .
  rl [OR] : --- no more children
    < Q : OR | schd: empty, stat: Unknown, ats >
  => < Q : OR | schd: empty, stat: Fail, ats > .
```

The first rule discards a failing child of the OR gate. The second rule changes the state of the gate to Fail when there are no remaining children.
With these rules, the term \(t_{GA}\) can be rewritten into three possible configurations where the gate GA: fails (when both h and e fail); succeeds with total time \(3\) (when h succeeds, regardless of the state of e); and succeeds with total time \(10\) (when e succeeds). The rules for the (parallel) AND gate are defined as follows:

```
  rl [AND] : --- succeeded child
    < Q : AND | schd: (o, S), stat: Unknown, used: U, ats >
    < o : C | stat: Succeed, ats' >
  => < Q : AND | schd: S, stat: Unknown,
                 used: (U, o), acc-max(ats, ats') >
     < o : C | stat: Succeed, ats' > .
  rl [AND] : --- failing child
    < Q : AND | schd: (o, S), stat: Unknown, ats >
    < o : C | stat: Fail, ats' >
  => < Q : AND | schd: empty, stat: Fail, ats > .
  rl [AND] : --- no more children
    < Q : AND | schd: empty, stat: Unknown, ats >
  => < Q : AND | schd: empty, stat: Succeed, ats > .
```

In the first rule, the operator acc-max accumulates the time attribute by using the function max. That is, the AND gate computes the maximal value among the time needed to perform the attacks in each of the children of Q. On the contrary, the number of agents is accumulated by adding the value of the attribute agents of o and Q. Intuitively, since the children of Q can be executed in parallel (and in any order), an upper bound for the number of agents needed in Q is the sum of the agents needed for each of Q's children. In the second rule, as expected, a failure of one of the children implies the failure of the gate. In the third rule, when all the children succeed (and schd is empty) the gate succeeds. The behaviour of the sequential gate is specified as follows:

```
  rl [SAND] :
    < Q : SAND | lchd: (o L), stat: Unknown, used: U, ats >
    < o : C | stat: Succeed, ats' >
  => < Q : SAND | lchd: L, stat: Unknown,
                  used: (U, o), accumulate(ats, ats') >
     < o : C | stat: Succeed, ats' > .
```

The term (o L) is a list and this rule only matches a state where the first child of the gate is in state Succeed. Similar rules to those presented for the AND gate, handling the cases of a failing child and an empty list of children, are also part of the specification and omitted here. The attribute time is accumulated in this case by adding the values in \(o\) and \(Q\). For the number of agents, the value is accumulated using the function max: the attack is sequential and the number of agents needed in \(Q\) is bounded by the child that requires the most agents. The next rules give meaning to the NOT gate, used to model the gates \(\mathsf{CAND}\), \(\mathsf{NODEF}\) and \(\mathsf{SCAND}\) in Figure 1:

```
  rl [NOT] :
    < Q : NOT | lchd: o, stat: Unknown, ats >
    < o : C | stat: Succeed, ats' >
  => < Q : NOT | stat: Fail, acc-def(ats) >
     < o : C | stat: Succeed, ats' > .
  rl [NOT] :
    < Q : NOT | lchd: o, stat: Unknown, ats >
    < o : C | stat: Fail, ats' >
  => < Q : NOT | stat: Succeed, acc-def(ats) >
     < o : C | stat: Fail, ats' > .
```

As expected, if the (unique) child of a NOT gate succeeds, the gate fails and vice-versa. The time, cost and number of agents are accumulated in a different attribute (acc-def) since those correspond to the resources for a defence (and not for an attack). We add an extra rule whose unique purpose is to summarise the results of the analysis:

```
  rl [END] :
    { Q , < Q : C | stat: Succeed, agents: a, acctime: t, ats > Cnf }
  => { < Q : C | agents: a, acctime: t >
       < gates: attacks(< Q : C | ats > Cnf) >
       < defences: act-defences(Cnf) > } .
```

This rule is enabled only when the root of the tree Q is in state Succeed. All the attributes but the accumulated time and the number of agents are discarded. The nodes of the tree (Cnf) but the root are also discarded.
Two new objects are created, namely gates and defences, that store the set of attacks and defences enabled in the final configuration. Such sets are computed with the aid of the operators attacks (that uses the attribute used in the gates) and act-defences. Note that the shape of the configuration has changed, from {Q, Cnf} to {Cnf} (see the operators defined at the end of Section VI-A).

**Exploring the search space.** A rewrite theory \(\mathcal{R}\) proves sequents of the form \(\mathcal{R}\vdash t\longrightarrow^{*}t^{\prime}\) meaning that the term \(t\) rewrites in zero or more steps into \(t^{\prime}\). Here, we are interested in proving sequents of the form \(\mathcal{R}\vdash t\longrightarrow!t^{\prime}\) meaning that \(t\longrightarrow^{*}t^{\prime}\) and \(t^{\prime}\) cannot be further rewritten. Let us call \(\mathcal{R}_{ADT}\) the rewrite theory defined above that represents the state of an ADTree and its execution. For an ADTree \(T\), if \(\mathcal{R}_{ADT}\vdash\llbracket T\rrbracket\longrightarrow!t^{\prime}\) then \(t^{\prime}\) can be either a configuration where the root node \(Q\) fails (and the other gates are in a state different from Unknown) or a term of the form

\[\{\langle Q:C\mid agents:a,acctime:t\rangle\ \langle gates:SA\rangle\ \langle defences:SD\rangle\}\]

where \(a\) and \(t\) are, respectively, the upper bound for the number of agents and the time needed to perform the root attack \(Q\). Moreover, \(SA\) and \(SD\) are, respectively, the set of enabled attacks and defences in the final configuration. From now on, the term above will be written as \([a,t,SA,SD]\).

**Example 1**.: _Let \(T\) be the ADTree in Figure 2 and \(t_{TS}=\llbracket T\rrbracket\). Using the rewrite theory defined above, the Maude command search t_TS =>! Cnf:Configuration finds four (distinct) final configurations corresponding to the two possible outcomes of the defence \(p\) and the choice of the attack used in the gate \(GA\). In the two non-failing configurations, \(p\) is not enabled. In one of them, \(h\) is chosen and the total time for the attack is \(125\). In the other, \(e\) is executed with total time \(132\)._

**Theorem 1** (Correctness).: _Let \(T\) be an ADTree. Then, \(\mathcal{R}_{ADT}\vdash\llbracket T\rrbracket\longrightarrow![a,t,SA,SD]\) iff there is an attack in \(T\) of time \(t\) where the attacks (resp. defences) in \(SA\) (resp. \(SD\)) are enabled._

Proof.: (\(\Rightarrow\)) We must have \(\llbracket T\rrbracket\longrightarrow^{*}t^{\prime}\longrightarrow[a,t,SA,SD]\) where the last rule applied is necessarily [END]. Consider the derivation \(\llbracket T\rrbracket\longrightarrow^{*}t^{\prime}\) where the rules for the different gates are applied. An invariant in each step of such a derivation is that when the accumulated time attribute is modified in a gate, it is computed correctly. For instance, when the rule [SAND] is applied, the accumulated time is the sum of the time needed to perform each of the children of the gate. Following the rules applied in the derivation, we can reconstruct the attack in \(T\). (\(\Leftarrow\)) Consider the particular sequence of rewriting where the rule [ATKOK] is applied in all the attack leaves in \(SA\) and [ATKNOK] in the others. Similarly for the defences in \(SD\). This completely determines the way the rules for the gates need to be applied, thus reproducing the same attack.
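The accumulation semantics implemented by these rules (a choice for OR, max time with summed agents for AND, summed time with max agents for SAND) can also be read functionally. The C++ sketch below mirrors it for attack-only trees; it is an illustration under our own names, with defences and the used attribute omitted.

```
#include <algorithm>
#include <vector>

// Illustrative accumulation semantics: OR picks one successful child,
// AND runs children in parallel (max time, summed agents), SAND runs
// them in sequence (summed time, max agents). The gate's own time is
// added on top, as in accumulate / acc-max.
enum class Gate { ATK, OR, AND, SAND };

struct G {
  Gate kind = Gate::ATK;
  int time = 0;  // own time attribute
  std::vector<G> children;
};

struct Acc { int time = 0; int agents = 0; bool ok = false; };

Acc eval(const G& g) {
  switch (g.kind) {
    case Gate::ATK:
      return {g.time, 1, true};
    case Gate::OR: {  // minimal-time successful child, if any
      Acc best;
      for (const G& c : g.children) {
        Acc a = eval(c);
        if (a.ok && (!best.ok || a.time < best.time)) best = a;
      }
      if (best.ok) best.time += g.time;
      return best;
    }
    case Gate::AND: {
      Acc acc{0, 0, true};
      for (const G& c : g.children) {
        Acc a = eval(c);
        if (!a.ok) return {};                        // one failure fails Q
        acc.time = std::max(acc.time, a.time);       // parallel children
        acc.agents += a.agents;                      // upper bound on agents
      }
      acc.time += g.time;
      return acc;
    }
    case Gate::SAND: {
      Acc acc{0, 0, true};
      for (const G& c : g.children) {
        Acc a = eval(c);
        if (!a.ok) return {};
        acc.time += a.time;                          // sequential children
        acc.agents = std::max(acc.agents, a.agents); // agents can be reused
      }
      acc.time += g.time;
      return acc;
    }
  }
  return {};
}
```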
As illustrated in Example 1, we can use the search facilities in Maude to list all the final (successful) configurations to perform an attack and find the minimal time.

**Theorem 2** (Optimal time).: _Let \(T\) be an ADTree. If the minimal time to perform the main attack in \(T\) is \(t\), then there exist \(a\), \(SA\) and \(SD\) s.t. \(\mathcal{R}\vdash\llbracket T\rrbracket\longrightarrow![a,t,SA,SD]\)._

Proof.: Immediate from Theorem 1.

Unfortunately, this procedure does not allow for finding the minimal number of agents but only an upper bound for it. The reason is that the operator acc-max (see rule [AND]) sums the number of agents needed for each child of the gate. Hence, for instance, this procedure determines that the number of agents to perform the attack in Figure 6 is \(3\) (two agents to perform concurrently \(d\) and \(e\) and an extra one to perform \(b\)). However, there is an attack using only 2 agents (Example 2). The key point is that the semantics does not handle the case where an agent can be shared between different branches of the tree.

**Theorem 3** (Upper bound for the number of agents).: _Let \(T\) be an ADTree. If \(n\) agents can perform an attack on \(T\) with time \(t\), then there exist \(a\), \(SA\) and \(SD\) s.t. \(\mathcal{R}\vdash\llbracket T\rrbracket\longrightarrow^{*}[a,t,SA,SD]\) and \(n\leq a\)._

Proof.: Similar to the proof of Theorem 1.

### _Minimal set of agents_

This section proposes a second rewrite theory \(\mathcal{R}^{A}_{ADT}\) useful for finding the minimal set of agents to perform an attack. The starting point is a new constructor for the sort Configuration with the following attributes:

```
  { agents:_        --- schedule
    global-time:_   --- elapsed time
    max-time:_      --- max time for the attack
    enabled:_       --- set of enabled attacks
    disabled:_      --- attacks that cannot be performed now
    system:_        --- representation of the system/gates
  }
```

The first attribute is a list of terms of the form [L] :: N where L is a list of node identifiers and N a natural number. The term (['a 'b] :: 3) (['c] :: 0) represents a scenario with two agents: the first one has already performed the attack a and she is currently working on b with remaining duration \(3\); and the second agent has already performed c and she is currently free (\(N=0\)). The attribute global-time is a global clock indicating the current time-unit. max-time is the maximal time the agents have to perform the attack, and its value will be initialised with the time computed with the theory \(\mathcal{R}_{ADT}\). The set \(SA\), computed by \(\mathcal{R}_{ADT}\), is partitioned into two sets, namely, enabled and disabled. All the non-leaf gates are in the second set, as well as the leaves which belong to a subtree that is not the first child of a sequential gate. The other (leaf) attacks are in the set enabled. The last attribute stores the representation of the ADTree (\(\llbracket T\rrbracket\)). The following operator will be useful to build the initial configuration:

```
  op make-schedule : Nat Nat NodeSet Configuration -> Configuration .
  ceq make-schedule(n, t, S, Sys) =
    { agents: make-agents(n)        --- build the list of agents
      global-time: 0
      max-time: t
      enabled: intersection(S', S)
      disabled: difference(S', S)   --- set difference
      system: Sys }
    if S' := all-attacks(Sys) .
```

where the first two parameters are, respectively, the number of agents and the total time for the attack. The third parameter is the set of enabled attacks and the last parameter the representation of the ADTree.
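As a fixed point of reference for the rules that follow, here is a hypothetical C++ mirror of this configuration, together with imperative analogues of the [time] and [pick] rules introduced next; all names are ours and durations are in normalised time units.

```
#include <set>
#include <string>
#include <vector>

// Hypothetical mirror of the scheduling configuration: one entry per
// agent ([L] :: N), a global clock, the deadline, and the partition of
// the remaining attacks into enabled and disabled ones.
struct Agent {
  std::vector<std::string> done;  // attacks handled so far (the list L)
  int remaining = 0;              // time left on the current attack (N)
};

struct Config {
  std::vector<Agent> agents;
  int globalTime = 0;
  int maxTime = 0;                 // computed beforehand with R_ADT
  std::set<std::string> enabled;   // leaf attacks that may start now
  std::set<std::string> disabled;  // attacks waiting for predecessors
};

// Analogue of rule [time]: one unit elapses for every busy agent.
void tick(Config& c) {
  for (Agent& a : c.agents)
    if (a.remaining > 0) --a.remaining;
  ++c.globalTime;
}

// Analogue of rule [pick]: assign an enabled attack to a free agent.
bool pick(Config& c, Agent& a, const std::string& atk, int duration) {
  if (a.remaining != 0 || c.enabled.erase(atk) == 0) return false;
  a.done.push_back(atk);  // the agent is now working on atk
  a.remaining = duration;
  return true;
}
```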
In what follows, we define rules to non-deterministically assign attacks to agents, move attacks from the set disabled to the set enabled and make the global time advance. Let us start with the rule assigning an attack to an agent. For the sake of readability, the parts of the configuration not modified by the rule are omitted:

```
  rl [pick] :
    { agents: SL ([L] :: 0) SL'
      enabled: (o, S)
      system: { Q', Cnf < o : C | ats, time: t > } }
  => { agents: SL ([L o] :: t) SL'
       enabled: S
       system: { Q', Cnf < o : C | ats, time: t > } } .
```

One of the enabled attacks o is assigned to a free agent (SL and SL' are lists of terms of the form [L] :: N). After the transition, the chosen agent is working on o with duration t. It is also possible for an agent to interrupt the current attack she is working on and pick another (enabled) attack. This is the purpose of the following rule:

```
  rl [inter] :
    { agents: SL ([L o] :: nt) SL'
      enabled: (o', S)
      system: { Q', Cnf < o : C | ats, time: t >
                         < o' : C' | ats', time: t' > } }
  => { agents: SL ([L o'] :: t') SL'
       enabled: (o, S)
       system: { Q', Cnf < o : C | ats, time: nt >
                          < o' : C' | ats', time: t' > } } .
```

After the transition: the attack o is back in the set enabled; the remaining time for o is updated to nt in the attribute system; and the attack o' with duration t' is scheduled. The next rule models the fact that time advances for all the (busy) agents:

```
  rl [time] : { agents: SL global-time: n }
           => { agents: minus(SL, 1) global-time: n + 1 } .
```

The function minus simply decrements by \(1\) the time needed to finish the current task for each busy agent. Since time advances by one unit and agents are free to interrupt their current task, these rules effectively model the preprocessing proposed in Section III. Now, consider the two rules below:

```
  rl [END] :
    { agents: SL global-time: n enabled: empty
      system: { Q', Cnf < Q' : C | stat: Succeed, ats > } }
  => { agents: SL } .
  crl [FAIL] :
    { global-time: n max-time: n' } => fail if n > n' .
```

The rule [END] finishes the computation when the root of the ADTree is in state Succeed and there are no more pending attacks to be executed. The second rule is conditional: if the global time \(n\) is greater than the maximal time \(n^{\prime}\), then the configuration reduces to fail. That is, the agents could not meet the deadline for the attack. To conclude, we introduce rules governing the movement of attacks between the sets enabled and disabled:

```
  rl [done] :
    { agents: SL ([L o] :: 0) SL'
      system: { Q', Cnf < o : C | ats, stat: Unknown > } }
  => { agents: SL ([L o] :: 0) SL'
       system: { Q', Cnf < o : C | ats, stat: Succeed > } } .
  rl [active] :
    { enabled: S disabled: (o, S')
      system: { Q', Cnf < o : SAND | ats, stat: Unknown, lchd: nil > } }
  => { enabled: (o, S) disabled: S'
       system: { Q', Cnf < o : SAND | ats, stat: Unknown, lchd: nil > } } .
```

If an agent has already finished the attack o, the rule [done] updates the state of o from Unknown to Succeed. The second rule enables the attack o when it is a sequential gate whose children have all already been performed (lchd = nil). Similar rules are introduced for the other gates.

**Example 2**.: _Consider the ADTree in Figure 6. The \(\mathcal{R}_{ADT}\) theory determines that the attack can be performed in \(5\) time-units with at most \(3\) agents. Starting from a configuration where the attribute agents is set to ([nil] :: 0) ([nil] :: 0) ([nil] :: 0) and max-time to \(5\), we can enumerate all the possible schedules leading to the attack. One of these includes the configuration (['d 'c 'a] :: 0) (['e 'b] :: 0) ([nil] :: 0), where the third agent was not assigned any attack._

In what follows, we use \([n,t,S,T]\) to denote the term make-schedule(n, t, S, \(\llbracket T\rrbracket\)) and \([SL]\) to denote the term { agents: SL } (see the RHS in rule [END]).
**Theorem 4** (Correctness).: _Let \(T\) be an ADTree. \(\mathcal{R}^{A}_{ADT}\vdash[n,t,S,T]\longrightarrow![SL]\) iff there is an attack in \(T\) with \(n\) agents and time \(t\) where all the attacks in \(S\) are performed._

Proof.: As in Theorem 1, the close correspondence between steps in the attack and rules in \(\mathcal{R}^{A}_{ADT}\) allows us to rebuild the attack in \(T\) from the derivation in \(\mathcal{R}^{A}_{ADT}\) (\(\Rightarrow\)) and vice-versa (\(\Leftarrow\)).

### _Heuristics and strategies_

As illustrated in Examples 1 and 2, it is possible to explore the reachable state space generated from a given term. The search command uses a breadth-first strategy: for each node of the search tree, all the rules, with all possible matchings, are applied to produce the next level in the search tree. This guarantees completeness: if \(\mathcal{R}\vdash t\longrightarrow^{*}t^{\prime}\) then the search command will eventually find \(t^{\prime}\). The search space generated by terms in the theories \(\mathcal{R}_{ADT}\) and \(\mathcal{R}^{A}_{ADT}\) is certainly finite but it can grow very fast, especially in \(\mathcal{R}^{A}_{ADT}\). Hence, for more complex ADTrees, the search procedure will not terminate in a reasonable time. In this section we show how to control the non-determinism in the proposed theories. The result is a decision procedure that can be effectively used in the case studies presented in Section V.

**Strategy for \(\mathcal{R}_{ADT}\)**. By inspecting the rules in the theory \(\mathcal{R}_{ADT}\), we can observe that there are different sources of non-determinism that can be controlled (without losing solutions). For instance, the last two rules for the OR gate (failing child and no more children) can be eagerly applied: any interleaving with those rules will produce the same effect. Note that this is not the case for the first OR rule: different choices for matching the pattern (o, S) produce different results and all the possibilities need to be explored. Now consider the rules for the (parallel) AND gate. A failing child implies the failing of the gate, regardless of the state of the other children. Moreover, given two children in state Succeed, it is irrelevant which one is considered first in an application of the first rule (pattern (o, S)). This is the case since the function acc-max accumulates values using + and max, both commutative operations. Now let us explore the rules for the nodes in the leaves of the ADTree. Consider [ATKOK] and [ATKNOK] and the gate GA in Figure 2. This attack succeeds only if either \(h\) or \(e\) succeeds. If both succeed, the [OR] rule discards one of them. In other words, when the rule [OR] is applied, the status of the discarded children S in the pattern (o, S) is irrelevant and we can safely assume that the attacks in the subtree S were not performed. This means that we can dispense with the application of [ATKNOK] and rely on the rule [OR] to explore all the possible configurations. Also, the rules for defences are both needed: the activation or not of a defence limits the attacks that can be accomplished.

Strategies [34] provide a mechanism for controlling the way rules are applied in a given theory. In Maude, this is implemented with the help of a strategy language that tells the rewriting engine how to explore the state space. The command srew T using STR rewrites the term T according to the strategy expression STR and returns all its possible results. The basic building block in the strategy language is the application of a rule.
For instance, the command srew T using OR will apply the rule [OR] in all possible ways on the term T. As discussed above, if there are different matchings for the application of [AND], all of them lead to the same result. The strategy one(AND) applied to a term T succeeds if [AND] matches, possibly in different ways, but only one matching is considered and the others discarded. Strategies can be defined by using constructors similar to regular expressions (see the complete list in [8, Section 4]): idle (identity); empty set / no solution (fail); concatenation (\(\alpha;\beta\)); disjunction (\(\alpha\mid\beta\)); iteration (\(\alpha^{*}\)); conditional application, \(\alpha\) ? \(\beta\) : \(\gamma\), where \(\beta\) is applied on the resulting terms after the application of \(\alpha\), or \(\gamma\) is applied if \(\alpha\) does not produce any result. From these, it is possible to define: \(\alpha\) or-else \(\beta\), which executes \(\beta\) if \(\alpha\) fails; and the normalisation operator \(\alpha!\), which applies \(\alpha\) until it cannot be further applied. Consider the following strategy:

```
deter := (one(ATKOK) or-else one(NOT) or-else one(ORD)
          or-else one(SAND) or-else one(PAND)) ! .
solve := (ChoiceOK | ChoiceNOK) ! ; (deter ; (END or-else OR)) ! .
```

where ORD refers to the second and third rules for the OR gate. The strategy deter applies the confluent rules until a fixed point is reached (!). The strategy solve first explores all the configurations for the defences (active or inactive). Then, the confluent rules are eagerly applied. Next, if the [END] rule can be applied, the computation finishes: the rules for gates do not apply on the resulting term on the RHS of [END], and a further application of deter necessarily fails. If this is not the case, the [OR] rule is tried. If there are no more OR gates in the configuration, the strategy fails. Otherwise, the [OR] rule is applied (considering all possible matchings) and the confluent rules are used again.

Fig. 6: Interrupted schedule example

Recall that final/irreducible configurations can be either \(\{C\}\) (RHS in [END]) or \(\{Q;C\}\) where the gate \(Q\) is in state Fail and all the other nodes are in a state different from Unknown.

**Theorem 5** (Completeness).: \(\mathcal{R}_{ADT}\vdash\{Q;C\}\longrightarrow!\{C^{\prime}\}\) _iff the configuration \(\{C^{\prime}\}\) is reachable from \(\{Q;C\}\) following the strategy solve._

Proof.: As explained above, the final outcome of the attack depends on the defences and the choices in OR gates (if any). Consider the rules applied in the derivation \(\{Q;C\}\longrightarrow!\{C^{\prime}\}\) (where the last one is necessarily [END]). The activation or not of a defence does not depend on any other action (leaf nodes). Hence, we can permute the application of those rules to the beginning of the derivation. Due to the commutativity of the operations for accumulating values (max and +), we can also rearrange the application of the rules (except [OR]) following deter. Note that the rule [ATKNOK] may appear in the derivation. However, this is only possible in the scope of a subtree discarded in an OR gate. Hence, we still have a valid derivation without using that rule.

**Non-determinism in \(\mathcal{R}^{A}_{ADT}\)**. Now let us consider the theory \(\mathcal{R}^{A}_{ADT}\), which exhibits many sources of non-determinism. The [pick] rule can select any enabled element o and schedule it for any free ([L] :: 0) agent.
Since in the current model agents have the same abilities to perform any of the attacks, we may impose an additional restriction in this rule: all the agents in the list SL must be working (remaining time different from zero). Hence, [pick] will schedule o to the first free agent in the list, thus eliminating some (unnecessary) choices.

The rules [pick] and [time] can be interleaved in many ways. One might be tempted to restrict the application of [time] to configurations where either there is no enabled activity or all the agents are busy. Let us call this strategy PBT ([pick] before [time]). Since \(\mathcal{R}_{ADT}\) computes an upper bound for the number of agents, the strategy PBT cannot be used to compute the minimal set of agents: it will enforce the use of all of them. An approach to circumvent this problem is the following. Assume that for a given ADTree, \(\mathcal{R}_{ADT}\) finds an attack with a number of agents \(n\). Then, execute \(\mathcal{R}^{A}_{ADT}\) with the strategy PBT on a configuration of \(i\) agents, iterating \(i\) from \(1\) to \(n\). The first value for \(i\in 1..n\) that succeeds will correspond to the optimal number of agents. The easiest way to enforce PBT is by adding an extra condition to [time]:

```
crl [time] :
   { agents: SL  global-time: n  enabled: S }
=> { agents: minus(SL, 1)  global-time: n + 1  enabled: S }
if all-busy(SL) or (some-not-busy(SL) and S == empty) .
```

Hence, time advances only if all the agents are currently working or the set of enabled attacks is empty. There is one extra source of non-determinism that we can control. The [pick] rule, in its current form, can choose any of the enabled activities. How can we guide such a choice? The answer is in the algorithm in Section IV: choose by levels and prioritise the activities with higher depth. Based on the level and depth, we can define the strict lexicographical total order \((l,d,id)\prec(l^{\prime},d^{\prime},id^{\prime})\) iff \(l<l^{\prime}\) (first nodes with higher levels); or \(l=l^{\prime}\) and \(d<d^{\prime}\) (priority to higher depth); or \(l=l^{\prime}\), \(d=d^{\prime}\) and \(id<id^{\prime}\) (needed to break ties on activities with the same level and depth). Hence, the rule [pick] becomes:

```
crl [pick] :
   { agents: SL ([L] :: 0) SL'
     enabled: (o, S)
     system: { Q ; Cnf < o : C | ats, time: t > } }
=> { agents: SL ([L o] :: t) SL'
     enabled: S
     system: { Q ; Cnf < o : C | ats, time: t > } }
if all-busy(SL)       --- [L] :: 0 is the first free agent
/\ o == max(o, S) .   --- o is the maximum wrt the order <
```

Let \(\mathcal{R}^{\prime A}_{ADT}\) be as \(\mathcal{R}^{A}_{ADT}\) but replacing [time] and [pick] with the conditional rules above.

**Theorem 6** (Correctness).: _Let \(T\) be an ADTree and suppose that \(\mathcal{R}_{ADT}\) finds an attack with time \(t\) and number of agents \(n\) using the set of attacks \(S\). If \(\mathcal{R}^{A}_{ADT}\vdash[n,t,S,T]\longrightarrow^{*}[SL]\) and \(m\) agents in SL were not assigned any task, then \(\mathcal{R}^{\prime A}_{ADT}\vdash[n-m,t,S,T]\longrightarrow^{*}[SL^{\prime}]\) where \(SL^{\prime}\) is as \(SL\) but with the \(m\) (unused) agents removed._

Proof.: Assume that in a given state, there are two enabled attacks \(o\) and \(o^{\prime}\) with \(o^{\prime}\prec o\). \(\mathcal{R}^{A}_{ADT}\) may pick either \(o\) or \(o^{\prime}\), while \(\mathcal{R}^{\prime A}_{ADT}\) is forced to pick \(o\). We show that any derivation meeting the deadline necessarily chooses \(o\). Let \(X\) be the common ancestor of \(o\) and \(o^{\prime}\).
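The two restrictions just imposed, scheduling the maximal enabled attack with respect to \(\prec\) and iterating over the number of agents, can be summarised in a few lines of Python. This is an illustrative sketch only; `run_pbt` is a hypothetical stand-in for exploring \(\mathcal{R}^{\prime A}_{ADT}\) under the PBT strategy, and nodes are assumed to carry `level`, `depth` and `id` fields:

```python
def pick_next(enabled):
    """Choose the maximal enabled attack w.r.t. the order defined above:
    higher level first, then higher depth, then identifier as tie-breaker."""
    return max(enabled, key=lambda node: (node.level, node.depth, node.id))

def minimal_agents(run_pbt, n, t, attacks, tree):
    """Iterate i = 1..n (the upper bound computed by R_ADT) and return the
    first agent count for which a schedule meeting deadline t exists."""
    for i in range(1, n + 1):
        if run_pbt(i, t, attacks, tree):
            return i
    return None   # unreachable if R_ADT reported a feasible attack
```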
Since both actions are enabled, \(X\) is necessarily an AND gate. The minimal remaining time \(mt\) for \(X\) is bounded by the maximum of the time needed to perform the actions on the path from \(o\) to \(X\) (say \(t\)) and the time needed to perform the path from \(o^{\prime}\) to \(X\) (say \(t^{\prime}\)). Since \(o^{\prime}\prec o\), then \(t^{\prime}<t\). Suppose, to obtain a contradiction, that at a given time, \(o^{\prime}\) is scheduled and \(o\) is not. When the time advances, \(t^{\prime}\) is decremented but \(t\) remains the same. Hence, the deadline \(mt\) for \(X\) cannot be met.

### _Results_

In the repository [33] of ADT2MAUDE, the reader can find the complete specification of the proposed rewrite theories. A script written in Python, using the bindings for Maude ([https://github.com/fadoss/maude-bindings](https://github.com/fadoss/maude-bindings)), translates the input format for ADTrees used in ADT2AMAS and produces a term representing the tree (\(\llbracket T\rrbracket\)). Then, the analyses for finding the minimal time and the optimal schedule are performed. The resulting schedules coincide with those reported in Section V. Even though the specialised algorithm outperforms Maude in most cases, Table I shows that the specification is useful in practice. Additional benchmarks can be found at [https://bit.ly/3ONeSzq](https://bit.ly/3ONeSzq).

Being declarative (since the behaviour is easily described by rules) and based on a search procedure, the rewriting logic specification is easily extensible to consider other constraints and metrics on ADTrees. For instance, the algorithm (and the optimisation in \(\mathcal{R}^{\prime A}_{ADT}\)) assumes that agents can interrupt an activity and start another one. We may add, as an additional attribute, that such an interruption requires additional time when the tasks are not executed in the same room. It is also possible to specify different kinds of agents where only some of them are trained for specific tasks. The RL approach also opens the possibility of considering multi-objective optimisations including the cost, time and number of agents needed to perform the attack.

## VII Conclusion

This paper has presented an agent-scheduling algorithm that allows for evaluating attack/defence models. It synthesises a minimal number of agents and their schedule, providing insight to both parties as to the number of agents and actions necessary for a successful attack, and the defences required to counter it. We have also presented an executable rewrite theory to solve the same problem. The specialised algorithm inspired some optimisations that allowed us to reduce the state space and show that the specification can be used in practice. The declarative model in RL opens different alternatives to consider other constraints and quantitative measures on ADTrees. We thus obtain a complete framework for not only the analysis but also the synthesis of agent configurations and schedules to achieve a given goal in a multi-agent system. Targeting more elaborate goals, expressed in the TATL logic [35], will allow for analysing more general multi-agent systems and their properties. Also, we plan to use rewriting modulo SMT [36] to encode configurations induced by OR and defence nodes and perform symbolic analysis [8] on ADTrees.
2303.17282
Diagnosis of 3D magnetic field and modes composition in MHD turbulence with Y-parameter
Magnetic fields are crucial in numerous astrophysical processes within the interstellar medium. However, the detailed determination of magnetic field geometry is notoriously challenging. Based on the modern magnetohydrodynamic (MHD) turbulence theory, we introduce a novel statistical technique, the "Y-parameter", to decipher the magnetic field inclination in the ISM and identify dominant turbulence modes. The Y-parameter, calculated as the ratio of anisotropies of different Stokes parameter combinations, displays contrasting trends with the mean-field inclination angle in Alfv\'enic and compressible turbulence modes. A Y-parameter value around $1.5\pm0.5$ provide a statistical boundary to determine the dominant MHD turbulence modes. We have discovered specific correlations between the Y-parameter value and the inclination angle that unveil the dominant turbulence mode. This methodology, when applied to future radio polarisation surveys such as LOFAR and SKA, promises to significantly enhance our knowledge of 3D magnetic field in the ISM and improve our understanding of interstellar turbulence.
Sunil Malik, Ka Ho Yuen, Huirong Yan
2023-03-30T10:44:31Z
http://arxiv.org/abs/2303.17282v2
# Diagnosis of 3D magnetic field and modes composition in MHD turbulence with Y-parameter

###### Abstract

Magnetic field is ubiquitous in interstellar media and channels turbulent flow from kilo-parsec to sub-parsec scales in both the diffuse interstellar medium (ISM) and molecular clouds. The determination of magnetohydrodynamic (MHD) turbulence and 3D magnetic field properties in the ISM is notoriously difficult. In this study, we establish a statistical recipe, the "Y-parameter", based on recent developments in the statistical theory of turbulence, namely turbulence anisotropy analysis, to reconstruct 3D magnetic fields in MHD simulations and understand the mode decomposition of MHD turbulence. In our analysis, we used 25 MHD turbulence datacubes simulated using the ZEUS-MP and Athena++ codes. We found that the anisotropy of the Stokes parameters can act as a diagnostic for retrieving the magnetic field inclination in the ISM and identifying the dominant turbulence mode. This is supported by the separation in value space of the Y-parameter for decomposed Alfvenic and compressible MHD cubes, which decreases and increases with the mean-field inclination angle, \(\theta_{\lambda}\), respectively. In the total-cube analysis, Y \(\sim\) 1.5 (with Y \(>\) 1.5 for the A-mode and Y \(<\) 1.5 for the C-mode) provides a statistical demarcation to obtain the dominant fraction of MHD turbulence modes in the region. Furthermore, we have found that (i) if Y \(\gtrsim\) 2.5, then 10\({}^{\circ}\)\(<\theta_{\lambda}<\) 30\({}^{\circ}\) and A-mode, or 5\({}^{\circ}\)\(<\theta_{\lambda}<\) 10\({}^{\circ}\) and C-mode; (ii) if Y \(\lesssim\) 1.0, then \(\theta_{\lambda}\)\(\lesssim\) 5\({}^{\circ}\) and C-mode, or \(\theta_{\lambda}\)\(\gtrsim\) 60\({}^{\circ}\) and A-mode; (iii) 40\({}^{\circ}\)\(\lesssim\theta_{\lambda}\)\(\lesssim\) 60\({}^{\circ}\) with A-mode, or \(\theta_{\lambda}\)\(\gtrsim\) 70\({}^{\circ}\) with C-mode, if the Y-parameter is in the intermediate range. As a consequence, with the availability of vast radio polarisation surveys in the future, this technique can play a leading role in detecting the 3D magnetic field in the ISM and characterizing the nature of interstellar turbulence.

keywords: Synchrotron radiation, magnetic fields - polarization, Stokes parameters, general - interstellar medium - techniques: Astrophysical Plasma turbulence

## 1 Introduction

The ISM is a complex, multi-phase environment composed of gas, dust, and magnetic fields. Its magnetized and turbulent nature provides a good environment to study magnetohydrodynamic (MHD) turbulence in our galaxy (Elmegreen & Scalo, 2004; Draine, 2011). MHD turbulence is important for a wide range of astrophysical processes, such as the formation and evolution of stars, the transport of energy and momentum, cosmic ray scattering and acceleration (Yan & Lazarian, 2002, 2004, 2008; Yan, 2022; Lemoine, 2022), and the amplification of magnetic fields (Lazarian et al., 2020). Observational detection of the properties of MHD turbulence in the ISM is essential for a comprehensive understanding of the ISM and the processes that take place within it. The statistics of turbulence that we detect are highly dependent on the 3D magnetic field projection, particularly since MHD turbulence is in general anisotropic. The 3D inclination of the magnetic field in interstellar media and its relation to other physical processes is therefore one of the most important scientific questions in the astrophysical community.
However, determining the properties of the magnetic field, in particular, its interplay with ubiquitous interstellar turbulence, is notoriously difficult. Measurement of magnetic field properties mainly relies on two popular observational techniques: polarimetry from synchrotron radiation or dust emission/absorption that only gives the line-integrated or plane of sky magnetic field direction (Lazarian & Hoang, 2007; Andersson et al., 2015), and Zeeman splitting that gives line-of-sight magnetic field strength in dense clouds (Crutcher, 1999; Chepurnov et al., 2010). Recent effort based on atomic alignment in the magnetic field suggests that the 3D magnetic field topology could possibly be measured (Yan & Lazarian, 2006, 2007, 2008, 2012), but currently restricted to metal absorption lines due to instrumental restrictions (Zhang et al., 2020). The search of 3D magnetic field and its underlying relation to turbulence is therefore in a deadlock. Several efforts have been made for the identification of MHD turbulence nature in the interstellar medium using different observations. One common method is to measure the power spectrum of electron density fluctuations in the ISM (Armstrong et al., 1995; Chepurnov & Lazarian, 2010), which can reveal the presence of turbulent motions. Other methods include studying the distribution of velocities in the ISM using spectroscopic observations(Lazarian et al., 2004; Kandel et al., 2016), and mapping the distribution of magnetic fields using polarization measurements. The polarized radiation reflects on the fluctuations in the embedded magnetic fields caused due to the turbulence, which in turn allows us to study its strength and morphology. Recent theoretical development on magnetized turbulence theory suggests that the properties of the magnetic field are encoded in the statistics of MHD turbulence (Yan & Lazarian, 2004; Lazarian & Pogosyan, 2012; Makwana & Yan, 2020). Conceptually, MHD turbulence can primarily be decomposed into three modes: Alfven mode, and the fast and slow magnetosonic modes (also known as magneto-acoustic modes) (Cho & Lazarian, 2003). Magnetic field lines are stretched differently by Alfven and magnetosonic modes and therefore the statistics of magnetic field observables are different. Utilizing this fact, Zhang et al. (2020) and later Yuen et al. (2023) both suggest that the statistics of polarized synchrotron radiation reflect the fluctuations in the embedded magnetic fields caused by the turbulence, which in turn allows us to study the strength and morphology of the magnetic field. The structure of the paper is as follows. In SS2, we discuss our theoretical construction in measuring the line of sight angle from the theory of turbulence statistics. The numerical simulation setup and the observables constructed from it are described in SS3. Our detailed analysis and results for decomposed and total cubes can be found in SS4. We discuss the impact of our results in SS5 and we conclude our paper at SS6. ## 2 The essence of the Y-parameter analysis ### Mapping theory of turbulence statistics and their discrepancies Let us first briefly summarize the "Y-parameter" analysis proposed in Yuen et al. (2023). The fundamental question that statistical theory of MHD turbulence based on the axis-symmetric assumption (Lazarian & Pogosyan, 2000; Yan & Lazarian, 2002; Lazarian & Pogosyan, 2004, 2012, 2016; Kandel et al., 2016; Yuen et al., 2021) wants to resolve is "how observational statistics are mapped from the 3D turbulence statistics". 
These series of works usually assume given statistics of the 3D turbulence variables (i.e., the 3D density \(\rho\), the 3D turbulent velocity \(\mathbf{v}\) and the magnetic field \(\mathbf{B}\)) in the form of spectral slopes (Armstrong et al., 1995; Chepurnov et al., 2010; Yuen et al., 2022), anisotropy measures (Cho & Lazarian, 2002, 2003; Esquivel & Lazarian, 2005) and tensor structures (Yan & Lazarian, 2002; Kandel et al., 2016, 2017; Zhang et al., 2020). The mapping of the 3D statistics to observable statistics is highly nontrivial both in the case of interferometry (Lazarian & Pogosyan, 2000; Yuen et al., 2021) and polarimetry (Lazarian & Pogosyan, 2012, 2016). Notably, both the line-of-sight angle and the energy fractions of the MHD modes are stored nonlinearly in the statistics (i.e., spectrum, anisotropy, and tensors) of observables. Attempts at retrieving the line-of-sight angle and mode fractions from observational data have been made. For instance, earlier attempts asserted that the line-of-sight angle can be estimated by inspection of polarization percentages (Clark et al., 2015). However, these methods are subject to strong nonlinear interference from the "3D \(\rightarrow\) observable" projection, and therefore their accuracy is questionable. In parallel, a _qualitative_ analysis of the mode fraction was proposed by Zhang et al. (2020), yet there is currently no way to retrieve the actual quantitative fraction of MHD modes in observations.

### Y-parameter science

Very recently, based on the "mapping theory" of MHD turbulence statistics (Lazarian & Pogosyan, 2012, 2016), Yuen et al. (2023) proposed how to retrieve both the line-of-sight angle \(\theta_{\lambda}\) and the mode fraction concurrently via inspection of observable statistics, namely the "Y-parameter analysis". Suppose \(X\) is a 2D observable; then \(D_{X}\) denotes the global (second-order) structure function of the observable \(X\):

\[D_{X}(\mathbf{R})=\left\langle\left(X(\mathbf{R}^{\prime})-X(\mathbf{R}^{\prime}+\mathbf{R})\right)^{2}\right\rangle_{\mathbf{R}^{\prime}}, \tag{1}\]

where one can always write \(D_{X}(\mathbf{R})=D_{X}(R,\phi)\) via a series of multipoles (c.f. Kandel et al., 2016):

\[D_{X}(R,\phi)=\sum_{m=0,1,2,\ldots}^{\infty}D_{m}(R)\cos(m\phi) \tag{2}\]

where the odd terms in \(m\) are zero due to \(D_{X}\) being even. The mapping theory (Lazarian & Pogosyan, 2000, 2004, 2012, 2016; Kandel et al., 2016, 2017) assumes that the quadrupole-to-monopole ratio (\(D_{4}/D_{0}\)) and higher order terms are small, so that \(\theta_{\lambda}\) enters at most quadratically in the statistics of \(D_{X}\). This assumption was used in previous analyses of anisotropy-related methods (e.g. Lazarian et al., 2022). However, the appendix of Yuen et al. (2023) showed that in the case of small \(M_{A}\) none of the \(D_{m\geq 4}/D_{0}\) are vanishing, raising the concern of whether studying the statistics of \(D_{2}\) is sufficient to describe the full fluctuations of observables. Yuen et al. (2023) pointed out that the two-point statistics of observables of the same origin from the same turbulence region carry the same spectrum and anisotropy factors. For instance, Stokes parameters in the case of synchrotron emission come mostly from combinations of magnetic fields, whose statistics are derived from the complete functional forms of the spectral, anisotropy, and tensor functions (c.f. Yan & Lazarian, 2002; Yuen et al., 2023).
Distinct Stokes parameters typically possess identical spectral indices and anisotropy scalings; however, their tensor functions, which depend on the line-of-sight angle, have different forms. Consequently, the observed statistics of Stokes Q and U, for instance, exhibit differences. Since we usually consider second-order statistics, the tensor functions are at most second order (see, e.g. Lazarian & Pogosyan, 2000; Kandel et al., 2016). As a result, the ratio of two-point statistics from two observables is at most quadratic in \(\theta_{\lambda}\). Based on the aforementioned principles, Yuen et al. (2023) suggest that the following parameter

\[Y=\frac{\mathrm{Anisotropy}(D_{I+Q})}{\mathrm{Anisotropy}(D_{I-Q})}=\frac{v/h(D_{I+Q})}{v/h(D_{I-Q})} \tag{3}\]

is a measure of the line-of-sight angle \(\theta_{\lambda}\) and the mode fraction, where \(v\) and \(h\) indicate the lengths in the vertical and horizontal directions with respect to the plane-of-sky B-field. This Y-parameter is used in the remainder of the paper.

## 3 Method

### Numerical simulations

We employ two publicly available MHD codes to simulate magnetized turbulence under different physical conditions: ZEUS-MP/3D (Hayes et al., 2006) and Athena++ (Stone et al., 2020). We run our simulations for at least 2 sound crossing times (\(\tau_{s}=L_{box}/c_{s}\)). Our data cubes are time series of three-dimensional, triply periodic, isothermal MHD simulations with continuous force driving via Ornstein-Uhlenbeck forcing, where the smoothing is controlled by \(t_{corr}=0.01\tau_{s}\). The energy injection rate is adjusted so that various Alfvenic Mach numbers, \(M_{A}\), and plasma \(\beta\) are simulated. The injection is performed so that only eddies with scales \(L_{inj}/L_{Max}\geq 1/2\) are injected, which corresponds to \(0\leq|\mathbf{k}|\leq 2\). The driving force contains both incompressible and compressible driving, controlled by a free parameter \(\zeta\):

\[\mathbf{f}=\mathbf{f}_{sol}\,\zeta+\mathbf{f}_{comp}\,(1-\zeta) \tag{4}\]

where \(\nabla\cdot\mathbf{f}_{sol}=0\). A summary of the simulation parameters is given in Table 1. In our calculations, all physical parameters are set to unity unless specified. Notice that isothermal simulations are scale-free, and therefore units are not an issue in our calculation.

### Synthesis of synchrotron polarization observables

In general, the synchrotron emission depends on the distribution of relativistic electrons

\[N(\mathcal{E})d\mathcal{E}\sim\mathcal{E}^{\alpha}d\mathcal{E}, \tag{5}\]

with the intensity of the synchrotron emission being

\[I_{syn}(\mathbf{X})\propto\int dz\,B_{\perp}^{2}(\mathbf{X},z) \tag{6}\]

where \(\mathbf{X}=(x,y)\) is the 2D plane-of-sky (POS) position vector and \(B_{\perp}=\sqrt{B_{x}^{2}+B_{y}^{2}}\) is the magnitude of the magnetic field perpendicular to the line of sight in the \(z\)-direction. In general, the exponent of \(B_{\perp}\) is \(\eta=0.5(\alpha+1)\), a fractional power, which was a serious problem that was successfully addressed in Lazarian & Pogosyan (2012). There it was proven that the statistics of \(I(\alpha)\) are similar to those of \(I(\alpha=3)\). Therefore it suffices to discuss the statistical properties of the case \(\alpha=3\).
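Before turning to the Faraday-rotated case, it helps to see how synthetic Stokes maps are produced for \(\alpha=3\). The following numpy sketch is our own illustrative code, assuming the line of sight along the last array axis and the sign convention in which \(I+Q\) and \(I-Q\) reduce to the \(\int B_{x}^{2}dz\) and \(\int B_{y}^{2}dz\) integrals used in the analysis of §4:

```python
import numpy as np

def stokes_maps(Bx, By):
    """Synthetic synchrotron Stokes maps for alpha = 3 (eta = 2) and
    negligible Faraday rotation; Bx, By are 3D plane-of-sky field
    components, and the line-of-sight integral is a sum over axis=2."""
    I = np.sum(Bx**2 + By**2, axis=2)     # total intensity, Eq. (6)
    Q = np.sum(Bx**2 - By**2, axis=2)     # so that I+Q ~ Bx^2, I-Q ~ By^2
    U = np.sum(2.0 * Bx * By, axis=2)
    theta_B = 0.5 * np.pi + 0.5 * np.arctan2(U, Q)  # Eq. (9) below
    return I, Q, U, theta_B
```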
Per Lazarian & Pogosyan (2012), the synchrotron complex polarization function _with Faraday rotation_ is given by:

\[P_{syn}(\mathbf{R})=\int dz\,\epsilon_{syn}\,\rho_{rel}\,B_{\perp}^{2}\,e^{2i\theta(\mathbf{R},z)+2iC\lambda^{2}\Phi(\mathbf{R},z)} \tag{7}\]

where \(\epsilon_{syn}\) is the emissivity of synchrotron radiation, \(\theta(\mathbf{R},z)\) is the local polarization angle, and

\[\Phi(\mathbf{R},z)=\int_{-\infty}^{z}dz^{\prime}\,(4\pi)^{-1/2}\,\rho_{thermal}(\mathbf{R},z^{\prime})\,B_{z}(\mathbf{R},z^{\prime})\ \mathrm{rad\ m^{-2}} \tag{8}\]

is the Faraday rotation measure1. Notice that \(\rho_{rel}\) is the relativistic electron density, while \(\rho_{thermal}\) is the thermal electron density. The C-factor \(\approx 0.81\) (Lazarian et al., 2017; Malik et al., 2020). The projected magnetic field orientation is then given by:

Footnote 1: It is usually more convenient to use \(H_{z}=B_{z}/\sqrt{4\pi}\) for analysis.

\[\theta_{B}=\frac{\pi}{2}+\frac{1}{2}\tan_{2}^{-1}(U,Q) \tag{9}\]

where \(\tan_{2}^{-1}\) is the 2-argument arc-tangent function. In the current paper, we assume that \(\lambda\ll 1\) so that the effect of Faraday rotation is close to zero. Also, we will consider only the statistics of \(\eta=2\) (i.e. \(\alpha=3\)).

Figure 1: A figure showing how polarized synchrotron emissions store the information of the 3D magnetic field. Panel (1): Synchrotron emissions are stored in a spectral-spatial 3D cube \(P=P(x,y,\lambda)\), where the third axis is in units of wavelength and lies along the line of sight. (2) For each \(\lambda\), the magnetic field direction is stored in the synchrotron polarization angle, _assuming the Faraday rotation is weak_. (3) The anisotropy of the Stokes parameters stores the information of the magnetic field inclination. (4) With all this information combined, the 3D magnetic field angle can therefore be reconstructed with appropriate mathematical procedures.

### Mode fraction analysis

The MHD mode displacement vectors in the case of isothermal MHD are given by (Cho & Lazarian, 2003):

\[\hat{\zeta}_{A}(\hat{\mathbf{k}},\hat{\lambda})\propto\hat{\mathbf{k}}\times\hat{\lambda}\]
\[\hat{\zeta}_{s}(\hat{\mathbf{k}},\hat{\lambda})\propto(-1+\alpha-\sqrt{D})(\hat{\mathbf{k}}\cdot\hat{\lambda})\hat{\lambda}+(1+\alpha-\sqrt{D})\,\hat{\lambda}\times(\hat{\mathbf{k}}\times\hat{\lambda}) \tag{10}\]
\[\hat{\zeta}_{f}(\hat{\mathbf{k}},\hat{\lambda})\propto(-1+\alpha+\sqrt{D})(\hat{\mathbf{k}}\cdot\hat{\lambda})\hat{\lambda}+(1+\alpha+\sqrt{D})\,\hat{\lambda}\times(\hat{\mathbf{k}}\times\hat{\lambda})\]

where \(\alpha=\beta\Gamma/2\), \(D=(1+\alpha)^{2}-4\alpha\cos^{2}\theta_{\lambda}\), \(\cos\theta_{\lambda}=\hat{\mathbf{k}}\cdot\hat{\lambda}\), plasma \(\beta\equiv P_{gas}/P_{mag}\) measures the compressibility, and \(\Gamma=\partial P/\partial\rho\) is the polytropic index of the adiabatic equation of state. The presence of \(\hat{\mathbf{k}}\) indicates that the directions of the three mode vectors change as \(\mathbf{k}\) changes.
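As a concrete illustration of Eq. (10), the three displacement directions can be evaluated per wavevector as follows. This is a minimal sketch of our own of the projection basis used in mode decomposition; the full decomposition additionally Fourier-transforms the fields and projects them onto these vectors for every \(\mathbf{k}\):

```python
import numpy as np

def mode_vectors(k_hat, lam_hat, beta, Gamma=1.0):
    """Unnormalised Alfven/slow/fast displacement directions of Eq. (10)
    for unit wavevector k_hat and unit mean-field direction lam_hat."""
    alpha = 0.5 * beta * Gamma
    cos_t = float(np.dot(k_hat, lam_hat))
    D = (1.0 + alpha)**2 - 4.0 * alpha * cos_t**2     # always >= 0
    in_plane = np.cross(lam_hat, np.cross(k_hat, lam_hat))
    zeta_A = np.cross(k_hat, lam_hat)                 # vanishes if k || lambda
    zeta_s = (-1 + alpha - np.sqrt(D)) * cos_t * lam_hat \
             + (1 + alpha - np.sqrt(D)) * in_plane
    zeta_f = (-1 + alpha + np.sqrt(D)) * cos_t * lam_hat \
             + (1 + alpha + np.sqrt(D)) * in_plane
    return zeta_A, zeta_s, zeta_f
```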
In this scenario, the perturbed quantities, say the velocity fluctuations \(\mathbf{v}_{1}=\mathbf{v}-\langle\mathbf{v}\rangle\), can be written as:

\[\mathbf{v}_{1}(\mathbf{r})=\int d^{3}\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{r}}\sum_{x\in\{A,s,f\}}F_{0,x}(\mathbf{k})\,F_{1,x}(\hat{\mathbf{k}},\hat{\lambda})\,C_{x}\,\hat{\zeta}_{x}(\hat{\mathbf{k}},\hat{\lambda}) \tag{11}\]

The magnetic field is simply given by an additional rotation of \(\tan\theta_{\lambda}\) from the P(otential)-C(ompressible)-A(lfven) frame (\(\hat{\zeta}_{P}=\hat{\mathbf{k}}\), \(\hat{\zeta}_{A}=\hat{\mathbf{k}}\times\hat{\lambda}\), \(\hat{\zeta}_{C}=\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\hat{\lambda})\)). The PCA frame has a special advantage since the sampling of \(\mathbf{k}\) is usually complete in \(d\Omega_{\lambda}\). That means we have the freedom to fix \(\mathbf{k}\) while the other unit vectors change. From the tensor product, we can always write an arbitrary vector in Fourier space as:

\[\zeta_{i}(\mathbf{k})=C_{P}\,\hat{k}_{i}+C_{C}\,\frac{(\hat{\mathbf{k}}\times(\hat{\mathbf{k}}\times\hat{\lambda}))_{i}}{|\hat{\mathbf{k}}\times\hat{\lambda}|}+C_{A}\,\frac{(\hat{\mathbf{k}}\times\hat{\lambda})_{i}}{|\hat{\mathbf{k}}\times\hat{\lambda}|} \tag{12}\]

which we will name the unit vectors \(\zeta_{P,C,A}\).

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
Model & Code & Sonic Mach & Alfvenic Mach & Plasma Beta & Injection Rate & Resolution & C-Mode Fraction \\
Name & & \(M_{s}\) & \(M_{A}\) & \(\beta\) & \(\epsilon\) & \(N_{x}\) & \(f_{C}\) \\
\hline \hline
AS7 & Athena++ & 1.47 & 0.30 & 0.08 & 1.0 & 512 & 0.18 \\ Z15 & ZEUS-MP & 2.13 & 0.52 & 0.12 & 0.1 & 512 & 0.24 \\ Z12 & ZEUS-MP & 1.06 & 0.26 & 0.12 & 0.01 & 512 & 0.24 \\ AS5 & Athena++ & 1.43 & 0.29 & 0.08 & 1.0 & 512 & 0.27 \\ AS6 & Athena++ & 0.29 & 0.13 & 0.40 & 0.1 & 512 & 0.30 \\ AS3 & Athena++ & 1.40 & 0.29 & 0.08 & 1.0 & 512 & 0.32 \\ AS4 & Athena++ & 0.28 & 0.13 & 0.45 & 0.1 & 512 & 0.33 \\ Z8 & ZEUS-MP & 0.44 & 0.45 & 2.06 & 0.001 & 512 & 0.34 \\ Z9 & ZEUS-MP & 0.45 & 0.11 & 0.12 & 0.001 & 512 & 0.35 \\ AS1 & Athena++ & 1.33 & 0.27 & 0.08 & 1.0 & 512 & 0.35 \\ Z11 & ZEUS-MP & 0.93 & 0.99 & 2.28 & 0.01 & 512 & 0.36 \\ Z5 & ZEUS-MP & 0.15 & 0.15 & 2.00 & 0.0001 & 512 & 0.39 \\ AS2 & Athena++ & 0.26 & 0.12 & 0.42 & 0.1 & 512 & 0.41 \\ Z4 & ZEUS-MP & 0.15 & 0.61 & 32.88 & 0.0001 & 512 & 0.44 \\ A7 & Athena++ & 0.18 & 0.10 & 0.61 & 0.1 & 512 & 0.44 \\ A9 & Athena++ & 0.18 & 0.10 & 6.47 & 0.1 & 512 & 0.45 \\ Z6 & ZEUS-MP & 0.14 & 0.04 & 0.13 & 0.0001 & 512 & 0.49 \\ A8 & Athena++ & 0.03 & 0.04 & 3.55 & 0.01 & 512 & 0.50 \\ A10 & Athena++ & 0.03 & 0.04 & 3.55 & 0.01 & 512 & 0.51 \\ A5 & Athena++ & 0.17 & 0.10 & 0.69 & 0.1 & 512 & 0.58 \\ A6 & Athena++ & 0.03 & 0.04 & 3.55 & 0.01 & 512 & 0.59 \\ A3 & Athena++ & 0.15 & 0.09 & 0.72 & 0.1 & 512 & 0.66 \\ A4 & Athena++ & 0.02 & 0.04 & 8.00 & 0.01 & 512 & 0.75 \\ A1 & Athena++ & 0.13 & 0.09 & 0.95 & 0.1 & 512 & 0.84 \\ A2 & Athena++ & 0.02 & 0.03 & 4.50 & 0.01 & 512 & 0.91 \\
\hline \hline
\end{tabular}
\end{table} Table 1: This table contains all the properties of the numerical simulations used in the analysis, sorted according to the Compressible-mode (C-Mode) energy fraction. The cubes with 'Z' and 'AS' model names are simulated with ZEUS and Athena++ solenoidal driving, whereas the datacubes with 'A' model names are generated with Athena++ compressible driving.

### Rotation algorithm

In this paper we adopt the Euler rotation algorithm used in Lazarian et al. (2018); Yuen et al. (2018) & Lazarian et al. (2022) to obtain numerical cubes with arbitrary orientations. The rotation matrices are defined as:
\[\mathbf{\hat{T}}_{x}=\left[\begin{array}{ccc}1&0&0\\ 0&\cos(\theta_{x})&-\sin(\theta_{x})\\ 0&\sin(\theta_{x})&\cos(\theta_{x})\end{array}\right]\quad\mathbf{\hat{T}}_{y}=\left[\begin{array}{ccc}\cos(\theta_{y})&0&\sin(\theta_{y})\\ 0&1&0\\ -\sin(\theta_{y})&0&\cos(\theta_{y})\end{array}\right]\quad\mathbf{\hat{T}}_{z}=\left[\begin{array}{ccc}\cos(\theta_{z})&-\sin(\theta_{z})&0\\ \sin(\theta_{z})&\cos(\theta_{z})&0\\ 0&0&1\end{array}\right] \tag{13}\]

where we will write the rotation matrix \(\mathbf{\hat{T}}=\mathbf{\hat{T}}_{x}\mathbf{\hat{T}}_{y}\mathbf{\hat{T}}_{z}\), and \(\theta_{x,y,z}\) are the desired rotations along the x, y, z axes respectively. For rotations of 3D scalar cubes, say \(\rho(\mathbf{r})\) with \(\mathbf{r}\in\mathcal{R}^{3}\), the new cube is given by \(\rho(\mathbf{\hat{T}}^{-1}\mathbf{r})\). For vector cubes \(\mathbf{x}(\mathbf{r})\), the new cube is given by \(\mathbf{\hat{T}}\mathbf{x}(\mathbf{\hat{T}}^{-1}\mathbf{r})\). The reason why the inverse transform is invoked for the position vector \(\mathbf{r}\) is that the rotation of cubes is equivalent to rotating the observing frame in the opposite direction.

## 4 Analysis and Results

To establish the Y-parameter as a reliable technique, we applied this method to numerical MHD cubes with various plasma properties, as outlined in Table 1. While ISM compositions can be highly complex, consisting of a combination of emitting and Faraday-rotating layers, we simplified our analysis by treating each layer as an independent emitting layer, as illustrated in the first panel of Figure 1. These emitting layers contain fluctuating small-scale magnetic fields, which are shown in panel 2 of Figure 1. In addition to these small-scale components, our data cubes also exhibit a mean magnetic field since \(M_{A}\lesssim 1\). By analyzing these data cubes, we can gain insight into the nature of MHD turbulence and the orientation of the B-field in the ISM plasma, and further develop our understanding of this important astrophysical phenomenon.

### Analysis of decomposed data cubes

In order to gain a better understanding of the behavior of the Y-parameter for the various MHD turbulent plasma modes and its dependence on \(\theta_{\lambda}\), we initially decomposed the magnetic fields of our MHD cubes into two general components, the Alfven mode (A) and the compressible modes (F or C, which we use interchangeably), utilizing the PCA decomposition technique briefly described in § 3.3 (see Lazarian & Pogosyan (2012) for more details). We then proceeded to investigate the anisotropies of the projected observables \(I+Q\approx B_{x}^{2}\) and \(I-Q\approx B_{y}^{2}\) for each of these modes separately. It is worth noting that, due to their different dependence on the \(B_{\perp}\) and \(B_{\parallel}\) components, the anisotropy also inherently depends on \(\theta_{\lambda}\). To illustrate the variation of the anisotropy, we present two cases in Figure 2, one with \(\theta_{\lambda}\sim 0.0^{\circ}\) and another at \(\theta_{\lambda}\sim 80.0^{\circ}\). The upper panel of Fig. 2 shows the orientation of the anisotropy for both the A and F decomposed modes for the solenoidally driven Z12 MHD datacube. Meanwhile, the lower panel of Figure 2 shows the anisotropy orientations for the compressible-driven A1 cube.
By analyzing these figures, we can draw the following conclusions:

1. Low \(\theta_{\lambda}\) case: The anisotropy of \(B_{x}^{2}\) and \(B_{y}^{2}\) for the A mode is perpendicular and parallel to the projected mean magnetic field, respectively. It is vice versa for the F mode.
2. High \(\theta_{\lambda}\) case: The anisotropies of \(B_{x}^{2}\) and \(B_{y}^{2}\) are more or less the same irrespective of the A and F modes.
3. The relative anisotropies (Eq. 3) of \(B_{x}^{2}\) and \(B_{y}^{2}\) give us a parameter that can be used to characterize the MHD turbulence modes in the given turbulent medium. Due to its dependence on \(\theta_{\lambda}\), it can also be used as a probe to retrieve the mean-field inclination angle in the region.

Figure 2: _Upper panel:_ The anisotropy of the decomposed plane-of-sky components of the \(\mathbf{B}\) field (\(B_{x}\), \(B_{y}\)) for the A (Alfven) and F (compressible) modes of the solenoidally driven Z12 MHD data cube with \(M_{A}\sim 0.26\) and \(\beta\sim 0.12\), at two \(\theta_{\lambda}\) values. The upper and lower sub-panels are for \(B_{x}^{2}\) and \(B_{y}^{2}\), respectively. _Lower panel:_ Same as the upper panel but for the compressible-driven Athena++ A1 cube with \(M_{A}\sim 0.09\). Here, the mode decomposition is performed using the PCA decomposition method described in Lazarian & Pogosyan (2012).

To investigate the relationship between the Y-parameter and the mean-field inclination angle \(\theta_{\lambda}\), we analyzed the Y-parameter for the decomposed A and F modes obtained from two distinct MHD turbulence datacubes, as illustrated in Fig. 3. Our analysis revealed that the Y-parameters for the A and F modes exhibit different characteristics. Specifically, the Y-parameter for the pure F (compressible) mode shows an increasing trend as \(\theta_{\lambda}\) increases, while the Y-parameter for the pure A (Alfven) mode exhibits a decreasing trend with \(\theta_{\lambda}\). Although the two cubes were generated using different driving mechanisms (the left panel of Fig. 3 depicts the Z12 cube with solenoidal driving, while the right panel depicts the A1 cube with compressible driving), the Y-parameters for the pure F and pure A modes exhibit similar functional dependencies on \(\theta_{\lambda}\). Notably, the average Y-parameter shows a clear separation at the value Y \(\sim 1.5\), which could be used as a criterion to distinguish between MHD turbulence modes.

To replicate the complex composition of the ISM, we applied the Y-parameter as a mode-decomposition technique to 25 datacubes of MHD turbulence, spanning a wide range of plasma properties, energy injection rates, and driving mechanisms (see Table 1). To ensure the validity of our results, we excluded 5 datacubes with C-mode energy fractions falling between 0.5 and 0.6. The remaining dataset comprised 12 turbulence cubes subjected to solenoidal forcing, exhibiting less than 40% of the energy in the compressible turbulence mode, and 8 datacubes characterized by compressible-driven turbulence, with a dominant energy fraction in the compressible mode. We investigated the anisotropy of \(B_{x}^{2}\) and \(B_{y}^{2}\) for the individual modes decomposed from all these cubes and found that they follow a similar trend, as shown in Fig. 2. Furthermore, we evaluated the relative anisotropies, or "Y-parameter," for each cube and found a trend equivalent to that of the Z12 and A1 cubes, as demonstrated in Fig. 3.

To improve the statistical significance of our results, we calculated the probability density and averages of the Y-parameter for the A and C modes in their respective decomposed data cubes at every \(\theta_{\lambda}\) value. The probability density (PDF) and average Y-parameter against \(\theta_{\lambda}\) are presented in Fig. 4, where the left panel represents the statistical averages over 12 MHD turbulence cubes with \(f_{C}<0.4\), while the right panel comprises 8 compressible-dominant datacubes with \(f_{C}>0.5\). The upper and lower sub-panels in Fig. 4 show the purely Alfvenic and compressible modes, respectively. Notably, the functional dependency of the Y-parameter on \(\theta_{\lambda}\) in both these plots, for the Alfven and compressible modes, is consistent with the behavior of the Y-parameter in the individual data cubes. Thus it can act as a powerful diagnostic for the reconstruction of 3D magnetic fields and for understanding the mode decomposition of MHD turbulence.
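For reference, the measurement behind these statistics can be written compactly. The sketch below is our own simplified recipe: it computes second-order structure functions of \(I\pm Q\) on periodic maps and, instead of fitting the full iso-contour axis lengths \(v\) and \(h\) of Eq. (3), approximates the anisotropy by the ratio of structure functions at a single lag across and along the POS mean field (taken here along the x-axis):

```python
import numpy as np

def structure_function(img, dx, dy):
    """D_X(R) = <(X(r) - X(r + R))^2> for lag R = (dx, dy), periodic map."""
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return np.mean((img - shifted)**2)

def y_parameter(I, Q, lag=10):
    """Y of Eq. (3), using a single-lag proxy for the v/h anisotropy."""
    def anisotropy(img):
        d_across = structure_function(img, 0, lag)  # perpendicular to <B>
        d_along = structure_function(img, lag, 0)   # parallel to <B>
        return d_across / d_along
    return anisotropy(I + Q) / anisotropy(I - Q)
```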
### Analysis of Total Datacube

MHD turbulence in a realistic scenario is always complex and exhibits a mixture of turbulence modes. To validate our technique for the reconstruction of 3D magnetic fields and the identification of the MHD turbulence mode, we investigated the total B-field of the 25 datacubes listed in Table 1. We began by estimating the anisotropy of \(B_{x}^{2}\) and \(B_{y}^{2}\) for the total field and examining their behavior with respect to \(\theta_{\lambda}\). The anisotropies of two cubes are plotted in Fig. 5. The left panel of this figure shows that the anisotropies for the Z12 MHD cube follow the anisotropy distribution of the decomposed Alfven mode in the upper panel of Fig. 2. Since the Z12 MHD cube is a solenoidally driven MHD turbulence cube dominated by the Alfven energy fraction, it is natural that it exhibits pure Alfven behavior. Similarly, the anisotropy distribution for the A1 MHD cube with the total field, in the right panel of Fig. 5, is dominated by the anisotropy of the pure compressible mode, as seen in the lower panel of Fig. 2.

This motivates us to evaluate the relative anisotropies as a Y-parameter for the total field in these MHD turbulence cubes. We calculated the Y-parameter from the turbulence data cubes with varying inclination angles of the total field and plotted it in Fig. 6. From these plots, it is clear that the Y-parameter for these cubes follows the trend of Y \(>1.5\) for Alfven-mode dominance (Z12) and Y \(<1.5\) for compressible-mode dominance (A1). To further explore the behavior of the Y-parameter, we investigated all of our MHD cubes and plotted the heatmap of the Y-parameter against \(\theta_{\lambda}\) in ascending order of \(f_{C}\), as shown in Fig. 7. We observed that all these cubes maintained their respective trends depending on the mode energy fraction. To ensure the validity of our results, we excluded 5 datacubes with \(f_{C}\) falling between 0.5 and 0.6. The rest of the total B-field analysis uses the remaining 20 data cubes, of which 12 datacubes have \(f_{C}<0.4\) and 8 datacubes have \(f_{C}>0.5\).

Figure 4: _Left Panel_: To enhance the statistics of the Y-parameter for mode decomposition, this plot shows the probability density and mean Y-parameter for the decomposed B field against the inclination angle of the total mean field, \(\theta_{\lambda}\). We have taken the mean and variance of the Y-parameter for 12 MHD turbulence cubes with \(f_{C}<0.4\) (see Table 1 for more details). The black color represents the vacancy of data points and hence zero probability. The color bar represents the PDF at every value of \(\theta_{\lambda}\). The upper and lower sub-panels are for the purely Alfvenic and compressible modes, respectively. The white cross indicates the mean values of the Y-parameter. _Right Panel_: Same as the left panel, but for 8 MHD turbulence cubes with a dominant C-mode.
Figure 3: _Left Panel_: The Y-parameter for the B field decomposed into the pure A (Alfven) and pure C (compressible) modes of the Z12 MHD turbulence cube, against \(\theta_{\lambda}\). The red and blue diamond symbols represent the Alfven and compressible MHD turbulence modes; the black dashed line indicates Y \(\sim 1.5\). _Right Panel_: Same as the left panel, for the compressible-driven MHD turbulence datacube A1.

Moreover, to strengthen the statistical confidence in the Y-parameter as a diagnostic for the 3D magnetic field and the MHD turbulence mode, we evaluated the probability density (PDF) and mean of the Y-parameter at every value of \(\theta_{\lambda}\). The PDF and mean Y-parameter are shown in Fig. 9, and they follow the respective decomposed-mode trends. This indicates that the Y-parameter can serve as an important tool for understanding 3D magnetic field structures and the underlying MHD turbulence in a region. As a result, from Fig. 9, we can draw the following inferences for \(\theta_{\lambda}\):

* If Y \(\gtrsim 2.5\), we have two scenarios: (i) \(10^{\circ}<\theta_{\lambda}<30^{\circ}\) and Alfven dominated, i.e. \(f_{C}<0.4\); (ii) \(5^{\circ}<\theta_{\lambda}<15^{\circ}\) with \(f_{C}>0.5\) (less probable).
* If Y \(\lesssim 1.0\), there are two possibilities: (i) \(\theta_{\lambda}\lesssim 5^{\circ}\) and compressible dominated, i.e. \(f_{C}>0.5\); (ii) \(\theta_{\lambda}\gtrsim 60^{\circ}\) and A-mode dominated.
* If the Y-parameter is in the intermediate range, \(40^{\circ}\lesssim\theta_{\lambda}\lesssim 60^{\circ}\) and the turbulence is Alfvenic, or \(\theta_{\lambda}\gtrsim 70^{\circ}\) and the turbulence is compressible.

We have also elaborated our scheme for estimating the magnetic field inclination angle and identifying the dominant MHD turbulence mode in the flowchart shown in Fig. 8. From the above discussion, it is clear that the observed statistics of MHD turbulence modes contain a dependence on the mean magnetic field in the region. As a result, this can act as one of the important diagnostics to investigate the 3D magnetic field in the ISM and in extended objects such as SNRs and PWNe.

## 5 Discussion

### Implication to general studies of ISM magnetic field and turbulence

Our paper extends the formalism of two-point studies in synchrotron analysis (see, e.g. Lazarian & Pogosyan, 2012). Indeed, how the statistics of different observables are connected to the intrinsic statistics of 3D MHD turbulence variables is a long-standing question in the ISM community, both due to the absence of true sampling of 3D data, unlike in the solar wind community (see, e.g. Zhao et al., 2021, 2022, 2023), and due to the complications of line-of-sight projection (see, e.g. the same problem for velocity variables: Lazarian & Pogosyan 2000, 2004; Kandel et al. 2016; Yuen et al. 2021).

Figure 5: _Left panel:_ The anisotropy of the plane-of-sky components of the total \(\mathbf{B}\) field for the solenoidally driven Z12 MHD data cube with \(M_{A}\sim 0.26\) and \(\beta\sim 0.12\), at \(\theta_{\lambda}\sim 0.0^{\circ}\), \(40.0^{\circ}\), \(80.0^{\circ}\). The upper and lower panels are for \(B_{x}^{2}\) and \(B_{y}^{2}\), respectively. _Right panel:_ Same as the left panel, but for the compressible-driven A1 MHD cube.
Figure 6: _Left Panel:_ The Y-parameter for the total \(\mathbf{B}\) field of the Z12 MHD cube, against \(\theta_{\lambda}\). The black dashed line represents the threshold value Y = 1.5. _Right Panel:_ Same as the left panel, but for the A1 MHD cube.

Figure 7: The heatmap shows the distribution of the Y-parameter for the 25 MHD datacubes. The x- and y-axes represent \(\theta_{\lambda}\) and the different data cubes sorted in descending order of \(f_{C}\), respectively.

Figure 8: Flowchart elaborating the scheme for estimating the mean-field inclination angle and identifying the dominant turbulence mode.

Only in very rare circumstances will one obtain measurements of 3D information in observations, for instance, ground-state alignment (Yan & Lazarian 2006, 2007, 2008a; Zhang et al. 2020b), dust estimation (Zucker et al. 2020) or starlight polarization from GAIA. These measurements are unfortunately sparsely sampled to date, which is insufficient for an analysis of the properties of interstellar media (see some attempts in the ISM, e.g. Seta et al. 2023). Techniques that allow the retrieval of intrinsic turbulence properties in the ISM are therefore of great importance (e.g. VCS, Lazarian & Pogosyan 2000).

More recently, the ISM community has developed a few ways of obtaining turbulence parameters from synchrotron observations, namely the sonic Mach number (Gaensler et al. 2011; Burkhart & Lazarian 2012; Yuen & Lazarian 2020a), the Alfvenic Mach number (Lazarian et al. 2018; Yuen & Lazarian 2020b), the turbulence index (Lazarian & Pogosyan 2016; Seta et al. 2023) and mode signatures (Zhang et al. 2020a). The remaining puzzles regarding the search for turbulence properties are (i) the relative orientation of the magnetic field with respect to the line of sight; and (ii) the dependence of the mode-composition diagnosis on the magnetic field inclination angle. Based on the qualitative study of Yuen et al. (2023), our paper is the first to systematically study how to recover the LOS inclination angle \(\theta_{\lambda}\) of the magnetic field when all turbulence parameters are varying, which could be important for studies of magnetic field geometry and the relevant sciences that depend on the magnetic field orientation.

### Application to ongoing and upcoming radio surveys

Radio surveys such as the Low-Frequency Array (LOFAR)4 and the Square Kilometre Array (SKA)5 are currently underway or planned for the near future, promising high-resolution and high-sensitivity observations of the interstellar medium (ISM) and galaxies (Gitti et al. 2018). However, current diagnostic techniques are limited to Faraday rotation and polarization measurements of synchrotron emission (Van Eck et al. 2019; Basu et al. 2019; Heald et al. 2020; O'Sullivan et al. 2023), which are sensitive to different components of the magnetic field and are affected by line-of-sight integration. Therefore, there is a pressing need to develop new diagnostic techniques. Our work proposes a technique that is well-suited for diffuse radio observations from these surveys (Van Eck et al. 2019; O'Sullivan et al. 2023).
By analyzing the relative anisotropies of the observable Stokes parameters, our technique can determine the magnetic field orientation and complement the estimation of the magnetic field geometry using Faraday rotation, leading to more accurate estimates of the strength as well as the 3D inclination of the B-field in the ISM. Additionally, our analysis enables a systematic investigation of the composition of MHD turbulence modes in different plasma environments.

Footnote 4: [https://lofar-surveys.org/index.html](https://lofar-surveys.org/index.html)

Footnote 5: [https://www.skao.int/en](https://www.skao.int/en)

### Implication to observations of Pulsar TeV halos

To ensure the reliability of our analysis aimed at investigating the mean-field inclination angle and turbulence-mode classification, we have applied this technique to radio polarisation observations of a \(5^{\circ}\times 5^{\circ}\) region around a pulsar wind nebula. The radio observations were conducted using the Effelsberg 100-m telescope at 21 cm, which provides a resolution of 9.5 arcmin (Uyaniker et al. 1999). Our detailed analysis of this region will be presented in an upcoming publication (Sunil et al. 2023). In brief, this region encompasses the Monogem pulsar, also known as pulsar B0656+14. Recent observations by the High-Altitude Water Cherenkov Observatory (HAWC) have revealed a spherical TeV halo around both the Monogem and Geminga pulsars (Abeysekara et al. 2017). To explain the origin of such high-energy emission from the Geminga source, Liu et al. (2019) proposed an anisotropic diffusion model. One of the crucial parameters for their analysis is the mean magnetic field inclination angle of the region. To estimate the mean-field inclination angle for the Monogem region, we applied our analysis to the radio polarisation observations and found that the Y-parameter is below 1.5. This suggests that the region is primarily dominated by _compressible_ turbulence, with a low mean-field inclination angle. These outcomes are consistent with the theoretical predictions for such a halo (Liu et al. 2019). The detailed results are presented in our forthcoming work (Sunil et al. 2023).

### Impact of our work on the studies of magnetic field strength via DCF

Obtaining the value of \(\theta_{\lambda}\) also allows us to determine the magnetic field strength in a much more accurate fashion.

Figure 9: _Left panel:_ The probability density and mean Y-parameter for the POS total \(\mathbf{B}\) field against \(\theta_{\lambda}\). We have used 12 MHD cubes with \(f_{C}<0.4\). The black color represents the vacancy of data points and hence zero probability. The color bar represents the PDF at every value of \(\theta_{\lambda}\). The white cross indicates the mean values of the Y-parameter. _Right Panel:_ Same as the left panel, but for 8 MHD turbulence cubes with a dominant C-mode (\(f_{C}>0.5\)).

The recent debate about the validity of the Davis-Chandrasekhar-Fermi technique (1951; 1953) suggests that the DCF technique is inaccurate in estimating the magnetic field strength, with errors of up to 1000 times (Skalidis & Tassis, 2021; Skalidis et al., 2021). The call for a revision of the DCF technique is urgent, as the majority of B-field strength estimations in ISM and molecular cloud studies are done with the DCF technique (see, e.g. Liu et al. 2022 for a review). Lazarian et al.
(2022) outlined that the correction of the DCF technique requires extra turbulence information from the interstellar media: the Alfvenic Mach number, the mode fraction, and the line-of-sight angle. Our work is complementary to this series of works on the correction of the DCF technique for obtaining the true B-field strength on the sky.

### Extending the Y-parameter analysis to non-polarization data

The main analysis of our paper relies mostly on the fact that a polarization map is available for analysis. However, in many cases of synchrotron and spectral line measurements, polarization mapping is either unavailable or contains insufficient statistics, which highly restricts the application of polarization-based techniques (Zhang et al., 2020; Yuen et al., 2023). In this subsection, we discuss how to utilize other polarization-approximation techniques (e.g. the Velocity Correlation Anisotropy (VCA), Esquivel & Lazarian, 2005; the Principal Component Analysis (PCA), Heyer et al. 2008; the Velocity Gradient Technique (VGT) in its crude form, Yuen & Lazarian 2017, 2018; and the Rolling Hough Transform (RHT), Clark et al. 2015) to compute the inclination angle. To proceed, we have to make a few assumptions:

1. A spectroscopic position-position-velocity (PPV) cube \(p=p(\mathbf{R},v)\) with sufficient resolution is available.
2. The polarization-approximation technique gives good enough statistics compared to the actual polarization and produces Q and U. We denote the polarization angle approximated by any of the techniques as \(\phi_{ap}(\mathbf{R},v)\), and we characterize the goodness of the approximation by the Alignment Measure (AM) between the approximated angles \(\phi_{ap}\) and the true B-field angles \(\phi_{B}\) (Gonzalez-Casanova & Lazarian, 2017):
\[AM=\left\langle\cos 2(\phi_{ap}-\phi_{B})\right\rangle \tag{14}\]
3. The computation of I, Q, U is given by (Clark et al., 2015; Lazarian & Yuen, 2018):
\[I_{ap}(\mathbf{R})=\int dv\,p(\mathbf{R},v)\]
\[Q_{ap}(\mathbf{R})=\int dv\,p(\mathbf{R},v)\cos 2\phi_{ap}(\mathbf{R},v) \tag{15}\]
\[U_{ap}(\mathbf{R})=\int dv\,p(\mathbf{R},v)\sin 2\phi_{ap}(\mathbf{R},v)\]

Now it suffices to compute the Y-parameter via the anisotropies of \(I+Q\) and \(I-Q\).

## 6 Conclusion

In summary, we have developed a novel diagnostic technique to investigate three-dimensional magnetic fields and the underlying MHD turbulence in the ISM. Our technique is primarily based on polarisation observations of the ISM and represents a significant advancement in the field, as no similar diagnostic exists to date. To validate the efficacy of our technique, we tested it extensively using simulated MHD turbulence data cubes with varying plasma properties, energy injection rates, and driving mechanisms (solenoidal or compressible forcing). These simulations also included differences in the composition of the Alfven and compressible modes. Our results show that our technique can successfully retrieve the mean-field inclination angle, \(\theta_{\lambda}\), along the line of sight and the underlying dominant MHD turbulence mode. Our major findings are listed below:

* If Y \(\gtrsim\) 2.5, we have two scenarios: (i) \(10^{\circ}<\theta_{\lambda}<30^{\circ}\) and Alfven dominated, i.e. \(f_{C}<0.4\); (ii) \(5^{\circ}<\theta_{\lambda}<15^{\circ}\) with \(f_{C}>0.5\).
* If Y \(\lesssim\) 1.0, there are two possibilities: (i) \(\theta_{\lambda}\lesssim 5^{\circ}\) and compressible dominated (i.e. \(f_{C}>0.5\)); (ii) \(\theta_{\lambda}\gtrsim 60^{\circ}\) and A-mode dominated.
* If the Y-parameter is in the intermediate range, either \(60^{\circ}\gtrsim\theta_{\lambda}\gtrsim 40^{\circ}\) and the turbulence is Alfvenic, or \(\theta_{\lambda}\gtrsim 70^{\circ}\) and the turbulence is compressible.
* We found a statistical demarcation of Y \(\sim\) 1.5 (with Y \(>\) 1.5 for the A-mode and Y \(<\) 1.5 for the C-mode) to determine the dominant MHD turbulence mode.

Thus, our technique represents a significant advancement in the field of interstellar medium research, enabling us to investigate three-dimensional magnetic fields and MHD turbulence in greater detail than previously possible. Our results demonstrate the effectiveness of the technique and provide valuable insights into the nature of turbulence in the ISM.

## Acknowledgments

SM would like to thank P. Pavaskar and SQ Zhao for the helpful discussions. SM and HY gratefully acknowledge the computing time granted by the Resource Allocation Board and provided on the supercomputers Lise and Emmy at NHR@ZIB and NHR@Gottingen as part of the NHR infrastructure. The calculations for this research were conducted with computing resources under the project bbp00062. The research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number(s) 20220700PRD1. This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award FES-ERCAP-m4239 (PI: KHY, LANL).

## Data Availability

We utilized our own simulated MHD turbulence datacubes, which are available upon reasonable request from the corresponding author.
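As referenced in Section 5.2, the following is a minimal Python sketch, not the analysis code used in this work, of the non-polarization Y-parameter pipeline. Only Eqs. (14) and (15) follow the text; the structure-function anisotropy estimator and all array shapes are illustrative assumptions.

```python
# Sketch of the Y-parameter pipeline of Section 5.2. Eqs. (14)-(15) follow
# the text; the anisotropy proxy below is an illustrative assumption.
import numpy as np

def alignment_measure(phi_ap, phi_b):
    """Eq. (14): AM = <cos 2(phi_ap - phi_B)>."""
    return np.mean(np.cos(2.0 * (phi_ap - phi_b)))

def pseudo_stokes(ppv, phi_ap):
    """Eq. (15): I, Q, U from a PPV cube p(R, v) and approximated angles,
    integrating over the velocity axis (axis 0)."""
    i_map = ppv.sum(axis=0)
    q_map = (ppv * np.cos(2.0 * phi_ap)).sum(axis=0)
    u_map = (ppv * np.sin(2.0 * phi_ap)).sum(axis=0)
    return i_map, q_map, u_map

def axis_anisotropy(m, lag=4):
    """Crude anisotropy proxy: ratio of second-order structure functions
    along the two map axes at a fixed lag."""
    sf_x = np.mean(np.abs(m[:, lag:] - m[:, :-lag]) ** 2)
    sf_y = np.mean(np.abs(m[lag:, :] - m[:-lag, :]) ** 2)
    return sf_x / sf_y

def y_parameter(ppv, phi_ap, lag=4):
    i_map, q_map, _ = pseudo_stokes(ppv, phi_ap)
    return axis_anisotropy(i_map + q_map, lag) / axis_anisotropy(i_map - q_map, lag)

# Example with random stand-in data (a real PPV cube would go here).
rng = np.random.default_rng(0)
ppv = rng.random((64, 128, 128))             # (v, x, y)
phi_ap = rng.uniform(0.0, np.pi, ppv.shape)  # approximated angles per voxel
print("Y =", y_parameter(ppv, phi_ap))
```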
2305.05508
Practical Channel Splicing using OFDM Waveforms for Joint Communication and Sensing in the IoT
Channel splicing is a rather new and very promising concept. It allows realizing a wideband channel sounder by combining multiple narrow-band measurements. Among others, channel splicing is a sparse sensing technique suggested for use in joint communication and sensing (JCAS), channel measurements, and channel prediction using cheap hardware that cannot measure wideband channels directly, such as in the internet of things (IoT). This work validates the practicality of a channel splicing technique by integrating it into an OFDM-based IEEE 802.11ac system, which we consider representative for many IoT solutions. Our system allows computing both the channel impulse response (CIR) and the channel frequency response (CFR). In this study, we concentrate on the impact of the number of sub-bands and show that even using only 50% of the overall spectrum leads to very accurate CIR measurements. We validate the system in simulation and confirm the results in an experimental indoor scenario using software defined radios.
Sigrid Dimce, Anatolij Zubow, Alireza Bayesteh, Giuseppe Caire, Falko Dressler
2023-05-09T14:57:12Z
http://arxiv.org/abs/2305.05508v1
# Practical Channel Splicing using OFDM Waveforms for Joint Communication and Sensing in the IoT

###### Abstract

Channel splicing is a rather new and very promising concept. It allows realizing a wideband channel sounder by combining multiple narrow-band measurements. Among others, channel splicing is a sparse sensing technique suggested for use in joint communication and sensing (JCAS), channel measurements, and channel prediction using cheap hardware that cannot measure wideband channels directly, such as in the internet of things (IoT). This work validates the practicality of a channel splicing technique by integrating it into an OFDM-based IEEE 802.11ac system, which we consider representative for many IoT solutions. Our system allows computing both the channel impulse response (CIR) and the channel frequency response (CFR). In this study, we concentrate on the impact of the number of sub-bands and show that even using only 50% of the overall spectrum leads to very accurate CIR measurements. We validate the system in simulation and confirm the results in an experimental indoor scenario using software defined radios.

Joint communication and sensing, JCAS, channel sounder, channel splicing, internet of things, IoT

## I Introduction

Joint communication and sensing (JCAS) is becoming more important in different application domains, both in 6G as well as for the internet of things (IoT) [1, 2, 3, 4]. This has also triggered an ongoing discussion of appropriate waveforms, with orthogonal time frequency space (OTFS) being considered a possible compromise [5]. However, most existing communication systems are based on orthogonal frequency division multiplexing (OFDM), so integrating sensing here is very important [6]. Channel sounding is the core functionality required for JCAS, but also for channel estimation and channel prediction techniques. For example, commodity WiFi devices have been used for purposes other than wireless communication, such as indoor localization and channel sounding, due to their relatively low cost [7, 8]. Channel sounding relies on the channel state information (CSI) of the communication link. In the case of a single-antenna device, the CSI is equivalent to the channel frequency response (CFR), which provides channel information in the frequency domain. Through the Fourier transform, the channel is characterized in the delay domain, represented by the channel impulse response (CIR). The CIR describes the multipath channel over the delay domain. However, the accuracy of the estimated CIR is limited by the channel bandwidth (BW) supported by the system. Precise measurement of multipath components is only possible using wideband signals. In the delay domain, the resolution is equal to \(1/BW\), which, multiplied by the speed of light, gives the minimum difference between the path lengths of two distinct paths. For instance, a bandwidth of 20 MHz implies that the distances traveled by the signal along two distinct paths must differ by at least 15 m for the paths to be distinguishable at the receiver [8]. However, in general, wideband sounding is quite complex and energy inefficient for existing communication systems. Also, many IoT systems can only process narrow-band subchannels. A possible solution to this issue is given by multi-band sensing or spectrum splicing [9]. Conceptually, channel splicing means measuring multiple narrow-band sub-bands/subchannels and then combining the results to obtain, ideally, the same results as generated by a single wideband measurement.
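As a quick numerical check on these resolution numbers, the following is a minimal Python sketch (our own illustration, not from the paper) relating bandwidth to delay and distance resolution:

```python
# Delay resolution of a channel sounder is 1/BW; multiplying by the speed
# of light gives the minimum path-length difference that can be resolved.
C = 299_792_458.0  # speed of light in m/s

for bw_mhz in (20, 40, 80, 160):
    bw = bw_mhz * 1e6
    delay_res_ns = 1e9 / bw
    dist_res_m = C / bw
    print(f"{bw_mhz:4d} MHz -> {delay_res_ns:6.2f} ns, {dist_res_m:5.2f} m")

# 20 MHz gives 50 ns, i.e. roughly 15 m, matching the example above;
# splicing M sub-bands of N subcarriers shrinks this to 1/(M*N*f_s).
```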
The concept is depicted in Figure 1.

Figure 1: Illustration of the channel splicing concept

Initially, spectrum splicing was developed targeting indoor localization applications [8, 10, 11] and was later extended to other applications such as human sensing [12]. This technique allows a single WiFi device to transmit packets and extract CSI in multiple frequency bands tens of megahertz wide. The information collected over multiple bands is combined, mimicking a wideband channel. Channel splicing exploits the sparse nature of the CIR and applies a sparse recovery method to the collected data to obtain a high-resolution CIR. The delay associated with the shortest path enables estimating the signal time of flight (ToF), which is used for localization. As we will demonstrate in this paper, splicing can also be performed on a subset of the sub-bands with only a small reduction in accuracy. Despite its advantages, splicing faces several challenges due to transceiver impairments. A few algorithms have been developed aiming to enhance splicing performance and to resolve these impairment challenges [7, 8, 9, 10, 11, 13]. Our work builds upon the theoretical concepts of spectrum splicing presented in [8]. We first implement a communication system in Matlab based on the IEEE 802.11ac standard. We perform channel estimation based on the least squares (LS) estimation technique both in the time and frequency domains. We extend the system with spectrum splicing, allowing CFR measurements from multiple bands to be combined. The channel sounder utilizes software defined radio (SDR) components (we used two USRP N310 in the lab) for transmitting/receiving the signal over the air. We validate the practical use of the splicing technique in simulations and indoor experiments by splitting a wide channel bandwidth into narrower bands, collecting the CFR over these bands, performing splicing on the collected data, and comparing the estimated CIR with that of the wide channel. Our results build the basis for OFDM-based JCAS solutions and for low-cost IoT channel sounding. Our main contributions can be summarized as follows:

* We implement a practical OFDM communication system based on IEEE 802.11ac with the LS estimation technique on the receiver to compute both the CIR and the CFR.
* We extend the system by incorporating spectrum splicing to investigate the impact of the number of sub-bands on the accuracy of the sensing approach.
* We validate the splicing technique in a controlled simulation environment as well as in a set of indoor experiments focusing on the number of multipath components.

## II Related Work

Channel sounding is a crucial technique for generating knowledge to characterize the wireless channel in a certain frequency band. The principle of sounding is to transmit a known baseband signal up-converted to the frequency of interest, which is post-processed on the receiver side to extract metrics (CIR, CFR) that provide channel information. Several sounding techniques have been developed over the years with the sole purpose of accurately characterizing the propagation channel. The two most common techniques are the spread spectrum sliding correlator [14] and the OFDM-based system [15]. Both techniques have demonstrated undeniable success at lower frequencies as well as in the millimeter-wave (mmWave) frequency band [16, 17]. The OFDM-based system consists of multiple subcarriers, where each subcarrier sees the channel as flat, allowing it to overcome the main drawback of wideband transmission in terms of frequency selectivity.
Nevertheless, the system suffers from a high peak-to-average-power ratio (PAPR) and requires tight receiver synchronization. On the other hand, the spread spectrum sliding correlator "spreads" the carrier signal over a large bandwidth by mixing it with a binary PN sequence. The received signal is mixed with a slower, identical version of the PN sequence, which makes the system less vulnerable to interference, but more complex in terms of hardware and software implementation compared to the OFDM system. Thus, choosing one of the aforementioned sounding techniques is application specific and requires a compromise between robustness and complexity. Propagation information is provided in the delay and frequency domains by the CIR and CFR, respectively, which are computed using channel estimation techniques. Among the existing ones, LS estimation is the most common method, characterized by low computational complexity. Yet, in a few application scenarios, this technique yields inferior performance [18]. Another well-known estimation method is the minimum mean square error (MMSE) estimator [15], which minimizes the channel estimation error. However, MMSE leads to high computational complexity and requires prior channel statistics, which sometimes are not available. Besides these traditional techniques, new models based on deep learning are being developed [18], targeting performance improvements and using LS estimation to train their models. Reducing system costs by using existing hardware infrastructure for applications such as channel sounding is becoming more crucial. In this respect, spectrum splicing methods have attracted considerable attention in recent years. Most of the existing works target indoor localization applications [7, 8, 10, 13], and a few others target, e.g., human sensing [12]. The developed methods aim at estimating the multipath component (MPC) delays precisely in the presence of hardware distortions. For instance, Chronos [7] is an indoor positioning system that estimates sub-nanosecond ToF by using compressed sensing sparse recovery methods on CFRs collected over multiple bands. The algorithm addresses the phase offset issue that results from hopping between frequency bands. Splicer [11] is a software-based system that splices multiple CSI measurements and achieves single-AP localization. The authors propose several techniques for hardware impairment correction. The work in [13] presents a two-stage global estimation scheme, which aims to improve estimation accuracy by first obtaining an initial delay estimate based on a coarse signal model and then performing a global delay estimation. Other authors [10] use the shift-invariance structure in the multiband CSI and propose a weighted gridless subspace fitting algorithm for the delay estimation. The fundamental limits and optimization of multi-band splicing in terms of the time delays are analyzed and presented in [9]. The statistical resolution limit is derived for the delay resolution, and an algorithm is proposed to solve the parameter optimization problem. A grid-based multi-band splicing technique is presented in [8]. The method is characterized by low complexity and can easily scale to large-dimensional problems. Besides localization, other mechanisms, such as WiRIM [12], utilize spectrum splicing for human sensing applications. We integrate the theoretical concepts proposed in [8] into an OFDM-based JCAS system.
We validated our practical implementation of channel splicing both in simulation and in an experimental indoor scenario.

## III Spectrum Splicing Architecture

Our channel sounder is based on the spectrum splicing technique presented in [8]. The method exploits the sparse nature of the CIR and utilizes a grid-based compressed sensing technique to estimate the path delays and amplitudes. On a high abstraction level, the idea is to use subchannels to reconstruct the CIR with high resolution, as shown in Figure 1.

### _Spectrum Splicing Concept_

The system is based on OFDM, and packets are transmitted over \(M\) frequency bands, where each band is composed of \(N\) subcarriers. The subcarriers are indexed according to the integer set \(\mathcal{N}=\{-\frac{N-1}{2},...,\frac{N-1}{2}\}\), where \(N\) is an odd integer. The presented technique performs the estimation based on the pilot signals, which at the receiver side can be written as

\[y[m,n]=H[m,n]S_{m,n}+z[m,n],\quad m\in[M],n\in\mathcal{N} \tag{1}\]

where \(S_{m,n}\) is the pilot symbol transmitted over the \(n\)-th subcarrier of band \(m\) (at frequency \(f_{m,n}\)), which without loss of generality is assumed to be 1; \(H[m,n]\) denotes the CFR at the same subcarrier, and \(z[m,n]\sim\mathcal{CN}(0,\,1/\text{SNR})\) is additive white Gaussian noise (AWGN). Assuming that the propagation environment comprises \(K\) scatterers, the CIR over the delay domain is given as

\[h(\tau)=\sum_{k=1}^{K}c_{k}\delta(\tau-\tau_{k}), \tag{2}\]

where \(\delta(\cdot)\) stands for Dirac's delta function, \(\tau_{k}\in[0,1/f_{s})\) is the delay associated with each path \(k\) (\(f_{s}\) being the subcarrier spacing), and \(c_{k}\in\mathbb{C}\) is the corresponding amplitude. The gain and delay parameters are independent of the frequency band. On the other hand, the CFR samples are computed from the CIR via the Fourier transform and can be written as

\[H[m,n]=\mathscr{F}\{h(\tau)\}\mid_{f=f_{m,n}}=\sum_{k=1}^{K}c_{k}e^{-j2\pi f_{m,n}\tau_{k}}, \tag{3}\]

for \(m\in[M],n\in\mathcal{N}\). Furthermore, during the transmission, the pilot samples are affected by several distortions caused by the hardware devices, which lead to the phase term

\[\psi[m,n]=-2\pi(\delta_{m}nf_{s}+\phi_{m}),\quad m\in[M],n\in\mathcal{N} \tag{4}\]

where \(\delta_{m}\in[0,1/f_{s})\) stands for the timing offset due to the packet detection delay (PDD) and the sampling frequency offset (SFO), and \(\phi_{m}\in[0,1)\) represents the phase offset due to the carrier frequency offset (CFO) between transmitter and receiver as well as the phase offset due to switching channel bands. The two parameters \(\delta_{m},\phi_{m}\) differ from one band to another, such that \(\psi[m,n]\) is, in each band, a linear function of the subcarrier index with a different slope and constant term. As a result, the received pilot samples, including the distortion component, are represented in the following form

\[y[m,n] =e^{j\psi[m,n]}H[m,n]+z[m,n]\] \[=e^{j\psi[m,n]}\sum_{k=1}^{K}c_{k}e^{-j2\pi f_{m,n}\tau_{k}}+z[m,n]\] \[=\sum_{k=1}^{K}c_{k}e^{-j2\pi(f_{m,0}\tau_{k}+\phi_{m})}e^{-j2\pi nf_{s}(\delta_{m}+\tau_{k})}+z[m,n] \tag{5}\]

where for each band the subcarriers are assumed to be equispaced with spacing \(f_{s}\), i.e., \(f_{m,n}=f_{m,0}+nf_{s}\) for \(n\in\mathcal{N}\), with \(f_{m,0}\) being the carrier frequency of band \(m\). The proposed spectrum splicing technique aims to estimate the CIR based on noisy and distorted pilot samples, using the following steps:
1. For each band \(m\in[M]\), estimate and remove the distortion parameters \(\delta_{m},\phi_{m}\). The estimation is performed using the sparse recovery technique of atomic norm denoising (AND).
2. Splice the clean pilot data to obtain a high-resolution estimated CIR using the orthogonal matching pursuit (OMP) sparse recovery technique.
3. Resolve ambiguities using a hand-shaking procedure between the two communication nodes.

In this work, we focus on the second step of multi-band splicing and perform the estimation based on the very high throughput long training field (VHT-LTF) of the 802.11ac frame. On the receiver side, the timing and carrier frequency offsets are estimated and corrected. The multi-band splicing technique consists of merging the measurements conducted over several bands, increasing the resolution of the estimated CIR by expanding the measurement bandwidth. For instance, the resolution over the delay domain obtained from measurements over a single band is \((\Delta\tau)_{1}=1/(Nf_{s})\), whereas over \(M\) frequency bands it improves to \((\Delta\tau)_{M}=1/(MNf_{s})\). Considering that the CIR is sparse, the authors in [8] used compressed sensing, more specifically the OMP sparse recovery method, to recover the CIR. First, a vector is defined containing the subcarrier frequencies per band, \(\mathbf{f}(m)=[f_{m,-(N-1)/2},...,f_{m,(N-1)/2}]^{T}\), and for all bands, \(\mathbf{f}=[\mathbf{f}(1)^{T},...,\mathbf{f}(M)^{T}]^{T}\in\mathbb{R}^{MN}\). Similarly, a vector is defined containing the clean CFR samples per band, \(\tilde{\mathbf{y}}(m)=[\tilde{y}[m,-(N-1)/2],...,\tilde{y}[m,(N-1)/2]]^{T}\), and for all bands, \(\tilde{\mathbf{y}}=[\tilde{\mathbf{y}}(1)^{T},...,\tilde{\mathbf{y}}(M)^{T}]^{T}\in\mathbb{C}^{MN}\). Furthermore, the elements of \(\tilde{\mathbf{y}}\) can be written as

\[[\tilde{\mathbf{y}}]_{i}=\mathscr{F}\{h_{0}(\tau)\}\mid_{f=[\mathbf{f}]_{i}}+[\tilde{\mathbf{z}}]_{i},\quad i=1,...,MN \tag{6}\]

with \([\tilde{\mathbf{z}}]_{i}\) representing the AWGN plus the error due to the phase-distortion removal procedure. In order to apply the OMP method, a uniform grid of size \(G\) is defined over the delay domain as \(\mathfrak{G}=\{0,1/G,...,(G-1)/G\}/f_{s}\), and a dictionary \(\mathbf{D}=[\mathbf{d}(0),...,\mathbf{d}(G-1)]\in\mathbb{C}^{MN\times G}\) is constructed, where \(G\gg MN\) and each column \(\mathbf{d}(i)\) is given as

\[\mathbf{d}(i)=\frac{1}{\sqrt{MN}}\left[e^{-j2\pi[\mathbf{f}]_{1}(i/G)/f_{s}},...,e^{-j2\pi[\mathbf{f}]_{MN}(i/G)/f_{s}}\right]^{T}\in\mathbb{C}^{MN}, \tag{7}\]

where \(i=0,1,...,G-1\). For values of \(G=2MN\) or \(G=3MN\), the grid \(\mathfrak{G}\) is considered dense, and the vector in (6) can be approximated as

\[\tilde{\mathbf{y}}\approx\mathbf{D}\mathbf{h}_{0}+\tilde{\mathbf{z}} \tag{8}\]

where \(\mathbf{h}_{0}\in\mathbb{C}^{G}\) is a discrete approximation of \(h_{0}\) and is estimated from the given \(\tilde{\mathbf{y}}\) samples using the OMP sparse recovery method. The OMP method is a greedy iterative algorithm that, at each iteration, selects the column of the dictionary \(\mathbf{D}\) with the highest correlation to the current residual, repeating until a convergence condition is met [19]. For each selected column, the non-zero coefficients are computed using the least-squares method, such that they approximate the measurement vector \(\tilde{\mathbf{y}}\). The algorithm stops once the number of selected dictionary columns reaches the sparsity order of \(\mathbf{h}_{0}\), which is given as input.
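To make the splicing step concrete, here is a minimal Python sketch (our illustration, not the code from [8]): it builds the dictionary of Eq. (7) and recovers a sparse CIR from spliced CFR samples with OMP, cf. Eq. (8). The band plan and path delays are made-up example values.

```python
# Grid-based splicing sketch: dictionary D of Eq. (7), OMP recovery of h_0.
import numpy as np

fs = 312.5e3                                 # 802.11ac subcarrier spacing
M, N = 4, 57                                 # bands, (odd) subcarriers/band
n_idx = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
f0 = 5.18e9 + np.arange(M) * N * fs          # assumed contiguous band centers
f = (f0[:, None] + n_idx[None, :] * fs).ravel()   # all MN pilot frequencies

G = 2 * M * N                                # dense delay grid, Eq. (7)
taus = np.arange(G) / (G * fs)
D = np.exp(-2j * np.pi * np.outer(f, taus)) / np.sqrt(M * N)

# Synthetic 2-path channel, Eq. (3): H(f) = sum_k c_k exp(-j 2 pi f tau_k)
true_tau = np.array([0e-9, 150e-9])
true_c = np.array([1.0, 0.7])
y = np.exp(-2j * np.pi * np.outer(f, true_tau)) @ true_c
y = y + 0.01 * (np.random.randn(f.size) + 1j * np.random.randn(f.size))

def omp(D, y, k):
    """Greedy OMP: pick the atom most correlated with the residual,
    refit all picked atoms by least squares, repeat k times."""
    support, residual, coef = [], y.copy(), np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.conj().T @ residual))))
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return np.array(support), coef

support, coef = omp(D, y, k=2)
print("estimated delays [ns]:", np.sort(taus[support]) * 1e9)
```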
### _Practical Channel Splicing for OFDM Systems_

In the following, we describe the system architecture and the estimation technique used for generating the CIR and CFR. The SDR-based channel sounder builds upon USRP components to perform the over-the-air communication and Matlab for the software implementation. The OFDM transmitter is implemented according to the 802.11ac standard. This standard supports signal bandwidths of 20, 40, 80, and 160 MHz. Despite the different signal bandwidths, the OFDM subcarrier spacing is kept fixed at 312.5 kHz, whereas the number of subcarriers changes accordingly. First, the transmitter converts the payload message into a sequence of bits, which are then binary phase shift keying (BPSK) modulated and OFDM modulated. For over-the-air experiments, the generated baseband signal is forwarded to the USRP SDR, which upconverts it to the carrier frequency of interest. The connection between Matlab and the USRP is realized through the recently released Wireless Testbench toolbox1, which supports high-speed data transmission. Footnote 1: [https://de.mathworks.com/products/wireless-testbench.html](https://de.mathworks.com/products/wireless-testbench.html) We also use the preamble detector functionality provided by the toolbox, allowing us to capture only the signal of interest for offline analysis. Once packets in the air are detected by the preamble detector, the signal is captured and downconverted, and the raw data is stored in a binary file for further post-processing on the host computer. During post-processing, the timing and carrier frequency offsets are estimated using the frame preamble and compensated for each received packet. The channel is estimated based on the VHT-LTF sequence in the preamble using the LS estimation technique, and the obtained CIR and CFR are stored. Finally, the signal is demodulated and decoded, and the transmitted payload message is recovered. The CFR samples collected over multiple narrow frequency bands are used as input to the spectrum splicing technique for estimating the wideband channel. Channel estimation is performed on the VHT-LTF using the LS estimation approach in either the time or the frequency domain [15]. The time-domain approach computes the CIR as

\[\hat{h}=X^{\dagger}y \tag{9}\]

where \(y\) are the received samples, \(\hat{h}\) is the CIR, and \(X^{\dagger}\) is the Moore-Penrose (pseudo) inverse of the Toeplitz matrix \(X\). Likewise, the frequency-domain approach acquires the CFR as

\[\hat{H}=Y./X \tag{10}\]

where \(Y\) are the received samples, \(\hat{H}\) is the CFR, and \(X\) are the transmitted samples (./ denotes element-wise division). The CIR and CFR can be computed from each other using the fast Fourier transform (FFT) and the inverse FFT (IFFT). In the following, we describe the validation of our implementation in simulation as well as results from first over-the-air experiments in an indoor scenario.
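The following minimal Python sketch (our illustration; the actual implementation is in Matlab) shows the two LS estimation variants of Eqs. (9) and (10) on a synthetic channel. The training length, channel length, and BPSK sequence are made-up example values.

```python
# LS channel estimation sketch, cf. Eqs. (9)-(10).
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
L, n = 8, 64                                  # channel taps, training length
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
x = rng.choice([-1.0, 1.0], n)                # known BPSK training symbols

# Time domain, Eq. (9): y = X h, with X the Toeplitz convolution matrix,
# hence h_hat = pinv(X) y.
X = toeplitz(x, np.r_[x[0], np.zeros(L - 1)])
y = X @ h                                     # received samples (noise-free)
h_hat = np.linalg.pinv(X) @ y
print("time-domain LS ok:", np.allclose(h_hat, h))

# Frequency domain, Eq. (10): with a cyclic prefix the channel acts as a
# circular convolution, so H_hat = Y ./ X holds per subcarrier.
y_cp = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n))
H_hat = np.fft.fft(y_cp) / np.fft.fft(x)
print("freq-domain LS ok:", np.allclose(np.fft.ifft(H_hat)[:L], h))
```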
## IV Performance Evaluation

Channel splicing aims to estimate a wideband channel by combining multiple narrow-band channel measurements. This work validates the splicing technique presented in [8] in a controlled simulation environment and in a real-world scenario, exploiting the flexibility of the developed tool to switch between theory and practice. Currently, the validation focuses only on the accuracy of the estimated delays, ignoring the amplitudes, which will be analyzed in future work.

### _Simulations_

Splicing is validated in simulations in the 5 GHz frequency band by splitting the wide 160 MHz bandwidth into narrower sub-bands, specifically \(2\times 80\,\mathrm{MHz}\), \(4\times 40\,\mathrm{MHz}\), and \(8\times 20\,\mathrm{MHz}\) sub-bands. Simulations are run at the central frequencies listed in Table I. The generated CFRs are combined into an array and used as input to the spectrum splicing technique. The estimated CIR is compared against the full 160 MHz CIR obtained using the LS method. In the simulated environment, we up-/down-convert to the carrier frequency of 5 GHz in software and make use of the Matlab Rayleigh channel model. The channel model allows defining the number of MPCs and the time delay for each path. We investigate the algorithm performance in two scenarios with different numbers of multipath components and sub-band widths.

#### IV-A1 Different Frequency Resolution

The first scenario consists of \(K=2\) MPCs, with delays and average powers set to \(\{0,18.75\}\,\mathrm{ns}\) and \(\{0,-2\}\,\mathrm{dB}\). The splicing technique is applied to the CFR samples collected for the different sub-band widths at the corresponding center frequencies. The results are presented in Figure 2, which depicts the two estimated paths (indicated by the markers) along with the computed 160 MHz CIR. Our technique correctly estimates the two 160 MHz peaks regardless of the sub-channel width. In the considered scenario, the CFR samples are collected over all the sub-bands. We further investigate the algorithm performance when looking at only half of the sub-bands; the results are shown in Figure 3. As can be seen, the results for \(2\times 40\,\mathrm{MHz}\) and \(1\times 80\,\mathrm{MHz}\) show that we can correctly estimate the two strongest paths observed in the 160 MHz CIR, despite the missing samples. The results for the \(4\times 20\,\mathrm{MHz}\) experiment, on the other hand, indicate that we fail to estimate the second path correctly, with an error of up to 2 samples (1 sample = 1/160 MHz = 6.25 ns).

#### IV-A2 Multiple Paths

In the second scenario, we consider \(K=4\) MPCs, with delays and average powers set to \(\{0,18.75,200,218.75\}\,\mathrm{ns}\) and \(\{0,0,-2,0\}\,\mathrm{dB}\). Splicing is applied to CFR samples collected over all and over \(50\%\) of the 40 MHz sub-bands, in order to estimate the 160 MHz CIR. The results are presented in Figure 4 and illustrate that splicing applied over all the sub-bands correctly estimates the delays of the \(K=4\) peaks, shown by the markers. Yet, the algorithm fails to estimate the third peak when only \(50\%\) of the sub-bands are used. To summarize, the simulation results show that channel splicing correctly estimates path delays that are distinguishable only with a wide channel, provided it is applied to CFR samples collected over all the narrower sub-bands. The algorithm performance deteriorates in some cases when \(50\%\) of the sub-bands are utilized. As future work, we aim to improve the algorithm performance when only \(50\%\) of the sub-bands are utilized, which would speed up the estimation of wider channels in the range of, e.g., 1 GHz.

### _Indoor Lab-Experiments_

To verify the accuracy of the spectrum splicing technique, measurements are conducted in real-world scenarios. Two USRP N310, serving as transmitter and receiver, are connected through 1 Gbps Ethernet to a host computer, on which two instances of Matlab are set up.
The experiments are performed indoors at 2.4 GHz, using a signal bandwidth of 80 MHz and a communication distance of 4 m. Next, the wide bandwidth is divided into \(2\times 40\,\mathrm{MHz}\) and \(4\times 20\,\mathrm{MHz}\) sub-bands, and raw data is collected at the center frequencies listed in Table II.

Fig. 2: Estimated peaks using different sub-band widths compared against the 160 MHz CIR.
Fig. 3: Estimated peaks using 50% of the sub-bands, for different sub-band widths, compared against the 160 MHz CIR.
Fig. 4: Estimated MPCs using spectrum splicing based on 40 MHz sub-bands for the scenario with 4 paths.
Fig. 5: Estimated peaks from measurements conducted over all frequency bands, compared against the 80 MHz CIR.
Fig. 6: Estimated peaks from measurements conducted over 50% of the sub-bands, compared against the 80 MHz CIR.

To estimate the wide-bandwidth CIR, spectrum splicing is applied to the CFR samples collected over all and over half of the sub-bands. The results are presented in Figures 5 and 6, respectively. The plots depict the two strongest estimated paths for the different sub-channel widths (illustrated by the markers), compared against the 80 MHz CIR. In all cases, the method correctly computes the strongest 80 MHz paths in terms of delay. However, unlike in the simulations, the amplitude of the CIR slightly fluctuates over the collected packets in each band as a consequence of the dynamic environment, even though the transmitter and the receiver are static. This slight fluctuation impacts the delay estimation of the OMP algorithm. Therefore, to study the algorithm performance, we collected 100 packets and computed the ECDF of the two strongest estimated paths over all and over 50% of the 20 MHz sub-bands. The ground truth is given by the delays of the two 80 MHz peaks, 0 ns and 12.5 ns, respectively. The results are given in Figure 7 and show that, for such narrow bands, the method sometimes overestimates the delays. Considering that the resolution of the wide band is \(1/80\,\mathrm{MHz}=12.5\,\mathrm{ns}\), the first path is sometimes overestimated by up to 2 samples, and the second path by up to 3 samples. As future work, we aim to optimize the method such that the impact of the amplitude variation is mitigated.

## V Conclusion

Following the key concepts of joint communication and sensing, we realized an OFDM-based channel sounder using spectrum splicing. Spectrum splicing allows using measurements of multiple narrow-band subchannels to obtain precise wideband measurements. In particular, we validated the low-complexity grid-based spectrum splicing algorithm in simulations and in a real-world scenario, making it feasible for internet of things (IoT) solutions. The algorithm is integrated into an IEEE 802.11ac based communication system, where the CIR and CFR are estimated using the LS estimation technique. For the indoor lab experiments, we implemented the system using USRP-based SDRs. We were particularly interested in the ability to obtain wideband channel properties using only a subset of the narrow-band subchannels. Our system allows a very good estimation of the multipath components in terms of time delay, both in simulation and in indoor experiments. We see this work as a first step towards measurements of even wider channels in the mmWave bands. In future work, we plan further experiments covering longer distances and more dynamic scenarios. In addition, we aim to analyze the algorithm performance in computing the amplitudes of the estimated multipath components.
2308.03185
VN-Solver: Vision-based Neural Solver for Combinatorial Optimization over Graphs
Data-driven approaches have been proven effective in solving combinatorial optimization problems over graphs such as the traveling salesman problem and the vehicle routing problem. The rationale behind such methods is that the input instances may follow distributions with salient patterns that can be leveraged to overcome the worst-case computational hardness. For optimization problems over graphs, the common practice of neural combinatorial solvers consumes the inputs in the form of adjacency matrices. In this paper, we explore a vision-based method that is conceptually novel: can neural models solve graph optimization problems by \textit{taking a look at the graph pattern}? Our results suggest that the performance of such vision-based methods is not only non-trivial but also comparable to the state-of-the-art matrix-based methods, which opens a new avenue for developing data-driven optimization solvers.
Mina Samizadeh, Guangmo Tong
2023-08-06T18:33:11Z
http://arxiv.org/abs/2308.03185v1
# VN-Solver: Vision-based Neural Solver for Combinatorial Optimization over Graphs

###### Abstract

Data-driven approaches have been proven effective in solving combinatorial optimization problems over graphs such as the traveling salesman problem and the vehicle routing problem. The rationale behind such methods is that the input instances may follow distributions with salient patterns that can be leveraged to overcome the worst-case computational hardness. For optimization problems over graphs, the common practice of neural combinatorial solvers consumes the inputs in the form of adjacency matrices. In this paper, we explore a vision-based method that is conceptually novel: can neural models solve graph optimization problems by _taking a look at the graph pattern_? Our results suggest that the performance of such vision-based methods is not only non-trivial but also comparable to the state-of-the-art matrix-based methods, which opens a new avenue for developing data-driven optimization solvers.

Neural Combinatorial Optimization, Computer Vision

**Contributions.** We present a conceptually simple framework called the vision-based method for solving deterministic combinatorial optimization problems over graphs. The proposed framework consists of three modules: _graph embedding_, _image generation_, and _image classification_. The first module decides how to embed a graph in Euclidean space; the second module visualizes the embedding as an image of pixels; the third module decides if the input is a true instance. We adopt ResNet for image classification and discuss possible choices for graph embedding and image generation. Our experiments promisingly show that vision-based methods are effective with statistical significance and no less powerful than the state-of-the-art matrix-based methods for the Hamiltonian cycle problem. Our experiments also provide insights into the role of the visualization modules in determining the framework's efficacy. For example, we observe that structured visualizations are evidently better than random visualizations.

## 2. VN-Solver: A Vision-Based Method

A deterministic graph optimization problem is specified by a function \(F\) that maps each graph \(G=(V,E)\) to true or false; one example is planarity testing. In generating an image for a given embedding, we plot nodes as solid circles and edges as solid line segments. The size is 224 by 224 pixels, which is aligned with ResNet (He et al., 2015). The primary factors we consider are the size of the node circles and the thickness of the edge segments (Figs. 2m-2p). In addition to gray images (e.g., Figs. 2a-2p), the images may be colored in various ways. For example, in the _uniform-color_ scheme, the nodes are in one color and the edges are in another color (e.g., Figs. 2q-2t), while in the _random-color_ scheme, we randomly pick one color for each entity (i.e., node or edge) - Figs. 2u-2x. While human perception may not be very sensitive to these factors, images with different node circle sizes can be very different from each other from the perspective of machine learning models. In fact, all the visualizations in Fig. 2 are associated with the same underlying graph. It would be interesting if one could observe that some of them are more informative than others in deciding certain graph properties.
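The following minimal Python sketch (our illustration, not the authors' code) makes the graph embedding and image generation modules concrete: it computes a circular, spiral, or random layout and rasterizes it as a 224 by 224 gray image. networkx's spiral `resolution` parameter is used here as a stand-in for the paper's spiral factor \(r\).

```python
# Embed a graph and rasterize it as a 224 x 224 image for a ResNet classifier.
import networkx as nx
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def graph_to_image(G, layout="circular", path="graph.png"):
    if layout == "circular":
        pos = nx.circular_layout(G)                # nodes on a circle, a/b = 1
    elif layout == "spiral":
        pos = nx.spiral_layout(G, resolution=0.3)  # stand-in for r = 0.3
    else:
        pos = nx.random_layout(G, seed=0)          # unstructured baseline
    fig = plt.figure(figsize=(2.24, 2.24), dpi=100)  # 224 x 224 pixels
    # grayscale style: small node circles, 2 px edge segments
    nx.draw(G, pos, node_size=4, width=2, node_color="black", edge_color="gray")
    fig.savefig(path, dpi=100)
    plt.close(fig)

for layout in ("circular", "spiral", "random"):
    graph_to_image(nx.petersen_graph(), layout, f"{layout}.png")
```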
## 3. Empirical Studies

In this paper, we focus on the Hamiltonian cycle problem. Based on the House of Graphs (Hamilton et al., 2016), we create two datasets, Small and Large. The Small dataset contains 4,115 graphs with 4 to 20 nodes, where 2,277 are Hamiltonian and 1,838 are non-Hamiltonian; the Large dataset consists of 13,192 graphs with 20 to 50 nodes, where 7,453 are Hamiltonian and 5,739 are non-Hamiltonian. We examine the following methods in our experiments.

* **VN-Solver**. Unless otherwise specified, we have \(a/b=1\) for the circular layout and \(r=0.3\) for the spiral layout; for all visualizations, each node occupies \(2\times 2\) pixels and the thickness of the line segments is 2 pixels. We train ResNet using Adam with a learning rate of 0.001 and an exponential learning rate decay of 0.09. We set the maximum number of epochs to 200 and stop when the F1 score does not increase for 8 epochs.
* **Graphormer**. We consider the graph transformer (He et al., 2015), which is one of the state-of-the-art matrix-based methods. We adopt the graphormer-sim architecture composed of 12 attention layers, each with 8 heads. The training is done via Adam with a learning rate of 0.001 and a decay of 0.01. The early stopping is the same as in VN-Solver.
* **Naive-Bayesian**. This is a feature-oblivious Bayesian method that makes the prediction based on the prior distribution over \(\{0,1\}\) estimated from the training samples. For example, the prediction is uniformly at random from \(\{0,1\}\) if half of the training samples are true instances.

For each learning-based method, the training-testing process is repeated five times, and we report the average performance in terms of common metrics together with the standard deviation. The testing size is fixed at 500; we experiment with different sample sizes for training, where 80 (resp., 20) percent of the samples are used for training (resp., validation).

### Observations

**Feasibility**. The main results can be found in Table 1. In addition, the generalization performance of VN-Solver after each training epoch is given in Fig. 3, where the results for the circular and spiral layouts suggest a reasonable learning process, while the random visualizations are not very informative. From these results, it is clear that VN-Solver performs better when more training data or more training epochs are used. This confirms that VN-Solver can indeed learn from data towards solving the Hamiltonian cycle problem, which gives the first piece of evidence supporting the feasibility of vision-based neural combinatorial solvers.

**Effectiveness.** Promisingly, the results in Table 1 also suggest that VN-Solver is not only feasible but also no less effective than the state-of-the-art matrix-based method in most cases. In particular, it is comparable to Graphormer when the visualizations are gray, and it outperforms Graphormer under the uniform-color scheme. Notably, the results of VN-Solver and Graphormer are statistically significant, as they clearly outperform Naive-Bayesian.

**Coloring scheme.** One interesting observation is that colored visualizations consistently lead to better performance than gray ones. This is especially true when the training size is not very large: with 200 training samples on the Large dataset, switching the coloring scheme from gray to uniform increases the F1 score from 0.61 to 0.90 under the circular layout. One plausible reason is that the graph structure is more salient to image classifiers like ResNet when the nodes and edges are colored; a sketch of the two colored schemes is given below.
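This is a self-contained illustration (ours, not the authors' code) of the uniform-color and random-color schemes: one fixed color pair for all nodes/edges versus an independent random color per entity.

```python
# Render the same layout under the uniform- and random-color schemes.
import random
import networkx as nx
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def render(G, pos, node_color, edge_color, path):
    fig = plt.figure(figsize=(2.24, 2.24), dpi=100)  # 224 x 224 pixels
    nx.draw(G, pos, node_size=4, width=2,
            node_color=node_color, edge_color=edge_color)
    fig.savefig(path, dpi=100)
    plt.close(fig)

G = nx.petersen_graph()
pos = nx.circular_layout(G)
# uniform-color: all nodes share one color, all edges another
render(G, pos, "red", "blue", "uniform.png")
# random-color: an independent random RGB color per node and per edge
rng = random.Random(0)
render(G, pos,
       [[rng.random() for _ in range(3)] for _ in G.nodes],
       [[rng.random() for _ in range(3)] for _ in G.edges],
       "random.png")
```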
Similarly, switching from the uniform-color scheme to the random-color scheme slightly decreases the performance.

**Other factors.** Table 2(a) shows the results of VN-Solver under the circular layout with different node sizes and edge thicknesses. In general, we see that the performance becomes better when the nodes and edges are visualized with larger sizes. Of course, an extreme size can make the graph structure unrecognizable - for example, \((x,y)=(100,100)\) as shown in Fig. 2p. Finally, we observe that the performance is also sensitive to the parameters of the layouts, as shown in Tables 2(c) and 2(d). For the circular layout, increasing the ratio tends to decrease the performance. For the spiral layout, visualizations with a small spiral factor \(r\) can be better recognized by ResNet, which is intuitive since, for example, Fig. 2e appears to be more structured than Fig. 2h.

Figure 3. The F1 score of VN-Solver after each epoch on dataset Small. Each sub-float is labeled by the layout method and the sample size in training, and it shows the results under five different seed values. The early stopping ensures that a decent checkpoint (not necessarily the last epoch in the figure) is selected as the final model.

[Table 1: AUC, accuracy, and F1 (mean ± standard deviation over five runs) of VN-Solver (gray, uniform-color, and random-color schemes under the circular, spiral, and random layouts), Graphormer, and Naive-Bayesian on the Small and Large datasets with 100, 200, and 1000 training samples.]

## 4. Further Discussions

This paper presents the framework of vision-based neural solvers, which are conceptually different from the common practice based on matrix processing. The experiments have confirmed the feasibility and effectiveness of such methods for the Hamiltonian cycle problem, with the hope to pave a new avenue for developing neural combinatorial solvers. We close our paper by listing the following research issues that we believe deserve future investigation in depth.

* **Wider applicability.** It is of interest to examine the feasibility of vision-based methods for other deterministic graph optimization problems. For example, can neural solvers effectively decide graph isomorphism based on the visualizations?
* **Advanced embedding methods.** What is the best graph embedding method with respect to solving a given graph optimization problem? This question is quite open and has to be discussed on a case-by-case basis. In addition to classic methods like circular and spiral layouts, one can even use generative models to design learnable embedding methods. On another note, besides the 2D Euclidean space, one can embed graphs in any other metric space, which creates new research opportunities.
* **The best visualization style.** Following the investigations on the coloring scheme, node size, and edge thickness, we are wondering how to design better visualization styles.
For example, while the random-color scheme does not exhibit extra benefits in our case (i.e., the Hamiltonian cycle problem), images of perceptual styles with possible semantic meanings might be better recognized by machine learning models, thereby offering better generalization performance. Immediately, and somewhat surprisingly, techniques for image style analysis [9, 12] thus become a potential component of neural solvers for graph optimization (see Fig. 4 for an example).